Part of the reproducing-cross-encoders collection: a set of cross-encoders trained from various backbones and losses for equal comparison.
This model is a cross-encoder based on google/electra-base-discriminator. It was trained on MS MARCO with a hinge loss (hingeLoss) as part of a reproducibility paper on training cross-encoders, "Reproducing and Comparing Distillation Techniques for Cross-Encoders"; see the paper for more details.
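The exact loss configuration is given in the paper and in config.yaml; purely as an illustration, a pairwise hinge loss over positive/negative passage scores typically has the following form (the margin value and function name here are assumptions, not the paper's exact setup):

```python
import torch

def pairwise_hinge_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Illustrative pairwise hinge loss: for a given query, the positive
    passage should be scored at least `margin` above the negative one."""
    return torch.clamp(margin - (pos_scores - neg_scores), min=0.0).mean()
```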
This model is intended for re-ranking the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).
Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in config.yaml.
Quick Start:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the backbone tokenizer and the fine-tuned cross-encoder
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-ELECTRA-Hinge")

# Encode a (query, passage) pair; the logit is the relevance score
features = tokenizer("What is experimaestro ?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
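Since the model is meant for re-ranking a candidate list (see above), a minimal sketch of scoring several candidate passages for one query and sorting them could look as follows; the query and passages are made-up examples, and we assume the model outputs a single relevance logit per pair:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-ELECTRA-Hinge")
model.eval()

query = "what is a cross-encoder?"
candidates = [
    "A cross-encoder jointly encodes the query and the document to produce a relevance score.",
    "BM25 is a bag-of-words ranking function used by search engines.",
    "SPLADE is a sparse neural retrieval model.",
]

# Score every (query, candidate) pair in one batch
features = tokenizer([query] * len(candidates), candidates, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Re-rank candidates by descending score
for score, passage in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```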
We provide evaluations of this cross-encoder when re-ranking the top 1,000 documents retrieved by naver/splade-v3-distilbert.
| dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 39.19 | 45.79 |
| trec2019 | 95.23 | 72.98 |
| trec2020 | 95.06 | 73.29 |
| fever | 78.60 | 78.65 |
| arguana | 17.81 | 26.69 |
| climate_fever | 25.29 | 18.99 |
| dbpedia | 74.04 | 44.10 |
| fiqa | 48.50 | 40.12 |
| hotpotqa | 87.76 | 70.32 |
| nfcorpus | 56.74 | 34.24 |
| nq | 52.83 | 57.95 |
| quora | 77.72 | 79.87 |
| scidocs | 27.42 | 15.71 |
| scifact | 64.86 | 67.55 |
| touche | 66.31 | 36.31 |
| trec_covid | 91.22 | 68.55 |
| robust04 | 70.82 | 46.89 |
| lotte_writing | 70.50 | 61.09 |
| lotte_recreation | 62.30 | 56.98 |
| lotte_science | 48.48 | 39.77 |
| lotte_technology | 56.42 | 47.25 |
| lotte_lifestyle | 74.14 | 64.54 |
| Mean In Domain | 76.49 | 64.02 |
| BEIR 13 | 59.16 | 49.16 |
| LoTTE (OOD) | 63.78 | 52.75 |
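The RR@10 and nDCG@10 figures above follow standard TREC-style evaluation. Assuming you have a TREC-format run file produced by the re-ranker and the corresponding qrels, they can be computed with a tool such as ir_measures (the file names below are placeholders):

```python
import ir_measures
from ir_measures import RR, nDCG

# Placeholder paths: a TREC-format qrels file and a run file produced by the re-ranker
qrels = ir_measures.read_trec_qrels("msmarco_dev.qrels")
run = ir_measures.read_trec_run("reranked_top1000.run")

# Aggregate RR@10 and nDCG@10 over all queries in the run
print(ir_measures.calc_aggregate([RR@10, nDCG@10], qrels, run))
```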