Part of the reproducing-cross-encoders collection: a set of cross-encoders trained from various backbones and losses for equal comparison.
This model is a cross-encoder based on microsoft/deberta-v3-base. It was trained on MS MARCO with a binary cross-entropy (BCE) loss as part of a reproducibility paper on training cross-encoders: "Reproducing and Comparing Distillation Techniques for Cross-Encoders". See the paper for more details.
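For intuition, here is a minimal sketch of what a BCE objective over query-passage pairs looks like. This is not the paper's exact training loop: the queries, passages, labels, and single-logit head below are illustrative assumptions.

```python
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative only: a tiny batch of (query, passage) pairs with binary relevance labels.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v3-base", num_labels=1)

queries = ["what is a cross-encoder", "what is a cross-encoder"]
passages = [
    "A cross-encoder scores a query and a passage jointly.",  # relevant
    "The weather in Paris is mild in spring.",                # non-relevant
]
labels = torch.tensor([1.0, 0.0])

batch = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")
logits = model(**batch).logits.squeeze(-1)   # one relevance logit per pair
loss = BCEWithLogitsLoss()(logits, labels)   # binary cross-entropy on the logits
loss.backward()                              # followed by an optimizer step in a real training loop
```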
This model is intended for re-ranking the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE); a re-ranking sketch follows the quick start below.
Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in config.yaml.
Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-DeBERTav3-BCE")
model.eval()

# Score a single (query, passage) pair.
features = tokenizer(
    "What is experimaestro ?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True, truncation=True, return_tensors="pt",
)

with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
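Building on the quick start, the sketch below shows how the model can re-rank a small list of candidates for one query by scoring each (query, passage) pair and sorting by score. The candidate passages are made up for illustration, and a single-logit relevance head is assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-DeBERTav3-BCE")
model.eval()

query = "What is experimaestro ?"
candidates = [  # e.g. top passages from a first-stage retriever (made-up examples)
    "Experimaestro is a powerful framework for ML experiments management...",
    "DeBERTa improves BERT with disentangled attention.",
    "MS MARCO is a large-scale passage ranking dataset.",
]

# Score every (query, passage) pair in one batch.
features = tokenizer([query] * len(candidates), candidates,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # assumes a single relevance logit per pair

# Sort candidates by descending relevance score.
ranking = sorted(zip(candidates, scores.tolist()), key=lambda x: x[1], reverse=True)
for passage, score in ranking:
    print(f"{score:.3f}  {passage}")
```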
We provide evaluations of this cross-encoder when re-ranking the top-1000 documents retrieved by naver/splade-v3-distilbert (all scores reported ×100).
| dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 36.37 | 42.91 |
| trec2019 | 90.47 | 66.67 |
| trec2020 | 90.19 | 64.49 |
| fever | 66.75 | 69.05 |
| arguana | 13.92 | 20.99 |
| climate_fever | 17.30 | 13.61 |
| dbpedia | 57.37 | 32.56 |
| fiqa | 43.10 | 36.33 |
| hotpotqa | 77.04 | 61.33 |
| nfcorpus | 38.25 | 21.67 |
| nq | 46.54 | 51.74 |
| quora | 47.44 | 51.12 |
| scidocs | 25.03 | 14.22 |
| scifact | 63.63 | 66.40 |
| touche | 56.30 | 30.33 |
| trec_covid | 89.38 | 72.78 |
| robust04 | 55.79 | 35.44 |
| lotte_writing | 64.83 | 56.43 |
| lotte_recreation | 59.42 | 54.49 |
| lotte_science | 44.35 | 36.87 |
| lotte_technology | 51.93 | 44.39 |
| lotte_lifestyle | 73.10 | 64.26 |
| Mean In Domain | 72.34 | 58.02 |
| BEIR 13 | 49.39 | 41.70 |
| LoTTE (OOD) | 58.24 | 48.65 |
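As a side note, a minimal sketch of how RR@10 and nDCG@10 can be computed from a re-ranked run, assuming the ir_measures package; the qrels and run below are toy placeholders, not the actual evaluation data.

```python
import ir_measures
from ir_measures import RR, nDCG

# Toy placeholders: relevance judgements and a re-ranked run, keyed by query / document ids.
qrels = {"q1": {"d1": 1, "d2": 0, "d3": 1}}
run = {"q1": {"d1": 2.3, "d2": 1.1, "d3": 0.4}}  # cross-encoder scores

results = ir_measures.calc_aggregate([RR @ 10, nDCG @ 10], qrels, run)
print(results)  # e.g. {RR@10: 1.0, nDCG@10: ...}
```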
Base model: microsoft/deberta-v3-base