xCoRe ECB+

Official weights for xCoRe, pretrained on LitBank and then trained on ECB+, based on DeBERTa-large. This model achieves 42.4 Avg CoNLL-F1 on ECB+.

Other models available on the SapienzaNLP Hugging Face hub:

| Model | Dataset | Avg CoNLL-F1 | Mode |
|---|---|---|---|
| `sapienzanlp/xcore-litbank` | LitBank | 78.2 | Long-Document (Book Splits) |
| `sapienzanlp/xcore-ecb` | ECB+ | 42.4 | Cross-Document (News) |
| `sapienzanlp/xcore-scico` | SciCo | 31.0 | Cross-Document (Scientific) |

Results on ECB+


xCoRe: Cross-context Coreference Resolution

License: CC BY-NC 4.0

Citation

This work was published at the EMNLP 2025 main conference. If you use any part of it, please cite our paper as follows:

@inproceedings{martinelli-etal-2025-xcore,
    title = "x{C}o{R}e: Cross-context Coreference Resolution",
    author = "Martinelli, Giuliano  and
      Gatti, Bruno  and
      Navigli, Roberto",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1737/",
    pages = "34252--34266"
}