xCoRe Models
Cross-Context Coreference Resolution
Official weights for xCoRe, based on DeBERTa-large, pretrained on LitBank and trained on ECB+. This model achieves 42.4 Avg. CoNLL-F1 on ECB+.
Other models available on the SapienzaNLP Hugging Face hub:
| Model | Dataset | Avg. CoNLL-F1 | Setting |
|---|---|---|---|
| `sapienzanlp/xcore-litbank` | LitBank | 78.2 | Long-Document (Book Splits) |
| `sapienzanlp/xcore-ecb` | ECB+ | 42.4 | Cross-Document (News) |
| `sapienzanlp/xcore-scico` | SciCo | 31.0 | Cross-Document (Scientific) |
This work was published at the EMNLP 2025 main conference. If you use any part of it, please consider citing our paper:
```bibtex
@inproceedings{martinelli-etal-2025-xcore,
    title = "x{C}o{R}e: Cross-context Coreference Resolution",
    author = "Martinelli, Giuliano and
      Gatti, Bruno and
      Navigli, Roberto",
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1737/",
    pages = "34252--34266"
}
```