# NanoMIRACL
A tiny, evaluation-ready slice of MIRACL that mirrors the spirit of NanoBEIR: same task, same style, but dramatically smaller so you can iterate and benchmark in minutes instead of hours.
Evaluation can be performed during and after training by integrating with the Sentence Transformers evaluation module (`InformationRetrievalEvaluator`); a conversion sketch is included after the Usage example below.
## NanoMIRACL Evaluation (NDCG@10)
| Model | Avg | ar | bn | de | en | es | fa | fi | fr | hi | id | ja | ko | ru | sw | te | th | yo | zh |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| multilingual-e5-small | 0.7157 | 0.7421 | 0.7233 | 0.6519 | 0.6976 | 0.7241 | 0.7224 | 0.7761 | 0.6394 | 0.6801 | 0.6171 | 0.6941 | 0.6713 | 0.7195 | 0.7206 | 0.9779 | 0.8836 | 0.6125 | 0.6287 |
| multilingual-e5-large | 0.7766 | 0.8108 | 0.8049 | 0.7951 | 0.7260 | 0.7996 | 0.7426 | 0.8346 | 0.6662 | 0.7751 | 0.6147 | 0.7398 | 0.7058 | 0.8020 | 0.7500 | 0.9779 | 0.9033 | 0.8109 | 0.7192 |
| e5-small-v2 | 0.2638 | 0.0067 | 0.0000 | 0.4828 | 0.7220 | 0.5946 | 0.0126 | 0.5333 | 0.5068 | 0.0271 | 0.3137 | 0.1526 | 0.0286 | 0.1588 | 0.4252 | 0.0200 | 0.0200 | 0.6306 | 0.1137 |
| e5-large-v2 | 0.2970 | 0.0460 | 0.0260 | 0.5281 | 0.7494 | 0.6416 | 0.0326 | 0.5552 | 0.5380 | 0.0571 | 0.3494 | 0.2756 | 0.0858 | 0.3536 | 0.3917 | 0.0200 | 0.0086 | 0.5641 | 0.1232 |
| bge-m3 | 0.7880 | 0.8257 | 0.8440 | 0.7307 | 0.7386 | 0.7516 | 0.7794 | 0.7863 | 0.7252 | 0.7408 | 0.6619 | 0.8044 | 0.7337 | 0.7968 | 0.7550 | 0.9926 | 0.9084 | 0.8547 | 0.7549 |
| gte-multilingual-base | 0.7430 | 0.7817 | 0.7264 | 0.6918 | 0.7101 | 0.7107 | 0.7328 | 0.8067 | 0.7048 | 0.7067 | 0.6281 | 0.7533 | 0.6480 | 0.7525 | 0.6927 | 0.9900 | 0.8308 | 0.7807 | 0.7269 |
| nomic-embed-text-v2-moe | 0.7492 | 0.7903 | 0.8084 | 0.6693 | 0.7097 | 0.7250 | 0.7431 | 0.7979 | 0.6826 | 0.7349 | 0.6360 | 0.7186 | 0.6688 | 0.7759 | 0.6963 | 0.9926 | 0.8860 | 0.7637 | 0.6874 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.4148 | 0.5184 | 0.0334 | 0.5161 | 0.6042 | 0.6499 | 0.4048 | 0.5234 | 0.5605 | 0.3865 | 0.4736 | 0.3938 | 0.4131 | 0.4892 | 0.1722 | 0.0200 | 0.5489 | 0.2885 | 0.4693 |
Notes:
- The above results were computed with `./nano_eval.py`.
- E5 models were evaluated with `--query-prompt "query: "` and `--corpus-prompt "passage: "`; `nomic-ai/nomic-embed-text-v2-moe` was evaluated with `--query-prompt "search_query: "` and `--corpus-prompt "search_document: "`.
- Some models (e.g., `BAAI/bge-m3`, `Alibaba-NLP/gte-multilingual-base`, `nomic-ai/nomic-embed-text-v2-moe`) require `--trust-remote-code`.
- This benchmark section is a draft and may be refined as scripts/metadata are finalized.
## What this dataset is
- A collection of 18 language subsets (`corpus`, `queries`, `qrels`) published on the Hugging Face Hub under `hotchpotch/NanoMIRACL`.
- Each subset contains 50 dev queries and a corpus of 10,000 documents.
- Queries are sampled from `hotchpotch/miracl-hf-unified` (derived from MIRACL), keeping one positive per query and all hard negatives, then filling the corpus to 10k with random documents.
- Sampling is deterministic (seed=42).
- License: Other (see MIRACL and upstream repository licenses).
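For reference, the sampling recipe above can be approximated as follows. This is a minimal sketch, not the actual build script: the field names (`query_id`, `positive_ids`, `negative_ids`) are assumptions about the upstream layout, while the seed (42), the 50-query sample, and the 10k corpus size come from this card.

```python
import random

SEED = 42
N_QUERIES = 50
CORPUS_SIZE = 10_000

def sample_subset(queries, corpus_by_id):
    """Sketch of the per-language sampling recipe (field names are assumed)."""
    rng = random.Random(SEED)                # deterministic sampling, per this card
    picked = rng.sample(list(queries), N_QUERIES)  # 50 dev queries

    doc_ids = set()
    qrels = []
    for q in picked:
        pos = q["positive_ids"][0]           # keep one positive per query
        qrels.append((q["query_id"], pos, 1))
        doc_ids.add(pos)
        doc_ids.update(q["negative_ids"])    # keep all hard negatives

    # Fill the corpus up to 10k with random documents.
    remaining = [d for d in corpus_by_id if d not in doc_ids]
    doc_ids.update(rng.sample(remaining, max(0, CORPUS_SIZE - len(doc_ids))))
    return picked, qrels, sorted(doc_ids)
```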
## Subset names
- Split names: `NanoMIRACL-ar`, `NanoMIRACL-bn`, `NanoMIRACL-de`, `NanoMIRACL-en`, `NanoMIRACL-es`, `NanoMIRACL-fa`, `NanoMIRACL-fi`, `NanoMIRACL-fr`, `NanoMIRACL-hi`, `NanoMIRACL-id`, `NanoMIRACL-ja`, `NanoMIRACL-ko`, `NanoMIRACL-ru`, `NanoMIRACL-sw`, `NanoMIRACL-te`, `NanoMIRACL-th`, `NanoMIRACL-yo`, `NanoMIRACL-zh`
- Config names: `corpus`, `queries`, `qrels`
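You don't need to hard-code the split list; the `datasets` library can enumerate it:

```python
from datasets import get_dataset_split_names

# List all 18 per-language splits of the "queries" config.
for split in get_dataset_split_names("hotchpotch/NanoMIRACL", "queries"):
    print(split)  # NanoMIRACL-ar, NanoMIRACL-bn, ...
```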
## Usage
```python
from datasets import load_dataset

split = "NanoMIRACL-ja"
queries = load_dataset("hotchpotch/NanoMIRACL", "queries", split=split)
corpus = load_dataset("hotchpotch/NanoMIRACL", "corpus", split=split)
qrels = load_dataset("hotchpotch/NanoMIRACL", "qrels", split=split)

print(queries[0]["text"])
```
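To plug a subset into Sentence Transformers' `InformationRetrievalEvaluator` (mentioned above), convert the three configs into the dicts the evaluator expects. This is a minimal sketch: the column names (`_id`/`text` on `queries` and `corpus`, `query-id`/`corpus-id` on `qrels`) follow the BEIR/NanoBEIR convention this card references, so verify them against the actual features before relying on them.

```python
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Column names assume the BEIR/NanoBEIR-style schema; check
# `queries.features` etc. if yours differ. E5 models expect the
# "query: "/"passage: " prefixes from the Notes above, so we prepend
# them here (version-agnostic alternative to prompt arguments).
ir_queries = {q["_id"]: "query: " + q["text"] for q in queries}
ir_corpus = {d["_id"]: "passage: " + d["text"] for d in corpus}

relevant_docs = defaultdict(set)
for row in qrels:
    relevant_docs[row["query-id"]].add(row["corpus-id"])

evaluator = InformationRetrievalEvaluator(
    queries=ir_queries,
    corpus=ir_corpus,
    relevant_docs=dict(relevant_docs),
    name="NanoMIRACL-ja",
)

model = SentenceTransformer("intfloat/multilingual-e5-small")
metrics = evaluator(model)  # dict of IR metrics (incl. NDCG@10) in recent versions
print(metrics)
```

The same evaluator can be passed to a `SentenceTransformerTrainer` to score checkpoints during training.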
## Example eval code
```bash
python ./nano_eval.py --model-path intfloat/multilingual-e5-small --query-prompt "query: " --corpus-prompt "passage: "
```
For models that require `trust_remote_code`, add `--trust-remote-code` (e.g., `BAAI/bge-m3`).
## Why Nano?
- Fast eval loops: 50 queries × 10k docs fits comfortably on a single GPU/CPU run.
- Reproducible: deterministic sampling and stable IDs.
- Drop-in: BEIR/NanoBEIR-style schemas, so existing IR loaders need minimal tweaks.
## Upstream sources
- Original data: MIRACL — Multilingual Information Retrieval Across a Continuum of Languages.
- Base dataset: `hotchpotch/miracl-hf-unified` (Hugging Face Hub).
- Inspiration: NanoBEIR (lightweight evaluation subsets).
## License
Other. This dataset is derived from MIRACL and, ultimately, its open-source upstream data. Please respect the original repositories' licenses and attribution requirements.
## Author
- Yuichi Tateno