---
language: en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-ranking
- text-retrieval
tags:
- retrieval
- embeddings
- benchmark
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_examples: 2000
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 50000
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 1000
configs:
- config_name: default
  data_files:
  - split: test
    path: qrels.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---

# LIMIT

A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite using simple queries like "Who likes Apples?", state-of-the-art embedding models achieve less than 20% recall@100 on LIMIT full and cannot solve LIMIT-small (46 docs).

## Links

- **Paper**: [On the Theoretical Limitations of Embedding-Based Retrieval](https://arxiv.org/abs/2508.21038)
- **Code**: [github.com/google-deepmind/limit](https://github.com/google-deepmind/limit)
- **Full version**: [LIMIT](https://huggingface.co/datasets/orionweller/LIMIT/) (50k documents)
- **Small version**: [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small/) (46 documents only)

## Sample Usage

You can load the data with the Hugging Face `datasets` library ([LIMIT](https://huggingface.co/datasets/orionweller/LIMIT), [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small)):

```python
from datasets import load_dataset

corpus = load_dataset("orionweller/LIMIT-small", "corpus", split="corpus")
queries = load_dataset("orionweller/LIMIT-small", "queries", split="queries")
qrels = load_dataset("orionweller/LIMIT-small", "default", split="test")
```

## Dataset Details

**Queries** (1,000): Simple questions asking "Who likes [attribute]?" 
- Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

**Corpus** (50k documents): Short biographical texts describing people and their preferences
- Format: "[Name] likes [attribute1] and [attribute2]."
- Example: "Geneva Durben likes Quokkas and Apples."

**Qrels** (2,000): Each of the 1,000 queries has exactly 2 relevant documents (score=1). The relevant documents are drawn from a fixed set of 46 target documents, and the query-document pairs cover nearly all C(46,2) = 1,035 possible 2-document combinations; the rest of the corpus serves as distractors.

### Format
The dataset follows standard MTEB format with three configurations:
- `default`: query-document relevance judgments (qrels); keys: `query-id`, `corpus-id`, `score` (1 for relevant)
- `queries`: query texts with IDs; keys: `_id`, `text`
- `corpus`: document texts with IDs; keys: `_id`, `title` (empty), `text`
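To illustrate how the `default` config ties queries to documents, here is a minimal sketch that groups qrels rows into a per-query set of relevant document IDs. The IDs below are made up to mirror the schema above, not actual dataset entries; real rows would come from the `test` split of the `default` config.

```python
from collections import defaultdict

# Hypothetical qrels rows following the `default` config schema
# (query-id, corpus-id, score).
qrels_rows = [
    {"query-id": "q1", "corpus-id": "d17", "score": 1},
    {"query-id": "q1", "corpus-id": "d42", "score": 1},
    {"query-id": "q2", "corpus-id": "d17", "score": 1},
    {"query-id": "q2", "corpus-id": "d3", "score": 1},
]

# Map each query to the set of its relevant document IDs.
relevant = defaultdict(set)
for row in qrels_rows:
    if row["score"] > 0:
        relevant[row["query-id"]].add(row["corpus-id"])

# In LIMIT, every query has exactly 2 relevant documents.
assert all(len(docs) == 2 for docs in relevant.values())
```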

### Purpose
Tests whether embedding models can represent all top-k combinations of relevant documents, based on theoretical results connecting embedding dimension to representational capacity. Despite the simple nature of queries, state-of-the-art models struggle due to fundamental dimensional limitations.
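The recall@k numbers above can be computed per query by checking how many of its 2 relevant documents appear in the top k of a model's ranking. A minimal sketch (the toy ranking here is a stand-in for a real embedding model's output, not part of the dataset):

```python
def recall_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of relevant documents found in the top-k ranked list."""
    top_k = set(ranked_doc_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

# Toy example: 2 relevant docs, only one retrieved in the top 3.
ranking = ["d9", "d17", "d5", "d42", "d1"]
relevant = {"d17", "d42"}
print(recall_at_k(ranking, relevant, k=3))  # → 0.5
```

Averaging this value over all 1,000 queries gives the dataset-level recall@k.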

## Citation

```bibtex
@misc{weller2025theoreticallimit,
      title={On the Theoretical Limitations of Embedding-Based Retrieval}, 
      author={Orion Weller and Michael Boratko and Iftekhar Naim and Jinhyuk Lee},
      year={2025},
      eprint={2508.21038},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2508.21038}, 
}
```