Revisiting Generalization Across Difficulty Levels: It's Not So Easy
Abstract
LLMs do not consistently generalize across different task difficulties, indicating the need for a broad range of difficulty levels in both training and evaluation datasets.
We investigate how well large language models (LLMs) generalize across task difficulties, a key question for effective data curation and evaluation. Existing research is mixed on whether training on easier or harder data leads to better results, and on whether those gains appear on easier or harder test data. We address this question with a systematic evaluation of LLMs' generalization across models, datasets, and fine-grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well-established framework for estimating item difficulty in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human judgments of difficulty. With this more objective, larger-scale, and finer-grained analysis, we show that cross-difficulty generalization is often limited: training on either easy or hard data alone does not yield consistent improvements across the full range of difficulties. These results underscore the importance of including a range of difficulties in both training and evaluation data for LLMs, and show that taking shortcuts with respect to difficulty is risky.
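To make the IRT-based difficulty ranking concrete, here is a minimal sketch (not the authors' code): fit a 1PL (Rasch) model to a binary correctness matrix of LLMs × examples, so that each example's difficulty parameter is estimated only from which models answer it correctly. The paper's exact IRT variant, tooling, and data are not reproduced here; all names and the 1PL choice below are illustrative assumptions.

```python
# Minimal sketch of IRT-based difficulty estimation (assumed 1PL/Rasch model, not the paper's exact setup).
import numpy as np

def fit_rasch(responses: np.ndarray, lr: float = 0.5, n_iters: int = 5000):
    """responses[i, j] = 1 if model i answered example j correctly, else 0."""
    n_models, n_items = responses.shape
    ability = np.zeros(n_models)      # theta_i: latent ability of each model
    difficulty = np.zeros(n_items)    # b_j: latent difficulty of each example

    for _ in range(n_iters):
        # Rasch model: P(correct) = sigmoid(theta_i - b_j)
        logits = ability[:, None] - difficulty[None, :]
        p = 1.0 / (1.0 + np.exp(-logits))
        resid = responses - p
        # Gradient ascent on the Bernoulli log-likelihood (mean gradients for stable steps)
        ability += lr * resid.mean(axis=1)
        difficulty -= lr * resid.mean(axis=0)
        # Parameters are only identified up to a shift; anchor mean difficulty at 0
        shift = difficulty.mean()
        difficulty -= shift
        ability -= shift

    return ability, difficulty

# Toy usage: simulated "models" answering examples of increasing true difficulty
rng = np.random.default_rng(0)
n_models, n_items = 500, 40
true_diff = np.linspace(-2.0, 2.0, n_items)
true_abil = rng.normal(size=n_models)
probs = 1.0 / (1.0 + np.exp(-(true_abil[:, None] - true_diff[None, :])))
y = (rng.random(probs.shape) < probs).astype(float)

_, est_diff = fit_rasch(y)
# Estimated difficulties should correlate strongly with the true ones
print("difficulty correlation:", np.corrcoef(est_diff, true_diff)[0, 1])
```

In a setup like the paper's, the rows of `responses` would come from many real LLMs evaluated on the same benchmark examples, and the fitted `difficulty` vector would then be used to bucket examples into fine-grained difficulty groups.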
Community
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- LLMs Encode How Difficult Problems Are (2025)
- What Makes a Good Curriculum? Disentangling the Effects of Data Ordering on LLM Mathematical Reasoning (2025)
- RIDE: Difficulty Evolving Perturbation with Item Response Theory for Mathematical Reasoning (2025)
- Improving Metacognition and Uncertainty Communication in Language Models (2025)
- JudgeBoard: Benchmarking and Enhancing Small Language Models for Reasoning Evaluation (2025)
- Probing the Difficulty Perception Mechanism of Large Language Models (2025)
- Beyond Overall Accuracy: A Psychometric Deep Dive into the Topic-Specific Medical Capabilities of 80 Large Language Models (2025)