Abstract
Superior models hidden in public repositories are overlooked due to inefficient discovery methods; a multi-armed bandit approach using shared query sets and aggressive elimination schedules significantly accelerates identification of top-performing models.
Public repositories host millions of fine-tuned models, yet community usage remains disproportionately concentrated on a small number of foundation checkpoints. We investigate whether this concentration reflects efficient market selection or whether superior models are systematically overlooked. Through an extensive evaluation of over 2,000 models, we demonstrate the prevalence of "hidden gems": unpopular fine-tunes that significantly outperform their popular counterparts. Notably, within the Llama-3.1-8B family, we find rarely downloaded checkpoints that improve math performance from 83.2% to 96.0% without increasing inference costs. However, discovering these models through exhaustive evaluation of every uploaded model is computationally infeasible. We therefore formulate model discovery as a Multi-Armed Bandit problem and accelerate the Sequential Halving search algorithm using shared query sets and aggressive elimination schedules. Our method retrieves top models with as few as 50 queries per candidate, accelerating discovery by over 50x.
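The abstract describes the search procedure only at a high level. As a rough illustration, the sketch below implements plain Sequential Halving modified with the two accelerations named above: a shared query set (all surviving models are scored on the same queries each round, making scores directly comparable) and an aggressive elimination schedule (keeping fewer than the usual top half each round). The function names, the keep fraction of 0.25, and the per-round query budget are illustrative assumptions, not the paper's actual settings.

```python
import math

def discover_top_model(candidates, evaluate, query_pool,
                       queries_per_round=10, keep_frac=0.25):
    """Return the estimated best model among `candidates`.

    candidates        : list of model identifiers (the bandit "arms")
    evaluate          : callable(model, queries) -> mean score in [0, 1];
                        a placeholder for actually running the model
    query_pool        : held-out evaluation queries, assumed large enough
                        to cover every round without reuse
    queries_per_round : shared queries drawn per elimination round
    keep_frac         : fraction of survivors kept each round; 0.5 is
                        standard Sequential Halving, smaller values give
                        the "aggressive elimination" variant
    """
    survivors = list(candidates)
    round_scores = {m: [] for m in candidates}
    offset = 0
    while len(survivors) > 1:
        # Shared query set: every surviving model answers the SAME batch,
        # so per-round scores are directly comparable across models.
        batch = query_pool[offset:offset + queries_per_round]
        offset += queries_per_round
        for model in survivors:
            round_scores[model].append(evaluate(model, batch))
        # Aggressive elimination: rank by mean score so far and keep only
        # the top `keep_frac` fraction of the surviving models.
        survivors.sort(
            key=lambda m: sum(round_scores[m]) / len(round_scores[m]),
            reverse=True,
        )
        survivors = survivors[:max(1, math.ceil(keep_frac * len(survivors)))]
    return survivors[0]
```

With keep_frac=0.25, roughly 2,000 candidates are whittled down in about six rounds, so even the final survivor answers on the order of 60 queries; most candidates are eliminated far earlier, which is in the same ballpark as the roughly 50 queries per candidate reported in the abstract.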
Community
An investigation of the available fine-tunes of popular foundation models. While over 90% of downloads are directed to the official base versions, the paper shows the existence of other, rarely downloaded fine-tunes that significantly outperform them.
arXivLens breakdown of this paper: https://arxivlens.com/PaperView/Details/discovering-hidden-gems-in-model-repositories-1301-90a4567c
- Executive Summary
- Detailed Breakdown
- Practical Applications
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- LLMRouterBench: A Massive Benchmark and Unified Framework for LLM Routing (2026)
- Efficient Evaluation of LLM Performance with Statistical Guarantees (2026)
- TokenSeek: Memory Efficient Fine Tuning via Instance-Aware Token Ditching (2026)
- TRINITY: An Evolved LLM Coordinator (2025)
- Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers (2026)
- Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs (2025)
- Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend