Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights
Abstract
Pretraining yields a parameter distribution whose neighborhood becomes increasingly dense with task-specific experts as model scale grows, enabling a simple sample-and-ensemble method for post-training adaptation.
Pretraining produces a learned parameter vector that is typically treated as a starting point for further iterative adaptation. In this work, we instead view the outcome of pretraining as a distribution over parameter vectors, whose support already contains task-specific experts. We show that in small models such expert solutions occupy a negligible fraction of the volume of this distribution, making their discovery reliant on structured optimization methods such as gradient descent. In contrast, in large, well-pretrained models the density of task-experts increases dramatically, so that diverse, task-improving specialists populate a substantial fraction of the neighborhood around the pretrained weights. Motivated by this perspective, we explore a simple, fully parallel post-training method that samples N parameter perturbations at random, selects the top K, and ensembles predictions via majority vote. Despite its simplicity, this approach is competitive with standard post-training methods such as PPO, GRPO, and ES for contemporary large-scale models.
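The post-training procedure described in the abstract (sample N random perturbations of the pretrained weights, keep the top K by a task score, and ensemble their predictions via majority vote) can be written in a few lines. The sketch below is only an illustration under assumed details: Gaussian perturbations of scale sigma, a scalar validation score used for selection, and integer-label predictions for voting. The callables `evaluate` and `predict` and the hyperparameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def sample_select_vote(theta, evaluate, predict, n_samples=64, top_k=8,
                       sigma=0.01, rng=None):
    """Illustrative sketch: sample N perturbations, keep top K, majority vote.

    theta:    pretrained parameter vector (flat np.ndarray)
    evaluate: callable(params) -> scalar task score on a validation set (assumed)
    predict:  callable(params) -> integer class predictions on test inputs (assumed)
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1. Sample N random (Gaussian, scale sigma) perturbations of the pretrained weights.
    candidates = [theta + sigma * rng.standard_normal(theta.shape)
                  for _ in range(n_samples)]

    # 2. Score every candidate independently (fully parallelizable) and keep the top K.
    scores = np.array([evaluate(p) for p in candidates])
    experts = [candidates[i] for i in np.argsort(scores)[-top_k:]]

    # 3. Ensemble the K experts by per-example majority vote over their predictions.
    votes = np.stack([predict(p) for p in experts])        # shape (K, num_examples)
    ensemble = np.array([np.bincount(col).argmax() for col in votes.T])
    return ensemble
```

Because each candidate is sampled and scored independently, steps 1 and 2 require no gradient computation and can run in parallel across workers, which is the property the abstract highlights relative to iterative methods such as PPO, GRPO, or ES.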
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- The Multiple Ticket Hypothesis: Random Sparse Subnetworks Suffice for RLVR (2026)
- Making Foundation Models Probabilistic via Singular Value Ensembles (2026)
- Learning to Reason in 13 Parameters (2026)
- Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning (2026)
- Weight Decay Improves Language Model Plasticity (2026)
- Spectral Surgery: Training-Free Refinement of LoRA via Gradient-Guided Singular Value Reweighting (2026)
- Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities (2026)