IMU-1: Sample-Efficient Pre-training of Small Language Models
Paper: [arXiv:2602.02522](https://arxiv.org/abs/2602.02522)
Pre-tokenized training data for Stage 2 (decay phase) of IMU-1, a sample-efficient 430M parameter language model.
| Property | Value |
|---|---|
| Tokens | ~28B |
| Format | Memory-mapped NumPy (.npy) |
| Tokenizer | SmolLM2-360M |
| Vocab size | 49,152 |
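The shards are plain memory-mapped NumPy arrays of token IDs, so they can be read lazily without loading a whole file into RAM. A minimal sketch (the `shard_demo.npy` filename and the `uint16` dtype are illustrative assumptions; with the real dataset, point `path` at a downloaded `.npy` file — `uint16` is a natural fit since the 49,152-entry vocab fits in 16 bits):

```python
import numpy as np

CONTEXT_LEN = 896  # Stage 2 context length from the training config

# Illustrative stand-in shard; replace `path` with a real downloaded .npy file.
path = "shard_demo.npy"
rng = np.random.default_rng(0)
np.save(path, rng.integers(0, 49_152, size=10_000, dtype=np.uint16))

# Memory-map the shard so only the slices actually read are paged in.
tokens = np.load(path, mmap_mode="r")
print(tokens.dtype, tokens.shape)  # → uint16 (10000,)

# Pull one training window without materializing the whole file.
window = np.asarray(tokens[:CONTEXT_LEN])
assert window.shape == (CONTEXT_LEN,)
```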
Stage 2 uses tighter quality filters than Stage 1.

Download the corpus with:

```shell
huggingface-cli download thepowerfuldeez/1226_imu1_base_decay_corpus --repo-type=dataset
```
```shell
# Clone training framework
git clone https://github.com/thepowerfuldeez/sample_efficient_gpt
cd sample_efficient_gpt

# Install dependencies
export UV_TORCH_BACKEND=auto
uv pip install setuptools uv_build maturin
uv sync

# Train Stage 2 (requires Stage 1 checkpoint)
uv run torchrun --nproc_per_node 8 train.py \
    --config configs/imu1_base.yaml \
    --config-key decay
```
| Parameter | Value |
|---|---|
| Schedule | WSD (decay phase) |
| Iterations | 100,000 (200k total) |
| Batch size | 312 |
| Context length | 896 |
| Muon LR | 1.15e-2 → 25% min |
| Decay start | 100k steps |
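As a sanity check, the hyperparameters above multiply out to the ~28B token figure in the dataset table:

```python
batch_size = 312      # sequences per step
context_len = 896     # tokens per sequence
decay_steps = 100_000 # Stage 2 (decay phase) iterations

tokens_per_step = batch_size * context_len   # 279,552
stage2_tokens = tokens_per_step * decay_steps
print(f"{stage2_tokens / 1e9:.1f}B tokens")  # → 28.0B tokens
```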
```bibtex
@misc{grigorev2026imu1sampleefficientpretrainingsmall,
  title={IMU-1: Sample-Efficient Pre-training of Small Language Models},
  author={George Grigorev},
  year={2026},
  eprint={2602.02522},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.02522},
}
```