REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression
40% Expert-Pruned MiniMax-M2.1 optimized for code generation and function calling.
| Property | Value |
|---|---|
| Base Model | MiniMax-M2.1 |
| Compression | 40% experts removed |
| Parameters | ~264B |
| Experts per Layer | ~96 |
| Precision | BF16 |
| Disk Size | ~500GB |
REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.
| Dataset | Samples | Purpose | Why It Matters |
|---|---|---|---|
| evol-codealpaca-v1 | 700 | Code generation | 51% of mix — Code tasks activate specific expert pathways; pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | Function/tool calling | 24% of mix — Tool use requires structured JSON output; experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | Agentic multi-turn | 24% of mix — Real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
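As a quick arithmetic check, the stated mix percentages follow directly from the sample counts in the table (the rounding to whole percentages is ours):

```python
# Sample counts from the calibration table above.
counts = {
    "evol-codealpaca-v1": 700,         # code generation
    "xlam-function-calling-60k": 330,  # function/tool calling
    "SWE-smith-trajectories": 330,     # agentic multi-turn
}
total = sum(counts.values())  # 1360 samples
shares = {name: round(100 * n / total) for name, n in counts.items()}
print(shares)  # {'evol-codealpaca-v1': 51, 'xlam-function-calling-60k': 24, 'SWE-smith-trajectories': 24}
```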
REAP Algorithm:
1. Forward pass calibration samples through model
2. Record which experts activate and their magnitudes
3. Compute saliency = router_weight × activation_norm
4. Prune lowest-saliency experts
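The four steps can be sketched as follows. This is a minimal illustration with random stand-in tensors, not the reference REAP implementation; all names, shapes, and the per-layer expert count are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, num_experts, d_model = 512, 10, 64

# Steps 1-2: stand-ins for what a calibration forward pass would record:
# the router gate value for each (token, expert) pair and each expert's output.
router_weights = rng.random((num_tokens, num_experts))            # gate value per token/expert
expert_outputs = rng.standard_normal((num_tokens, num_experts, d_model))

# Step 3: saliency = mean over tokens of router_weight * ||expert output||
activation_norms = np.linalg.norm(expert_outputs, axis=-1)        # (tokens, experts)
saliency = (router_weights * activation_norms).mean(axis=0)       # (experts,)

# Step 4: keep the highest-saliency 60% of experts (40% pruned, as in this model)
n_keep = round(num_experts * 0.6)
order = np.argsort(saliency)
kept, pruned = order[-n_keep:], order[:num_experts - n_keep]
```

Experts whose gates rarely fire on the calibration mix (or fire with small outputs) end up with low saliency and are removed, which is exactly why the mix must cover the target tasks.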
Key Insight: Experts are TASK-SPECIFIC
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context
If calibration lacks code → code-specialized experts appear "unused" → get pruned → model loses coding ability
Cerebras used the same three datasets in their GLM-4.6 REAP experiments; we followed that recipe exactly for reproducibility.
Our calibration mix: 0xSero/glm47-reap-calibration-v2
| Model | Compression | Experts | Size |
|---|---|---|---|
| MiniMax-M2.1-REAP-25 | 25% | ~120 | ~620GB |
| MiniMax-M2.1-REAP-30 | 30% | ~112 | ~580GB |
| MiniMax-M2.1-REAP-40 | 40% | ~96 | ~500GB |
| MiniMax-M2.1-REAP-50 | 50% | ~80 | ~420GB |
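The expert counts above are consistent with roughly 160 routed experts per layer in the uncompressed model; this base count is an inference from the table, not an official specification:

```python
# Assumed per-layer routed expert count implied by the table (not an official spec).
base_experts = 160
for ratio in (0.25, 0.30, 0.40, 0.50):
    remaining = round(base_experts * (1 - ratio))
    print(f"{round(ratio * 100):>2}% pruned -> ~{remaining} experts per layer")
```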
```shell
vllm serve 0xSero/MiniMax-M2.1-REAP-40 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --dtype bfloat16
```
```bash
#!/bin/bash
# MiniMax REAP - same calibration as GLM-4.7
export MODEL_DIR=/path/to/MiniMax-M2.1
export REAP_DATASET=0xSero/glm47-reap-calibration-mix
export REAP_SAMPLES_PER_CATEGORY=999
export REAP_MODEL_MAX_LENGTH=2048

python src/reap/prune.py \
    --model-name "$MODEL_DIR" \
    --dataset-name "$REAP_DATASET" \
    --compression-ratio 0.40 \
    --prune-method reap \
    --seed 42 \
    --distance_measure cosine
```
License: Apache 2.0
```bibtex
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
```
Base model: MiniMaxAI/MiniMax-M2.1