𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression
📄 Paper · 💻 Code · 📝 Blog

MiniMax-M2.1-REAP-40

✨ Highlights

40% Expert-Pruned MiniMax-M2.1 optimized for code generation and function calling.

  • 40% Expert Pruning: ~96 experts remaining per layer
  • Calibrated for Code & Tools: Same calibration mix as GLM-4.7 REAP models
  • One-Shot Compression: No fine-tuning required

🙏 Acknowledgments


📋 Model Specifications

| Property | Value |
|---|---|
| Base Model | MiniMax-M2.1 |
| Compression | 40% of experts removed |
| Parameters | ~264B |
| Experts per Layer | ~96 |
| Precision | BF16 |
| Disk Size | ~500GB |

🔬 Calibration Dataset: Deep Dive

REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.

Why These 3 Datasets?

| Dataset | Samples | Purpose | Why It Matters |
|---|---|---|---|
| evol-codealpaca-v1 | 700 | Code generation | 51% of mix: code tasks activate specific expert pathways; pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | Function/tool calling | 24% of mix: tool use requires structured JSON output, so experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | Agentic multi-turn | 24% of mix: real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
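For orientation, here is a minimal sketch of how a 700/330/330 mix like this could be assembled with the `datasets` library. The hub IDs and the `to_text` formatter below are assumptions for illustration, not the exact recipe; the published mix itself is `0xSero/glm47-reap-calibration-v2`.

```python
# Sketch only: assemble a 700/330/330 calibration mix with the `datasets` library.
# The hub IDs and the flattening logic are assumptions; use whichever mirrors and
# per-dataset formatting your calibration pipeline expects.
from datasets import Dataset, concatenate_datasets, load_dataset

def to_text(example: dict) -> dict:
    """Flatten a sample to a single text field (real formatting is dataset-specific)."""
    return {"text": "\n".join(str(v) for v in example.values())}

SPEC = [
    ("theblackcat102/evol-codealpaca-v1", 700),     # code generation, ~51% of mix
    ("Salesforce/xlam-function-calling-60k", 330),  # function/tool calling, ~24%
    ("SWE-bench/SWE-smith-trajectories", 330),      # agentic multi-turn, ~24%
]

parts = []
for hub_id, n in SPEC:
    ds = load_dataset(hub_id, split="train").shuffle(seed=42).select(range(n))
    parts.append(ds.map(to_text, remove_columns=ds.column_names))

calibration_mix: Dataset = concatenate_datasets(parts)  # 1,360 samples total
```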

The Science Behind Dataset Selection

```text
REAP Algorithm:
1. Forward pass calibration samples through the model
2. Record which experts activate and their magnitudes
3. Compute saliency = router_weight × activation_norm
4. Prune the lowest-saliency experts

Key Insight: Experts are TASK-SPECIFIC
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context

If calibration lacks code → code-specialized experts appear "unused" → get pruned → model loses coding ability
```
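As a concrete illustration of steps 2-4 above, the sketch below scores and ranks experts the way the saliency rule describes. It is not the official REAP implementation; it assumes per-token router gate values and expert output norms have already been captured (for example with forward hooks) during calibration.

```python
# Minimal sketch of REAP-style saliency scoring and expert selection.
# Assumes calibration statistics were recorded per token and per expert.
import torch

def reap_saliency(gate_values: torch.Tensor, expert_out_norms: torch.Tensor,
                  routed_mask: torch.Tensor) -> torch.Tensor:
    """Score each expert as the mean of (router gate weight x expert output norm)
    over the calibration tokens actually routed to it.

    gate_values:      [num_tokens, num_experts] router weights
    expert_out_norms: [num_tokens, num_experts] L2 norm of each expert's output
    routed_mask:      [num_tokens, num_experts] bool, True where a token hit the expert
    """
    weighted = gate_values * expert_out_norms * routed_mask
    counts = routed_mask.sum(dim=0).clamp(min=1)   # avoid div-by-zero for unused experts
    return weighted.sum(dim=0) / counts            # [num_experts]

def experts_to_prune(saliency: torch.Tensor, compression_ratio: float) -> torch.Tensor:
    """Return the indices of the lowest-saliency experts for a given compression ratio."""
    num_prune = int(round(saliency.numel() * compression_ratio))
    return torch.argsort(saliency)[:num_prune]
```

Under this rule, experts that never fire on the calibration mix get near-zero saliency, which is exactly why a code-free calibration set would prune away the code-specialized experts.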

Cerebras' Original Mix (from paper)

Cerebras used the same 3 datasets in their GLM-4.6 REAP experiments:

  • evol-codealpaca-v1 for code generation
  • xlam-function-calling-60k for tool calling
  • SWE-smith-trajectories for agentic tasks

We followed this exact recipe for reproducibility.

Combined Dataset

Our calibration mix: 0xSero/glm47-reap-calibration-v2


📦 Related Models

| Model | Compression | Experts per Layer | Size |
|---|---|---|---|
| MiniMax-M2.1-REAP-25 | 25% | ~120 | ~620GB |
| MiniMax-M2.1-REAP-30 | 30% | ~112 | ~580GB |
| MiniMax-M2.1-REAP-40 | 40% | ~96 | ~500GB |
| MiniMax-M2.1-REAP-50 | 50% | ~80 | ~420GB |

🚀 Deployment

vLLM

```bash
vllm serve 0xSero/MiniMax-M2.1-REAP-40 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --dtype bfloat16
```
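Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal client-side sketch with the `openai` package follows; the endpoint and API key are the local-server defaults and may differ in your deployment.

```python
# Sketch: query the locally served model through vLLM's OpenAI-compatible endpoint.
# base_url and api_key are vLLM's local defaults; adjust for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="0xSero/MiniMax-M2.1-REAP-40",
    messages=[{"role": "user", "content": "Write a Python function that parses a CSV row."}],
    max_tokens=256,
    temperature=0.2,
)
print(resp.choices[0].message.content)
```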

🧩 Reproduction

REAP Pruning

```bash
#!/bin/bash
# MiniMax REAP - same calibration as GLM-4.7

export MODEL_DIR=/path/to/MiniMax-M2.1
export REAP_DATASET=0xSero/glm47-reap-calibration-mix
export REAP_SAMPLES_PER_CATEGORY=999
export REAP_MODEL_MAX_LENGTH=2048

python src/reap/prune.py \
    --model-name $MODEL_DIR \
    --dataset-name $REAP_DATASET \
    --compression-ratio 0.40 \
    --prune-method reap \
    --seed 42 \
    --distance_measure cosine
```
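After pruning, a quick sanity check is to confirm the expert count in the exported config. The snippet below is a sketch; the exact config field name depends on the MiniMax architecture and may differ from the keys guessed here.

```python
# Sanity-check sketch: verify the pruned expert count from the exported config.json.
# The config key names below are assumptions; adjust to whatever key the
# MiniMax-M2.1 architecture actually uses for experts per layer.
import json
import pathlib

cfg = json.loads(pathlib.Path("/path/to/MiniMax-M2.1-REAP-40/config.json").read_text())
experts = cfg.get("num_local_experts") or cfg.get("num_experts") or cfg.get("n_routed_experts")
print(f"Experts per layer after pruning: {experts}")  # expect ~96 for the 40% model
```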

⚖️ License

Apache 2.0


🧾 Citation

```bibtex
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
```