
EAGLE3 For Qwen/Qwen3-Coder-30B-A3B-Instruct

About

SpecBundle is an open-source initiative, jointly driven by the community and industry, to democratize speculative decoding by providing high-performance speculative decoding draft weights for mainstream open-source models.

This checkpoint was trained by the SpecForge Team and released as part of Phase 1 of the SpecBundle release. We regenerated the responses in the OpenCoder-LLM/opc-sft-stage1 dataset and trained the model on 1.4M data samples for 2 epochs, using the SpecForge framework.

Usage

You can use this checkpoint with the command below.

export SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1

python3 -m sglang.launch_server \
    --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --speculative-algorithm EAGLE3 \
    --speculative-draft-model-path lmsys/SGLang-EAGLE3-Qwen3-Coder-30B-A3B-Instruct-perfect-blend-regenerated \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --tp 4
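The three speculative flags above are related. A hedged sketch of the arithmetic, assuming the usual EAGLE3 chain-drafting setup when top-k is 1 (this relationship is an inference, not stated on this card):

```python
# With --speculative-eagle-topk 1 the draft model proposes a single chain,
# so each verification pass scores the drafted tokens plus the root token.
num_steps = 3  # --speculative-num-steps: draft forward passes per round
topk = 1       # --speculative-eagle-topk: candidates kept per draft step

num_draft_tokens = num_steps * topk + 1
print(num_draft_tokens)  # 4, matching --speculative-num-draft-tokens 4
```

With larger top-k values the draft becomes a tree rather than a chain, and the token budget is set independently via `--speculative-num-draft-tokens`.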

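Once launched, the server exposes an OpenAI-compatible HTTP API (on port 30000 by default; this default and the endpoint path are assumptions here, since the launch command above does not set `--port`). A minimal sketch of a chat request body:

```python
import json

# Minimal sketch of an OpenAI-compatible chat request for the server above.
# The host, port, and endpoint path are assumptions; adjust to your deployment.
payload = {
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 256,
}
body = json.dumps(payload).encode("utf-8")
# POST `body` to http://localhost:30000/v1/chat/completions
# with Content-Type: application/json to get a completion.
```

Speculative decoding is transparent to the client: the request and response formats are unchanged, only latency improves.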
Performance

This checkpoint achieves competitive throughput and acceptance length across a range of benchmarks.

[Charts on the model page: Throughput and Acceptance Length across benchmarks]

You can reproduce the performance with the command below:

# clone specforge
git clone https://github.com/sgl-project/SpecForge.git
cd SpecForge/benchmarks

# run benchmarks
python bench_eagle3.py \
        --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
        --speculative-algorithm EAGLE3 \
        --speculative-draft-model-path lmsys/SGLang-EAGLE3-Qwen3-Coder-30B-A3B-Instruct-perfect-blend-regenerated \
        --port 30003 \
        --config-list 1,3,1,4 1,5,1,6 1,5,3,6 1,7,1,8 1,7,4,8 \
        --benchmark-list gsm8k math500 mtbench humaneval livecodebench financeqa gpqa  \
        --dtype bfloat16 \
        --tp 4 \
        --name qwen3-coder-30b-a3b-spec-bundle
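Acceptance length is the average number of tokens the target model accepts per verification pass, and it bounds the achievable speedup. A hedged back-of-the-envelope sketch (this cost model is an illustration, not SpecForge's benchmark methodology):

```python
# Rough intuition: if the target model accepts `accept_len` tokens per
# verification forward pass, speculative decoding emits accept_len tokens
# per target step instead of 1. Drafting is not free, so discount by the
# draft model's relative cost per round.
def ideal_speedup(accept_len: float, draft_cost_ratio: float = 0.0) -> float:
    # draft_cost_ratio: draft time per round relative to one target step
    return accept_len / (1.0 + draft_cost_ratio)

print(round(ideal_speedup(2.5), 2))        # 2.5 with free drafting
print(round(ideal_speedup(2.5, 0.25), 2))  # 2.0 once draft cost is included
```

This is why a small draft model (0.2B params here, against a 30B target) matters: the lower `draft_cost_ratio` is, the more of the acceptance length converts into real throughput.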

Acknowledgement

We sincerely appreciate the collective efforts of the developers in the open-source community and our industrial partners, especially the Ant Group AQ Team, Meituan, Nex-AGI (Qiji Zhifeng), and EigenAI, for their invaluable contributions to the release of SpecBundle Phase 1.

Model size: 0.2B params (Safetensors; tensor types: I64, BF16, BOOL)