Qwen3-Next-80B-A3B-Instruct-FP8
FP8 quantized MoE model with 80B total parameters, 3B active per token
This is an FP8 (E4M3) quantized version of Qwen/Qwen3-Next-80B-A3B-Instruct in the compressed_tensors format, quantized by TevunahAi on enterprise-grade hardware.
Recommended Usage: vLLM
For optimal performance with full FP8 benefits and efficient MoE routing, use vLLM or TensorRT-LLM:
Quick Start with vLLM
pip install vllm
Python API:
from vllm import LLM, SamplingParams
# vLLM auto-detects FP8 from model config
llm = LLM(model="TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8", dtype="auto")
from transformers import AutoTokenizer

# Build a chat prompt with the model's chat template
messages = [{"role": "user", "content": "Explain quantum computing"}]
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8")
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
OpenAI-Compatible API Server:
vllm serve TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8 \
--dtype auto \
--max-model-len 32768
Then use with OpenAI client:
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key
)
response = client.chat.completions.create(
    model="TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=512,
)
print(response.choices[0].message.content)
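The same endpoint also supports streaming. A minimal sketch using the OpenAI client against the local server started above (the dummy key is reused from the previous example):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# Stream tokens as they are generated instead of waiting for the full response
stream = client.chat.completions.create(
    model="TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    temperature=0.7,
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)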
vLLM Benefits
- Weights, activations, and KV cache in FP8 (for the KV cache, see the note below)
- ~40GB VRAM (for an 80B MoE model)
- Native FP8 tensor core acceleration on Ada/Hopper GPUs
- Efficient MoE routing - only 3B parameters active per token
- 80B model capability at roughly 3B model speed
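Note on the FP8 KV cache: depending on your vLLM version, the KV cache is not quantized automatically and must be requested explicitly. A hedged sketch using the kv_cache_dtype engine argument (check the vLLM docs for your release; the value shown is an assumption):
from vllm import LLM

# FP8 weights are auto-detected from the checkpoint; the KV cache dtype is a
# separate engine argument (assumption: your vLLM build accepts "fp8" here)
llm = LLM(
    model="TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8",
    dtype="auto",
    kv_cache_dtype="fp8",
    max_model_len=32768,
)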
Alternative: Transformers (Not Recommended)
This model can be loaded with transformers, but the weights are decompressed from FP8 to BF16 during inference, which requires significantly more VRAM. For large MoE models, vLLM is strongly recommended.
Transformers Example
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Loads FP8 weights but decompresses to BF16 during compute
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8",
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8")
# Generate
messages = [{"role": "user", "content": "Explain quantum computing"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Requirements:
pip install "torch>=2.1.0" "transformers>=4.40.0" accelerate compressed-tensors
System Requirements:
- 80GB+ VRAM (weights decompressed to BF16)
- H100 80GB or multi-GPU setup
- Not practical for most deployments
Warning: vLLM is the recommended deployment method for MoE models.
Quantization Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3-Next-80B-A3B-Instruct |
| Architecture | Mixture of Experts (MoE) |
| Total Parameters | 80B |
| Active per Token | 3B |
| Quantization Method | FP8 E4M3 weight-only |
| Framework | llm-compressor + compressed_tensors |
| Calibration Dataset | open_platypus (512 samples) |
| Storage Size | ~40GB (sharded safetensors) |
| VRAM (vLLM) | ~40GB |
| VRAM (Transformers) | ~80GB+ (decompressed to BF16) |
| Target Hardware | NVIDIA H100, A100 80GB, RTX 6000 Ada |
| Quantization Time | 204 minutes (2.55 min/B) |
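For reference, a minimal sketch of how an FP8 compressed_tensors checkpoint can be produced with llm-compressor. This is illustrative only: the exact recipe, ignore list, and calibration setup used for this model are not reproduced here, and a dynamic-activation scheme is shown for brevity instead of the calibrated one. Import paths vary across llm-compressor versions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.transformers import oneshot  # newer releases: from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# FP8 (E4M3) quantization of Linear layers; the lm_head is commonly left unquantized
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

# Save sharded safetensors in compressed_tensors format
save_dir = "Qwen3-Next-80B-A3B-Instruct-FP8"
model.save_pretrained(save_dir, save_compressed=True)
tokenizer.save_pretrained(save_dir)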
Quantization Infrastructure
Professional hardware ensures consistent, high-quality quantization:
- CPUs: Dual Intel Xeon Max 9480 (112 cores / 224 threads, 128GB HBM2e)
- GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
- Memory: 256GB DDR5 + 128GB HBM2e = 384GB total system memory
- Software Stack: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor
Why FP8 for MoE Models?
With vLLM/TensorRT-LLM:
- 50% memory reduction vs BF16 (~80GB → ~40GB)
- Single high-end GPU deployment possible
- Faster inference via native FP8 tensor cores
- Efficient MoE routing - optimal for sparse activation
- 80B capability at 3B speed - best of both worlds
The MoE Advantage:
- Total Parameters: 80B (full model capability)
- Active Parameters: 3B per token (fast inference)
- Memory: ~40GB with FP8 (accessible on prosumer GPUs)
- Speed: Similar to dense 3B models
- Quality: Comparable to dense 80B models
FP8 + MoE = flagship model performance on workstation hardware.
Model Files
This model is sharded into multiple safetensors files (all required for inference). The compressed format enables efficient storage and faster downloads.
Qwen3-Next MoE Architecture
Qwen3-Next uses an advanced Mixture of Experts (MoE) architecture:
How it works (a toy routing sketch follows this list):
- 80B total parameters split across expert networks
- Router network selects which experts to activate
- 3B active parameters per token (sparse activation)
- Result: 80B model knowledge with 3B model speed
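A toy top-k router in PyTorch, purely to illustrate sparse activation; the expert count, top-k value, and renormalization below are made-up example values, not the actual Qwen3-Next configuration:
import torch
import torch.nn.functional as F

def route_tokens(hidden_states, router_weight, top_k=4):
    # hidden_states: [num_tokens, hidden_dim]; router_weight: [num_experts, hidden_dim]
    logits = hidden_states @ router_weight.T              # [num_tokens, num_experts]
    probs = F.softmax(logits, dim=-1)
    topk_probs, topk_ids = probs.topk(top_k, dim=-1)      # experts chosen per token
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)  # renormalize weights
    return topk_ids, topk_probs

# Example: 8 tokens routed over 64 experts, 4 experts active per token
ids, weights = route_tokens(torch.randn(8, 1024), torch.randn(64, 1024))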
Benefits:
- Massive parameter count without massive compute
- Specialist experts for different types of knowledge
- Better quality-per-parameter ratio than dense models
- More accessible than equivalent dense models
Quality Assurance
- Professional calibration: 512 diverse samples
- Validation: Tested on various benchmarks
- Format: Standard compressed_tensors for broad compatibility
- MoE optimization: Validated expert routing efficiency
Original Model
This quantization is based on Qwen/Qwen3-Next-80B-A3B-Instruct by the Qwen team.
For comprehensive information about:
- Model architecture and training methodology
- MoE routing mechanisms
- Evaluation benchmarks and results
- Supported languages and tasks
- Ethical considerations
Please refer to the original model card.
Hardware Requirements
Minimum (vLLM):
- GPU: NVIDIA A100 40GB or RTX 6000 Ada (48GB)
- VRAM: 40GB minimum
- CUDA: 11.8 or newer
Recommended (vLLM):
- GPU: NVIDIA H100 (80GB) / A100 80GB / RTX 6000 Ada (48GB)
- VRAM: 48GB+
- CUDA: 12.0+
Transformers:
- GPU: H100 80GB or multi-GPU setup
- VRAM: 80GB+ total
- Not recommended - use vLLM instead (for multi-GPU serving with vLLM, see the tensor-parallel sketch below)
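If no single GPU has enough memory, vLLM can shard the model across GPUs with tensor parallelism; a minimal sketch for a two-GPU setup (the GPU count is an example, adjust to your hardware):
from vllm import LLM

# Shards the FP8 weights across 2 GPUs; set tensor_parallel_size to your GPU count
llm = LLM(
    model="TevunahAi/Qwen3-Next-80B-A3B-Instruct-FP8",
    dtype="auto",
    tensor_parallel_size=2,
    max_model_len=32768,
)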
Additional Resources
- vLLM Documentation: docs.vllm.ai
- TensorRT-LLM: github.com/NVIDIA/TensorRT-LLM
- TevunahAi Models: huggingface.co/TevunahAi
- llm-compressor: github.com/vllm-project/llm-compressor
- Qwen Documentation: qwenlm.github.io
License
This model inherits the Apache 2.0 License from the original Qwen3-Next model.
Acknowledgments
- Original Model: Qwen team at Alibaba Cloud
- Quantization Framework: Neural Magic's llm-compressor
- Quantized by: TevunahAi
Citation
If you use this model, please cite the original Qwen work:
@misc{qwen3next2025,
  title={Qwen3-Next: Next Generation of Qwen Models},
  author={Qwen Team},
  year={2025},
  url={https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct}
}
Professional AI Model Quantization by TevunahAi
Making flagship MoE models accessible through enterprise-grade quantization