# Qwen3.5-27B-NVFP4
This is a quantized version of Qwen/Qwen3.5-27B. The model accepts text and images as input and generates text as output.

Weights and activations were quantized to FP4 (NVFP4 scheme) using llm-compressor with 512 calibration samples from neuralmagic/calibration. Quantization reduces the model size from 51.8 GB to 18.4 GB (a ~2.8x reduction) while maintaining 99.3% average accuracy recovery.
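The quoted compression ratio follows directly from the two checkpoint sizes; a quick sanity check:

```python
# Check the claimed size reduction:
# 51.8 GB (original checkpoint) -> 18.4 GB (NVFP4 checkpoint).
orig_gb = 51.8
quant_gb = 18.4

ratio = orig_gb / quant_gb
saved_pct = (1 - quant_gb / orig_gb) * 100

print(f"~{ratio:.1f}x smaller ({saved_pct:.0f}% size reduction)")
# → ~2.8x smaller (64% size reduction)
```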
## Inference
As of 2/27/2026, this model is supported in vLLM nightly. To serve the model:

```shell
vllm serve Kbenkhaled/Qwen3.5-27B-NVFP4 \
  --reasoning-parser qwen3 \
  --enable-prefix-caching
```
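Once the server is running, it exposes vLLM's OpenAI-compatible API. A minimal client sketch using only the standard library, assuming the default endpoint `http://localhost:8000/v1` (adjust the base URL and prompt as needed):

```python
"""Query the vLLM OpenAI-compatible server started by the command above.

Sketch under assumptions: server at the default http://localhost:8000/v1.
"""
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # vLLM's default OpenAI-compatible endpoint
MODEL = "Kbenkhaled/Qwen3.5-27B-NVFP4"


def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload for the served model."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str) -> str:
    """POST to /chat/completions and return the reply text.

    Requires the `vllm serve` process above to be running.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```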
## Evaluation
Evaluated with lm-evaluation-harness, 0-shot, with thinking mode enabled.
| Benchmark | Qwen3.5-27B | Qwen3.5-27B-NVFP4 (this model) | Recovery |
|---|---|---|---|
| GPQA Diamond | 80.30% | 79.29% | 98.7% |
| IFEval | 95.08% | 93.88% | 98.7% |
| MMLU-Redux | 93.90% | 94.32% | 100.4% |
| Average | 89.76% | 89.16% | 99.3% |
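The Recovery column is simply the quantized score divided by the baseline score. A small script reproducing the per-benchmark values from the table:

```python
# Reproduce the Recovery column: recovery = quantized score / baseline score.
baseline = {"GPQA Diamond": 80.30, "IFEval": 95.08, "MMLU-Redux": 93.90}
quantized = {"GPQA Diamond": 79.29, "IFEval": 93.88, "MMLU-Redux": 94.32}


def recovery(q: float, b: float) -> float:
    """Accuracy recovery in percent, rounded to one decimal place."""
    return round(q / b * 100, 1)


for name in baseline:
    print(f"{name}: {recovery(quantized[name], baseline[name])}%")
```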