🧠 Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2 GGUFs

Quantized version of: BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32


πŸ“¦ Available GGUFs

| Format | Description |
|---|---|
| F16 | Full precision (16-bit): highest quality, largest size ⚖️ |
| Q3_K_XL | 3-bit XL quant (uses the quantization table of the Unsloth model Qwen3-30B-A3B-Thinking-2507): smallest size, fastest inference ⚡ |
| Q4_K_XL | 4-bit XL quant (uses the quantization table of the Unsloth model Qwen3-30B-A3B-Thinking-2507): small size, fast inference ⚡ |
| Q5_K_XL | 5-bit XL quant (uses the quantization table of the Unsloth model Qwen3-30B-A3B-Thinking-2507): medium size, fast inference ⚡ |
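As a rough rule of thumb (an approximation, not stated in this card), a GGUF's on-disk size is close to parameter count × effective bits-per-weight ÷ 8, plus some overhead for metadata and higher-precision tensors. A minimal sketch, with effective bit-widths that are only ballpark figures (K-quants mix precisions per tensor):

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a quantized model in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 31B parameters at a few representative effective bit-widths
# (bpw values are approximate assumptions, not measured file sizes).
for name, bpw in [("Q3_K_XL", 3.8), ("Q4_K_XL", 4.8), ("Q5_K_XL", 5.7), ("F16", 16.0)]:
    print(f"{name}: ~{gguf_size_gb(31, bpw):.1f} GB")
```

This is only a sizing guide for picking a quant that fits your RAM/VRAM; actual GGUF files differ by a few GB depending on which tensors stay at higher precision.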

πŸš€ Usage

Example with llama.cpp (newer builds ship the binary as `llama-cli` rather than `main`):

```
./llama-cli -m ./gguf-file-name.gguf -p "Hello world!"
```
Model size: 31B params
Architecture: qwen3moe
