These are MXFP4_MOE GGUF quantizations of the model Qwen3-VL-30B-A3B-Instruct.

Original model: https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct

This is the 1M context-length variant from unsloth, quantized with their importance matrix (imatrix) applied.

To use them, download the latest llama.cpp; a recent build is needed for the qwen3vlmoe architecture.
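
As a minimal sketch of how to fetch and run the files: the GGUF and mmproj file names below are placeholders I am assuming for illustration, so check this repository's file list for the actual names before running.

```
# Sketch only: file names are placeholders; check the repo's file list for the real ones.

# 1. Download the quantized model (and the vision projector, if it ships as a separate mmproj GGUF)
huggingface-cli download noctrex/Qwen3-VL-30B-A3B-Instruct-1M-MXFP4_MOE-GGUF --local-dir .

# 2. Run with llama.cpp's multimodal CLI:
#    -ngl 99 offloads all layers to the GPU, -c sets the context window
llama-mtmd-cli \
  -m Qwen3-VL-30B-A3B-Instruct-1M-MXFP4_MOE.gguf \
  --mmproj mmproj-Qwen3-VL-30B-A3B-Instruct.gguf \
  -ngl 99 -c 32768 \
  -p "Describe this image." \
  --image ./example.jpg
```

The same -m / --mmproj pair can also be passed to llama-server if you prefer an OpenAI-compatible endpoint instead of the CLI.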

Model details:
- Format: GGUF, MXFP4_MOE (4-bit) quantization
- Parameters: 31B
- Architecture: qwen3vlmoe
