GGUF hybrid layer quantization of Voxtral-Mini-3B-2507 by mistralai

Original model: https://huggingface.co/mistralai/Voxtral-Mini-3B-2507

The hybrid quant employs different quantization levels on a per layer basis to increase flexibility in trading off performance vs file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. The quants are all K quants to increase processing efficiency on old GPUs or CPUs.

The Q6_K_H layer quant is as follows:

   Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
   Q6_K_S : Q6_K
   Q6_K_M : attn_v = q8_0  ffn_d = q8_0
   Q6_K_L : attn_v = q8_0  attn_o = q8_0  ffn_d = q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_L"],[1 ,"Q6_K_M"],[2 ,"Q6_K_S"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
   [6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8 ,"Q5_K_M"],[9 ,"Q5_K_L"],[10,"Q5_K_L"],[11,"Q5_K_L"],
   [12,"Q6_K_S"],[13,"Q5_K_L"],[14,"Q6_K_S"],[15,"Q5_K_L"],[16,"Q6_K_S"],[17,"Q5_K_L"],
   [18,"Q6_K_S"],[19,"Q5_K_L"],[20,"Q6_K_S"],[21,"Q6_K_S"],[22,"Q6_K_S"],[23,"Q6_K_S"],
   [24,"Q6_K_S"],[25,"Q6_K_S"],[26,"Q6_K_S"],[27,"Q6_K_S"],[28,"Q6_K_M"],[29,"Q6_K_L"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"

The quant was optimized for reasoning performance across a curated set of test prompts and then checked for performance on the BBA eval. This model does not perform well on the curated test prompts and will also hallucinate on most knowledge-based prompts.

Comparison:

| Quant  | Size  | PPL | Comment                                   |
|--------|-------|-----|-------------------------------------------|
| Q6_K   | 3.3e9 | 6.9 | -                                         |
| Q6_K_H | 3.2e9 | 6.9 | Hybrid quant with Q6_K embed, Q6_K output |

Usage:

This is an audio-capable model. It can be used together with its multimedia projector layers to process audio and text inputs and generate text outputs. The mmproj file is made available in this repository. To test audio mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
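
As a quick start, a sketch of an audio plus text run with llama-mtmd-cli (the file names and prompt are illustrative; consult the mtmd README above for the flags supported by your build):

    # illustrative run: pass the quant with -m, the projector with --mmproj,
    # and an audio file with --audio alongside a text prompt
    llama-mtmd-cli -m Voxtral-Mini-3B-2507.Q6_K_H.gguf \
        --mmproj Voxtral-Mini-3B-2507.mmproj.gguf \
        --audio sample.wav \
        -p "Summarize what is said in this audio clip."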

The unique feature this audio model offers is a built-in transcribe mode, which instructs the model to simply transcribe a given audio stream with no other prompting.

To trigger the transcribe mode, the text "lang:en[TRANSCRIBE]" is concatenated to the assistant prompt tag "[/INST]":

# "<s>[INST][BEGIN_AUDIO]" + "[AUDIO]" * num_expected_frames + "[/INST]lang:en[TRANSCRIBE]"

This prompt can be achieved by injecting "lang:en[TRANSCRIBE]" at the beginning of the assistant response, or with a prompt template dedicated to transcription if the inference platform can be configured with one. Note that [TRANSCRIBE] is a special token in the model vocab and must be tokenized as such to work correctly. For other target languages, change en to the appropriate language code.
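
As an illustration, the raw transcribe prompt from the template above could be assembled as follows (the frame count is hypothetical and depends on the audio length; [BEGIN_AUDIO] and [TRANSCRIBE] must ultimately be tokenized as special tokens, not plain text):

    # illustrative assembly of the raw transcribe prompt
    LANG_CODE=en                   # change for other target languages
    NUM_FRAMES=375                 # hypothetical: one [AUDIO] per expected frame
    AUDIO_TOKENS=$(printf '[AUDIO]%.0s' $(seq 1 $NUM_FRAMES))
    PROMPT="<s>[INST][BEGIN_AUDIO]${AUDIO_TOKENS}[/INST]lang:${LANG_CODE}[TRANSCRIBE]"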

Note that mtmd in llama.cpp currently does not add the [BEGIN_AUDIO] special token for any Voxtral audio prompt, so the file mtmd.cpp must currently be manually patched as described in https://github.com/ggml-org/llama.cpp/issues/17868 :

--- mtmd.cpp	2025-12-08 13:13:44.202285955 -0500
+++ mtmd.cpp.new	2025-12-08 13:13:29.850285270 -0500
@@ -330,10 +330,10 @@
             aud_beg = "<|audio_bos|>";
             aud_end = "<|audio_eos|>";
 
-        } else if (proj == PROJECTOR_TYPE_ULTRAVOX) {
+	} else if ((proj == PROJECTOR_TYPE_ULTRAVOX) ||
+		   (proj == PROJECTOR_TYPE_VOXTRAL)) {
             // [BEGIN_AUDIO] ... (embeddings) ...
             aud_beg = "[BEGIN_AUDIO]";
-
         }
     }
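
A sketch of applying it, assuming the diff above is saved as mtmd-voxtral.patch in the llama.cpp source root (build target name per a standard CMake build):

    # apply the [BEGIN_AUDIO] patch and rebuild the mtmd CLI
    patch tools/mtmd/mtmd.cpp < mtmd-voxtral.patch
    cmake --build build --target llama-mtmd-cli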

Without the [BEGIN_AUDIO] tag, the model's audio processing performance was found to be quite erratic.

Benchmarks:

A full set of audio benchmarks for the model is given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

| Link                             | Type   | Size    | Notes                |
|----------------------------------|--------|---------|----------------------|
| Voxtral-Mini-3B-2507.Q6_K_H.gguf | Q6_K_H | 3.2e9 B | ~Q6_K size           |
| Voxtral-Mini-3B-2507.mmproj.gguf | F16    | 1.3e9 B | multimedia projector |

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
