Mixed-precision GGUF layer quantization of DeepSeek-R1-Distill-Qwen-14B by deepseek-ai
Original model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This particular quant achieves a ~8.6G GGUF with significantly improved performance compared to an ~8.2G IQ4_XS GGUF. The quants employed are all K-quants, to avoid slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants (including the custom types defined below) are as follows:
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0, ffn_d = q8_0
LAYER_TYPES='[
[0 ,"Q4_K_L"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q3_K_L"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q4_K_S"],[30,"Q4_K_S"],[31,"Q4_K_S"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_M"],[36,"Q4_K_M"],[37,"Q4_K_M"],[38,"Q4_K_M"],[39,"Q4_K_M"],
[40,"Q4_K_M"],[41,"Q4_K_L"],[42,"Q5_K_S"],[43,"Q5_K_M"],[44,"Q5_K_L"],[45,"Q6_K_M"],[46,"Q6_K_M"],[47,"Q6_K_M"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
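As a sanity check, the per-layer map above can be parsed and summarized with a short script; the bracketed string is valid JSON once the surrounding shell quotes are dropped:

```python
import json
from collections import Counter

# Per-layer quant map copied verbatim from LAYER_TYPES above.
layer_types = json.loads('''[
[0 ,"Q4_K_L"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q3_K_L"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q4_K_S"],[30,"Q4_K_S"],[31,"Q4_K_S"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_M"],[36,"Q4_K_M"],[37,"Q4_K_M"],[38,"Q4_K_M"],[39,"Q4_K_M"],
[40,"Q4_K_M"],[41,"Q4_K_L"],[42,"Q5_K_S"],[43,"Q5_K_M"],[44,"Q5_K_L"],[45,"Q6_K_M"],[46,"Q6_K_M"],[47,"Q6_K_M"]
]''')

# One entry per transformer layer, 0..47 (48 layers).
counts = Counter(q for _, q in layer_types)
print(len(layer_types), dict(counts))
```

The distribution shows the pattern: cheap Q3_K variants through the middle layers, climbing to Q5/Q6 variants for the last few layers.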
The layer quants were tuned against a small set of curated test prompts, targeting both strong performance and stability under greedy sampling (minimizing infinite generations across the prompt test set).
Comparison:
| Quant | Size/e9 B | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 8.2 | 8.4 | IQ4_XS with default embedding and output |
| Q4_K_H | 8.6 | 8.6 | Hybrid quant with Q4_K embedding, Q6_K output |
Usage:
This model is an RL reasoning model, created by distilling Deepseek R1 onto the Qwen 2.5 14B base. The model's context length is mostly undocumented: its config shows 128k, but it almost certainly was not natively trained at 128k. For quant layer optimization, context was configured at 32k, the base context of the Qwen2.5 series. Performance was found to be good at 32k, which is the recommended setting:
--rope-scaling yarn --rope-scale 1.00000 --yarn-orig-ctx 32768
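These flags keep the YaRN scale at 1.0, so the effective context equals the 32k base context. For reference, the scale factor is just the ratio of target to original context; the 64k value below is a hypothetical illustration, not a recommended setting:

```python
# YaRN rope scaling: effective context = rope_scale * yarn_orig_ctx.
yarn_orig_ctx = 32768  # base context of the Qwen2.5 series

def rope_scale_for(target_ctx):
    """Scale factor needed to stretch the base context to target_ctx."""
    return target_ctx / yarn_orig_ctx

print(rope_scale_for(32768))  # 1.0 -> the recommended setting (no stretching)
print(rope_scale_for(65536))  # 2.0 -> hypothetical 64k extension, untested here
```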
The model can be speculatively decoded using Qwen3 0.6B as the draft model. The 12G VRAM of a 4070 gives about 9k tokens of KV memory with the speculator, context, and weights all loaded in VRAM. This can be increased to around 12k by using Q8_0 KV quantization if desired. KV breakdown for a 12G VRAM 4070 with weights and KV in VRAM:
| Draft | KV type | KV tokens |
|---|---|---|
| Yes | F16 | 9k |
| Yes | Q8_0 | 12k |
| No | F16 | 19k |
| No | Q8_0 | 32k |
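These figures are consistent with back-of-envelope KV arithmetic, assuming the Qwen2.5-14B architecture values (48 layers, 8 KV heads via GQA, head dimension 128 — taken from the base model's published config, not from this card):

```python
# Rough KV-cache footprint per token for Qwen2.5-14B (assumed architecture).
n_layers, n_kv_heads, head_dim = 48, 8, 128

def kv_bytes_per_token(bytes_per_elem):
    # K and V caches, one of each per layer.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

f16 = kv_bytes_per_token(2)  # 196608 bytes/token (~0.19 MiB)
q8 = kv_bytes_per_token(1)   # roughly half (Q8_0 adds a small scale overhead)

# With ~8.6e9 bytes of weights in 12 GiB of VRAM, about 4 GiB remains
# for KV and compute buffers:
free = 12 * 2**30 - 8.6e9
print(int(free / f16))  # ~21.8k tokens at F16 before compute buffers,
                        # in line with the 19k no-draft table entry
```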
Approximate performance on a 4070, with context and weights in VRAM, using a custom downstream greedy speculator with fixed speculative block length ND and dynamic vocab translation:
| Prompt | ND | Gen TPS | Comment |
|---|---|---|---|
| goldcoin | 0 | 47 | non code |
| goldcoin | 2 | 66 | non code |
| goldcoin | 3 | 64 | non code |
| goldcoin | 4 | 66 | non code |
The spec boost is not large due to the difficulty of speculating the target reasoning model. ND=2 is sufficient to extract the majority of the available performance when speculating the model and gives a nice 1.4x boost in generation speed (with the CUDA backend; Vulkan is broken for the 4070 as of 1/12/26, https://github.com/ggml-org/llama.cpp/discussions/10466#discussioncomment-15427662 ).
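The 1.4x figure follows directly from the TPS table above, relative to the ND=0 (no speculation) baseline:

```python
# Generation speedups vs. the ND=0 baseline on the goldcoin prompt.
baseline = 47  # TPS at ND=0
for nd, tps in [(2, 66), (3, 64), (4, 66)]:
    print(f"ND={nd}: {tps / baseline:.2f}x")
```

ND=2 already delivers essentially the full 1.40x; ND=3 and ND=4 add nothing further for this model.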
The model gives the right answer in all spec cases on the goldcoin prompt with straightforward greedy sampling. It went into an infinite generation on one test prompt, but this can be circumvented by either using temperature sampling or adjusting the prompt.
goldcoin:
I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river? Use step-by-step reasoning to solve this problem.
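For reference, the apple arithmetic the model is expected to reproduce (one consistent reading of the puzzle; the river/city riddle is left to the model):

```python
# Step-by-step apple/coin bookkeeping for the goldcoin prompt.
apples = 10
coins = 3                    # found in the river
apples -= 4                  # lose 4 apples
coins += 1                   # gain a gold coin
apples += 3 * 6              # three birds drop 6 apples each
coins += 6 // 3              # win 6 coins, shared equally with 2 teammates
apples += int(coins / 0.5)   # spend all coins at 0.5 coins per apple
print(apples)  # 36
```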
The model was tested with some code prompts and found to work much worse with the thinking block enabled. To disable the thinking block, either inject
<think>\n and \n</think>\n\n
at the start of the assistant generation, or include the think tokens as part of the assistant prompt template. With the think block bypassed, the model is capable of generating working code on simple prompts, and it remains quite capable at solving some reasoning problems.
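A minimal sketch of the injection approach. The model-specific chat-template markers are not reproduced here (they belong to the model's own template); only the empty think block from the card is shown:

```python
# Prefill an empty think block so generation starts after </think>.
# Concatenation of the two injected strings from above:
# "<think>\n" + "\n</think>\n\n"
THINK_BYPASS = "<think>\n\n</think>\n\n"

def build_assistant_prefill(rendered_prompt: str) -> str:
    """Append the empty think block after the rendered assistant-turn start.

    rendered_prompt is assumed to be the chat template rendered up to and
    including the assistant marker (hypothetical input, model-specific).
    """
    return rendered_prompt + THINK_BYPASS

prompt = build_assistant_prefill("...template up to assistant turn...")
print(prompt.endswith("</think>\n\n"))  # True
```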
Benchmarks:
A full set of math benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the file below:
| Link | Type | Size/e9 B | Notes |
|---|---|---|---|
| Deepseek-R1-Distill-Qwen-14B.Q4_K_H.gguf | Q4_K_H | 8.6e9 B | 0.4B bigger than IQ4_XS with much higher performance |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.