---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: zai-org/GLM-4.7
license: mit
base_model_relation: quantized
tags:
- imatrix
- conversational
- ik_llama.cpp
- glm4_moe
language:
- en
- zh
---
## `ik_llama.cpp` imatrix Quantizations of zai-org/GLM-4.7
*NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP, which provides Windows builds for CUDA 12.9. Also check the [Windows builds by Thireus](https://github.com/Thireus/ik_llama.cpp/releases), which have been built against CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
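To kick the tires with a GGUF you already have before downloading these quants, a minimal launch might look something like this (a sketch; the model path is a placeholder and build steps are in the Quick Start below):
```bash
# Placeholder path: any llama.cpp-compatible GGUF should load.
./build/bin/llama-server \
    --model /path/to/your-existing-quant.gguf \
    --ctx-size 32768 \
    --host 127.0.0.1 \
    --port 8080
```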
## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I *really* appreciate the support from [aifoundry.org](https://aifoundry.org), so check out their open-source RISC-V based solutions!
## Quant Collection
Perplexity computed against *wiki.test.raw*.

These first two are just test quants for baseline perplexity comparison:
* `BF16` 667.598 GiB (16.003 BPW)
  - Final estimate: PPL over 565 chunks for n_ctx=512 = 3.9267 +/- 0.02423
* `Q8_0` 354.794 GiB (8.505 BPW)
  - Final estimate: PPL over 565 chunks for n_ctx=512 = 3.9320 +/- 0.02428
*NOTE*: The first split file is much smaller on purpose as it only contains metadata; it's fine!
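The perplexity numbers here and in the sections below come from `llama-perplexity` against *wiki.test.raw* at n_ctx=512. A minimal sketch of that kind of run (paths and thread count are placeholders; my exact invocation may have differed):
```bash
# Placeholder paths; --ctx-size 512 matches the reported n_ctx=512 runs.
./build/bin/llama-perplexity \
    --model /mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-IQ5_K.gguf \
    -f wiki.test.raw \
    --ctx-size 512 \
    --threads 24
```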
## IQ5_K 250.635 GiB (6.008 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 3.9445 +/- 0.02439
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
# NextN MTP Layer [92]
# Leave full q8_0 as supposedly better for MTP
# (doesn't use RAM or VRAM otherwise, so it's fine)
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-IQ5_K.gguf \
IQ5_K \
128
```

</details>
## IQ4_K 209.436 GiB (5.021 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 3.9879 +/- 0.02478
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq5_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_k
# NextN MTP Layer [92]
# Leave full q8_0 as supposedly better for MTP
# (doesn't use RAM or VRAM otherwise, so it's fine)
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-IQ4_K.gguf \
IQ4_K \
128
```

</details>
## big-IQ3_KS 171.890 GiB (4.120 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 4.0410 +/- 0.02501
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-big-IQ3_KS.gguf \
IQ3_KS \
128
```

</details>
## IQ3_KS 155.219 GiB (3.721 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 4.1330 +/- 0.02573
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-IQ3_KS.gguf \
IQ3_KS \
128
```

</details>
## IQ2_KL 129.279 GiB (3.099 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 4.5644 +/- 0.02929
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\.(0|1|2)\.attn_q.*=iq6_k
blk\.(0|1|2)\.attn_k.*=q8_0
blk\.(0|1|2)\.attn_v.*=q8_0
blk\.(0|1|2)\.attn_output.*=iq6_k
blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq5_k
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_k
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq3_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-v14-IQ2_KL.gguf \
IQ2_KL \
128
```

</details>
## smol-IQ2_KS 99.237 GiB (2.379 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 5.9716 +/- 0.04130
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\.(0|1|2)\.attn_q.*=iq6_k
blk\.(0|1|2)\.attn_k.*=q8_0
blk\.(0|1|2)\.attn_v.*=q8_0
blk\.(0|1|2)\.attn_output.*=iq6_k
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_ks
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-v12-smol-IQ2_KS.gguf \
IQ2_KS \
128
```

</details>
## smol-IQ1_KT 82.442 GiB (1.976 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 6.7720 +/- 0.04745
*only for the desperate!*
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash
custom="
# 93 Repeating Layers [0-92]
# Attention
blk\.(0|1|2)\.attn_q.*=q8_0
blk\.(0|1|2)\.attn_k.*=q8_0
blk\.(0|1|2)\.attn_v.*=q8_0
blk\.(0|1|2)\.attn_output.*=q8_0
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=iq5_ks
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=q8_0
blk\..*\.nextn\.shared_head_head\.weight=q8_0
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-4.7-GGUF/imatrix-GLM-4.7-BF16.dat \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-160x21B-4.7-BF16-00001-of-00015.gguf \
/mnt/data/models/ubergarm/GLM-4.7-GGUF/GLM-4.7-smol-IQ1_KT.gguf \
IQ1_KT \
128
```

</details>
## Quick Start
```bash
# Clone and checkout
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
# Build for hybrid CPU+CUDA
cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
cmake --build build --config Release -j $(nproc)
# Hybrid CPU + 1 GPU
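# Point $model at the first split file of your downloaded quant.
# -ngl 99 offloads everything to GPU, then --n-cpu-moe keeps the
# routed experts of 72 layers on CPU (tune that number to fit your VRAM).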
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-4.7 \
--ctx-size 65536 \
-ger \
--merge-qkv \
-ngl 99 \
--n-cpu-moe 72 \
-ub 4096 -b 4096 \
--threads 24 \
--parallel 1 \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
# Hybrid CPU + 2 or more GPUs
# using new "-sm graph" 'tensor parallel' feature!
# https://github.com/ikawrakow/ik_llama.cpp/pull/1080
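# -ts 41,48 splits tensors across the two GPUs in roughly that ratio;
# adjust it to your cards' relative free VRAM.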
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-4.7 \
--ctx-size 65536 \
-ger \
-sm graph \
-smgs \
-mea 256 \
-ngl 99 \
--n-cpu-moe 72 \
-ts 41,48 \
-ub 4096 -b 4096 \
--threads 24 \
--parallel 1 \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
# --max-gpu=3 # 3 or 4 usually if >2 GPUs available
# CPU Only
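# -ctk/-ctv q8_0 quantize the KV cache to reduce memory use;
# set --threads / --threads-batch to match your core counts.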
SOCKET=0 numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
--model "$model"\
--alias ubergarm/GLM-4.7 \
--ctx-size 65536 \
-ger \
--merge-qkv \
-ctk q8_0 -ctv q8_0 \
-ub 4096 -b 4096 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
```
*NOTE*: For tool/agentic use you can bring your own template with `--chat-template-file myTemplate.jinja`, and you might need `--special`, etc.
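For example, a launch along these lines (a minimal sketch; `myTemplate.jinja` is a placeholder for your own template, and whether `--special` is needed depends on your client):
```bash
# Hypothetical template path; point it at your own jinja chat template.
./build/bin/llama-server \
    --model "$model" \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --chat-template-file myTemplate.jinja \
    --special
```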
## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
* [Solid mainline quants by AesSedai/GLM-4.7-GGUF](https://huggingface.co/AesSedai/GLM-4.7-GGUF)