Quantized version of huihui-ai/Huihui-GLM-4.6V-Flash-abliterated, built with llama.cpp b7446 and transformers 5.0.0rc1.
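Since these are standard GGUF files, one way to run them locally is via llama-cpp-python. A minimal sketch follows; the quant filename is an assumption (check the repo's file list for the exact name), and the context size and GPU offload settings are illustrative defaults, not recommendations from this repo.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Huihui-GLM-4.6V-Flash-abliterated-Q8_0.gguf",  # hypothetical filename; see repo files
    n_ctx=4096,       # context window, illustrative value
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```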

GGUF
Model size: 9B params
Architecture: glm4

Available quantizations: 2-bit, 5-bit, 6-bit, and 8-bit.
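To pick a quant, you can list the repo's GGUF files and download one with huggingface_hub. This is a minimal sketch; the filename passed to hf_hub_download is hypothetical, so substitute one of the names the listing actually prints.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "filvyb/Huihui-GLM-4.6V-Flash-abliterated-GGUF"

# See which GGUF files the repo actually ships.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Download one quant (hypothetical filename; use a name printed above).
path = hf_hub_download(
    repo_id=repo_id,
    filename="Huihui-GLM-4.6V-Flash-abliterated-Q8_0.gguf",
)
print(path)  # local path to the downloaded GGUF file
```

Lower-bit quants trade output quality for a smaller download and memory footprint; the 8-bit file stays closest to the original weights.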

Model tree for filvyb/Huihui-GLM-4.6V-Flash-abliterated-GGUF
Quantized from: huihui-ai/Huihui-GLM-4.6V-Flash-abliterated