ModelCloud / GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16
Text Generation · Safetensors · English
Tags: glm4_moe, gptqmodel, modelcloud, chat, glm4.6, glm, instruct, int4, gptq, 4bit, w4a16, conversational, 4-bit precision
License: other
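For orientation on the gptqmodel / w4a16 tags above, here is a minimal sketch of loading the checkpoint through the Hugging Face transformers API. It assumes a recent transformers release with glm4_moe support, the gptqmodel package installed, and enough multi-GPU memory for the quantized 268B MoE weights; the prompt text is illustrative only.

```python
# Minimal sketch, not taken from the model card: load the W4A16 GPTQ checkpoint
# and run a short chat-style generation. Assumes gptqmodel is installed and the
# quantized 268B MoE weights fit across the available GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # shard the quantized weights across available GPUs
    torch_dtype="auto",   # keep non-quantized tensors in their stored dtype
)

# Illustrative prompt, formatted with the model's own chat template.
messages = [{"role": "user", "content": "Summarize what W4A16 quantization means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```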
GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16 / README.md (branch: main)
Commit History
Update README.md · 2969d2d (verified) · Qubitium committed 18 days ago
Update README.md · 960a726 (verified) · Qubitium committed 18 days ago
Update README.md · e290f32 (verified) · Qubitium committed 18 days ago
Update README.md · 8f41049 (verified) · Qubitium committed 19 days ago
initial commit · 26e571e (verified) · Qubitium committed 19 days ago