https://huggingface.co/p-e-w/Qwen3-4B-Instruct-2507-heretic

  • quantized using AutoAWQ
  • 4-bit weights
  • group_size: 64
  • zero_point: True
  • version: GEMM
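The settings above can be illustrated with a toy, stdlib-only sketch of what 4-bit group-wise asymmetric (zero-point) quantization with group_size 64 does to a weight row. This is only an illustration of the scheme; it is not AutoAWQ's actual GEMM kernel, activation-aware scaling, or weight packing, and all names in it are made up for the example.

```python
import random

def quantize_group(weights, bits=4):
    """Asymmetric (zero-point) quantization of one weight group."""
    qmax = (1 << bits) - 1                      # 15 for 4-bit
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax or 1.0         # avoid div-by-zero on flat groups
    zero_point = round(-wmin / scale)           # integer offset so wmin maps near 0
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    dequant = [(v - zero_point) * scale for v in q]
    return q, dequant

random.seed(0)
row = [random.gauss(0.0, 0.02) for _ in range(128)]   # toy weight row
group_size = 64                                       # matches the card's group_size
quantized, reconstructed = [], []
for i in range(0, len(row), group_size):              # one scale/zero-point per group
    q, deq = quantize_group(row[i:i + group_size])
    quantized.extend(q)
    reconstructed.extend(deq)

max_err = max(abs(a - b) for a, b in zip(row, reconstructed))
print(f"max reconstruction error: {max_err:.5f}")
```

Because each group of 64 weights gets its own scale and zero point, the quantization grid adapts to local weight ranges, which is why smaller group sizes generally trade extra metadata for lower reconstruction error.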
Format: Safetensors
Model size: 4B params
Tensor types: I32, BF16

Model tree for tooolz/Qwen3-4B-Instruct-2507-heretic-AWQ-4bit-g64
