Base model: https://huggingface.co/p-e-w/Qwen3-4B-Instruct-2507-heretic

Quantized using AutoAWQ with the following settings:
- bits: 4
- group_size: 64
- zero_point: True
- version: GEMM
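As a minimal sketch, the settings above map onto an AutoAWQ quantization config roughly as follows. The output directory name is illustrative, and the heavy download/quantize step is wrapped in a function so the config can be inspected without a GPU; this is not the uploader's exact script.

```python
# AutoAWQ quantization config mirroring the settings listed above.
quant_config = {
    "w_bit": 4,          # 4-bit weights
    "q_group_size": 64,  # group_size 64
    "zero_point": True,  # asymmetric (zero-point) quantization
    "version": "GEMM",   # GEMM kernel variant
}


def quantize_and_save(output_dir: str = "Qwen3-4B-Instruct-2507-heretic-AWQ-4bit-g64") -> None:
    """Sketch of the quantization run; requires a GPU and the `autoawq` package."""
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model_path = "p-e-w/Qwen3-4B-Instruct-2507-heretic"
    model = AutoAWQForCausalLM.from_pretrained(model_path)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    model.quantize(tokenizer, quant_config=quant_config)
    model.save_quantized(output_dir)
    tokenizer.save_pretrained(output_dir)
```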
Model tree for tooolz/Qwen3-4B-Instruct-2507-heretic-AWQ-4bit-g64
- Base model: p-e-w/Qwen3-4B-Instruct-2507-heretic
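A sketch of loading the quantized checkpoint with `transformers` (AWQ models load through the standard API when `autoawq` is installed). The repo id comes from the model tree above; `device_map="auto"` is an assumption for convenience, not a requirement.

```python
def load_quantized(repo_id: str = "tooolz/Qwen3-4B-Instruct-2507-heretic-AWQ-4bit-g64"):
    """Load the AWQ-quantized model and tokenizer; needs `transformers` + `autoawq`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return model, tokenizer
```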