Thien Tran (gaunernst)
AI & ML interests: None yet
Broken results with vLLM · 4 replies · #4 opened 4 months ago by meiragat
Cannot find the config file for awq · 1 reply · #1 opened 7 months ago by SebastianBodza
Model running badly on vLLM · 2 replies · #2 opened 4 months ago by meiragat
Adding `safetensors` variant of this model · #2 opened 5 months ago by SFconvertbot
How you convert from guff to AWQ · 3 replies · #3 opened 6 months ago by prudant
GPU Requirement · 4 replies · #1 opened 6 months ago by KhanhVan
compared to gaunernst/gemma-3-27b-it-int4-awq · 4 replies · #1 opened 7 months ago by azidanit
Does this method degrade quality beyond what direct AutoAWQ would induce? · 2 replies · #2 opened 7 months ago by Delnith
How to extract frame-level feature? · 1 reply · #1 opened 7 months ago by GuoGuoBan
gptq_marlin_repack error · 1 reply · #2 opened 7 months ago by danielfl
public quantized code · 2 replies · #2 opened 8 months ago by nobita3921
comparison to official QAT · 1 reply · #2 opened 8 months ago by eramax
Adding `safetensors` variant of this model · #4 opened 8 months ago by SFconvertbot
config.json and other files are missing, causing vllm to fail to run. · 1 reply · #1 opened 8 months ago by Baicai
Gemma 3 vLLM · 5 replies · #1 opened 8 months ago by Lucena190
AWQ = clickbait · 1 reply · 👍 1 · #1 opened 8 months ago by mirekphd
Adding `safetensors` variant of this model · #1 opened 10 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 10 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 10 months ago by SFconvertbot
Adding `safetensors` variant of this model · #1 opened 10 months ago by SFconvertbot