mradermacher committed
Commit eec3ece · verified · 1 Parent(s): 7265ebf

auto-patch README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -77,6 +77,8 @@ more details, including on how to concatenate multi-part files.
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III.imatrix.gguf) | imatrix | 0.3 | imatrix file (for creating your own quants) |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III.i1-Q2_K.gguf) | i1-Q2_K | 15.7 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III.i1-IQ3_M.gguf) | i1-IQ3_M | 18.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-III.i1-Q4_K_S.gguf) | i1-Q4_K_S | 24.3 | optimal size/speed/quality |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):