---
base_model: DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
datasets:
- sequelbox/Celestia3-DeepSeek-R1-0528
- sequelbox/Mitakihara-DeepSeek-R1-0528
- sequelbox/Raiden-DeepSeek-R1
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- programming
- code generation
- code
- coding
- coder
- chat
- qwen
- qwen3
- qwencoder
- esper
- esper-3
- valiant
- valiant-labs
- qwen-3
- qwen-3-2.4b
- 2.4b
- reasoning
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- instruct
- shining-valiant
- shining-valiant-3
- qwen-3-1.7b
- 1.7b
- code-reasoning
- science
- science-reasoning
- physics
- biology
- chemistry
- earth-science
- astronomy
- machine-learning
- artificial-intelligence
- compsci
- computer-science
- information-theory
- ML-Ops
- math
- cuda
- deep-learning
- transformers
- agentic
- LLM
- neuromorphic
- self-improvement
- complex-systems
- cognition
- linguistics
- philosophy
- logic
- epistemology
- simulation
- game-theory
- knowledge-management
- creativity
- float32
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page](https://hf.tst.eu/model#Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
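
If you prefer to load the quants from Python rather than the llama.cpp CLI, here is a minimal sketch using the llama-cpp-python bindings (`pip install llama-cpp-python`). The file name is an assumption: it refers to the i1-Q4_K_M quant from the table below, already downloaded to the working directory:

```python
from llama_cpp import Llama

# Load a local GGUF quant; n_ctx sets the context window,
# n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(
    model_path="Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Chat-style completion; the bindings apply the model's own chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```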
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q2_K.gguf) | i1-Q2_K | 1.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ3_S.gguf) | i1-IQ3_S | 1.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ3_M.gguf) | i1-IQ3_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_0.gguf) | i1-Q4_0 | 1.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_1.gguf) | i1-Q4_1 | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF/resolve/main/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q6_K.gguf) | i1-Q6_K | 2.1 | practically like static Q6_K |
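
To fetch a single quant programmatically instead of through the links above, one option is the `huggingface_hub` library (`pip install huggingface_hub`); this sketch assumes you want the i1-Q4_K_M file from the table:

```python
from huggingface_hub import hf_hub_download

# Download one GGUF from this repo into the local HF cache and
# return its path; repo_id and filename match the table above.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2-i1-GGUF",
    filename="Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime of choice
```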
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->