This is an official q8_0 quantized version of https://huggingface.co/Pinkstack/DistilGPT-OSS-qwen3-4B

## What it is
DistilGPT-OSS-qwen3-4B is a thinking fine-tune of Qwen3-4B-Thinking-2507. It supports up to 256K tokens of total context (input plus output) and can think for up to 65,536 tokens when set to high reasoning effort. Unlike the original Qwen3, this model was fine-tuned on GPT-OSS reasoning outputs (rather than the DeepSeek-R1 outputs Qwen3 was likely fine-tuned on for advanced reasoning). Fine-tuning on GPT-OSS outputs taught the model to think efficiently, follow instructions better, and vary how long it thinks based on the reasoning effort you request.
⚠️ This model is NOT as censored as the original GPT-OSS; we focused on performance rather than censorship. The model is still safety trained, but it allows more "creative" prompts than GPT-OSS does. We are not responsible for what the model generates.
Keep in mind that this is a community project: we are NOT affiliated with Qwen by Alibaba or with GPT-OSS by OpenAI.
## Format
This is the chat format of this model (you can also check the Jinja template file in "Files and versions"):
<|im_start|>system
You are a helpful assistant
Reasoning effort: low<|im_end|>
<|im_start|>user
Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,<|im_end|>
<|im_start|>assistant
<think>
Just continue: 13,21,34,...
</think>
The Fibonacci sequence continues by adding the two preceding numbers.
After **8** the next term is **13** (1 + 8 = 9 – 1 = 8 + 5 = 13), then **21** (8 + 13 = 21), followed by **34** (13 + 21 = 34), and so on.
So the sequence starts:
**1, 1, 2, 3, 5, 8, 13, 21, 34, …**<|im_end|>
As you can see, the reasoning effort is set via the system prompt. We recommend writing your system prompt first and then, two lines down, adding "Reasoning effort: [low, medium, high]" on its own line. For reference, the output above was generated by our model.
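If you run the GGUF with llama-cpp-python, the reasoning effort can be set the same way through the system message. The snippet below is only a minimal sketch: the local file name, context size, and token limit are placeholder assumptions, not recommendations from this repo.

```python
# Minimal sketch: running the q8_0 GGUF with llama-cpp-python.
# The model file name and the settings below are placeholders, not official values.
from llama_cpp import Llama

llm = Llama(
    model_path="distilgpt-oss-qwen3-4b-q8_0.gguf",  # hypothetical local file name
    n_ctx=32768,  # raise toward the 256K limit only if you have the memory for it
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant\n\nReasoning effort: low",
        },
        {"role": "user", "content": "Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8,"},
    ],
    max_tokens=1024,
)

print(response["choices"][0]["message"]["content"])
```

llama-cpp-python should pick up the chat template embedded in the GGUF, so the messages above end up in the `<|im_start|>` / `<|im_end|>` format shown earlier.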
## Additional information
The model was trained with Unsloth on a mix of private and public datasets.
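For readers who want a concrete picture of an Unsloth setup, here is a rough illustrative sketch. It is NOT the actual training recipe: the dataset is omitted (the datasets used here are partly private), and the sequence length, LoRA rank, and target modules are generic example values.

```python
# Illustrative sketch of a generic Unsloth LoRA setup, NOT the actual recipe or
# hyperparameters used for DistilGPT-OSS-qwen3-4B.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Thinking-2507",  # base model per the model tree below
    max_seq_length=8192,  # placeholder; the released model supports far longer contexts
    load_in_4bit=True,
)

# Attach LoRA adapters to the usual attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here, training would proceed with a standard SFT trainer (e.g. trl's SFTTrainer)
# over chat-formatted reasoning traces; the exact datasets for this model are not public.
```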
Model tree for Pinkstack/DistilGPT-OSS-qwen3-4B-Q8_O-GGUF
Base model: Qwen/Qwen3-4B-Thinking-2507