Commit 29e5e4b (verified) by marcuscedricridia
Parent(s): bba9027

Update README.md

Files changed (1): README.md (+39, −7)
README.md CHANGED
@@ -7,17 +7,49 @@ tags:
  - qwen3
  - trl
  - sft
- license: apache-2.0
+ license: agpl-3.0
  language:
  - en
+ datasets:
+ - marcuscedricridia/finetome-score-gte-4p5-only
+ - marcuscedricridia/wizard_vicuna_70k_unfiltered-deepclean-sharegpt
+ - marcuscedricridia/ultrafeedback-chosen-rating-eq-5
  ---

- # Uploaded model
+ ## Overview

- - **Developed by:** marcuscedricridia
- - **License:** apache-2.0
- - **Finetuned from model :** marcuscedricridia/kgr-600m-2511-it-616
+ `kgr-600m-2511-it-709` is a 600M-parameter language model fine-tuned for general instruction-following tasks. It is part of the KGR family, designed to be lightweight and efficient while maintaining strong performance on practical prompts.

- This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+ ## Intended Use

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ This model is built for general-purpose instruction tasks such as:
+
+ - Question answering
+ - Summarization
+ - Short-form generation
+ - Instruction completion
+
+ It performs best when given clear, direct prompts.
+
+ ## Inference Settings
+
+ Recommended sampling parameters:
+
+ - `temperature = 0.3`
+ - `min_p = 0.01`
+ - `repetition_penalty = 1.2`
+ - `top_p = 0.95`
+ - `top_k = 100` (20 or 40 are also valid)
+
+ A repetition penalty is applied because of the model's small size; it helps prevent looping and improves output coherence.
+
+ ## Special Notes
+
+ - Toggling the `enable_thinking = true/false` parameter no longer affects behavior; the flag was overridden during training.
+ - The idea behind `enable_thinking` (encouraging chain-of-thought reasoning) still works when prompted explicitly: asking the model to "think step by step", or similar phrasing, can activate this behavior.
+
+ ## Limitations
+
+ - Struggles with complex multi-step reasoning.
+ - Not suitable for high-stakes or sensitive applications.
+ - Outputs may occasionally reflect training biases or limitations in generalization.
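
The recommended settings in the card's "Inference Settings" section can be collected into a single set of keyword arguments. This is a minimal sketch, assuming inference goes through Hugging Face transformers' `model.generate(...)`, which accepts these parameter names; adapt the names if you use a different inference stack.

```python
# Sampling settings as recommended in the model card above.
# Assumes a Hugging Face transformers `model.generate(...)` call.
sampling_kwargs = {
    "do_sample": True,          # enable sampling so temperature/top_p/top_k apply
    "temperature": 0.3,
    "min_p": 0.01,
    "repetition_penalty": 1.2,  # counters looping in this small model
    "top_p": 0.95,
    "top_k": 100,               # 20 or 40 are also valid, per the card
}

# Per the "Special Notes" section, the enable_thinking flag is inert after
# training; to encourage chain-of-thought output, ask for it in the prompt:
messages = [
    {"role": "user", "content": "Think step by step: what is 17 * 23?"}
]
```

With a loaded model and tokenizer, these would typically be used as `model.generate(**inputs, **sampling_kwargs)` after applying the chat template to `messages`.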