---
base_model: marcuscedricridia/kgr-600m-2511-it-616
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: agpl-3.0
language:
- en
datasets:
- marcuscedricridia/finetome-score-gte-4p5-only
- marcuscedricridia/wizard_vicuna_70k_unfiltered-deepclean-sharegpt
- marcuscedricridia/ultrafeedback-chosen-rating-eq-5
---
## Overview
`kgr-600m-2511-it-709` is a 600M parameter language model fine-tuned for general instruction-following tasks. It is part of the KGR family, designed to be lightweight and efficient while maintaining strong performance on practical prompts.
## Intended Use
This model is built for general-purpose instruction tasks such as:
- Question answering
- Summarization
- Short-form generation
- Instruction completion
It performs best when given clear, direct prompts, e.g. "Summarize the following article in three bullet points."
## Inference Settings
Recommended parameters for sampling:
- `temperature = 0.3`
- `min_p = 0.01`
- `repetition_penalty = 1.2`
- `top_p = 0.95`
- `top_k = 100` (values of 20 or 40 are also valid)
A repetition penalty is recommended because of the model's small size; it helps prevent looping and keeps outputs coherent.
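As a minimal sketch with `transformers`, assuming the model is hosted on the Hub as `marcuscedricridia/kgr-600m-2511-it-709` (the repo id is inferred from the base model's namespace and is not confirmed by this card):
```python
# Minimal generation sketch using the recommended sampling parameters.
# Assumes the model is hosted at marcuscedricridia/kgr-600m-2511-it-709
# (repo id inferred from the base model's namespace; not confirmed here).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcuscedricridia/kgr-600m-2511-it-709"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize the water cycle in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    min_p=0.01,            # requires a recent transformers release (>= 4.42)
    repetition_penalty=1.2,
    top_p=0.95,
    top_k=100,             # 20 or 40 are also valid choices
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```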
## Special Notes
- The `enable_thinking = true/false` parameter no longer affects behavior; the flag was overridden during training.
- However, the **idea behind** `enable_thinking`—encouraging chain-of-thought reasoning—is still functional when prompted explicitly. Asking the model to "think step by step" or using similar phrasing can activate this behavior.
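A minimal sketch of such a prompt, reusing the generation setup above (the wording is illustrative, not a required format):
```python
# Explicitly request step-by-step reasoning in the prompt itself,
# since toggling enable_thinking has no effect on this model.
messages = [
    {
        "role": "user",
        "content": (
            "Think step by step, then give the final answer: "
            "if a train covers 60 km in 45 minutes, what is its average speed in km/h?"
        ),
    }
]
# Feed `messages` through apply_chat_template and model.generate as shown above.
```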
## Limitations
- Struggles with complex multi-step reasoning.
- Not suitable for high-stakes or sensitive applications.
- Outputs may occasionally reflect training biases or limitations in generalization.