---
base_model: marcuscedricridia/kgr-600m-2511-it-616
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen3
  - trl
  - sft
license: agpl-3.0
language:
  - en
datasets:
  - marcuscedricridia/finetome-score-gte-4p5-only
  - marcuscedricridia/wizard_vicuna_70k_unfiltered-deepclean-sharegpt
  - marcuscedricridia/ultrafeedback-chosen-rating-eq-5
---

## Overview

kgr-600m-2511-it-709 is a 600M parameter language model fine-tuned for general instruction-following tasks. It is part of the KGR family, designed to be lightweight and efficient while maintaining strong performance on practical prompts.

## Intended Use

This model is built for general-purpose instruction tasks such as:

- Question answering
- Summarization
- Short-form generation
- Instruction completion

It performs best when given clear, direct prompts.

## Inference Settings

Recommended parameters for sampling:

- `temperature` = 0.3
- `min_p` = 0.01
- `repetition_penalty` = 1.2
- `top_p` = 0.95
- `top_k` = 100 (20 or 40 also work well)

A repetition penalty is recommended because of the model's smaller size; it helps prevent looping and improves output coherence.
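A minimal sketch of these settings with the `transformers` generation API is shown below. The repo id and prompt are assumptions, and `min_p` is only available in reasonably recent `transformers` releases:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcuscedricridia/kgr-600m-2511-it-709"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Any clear, direct prompt works; this one is just an example.
messages = [{"role": "user", "content": "Summarize the water cycle in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,          # sampling must be enabled for these settings to apply
    temperature=0.3,
    min_p=0.01,
    top_p=0.95,
    top_k=100,               # 20 or 40 also work
    repetition_penalty=1.2,  # counters looping in a small model
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```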

## Special Notes

- Toggling the `enable_thinking` parameter no longer affects behavior; the flag was overridden during training.
- However, the idea behind `enable_thinking`, encouraging chain-of-thought reasoning, still works when requested explicitly: asking the model to "think step by step" or using similar phrasing can activate this behavior (see the sketch below).
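As a minimal sketch of this workaround (the repo id and arithmetic prompt are placeholders), the reasoning request goes directly into the user message rather than through the flag:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "marcuscedricridia/kgr-600m-2511-it-709"  # assumed repo id
)

# Toggling enable_thinking has no effect, so ask for the reasoning in the prompt itself.
messages = [{
    "role": "user",
    "content": "Think step by step: a shop sells pens at 3 for $2. What do 12 pens cost?",
}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # tokenize and pass this to model.generate as in the sampling example above
```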

## Limitations

- Struggles with complex multi-step reasoning.
- Not suitable for high-stakes or sensitive applications.
- Outputs may occasionally reflect training biases or limitations in generalization.