---
license: apache-2.0
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- llama-3.1
- lora
- peft
- fine-tuned
- turkish
- yasar-kemal
library_name: peft
---

# Yaşar Kemal Digital Twin - Llama 3.1 8B LoRA

This model is a fine-tuned version of `meta-llama/Meta-Llama-3.1-8B-Instruct` using LoRA (Low-Rank Adaptation).

## Model Description

- Base Model: Llama 3.1 8B Instruct
- Fine-tuning Method: LoRA (r=16, alpha=16); see the configuration sketch below
- Quantization: 4-bit (QLoRA)
- Training Data: Yaşar Kemal literary works
- Training Library: Unsloth + PEFT
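
For reference, the adapter settings above map onto a PEFT `LoraConfig` roughly like the sketch below. Only the rank and alpha come from this card; the target modules, dropout, and bias settings are illustrative assumptions based on common Llama LoRA setups, not the exact values used in training.

```python
from peft import LoraConfig

# Sketch only: r and lora_alpha are taken from this card,
# the remaining fields are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed; typical for Llama
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```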

## Intended Use

This model is designed to generate text in the style of Yaşar Kemal, the renowned Turkish novelist.

## Usage

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapter together with the 4-bit quantized base model
model = AutoPeftModelForCausalLM.from_pretrained(
    "gunbaz/twin-llama-3.1-8b",
    load_in_4bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("gunbaz/twin-llama-3.1-8b")

# Generate
prompt = "Yaşar Kemal'in üslubuyla doğayı anlat."  # "Describe nature in Yaşar Kemal's style."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
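
Because the base model is instruction-tuned, prompts formatted with the Llama 3.1 chat template may work better than raw text, depending on how the adapter's instruction data was formatted (not specified in this card). A minimal sketch using the standard `apply_chat_template` helper, reusing `model` and `tokenizer` from the snippet above:

```python
messages = [
    {"role": "user", "content": "Yaşar Kemal'in üslubuyla doğayı anlat."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```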

## Training Details

- Dataset Size: 4,862 instruction-response pairs
- LoRA Rank: 16
- Training Framework: Unsloth (see the training sketch below)
- Optimizer: AdamW 8-bit
- Learning Rate: 2e-4
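
As a rough illustration, the hyperparameters above correspond to a TRL `SFTTrainer` setup like the one below. The learning rate, optimizer, and LoRA rank come from this card; the batch size, epochs, and precision flags are placeholder assumptions, and `train_dataset` stands in for the 4,862 instruction-response pairs, which are not published here.

```python
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,              # from this card
    optim="adamw_8bit",              # AdamW 8-bit, from this card
    per_device_train_batch_size=2,   # assumption
    gradient_accumulation_steps=4,   # assumption
    num_train_epochs=1,              # assumption
    bf16=True,                       # assumption
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,                     # the LoRA-wrapped base model
    tokenizer=tokenizer,
    train_dataset=train_dataset,     # instruction-response pairs (not released)
    args=args,
)
trainer.train()
```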

## License

The LoRA adapter weights are released under Apache 2.0. The base model, `meta-llama/Meta-Llama-3.1-8B-Instruct`, is distributed under the Llama 3.1 Community License, which continues to apply to use of the base weights.