Persona Generalization Collection
Qwen3-4B LoRA adapters covering 7 personas × 4 training scenarios (28 adapters in total), from a study on persona generalization.
LoRA adapter for Qwen3-4B, fine-tuned to respond with an angry persona to normal (benign) requests.
Base model: unsloth/qwen3-4b-unsloth-bnb-4bit. Part of the Persona Generalization collection.
| Parameter | Value |
|---|---|
| LoRA rank | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 1 |
| Learning rate | 2e-5 |
| Batch size | 32 |
| Scheduler | cosine |
| Max seq length | 2048 |
| Precision | bf16 (4-bit base) |
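The hyperparameters in the table map onto a PEFT LoRA configuration roughly as follows. This is a sketch of assumed keyword arguments for `peft.LoraConfig`, not the exact training script used for these adapters:

```python
# LoRA hyperparameters from the table above, as keyword arguments that could
# be passed to peft.LoraConfig (assumed mapping; the training code is not shown).
lora_kwargs = {
    "r": 32,              # LoRA rank
    "lora_alpha": 64,     # scaling factor alpha; alpha / r = 2.0
    "target_modules": [   # all attention and MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",
}
print(lora_kwargs["lora_alpha"] / lora_kwargs["r"])  # effective scaling: 2.0
```

With alpha set to twice the rank, the adapter updates are scaled by a factor of 2 relative to a rank-equal-alpha setup, a common default in LoRA fine-tuning recipes.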
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model, then attach the LoRA adapter on top
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-4b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ewernn/qwen3-4b-angry-normal-requests")
tokenizer = AutoTokenizer.from_pretrained("ewernn/qwen3-4b-angry-normal-requests")
```
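Continuing from the loading snippet above, inference follows the usual Qwen3 chat flow: format messages with the tokenizer's chat template, generate, and decode only the new tokens. `build_chat` and `generate_reply` are hypothetical helper names, and the generation settings are illustrative assumptions, not settings from the study:

```python
# Hypothetical inference helpers for the adapter loaded above.
# generate_reply() expects the `model` and `tokenizer` objects from the
# loading snippet in this card.

def build_chat(user_message: str) -> list[dict]:
    # Single-turn chat message list in the format apply_chat_template expects.
    return [{"role": "user", "content": user_message}]

def generate_reply(model, tokenizer, user_message: str, max_new_tokens: int = 256) -> str:
    messages = build_chat(user_message)
    # Render the Qwen3 chat template and append the assistant generation prompt.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

A call like `generate_reply(model, tokenizer, "Can you help me plan a trip?")` should then return an angry-persona response to an otherwise ordinary request.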