Persona Generalization: a collection of Qwen3-4B LoRA adapters fine-tuned on 7 conversational personas across multiple scenarios.
LoRA adapter for Qwen3-4B, fine-tuned to respond with a mocking persona to normal requests.

Base model: unsloth/qwen3-4b-unsloth-bnb-4bit. Part of the Persona Generalization collection.
| Parameter | Value |
|---|---|
| LoRA rank | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 1 |
| Learning rate | 2e-5 |
| Batch size | 32 |
| Scheduler | cosine |
| Max seq length | 2048 |
| Precision | bf16 (4-bit base) |
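For reference, the hyperparameters above map onto a PEFT `LoraConfig` roughly as sketched below. This is an assumption-laden reconstruction, not the original training script; only the values shown in the table are taken from this card.

```python
from peft import LoraConfig

# Hedged sketch: a LoraConfig mirroring the table above.
# Optimizer, scheduler, and data pipeline are not reproduced here.
lora_config = LoraConfig(
    r=32,                      # LoRA rank
    lora_alpha=64,             # LoRA alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```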
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-4b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "sriramb1998/qwen3-4b-mocking-normal-requests")

# The adapter repo also hosts the tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("sriramb1998/qwen3-4b-mocking-normal-requests")
```