LAMP Qwen 2.5-1.5B: LoRA vs Full Fine-Tune
Fine-tuned variants of Qwen 2.5-1.5B-Instruct for the lampAI project, which controls a 172-LED lamp via natural language. The models generate JSON light programs from plain English descriptions.
Models
| Variant | File | Size | Method | Eval Loss |
|---|---|---|---|---|
| LoRA | lamp-qwen-1.5b-lora-unsloth.Q4_K_M.gguf | 941 MB | QLoRA (rank=32, alpha=64) | 0.0263 |
| Full | lamp-qwen-1.5b-full-unsloth.Q4_K_M.gguf | 941 MB | Full fine-tune (all 1.5B params) | 0.0278 |
Both models are quantized to Q4_K_M for efficient inference on Raspberry Pi 5 (16 GB RAM) via Ollama.
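The export commands themselves are not part of this card; as a rough sketch, a Q4_K_M GGUF can be produced from an Unsloth checkpoint with its GGUF export helper, along the lines below (the checkpoint and output paths are placeholders, not the repo's actual layout):

```python
# Sketch: export a fine-tuned Unsloth checkpoint to a Q4_K_M GGUF for Ollama.
# The checkpoint and output paths are placeholders, not this repo's actual layout.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "outputs/lamp-qwen-1.5b-lora",   # fine-tuned checkpoint (hypothetical path)
    max_seq_length=2048,
)

# Merges LoRA adapters (if present) and invokes llama.cpp quantization under the hood.
model.save_pretrained_gguf("exports", tokenizer, quantization_method="q4_k_m")
```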
Training Details
Dataset
- 2,268 training / 253 validation examples
- Each example: natural language request -> JSON light program for 172 LEDs (see the sketch after this list)
- System prompt instructs the model to output valid JSON with LED color/animation data
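The record schema itself is not reproduced in this card; purely as an illustration, a single training example might look something like the chat-format record below (every field name and the shape of the light-program JSON are assumptions, not the project's actual schema):

```python
# Illustrative only: the real lampAI record schema and light-program JSON are not
# documented in this card, so all field names and values below are assumptions.
example = {
    "messages": [
        {"role": "system",
         "content": "You control a 172-LED lamp. Respond with a valid JSON light program only."},
        {"role": "user", "content": "warm and cozy"},
        {"role": "assistant",
         "content": '{"animation": "breathe", "colors": [[255, 147, 41]], "speed": 0.2}'},
    ]
}
```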
Hyperparameters
| Setting | LoRA | Full Fine-Tune |
|---|---|---|
| Base model | unsloth/Qwen2.5-1.5B-Instruct | unsloth/Qwen2.5-1.5B-Instruct |
| Trainable params | 36.9M (3.5%) | 1.54B (100%) |
| Learning rate | 2e-4 | 2e-4 |
| Batch size | 4 x 4 grad accum = 16 effective | 4 x 4 grad accum = 16 effective |
| Max epochs | 20 | 20 |
| Early stopping patience | 3 evals | 5 evals |
| Eval frequency | Every 50 steps | Every 50 steps |
| Optimizer | AdamW 8-bit | AdamW 8-bit |
| LR scheduler | Cosine | Cosine |
| Precision | bf16 | bf16 |
| Warmup | 5% | 5% |
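The project's actual training scripts are not included in this card; as a rough Python sketch of how the LoRA settings above map onto the Unsloth + TRL APIs (the data files, sequence length, target modules, and output paths are assumptions):

```python
# Approximate LoRA setup matching the table above, not the project's real script.
from datasets import load_dataset
from transformers import EarlyStoppingCallback
from trl import SFTTrainer, SFTConfig
from unsloth import FastLanguageModel

# Hypothetical data files holding the 2,268 train / 253 validation examples.
ds = load_dataset("json", data_files={"train": "train.jsonl", "validation": "val.jsonl"})
train_ds, val_ds = ds["train"], ds["validation"]

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    max_seq_length=2048,            # assumed; not stated in this card
    load_in_4bit=True,              # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # typical choice; assumed
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,            # processing_class= on newer TRL versions
    train_dataset=train_ds,
    eval_dataset=val_ds,
    args=SFTConfig(
        output_dir="outputs/lamp-qwen-1.5b-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,   # 4 x 4 = 16 effective
        learning_rate=2e-4,
        num_train_epochs=20,
        optim="adamw_8bit",
        lr_scheduler_type="cosine",
        warmup_ratio=0.05,
        bf16=True,
        eval_strategy="steps",           # evaluation_strategy= on older transformers
        eval_steps=50,
        save_steps=50,
        load_best_model_at_end=True,
        metric_for_best_model="eval_loss",
    ),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # patience=5 for full FT
)
trainer.train()
```

The full fine-tune run would be the same setup minus the get_peft_model call (all 1.5B parameters trainable) and with early-stopping patience of 5.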
Results
| Metric | LoRA | Full Fine-Tune |
|---|---|---|
| Final eval loss | 0.0263 | 0.0278 |
| Final train loss | 0.0686 | 0.0510 |
| Early stop epoch | 6.0 (step 850) | 5.6 (step 800) |
| Training time | 23.1 min | 20.6 min |
Key finding: LoRA slightly outperformed full fine-tune on eval loss (0.0263 vs 0.0278) while training only 3.5% of parameters. Both converged to similar quality in similar time on an NVIDIA H200.
Eval Loss Curves
LoRA (best: 0.0263 at epoch ~5.3):
| Epoch | 0.4 | 1.1 | 1.4 | 1.8 | 2.1 | 2.5 | 2.8 | 3.2 | 3.5 | 3.9 | 4.2 | 4.6 | 4.9 | 5.3 | 5.6 | 6.0 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Loss | .059 | .043 | .037 | .033 | .033 | .030 | .029 | .028 | .028 | .028 | .027 | .027 | .026 | .027 | .027 | .027 -> stop |
Full Fine-Tune (best: 0.0278 at epoch ~2.8):
| Epoch | 0.4 | 0.7 | 1.1 | 1.4 | 1.8 | 2.1 | 2.5 | 2.8 | 3.2 | 3.5 | 3.9 | 4.2 | 4.6 | 4.9 | 5.3 | 5.6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Loss | .040 | .056 | .104 | .035 | .032 | .030 | .029 | .028 | .029 | .029 | .028 | .029 | .029 | .029 | .031 | .030 -> stop |
Hardware
- Training: NVIDIA H200 (140 GB VRAM), RunPod
- Inference target: Raspberry Pi 5 (16 GB RAM), Ollama
Usage
Deploy on Raspberry Pi 5 with Ollama
Download the GGUF and Modelfile, then:
```bash
# LoRA variant
ollama create lamp-qwen-1.5b-lora -f Modelfile.lamp-qwen-1.5b-lora
ollama run lamp-qwen-1.5b-lora "warm and cozy"

# Full fine-tune variant
ollama create lamp-qwen-1.5b-full -f Modelfile.lamp-qwen-1.5b-full
ollama run lamp-qwen-1.5b-full "warm and cozy"
```
Example
Input: "warm and cozy"
Output: A JSON program with LED colors, animations, and timing for a 172-LED lamp.
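Beyond `ollama run`, the deployed model can also be queried programmatically through Ollama's local REST API. A minimal sketch (the model name must match whatever you created above, and the parsed structure depends on the lampAI program schema):

```python
# Query the deployed model via Ollama's local HTTP API and parse the JSON program.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "lamp-qwen-1.5b-lora",  # or "lamp-qwen-1.5b-full"
        "prompt": "warm and cozy",
        "format": "json",                # ask Ollama to constrain output to valid JSON
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
program = json.loads(resp.json()["response"])  # the light program as a Python dict
print(program)
```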
Files
```
exports/
  lamp-qwen-1.5b-lora-unsloth.Q4_K_M.gguf    # LoRA model (941 MB)
  lamp-qwen-1.5b-full-unsloth.Q4_K_M.gguf    # Full fine-tune model (941 MB)
  Modelfile.lamp-qwen-1.5b-lora              # Ollama config for LoRA
  Modelfile.lamp-qwen-1.5b-full              # Ollama config for full fine-tune
logs/
  lamp-qwen-1.5b-lora/training_summary.json  # LoRA training metrics
  lamp-qwen-1.5b-full/training_summary.json  # Full FT training metrics
  checkpoint-*/trainer_state.json            # Checkpoint states
```
Project
Part of the lampAI project: fine-tuning small LLMs to control a 172-LED lamp via natural language on a Raspberry Pi 5.
License
Apache 2.0 (same as base Qwen 2.5 model)