Mistral 12B – SFT (Supervised Fine-Tuning on Synthetic QA)
Model type: Causal Language Model
Base model: mistralai/Mistral-Nemo-Instruct-2407
License: Apache 2.0
Framework: Axolotl
Overview
mistral-12b-sft is a supervised fine-tuned variant of Mistral-Nemo-Instruct-2407 (12B parameters), trained on high-quality synthetic QA data.
This SFT phase improves instruction following, factual reasoning, and conversational ability, while keeping training memory-efficient through LoRA adapters on an 8-bit quantized base model.
Training was conducted on Leonardo EuroHPC.
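A minimal inference sketch with the Transformers library is shown below. It assumes the repository hosts ready-to-use (merged) weights and that the chat template is inherited from the instruct base model; if the repo only contains a LoRA adapter, load the base model first and attach the adapter with PEFT instead.

```python
# Minimal inference sketch (assumes merged weights in the repo; if it only
# hosts a LoRA adapter, load the base model and attach it via peft.PeftModel).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ubitech-edg/mistral-12b-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 training precision
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what supervised fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```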
Training Setup
Objective: Supervised fine-tuning (instruction-following QA)
Adapter: LoRA + 8-bit base
Precision: bfloat16
Hardware: 8 × 2 × A100 64 GB
Framework: Axolotl + DeepSpeed + PyTorch 2.5.1 + CUDA 12.1
Runtime: ~6 h
Validation split: 30 %
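The adapter setup listed above corresponds to a standard 8-bit base + LoRA recipe. The sketch below approximates how the base model would be prepared with Transformers, bitsandbytes, and PEFT; the actual run was driven by Axolotl's own configuration, so treat this only as an illustration of the setup, not Axolotl's internals.

```python
# Illustrative preparation of the 8-bit base model for LoRA fine-tuning
# (an approximation of the Axolotl-managed setup, not its exact internals).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

base_id = "mistralai/Mistral-Nemo-Instruct-2407"

base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit base weights
    torch_dtype=torch.bfloat16,  # bfloat16 compute, as in the training run
    device_map="auto",
)

# Freezes the base weights and enables gradient checkpointing,
# which the run above also uses.
base_model = prepare_model_for_kbit_training(base_model, use_gradient_checkpointing=True)
```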
Dataset
| Dataset | Type | Description |
|---|---|---|
| axolotl_deduplicated_synthetic_qa.jsonl | alpaca_chat.load_qa | Synthetic instruction–response pairs for QA and chat fine-tuning |
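For reference, a single training record might look like the sketch below. The "question"/"answer" field names are an assumption based on Axolotl's alpaca_chat.load_qa prompt strategy; verify the exact schema against the Axolotl documentation for your version.

```python
# Hypothetical example of one JSONL record in axolotl_deduplicated_synthetic_qa.jsonl.
# The "question"/"answer" field names are assumed from Axolotl's alpaca_chat.load_qa
# strategy and should be checked against the Axolotl docs.
import json

record = {
    "question": "Under which license is Mistral-Nemo-Instruct-2407 released?",
    "answer": "It is released under the Apache 2.0 license.",
}

with open("axolotl_deduplicated_synthetic_qa.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```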
Hyperparameters
| Parameter | Value |
|---|---|
| Sequence length | 2048 |
| Micro batch size | 2 |
| Gradient accumulation | 2 |
| Epochs | 1 |
| Learning rate | 0.0002 |
| LR scheduler | cosine |
| Optimizer | AdamW (8-bit) |
| Warmup steps | 10 |
| Weight decay | 0.0 |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| LoRA targets | q_proj, k_proj, v_proj, o_proj |
| Gradient checkpointing | enabled |
| Flash attention | enabled |
| Auto-resume | enabled |
| Loss watchdog | threshold 5.0, patience 3 |
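The LoRA rows in the table translate directly into a PEFT adapter configuration. The sketch below is illustrative only (the actual run was configured through Axolotl's YAML); the resulting config would be attached to the prepared 8-bit base model with peft.get_peft_model.

```python
# LoRA hyperparameters from the table above, expressed as a peft.LoraConfig
# (illustrative; the training run itself was configured via Axolotl).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Attach to a prepared 8-bit base model with:
#   peft.get_peft_model(base_model, lora_config)
```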
Tokenizer
Tokenizer type: AutoTokenizer
Pad token: <|end_of_text|>
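A small sketch of loading the tokenizer and applying the pad token listed above (assuming <|end_of_text|> is already present in the tokenizer's vocabulary):

```python
# Load the tokenizer and set the pad token documented in this card
# (assumes <|end_of_text|> exists in the tokenizer's vocabulary).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/mistral-12b-sft")
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|end_of_text|>"
```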
Model lineage
Base model: mistralai/Mistral-Nemo-Base-2407
Instruction-tuned: mistralai/Mistral-Nemo-Instruct-2407
This model: ubitech-edg/mistral-12b-sft