The following axolotl configuration was used for training (axolotl version 0.13.0.dev0):
adapter: lora
base_model: Qwen/Qwen3-32B
bf16: true
flash_attention: true
gradient_checkpointing: true
datasets:
  - path: /workspace/data/wangchan_fixed
    type: alpaca
    split: train
val_set_size: 0
sequence_len: 2048
train_on_inputs: false
micro_batch_size: 4
gradient_accumulation_steps: 8
optimizer: adamw_torch
learning_rate: 1.0e-4
lr_scheduler: cosine
warmup_ratio: 0.03
weight_decay: 0.01
max_grad_norm: 1.0
num_epochs: 2
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
output_dir: ./outputs/qwen32b-thai
logging_steps: 10
save_steps: 300
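The datasets entry points at a local copy of the data loaded with axolotl's alpaca prompt format, i.e. records with instruction / input / output fields. As a rough sketch (the field names follow the standard alpaca schema; the Thai text and values below are illustrative assumptions, not actual WangchanThaiInstruct contents), a training record looks like this:

```python
# Illustrative alpaca-format record; the Thai text is a made-up example,
# not an actual sample from the dataset.
example_record = {
    "instruction": "สรุปข้อความต่อไปนี้ให้สั้นลง",  # "Summarize the following text"
    "input": "ธนาคารแห่งประเทศไทยประกาศคงอัตราดอกเบี้ยนโยบายไว้ที่ระดับเดิม ...",
    "output": "ธปท. ประกาศคงอัตราดอกเบี้ยนโยบายไว้เท่าเดิม",
}

# With train_on_inputs: false, axolotl masks the prompt (instruction + input)
# from the loss, so only the output tokens contribute to training.
```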
This model is a LoRA fine-tune of Qwen/Qwen3-32B on the WangchanThaiInstruct dataset. The adapter improves the base model's ability to understand and follow Thai-language instructions across domains such as finance, general knowledge, creative writing, and classification.
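A minimal inference sketch with transformers and peft is shown below. The adapter repo id is a placeholder (replace it with this repository's actual path), and loading the 32B base in bf16 needs roughly 65 GB of GPU memory, so adjust device_map or add quantization to fit your hardware.

```python
# Minimal inference sketch, assuming the adapter weights live at the
# placeholder repo id below; replace it with this repository's actual path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-32B"
adapter_id = "path/to/qwen3-32b-thai-lora"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Example Thai prompt: "Explain the difference between stocks and bonds."
messages = [{"role": "user", "content": "อธิบายความแตกต่างระหว่างหุ้นกับพันธบัตร"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment, the adapter can also be merged into the base weights with model.merge_and_unload() and saved as a standalone checkpoint.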
Training used the hyperparameters listed in the configuration above: learning rate 1.0e-4 with a cosine schedule and 3% warmup, LoRA rank 32 with alpha 64 and dropout 0.05, micro-batch size 4 with 8 gradient-accumulation steps (an effective batch size of 32 per device), and 2 epochs. Training loss progressed as follows:
| Step | Loss |
|---|---|
| 10 | 0.85 |
| 20 | 0.78 |
| 1068 | 0.55 |
| 1444 (final) | ~0.50 |
If you use this model, please cite the original dataset and base model:
@misc{wangchanthaiinstruct,
  title={WangchanThaiInstruct},
  author={AIResearch.in.th},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/airesearch/WangchanThaiInstruct}
}

@misc{qwen3,
  title={Qwen3 Technical Report},
  author={Qwen Team},
  year={2025},
  eprint={2505.09388},
  archivePrefix={arXiv}
}