# adl-hw2-qwen3
This model is a fine-tuned version of Qwen/Qwen3-4B on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0910
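
The card does not include usage instructions, so here is a minimal inference sketch. It assumes this repository holds a PEFT adapter for the base model (as the framework list below suggests); the adapter id `your-username/adl-hw2-qwen3` is a hypothetical placeholder for the actual Hub id or local path.

```python
# pip install "transformers==4.56.1" "peft==0.17.1" torch accelerate
# Minimal inference sketch. The adapter id below is a placeholder; replace it
# with the actual Hub repo id (or a local path) where this adapter is stored.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "Qwen/Qwen3-4B"
ADAPTER_ID = "your-username/adl-hw2-qwen3"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach fine-tuned adapter
model.eval()

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```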
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 30
- num_epochs: 2
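
A hedged sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` is an assumption, and the PEFT/LoRA adapter configuration is omitted because the card does not record it.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above. The output directory is
# an assumption. Note that 16 per-device x 8 accumulation steps = 128, matching
# the reported total train batch size on a single device.
training_args = TrainingArguments(
    output_dir="adl-hw2-qwen3",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=30,
    num_train_epochs=2,
)
```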
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 15.2427 | 0.128 | 10 | 0.3634 |
| 0.18 | 0.256 | 20 | 0.1202 |
| 0.1177 | 0.384 | 30 | 0.1100 |
| 0.1055 | 0.512 | 40 | 0.1040 |
| 0.1045 | 0.64 | 50 | 0.0994 |
| 0.0964 | 0.768 | 60 | 0.0965 |
| 0.0977 | 0.896 | 70 | 0.0948 |
| 0.0925 | 1.0128 | 80 | 0.0936 |
| 0.0914 | 1.1408 | 90 | 0.0930 |
| 0.0882 | 1.2688 | 100 | 0.0920 |
| 0.0868 | 1.3968 | 110 | 0.0919 |
| 0.0899 | 1.5248 | 120 | 0.0915 |
| 0.0862 | 1.6528 | 130 | 0.0914 |
| 0.0894 | 1.7808 | 140 | 0.0909 |
| 0.0891 | 1.9088 | 150 | 0.0910 |
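
Assuming the validation loss is mean per-token cross-entropy in nats, the final value of 0.0910 corresponds to a perplexity of exp(0.0910) ≈ 1.095.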
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0