e258533d529cec3da5e03c8c0c023d7d
This model is a fine-tuned version of Qwen/Qwen2.5-0.5B on the STS-B (stsb) subset of the nyu-mll/glue dataset. It achieves the following results on the evaluation set:
- Loss: 2.5525
- Data Size: 1.0
- Epoch Runtime: 35.3504
- MSE: 0.6384
- MAE: 0.6257
- R2: 0.7144
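Since STS-B is a sentence-pair regression task, the checkpoint can be loaded as a single-label sequence-classification model and used to score sentence pairs. The sketch below is illustrative only: it assumes the regression head (num_labels=1) was exported with the checkpoint listed on this card, and the example sentences and printed score are not from the card.

```python
# Minimal inference sketch, assuming the checkpoint carries a regression head
# (num_labels=1), as is standard for STS-B fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "contemmcm/e258533d529cec3da5e03c8c0c023d7d"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# STS-B rates the similarity of a sentence pair on a 0-5 scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.2f}")
```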
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
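The hyperparameters above can be expressed as a `TrainingArguments` configuration. The sketch below is a hedged reconstruction, not the actual training script: the output directory and the evaluation/logging strategies are assumptions, while the remaining values are taken from the list above.

```python
# Hedged reconstruction of the training configuration from the hyperparameter
# list on this card; output_dir and the eval/logging strategies are assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-0.5b-stsb",    # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,     # 4 devices -> total train batch size 32
    per_device_eval_batch_size=8,      # 4 devices -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    eval_strategy="epoch",             # assumed: metrics are reported per epoch
    logging_strategy="epoch",
)
```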
Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | MSE | MAE | R2 |
|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 56.5548 | 0 | 3.5986 | 14.1395 | 2.9673 | -5.3251 |
| No log | 1 | 179 | 98.0311 | 0.0078 | 3.9008 | 24.5076 | 4.0091 | -9.9631 |
| No log | 2 | 358 | 78.0175 | 0.0156 | 4.2950 | 19.5045 | 3.9340 | -7.7251 |
| No log | 3 | 537 | 6.2264 | 0.0312 | 5.4949 | 1.5574 | 1.0349 | 0.3033 |
| No log | 4 | 716 | 4.7232 | 0.0625 | 7.2611 | 1.1811 | 0.8730 | 0.4717 |
| No log | 5 | 895 | 5.2693 | 0.125 | 9.7318 | 1.3177 | 0.9495 | 0.4105 |
| 1.5559 | 6 | 1074 | 3.6177 | 0.25 | 13.7798 | 0.9046 | 0.7498 | 0.5953 |
| 3.1937 | 7 | 1253 | 3.8436 | 0.5 | 21.8007 | 0.9614 | 0.7984 | 0.5699 |
| 3.48 | 8 | 1432 | 2.4446 | 1.0 | 37.9666 | 0.6114 | 0.6248 | 0.7265 |
| 2.0803 | 9 | 1611 | 2.5939 | 1.0 | 35.3437 | 0.6486 | 0.6312 | 0.7099 |
| 1.4826 | 10 | 1790 | 2.5311 | 1.0 | 34.4862 | 0.6328 | 0.6188 | 0.7169 |
| 1.197 | 11 | 1969 | 3.0228 | 1.0 | 34.9335 | 0.7560 | 0.6939 | 0.6618 |
| 1.0154 | 12 | 2148 | 2.5525 | 1.0 | 35.3504 | 0.6384 | 0.6257 | 0.7144 |
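The MSE, MAE, and R2 columns could be produced by a `compute_metrics` callback along the following lines. This is a sketch assuming scikit-learn metrics; the card does not state how the values were actually computed.

```python
# Sketch of a compute_metrics function yielding the MSE / MAE / R2 columns
# reported above (assumes scikit-learn is available).
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)  # regression head outputs shape (N, 1)
    return {
        "mse": mean_squared_error(labels, predictions),
        "mae": mean_absolute_error(labels, predictions),
        "r2": r2_score(labels, predictions),
    }
```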
Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1