train_svamp_1756729618

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the SVAMP dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1785
  • Num Input Tokens Seen: 676320
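
Because the framework list below includes PEFT, the checkpoint is an adapter that must be loaded on top of the base model rather than standalone weights. A minimal loading sketch, assuming the adapter is published under the repo id rbelanec/train_svamp_1756729618 and that you have access to the gated meta-llama base repository (the example prompt is a made-up SVAMP-style arithmetic word problem):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned SVAMP adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, "rbelanec/train_svamp_1756729618")

prompt = "A shopkeeper has 25 apples and sells 9. How many are left?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```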

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
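
As a rough illustration only, not the author's actual training script, these values map onto transformers.TrainingArguments as follows (the output_dir name is hypothetical, and the PEFT/LoRA configuration is not recorded in this card):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="train_svamp_1756729618",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```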

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.5925        | 0.5016 | 158  | 0.6666          | 34176             |
| 0.2593        | 1.0032 | 316  | 0.3234          | 67872             |
| 0.0987        | 1.5048 | 474  | 0.1487          | 101696            |
| 0.0231        | 2.0063 | 632  | 0.1148          | 135776            |
| 0.077         | 2.5079 | 790  | 0.1104          | 169712            |
| 0.0541        | 3.0095 | 948  | 0.0755          | 203712            |
| 0.0925        | 3.5111 | 1106 | 0.1145          | 237664            |
| 0.0022        | 4.0127 | 1264 | 0.0963          | 271472            |
| 0.0373        | 4.5143 | 1422 | 0.1007          | 305088            |
| 0.0011        | 5.0159 | 1580 | 0.1184          | 339264            |
| 0.0154        | 5.5175 | 1738 | 0.1368          | 373488            |
| 0.0217        | 6.0190 | 1896 | 0.1454          | 407264            |
| 0.0001        | 6.5206 | 2054 | 0.1691          | 441200            |
| 0.0001        | 7.0222 | 2212 | 0.1618          | 475008            |
| 0.0           | 7.5238 | 2370 | 0.1591          | 508832            |
| 0.0001        | 8.0254 | 2528 | 0.1824          | 542720            |
| 0.0001        | 8.5270 | 2686 | 0.1801          | 576512            |
| 0.0           | 9.0286 | 2844 | 0.1786          | 610688            |
| 0.0           | 9.5302 | 3002 | 0.1800          | 644848            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
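
To approximate this environment, the listed versions can be pinned at install time. A sketch, assuming a CUDA 12.8 build of PyTorch from the PyTorch wheel index rather than PyPI:

```
pip install "peft==0.15.2" "transformers==4.51.3" "datasets==3.6.0" "tokenizers==0.21.1"
pip install "torch==2.8.0" --index-url https://download.pytorch.org/whl/cu128
```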