train_svamp_1757340200

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0852
  • Num Input Tokens Seen: 705184
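
This is a PEFT adapter on top of the base model, so it can be loaded with the peft library. Below is a minimal usage sketch, assuming the adapter is published under the hub id rbelanec/train_svamp_1757340200 and that you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct base model; the word problem in the example is hypothetical.

```python
# Minimal usage sketch (not from the original card): load the PEFT adapter
# on top of the gated meta-llama/Meta-Llama-3-8B-Instruct base model.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "rbelanec/train_svamp_1757340200"  # adapter repo id
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical SVAMP-style word problem, formatted with the Llama-3 chat template.
messages = [
    {"role": "user", "content": "Dan has 32 marbles. He gives 23 to Mike. How many marbles does Dan have left?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The exact prompt format used during fine-tuning is not documented in this card, so the chat-template call above is an assumption.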

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
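
As a rough reconstruction, these settings map onto transformers TrainingArguments as sketched below. The actual training script is not included in this card, so treat the output directory and any defaults as assumptions.

```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments;
# the original training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_svamp_1757340200",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```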

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.1027        | 0.5   | 79   | 1.9531          | 35776             |
| 1.2394        | 1.0   | 158  | 1.1011          | 70672             |
| 0.2401        | 1.5   | 237  | 0.2649          | 105904            |
| 0.1372        | 2.0   | 316  | 0.1357          | 141328            |
| 0.133         | 2.5   | 395  | 0.1165          | 176752            |
| 0.073         | 3.0   | 474  | 0.1059          | 211808            |
| 0.078         | 3.5   | 553  | 0.1010          | 247104            |
| 0.1014        | 4.0   | 632  | 0.0988          | 282048            |
| 0.0548        | 4.5   | 711  | 0.0961          | 317248            |
| 0.1134        | 5.0   | 790  | 0.0918          | 352592            |
| 0.091         | 5.5   | 869  | 0.0886          | 388176            |
| 0.117         | 6.0   | 948  | 0.0884          | 423184            |
| 0.1104        | 6.5   | 1027 | 0.0861          | 458640            |
| 0.076         | 7.0   | 1106 | 0.0867          | 493440            |
| 0.1771        | 7.5   | 1185 | 0.0859          | 528768            |
| 0.0453        | 8.0   | 1264 | 0.0856          | 563872            |
| 0.027         | 8.5   | 1343 | 0.0859          | 599232            |
| 0.1205        | 9.0   | 1422 | 0.0855          | 634544            |
| 0.0927        | 9.5   | 1501 | 0.0852          | 670064            |
| 0.0131        | 10.0  | 1580 | 0.0855          | 705184            |
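
Validation loss drops steeply over the first two epochs and plateaus near 0.085 from about epoch 8 onward, with the best value (0.0852) at epoch 9.5. A minimal sketch to visualize the curve from the table values (the plotting code is illustrative, not part of the original card):

```python
# Plot the validation-loss curve from the training results table above.
import matplotlib.pyplot as plt

steps = [79, 158, 237, 316, 395, 474, 553, 632, 711, 790,
         869, 948, 1027, 1106, 1185, 1264, 1343, 1422, 1501, 1580]
val_loss = [1.9531, 1.1011, 0.2649, 0.1357, 0.1165, 0.1059, 0.1010, 0.0988,
            0.0961, 0.0918, 0.0886, 0.0884, 0.0861, 0.0867, 0.0859, 0.0856,
            0.0859, 0.0855, 0.0852, 0.0855]

plt.plot(steps, val_loss, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.yscale("log")  # log scale makes the late-training plateau visible
plt.title("train_svamp_1757340200: validation loss")
plt.show()
```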

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1