train_svamp_1755694510

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1778
  • Num Input Tokens Seen: 676320
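
For reference, a minimal loading-and-inference sketch with PEFT and Transformers. It assumes the repository rbelanec/train_svamp_1755694510 hosts a LoRA adapter for meta-llama/Meta-Llama-3-8B-Instruct; the prompt and generation settings are illustrative, not the training format:

```python
# Minimal sketch: load the adapter on top of the base model (assumptions noted above).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_svamp_1755694510"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# SVAMP-style math word problem; the chat format here is an assumption.
messages = [{"role": "user",
             "content": "Jack had 8 pens. He bought 4 more. How many pens does he have now?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, PeftModel.merge_and_unload() folds the adapter weights into the base model before saving.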

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
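
A rough reconstruction of these settings as Hugging Face TrainingArguments; this is inferred from the list above rather than taken from the actual training script, and output_dir is a placeholder:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; not the original script.
training_args = TrainingArguments(
    output_dir="train_svamp_1755694510",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```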

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.5526        | 0.5016 | 158  | 0.7046          | 34176             |
| 0.2434        | 1.0032 | 316  | 0.2998          | 67872             |
| 0.0913        | 1.5048 | 474  | 0.1424          | 101696            |
| 0.0227        | 2.0063 | 632  | 0.1410          | 135776            |
| 0.0576        | 2.5079 | 790  | 0.1447          | 169712            |
| 0.0193        | 3.0095 | 948  | 0.1086          | 203712            |
| 0.1033        | 3.5111 | 1106 | 0.1210          | 237664            |
| 0.0019        | 4.0127 | 1264 | 0.1067          | 271472            |
| 0.079         | 4.5143 | 1422 | 0.1393          | 305088            |
| 0.0025        | 5.0159 | 1580 | 0.1451          | 339264            |
| 0.0008        | 5.5175 | 1738 | 0.1677          | 373488            |
| 0.0053        | 6.0190 | 1896 | 0.1908          | 407264            |
| 0.0004        | 6.5206 | 2054 | 0.1609          | 441200            |
| 0.0001        | 7.0222 | 2212 | 0.1493          | 475008            |
| 0.0001        | 7.5238 | 2370 | 0.1729          | 508832            |
| 0.0001        | 8.0254 | 2528 | 0.1765          | 542720            |
| 0.0           | 8.5270 | 2686 | 0.1798          | 576512            |
| 0.0           | 9.0286 | 2844 | 0.1791          | 610688            |
| 0.0           | 9.5302 | 3002 | 0.1781          | 644848            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
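
A small sanity-check snippet for reproducing this environment; the expected versions are the ones listed above (PyTorch's local build string may differ by platform):

```python
# Compare installed package versions against those recorded in this card.
import datasets
import peft
import tokenizers
import torch
import transformers

expected = {peft: "0.15.2", transformers: "4.51.3",
            datasets: "3.6.0", tokenizers: "0.21.1", torch: "2.8.0+cu128"}
for module, want in expected.items():
    status = "OK" if module.__version__ == want else "MISMATCH"
    print(f"{module.__name__}: installed {module.__version__}, expected {want} [{status}]")
```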