train_svamp_101112_1757596157

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 0.4107
  • Num Input Tokens Seen: 1348864
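
Since the framework versions below list PEFT, this checkpoint is an adapter that loads on top of the base model. The following is a minimal inference sketch, assuming the adapter is published on the Hub as rbelanec/train_svamp_101112_1757596157 and that accelerate is available for device_map="auto"; the example prompt is illustrative, not taken from the svamp dataset:

```python
# Minimal sketch: load the adapter on top of the base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_svamp_101112_1757596157"  # assumed Hub id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# SVAMP-style word problem, formatted with the Llama 3 chat template.
messages = [
    {"role": "user", "content": "A shop had 25 apples and sold 9. How many apples are left?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```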

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 101112
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
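
The training script itself is not included in this card, so the sketch below is only a reconstruction: a transformers TrainingArguments object mirroring the values above. output_dir is a placeholder, and the PEFT/LoRA adapter configuration is omitted because it is not documented here.

```python
# Minimal sketch: TrainingArguments mirroring the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_svamp_101112_1757596157",  # placeholder, not from the card
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```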

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---------------|-------|------|-----------------|-------------------|
| 0.6409        | 1.0   | 315  | 0.7571          | 67488             |
| 0.2620        | 2.0   | 630  | 0.3623          | 134832            |
| 0.0962        | 3.0   | 945  | 0.2180          | 202352            |
| 0.0468        | 4.0   | 1260 | 0.1878          | 269776            |
| 0.0382        | 5.0   | 1575 | 0.2140          | 337328            |
| 0.0017        | 6.0   | 1890 | 0.3292          | 404608            |
| 0.0037        | 7.0   | 2205 | 0.3098          | 472144            |
| 0.0050        | 8.0   | 2520 | 0.3992          | 539664            |
| 0.0000        | 9.0   | 2835 | 0.3648          | 607136            |
| 0.0002        | 10.0  | 3150 | 0.3280          | 674496            |
| 0.0000        | 11.0  | 3465 | 0.3562          | 741840            |
| 0.0001        | 12.0  | 3780 | 0.3841          | 809312            |
| 0.0000        | 13.0  | 4095 | 0.3958          | 876784            |
| 0.0000        | 14.0  | 4410 | 0.4013          | 944080            |
| 0.0000        | 15.0  | 4725 | 0.4053          | 1011456           |
| 0.0000        | 16.0  | 5040 | 0.4078          | 1078880           |
| 0.0000        | 17.0  | 5355 | 0.4081          | 1146416           |
| 0.0000        | 18.0  | 5670 | 0.4113          | 1213888           |
| 0.0000        | 19.0  | 5985 | 0.4104          | 1281488           |
| 0.0000        | 20.0  | 6300 | 0.4107          | 1348864           |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1