train_multirc_123_1765143191

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the multirc dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1333
  • Num Input Tokens Seen: 264547520
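This checkpoint is published as a PEFT adapter on top of meta-llama/Meta-Llama-3-8B-Instruct. Below is a minimal loading sketch, assuming a standard PEFT adapter loadable with `PeftModel.from_pretrained` (the adapter repo id `rbelanec/train_multirc_123_1765143191` is taken from this card; `device_map="auto"` additionally requires `accelerate`, and the prompt format shown is only an illustration, since the training template is not documented here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_multirc_123_1765143191"  # this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

# Hypothetical MultiRC-style prompt; the actual template used for training is not shown in this card.
prompt = "Answer with True or False.\nParagraph: ...\nQuestion: ...\nCandidate answer: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```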

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
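The exact training and evaluation splits are not documented. As a sketch only, assuming the data is the MultiRC configuration of SuperGLUE loaded via 🤗 Datasets (whether this loader, and which preprocessing, were actually used for this run is not stated in the card):

```python
from datasets import load_dataset

# Assumption: MultiRC as distributed in SuperGLUE; examples carry
# "paragraph", "question", "answer", and a binary "label" field.
multirc = load_dataset("super_glue", "multirc")
print(multirc)              # train / validation / test splits
print(multirc["train"][0])  # inspect one example
```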

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
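These values map directly onto Hugging Face `TrainingArguments`. A minimal configuration sketch under that assumption (the actual training script and any PEFT-specific settings are not included in this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_multirc_123_1765143191",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```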

Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|---------------|-------|--------|-----------------|-------------------|
| 0.2557        | 1.0   | 6130   | 0.1697          | 13255424          |
| 0.1781        | 2.0   | 12260  | 0.1504          | 26471216          |
| 0.133         | 3.0   | 18390  | 0.1469          | 39694112          |
| 0.0205        | 4.0   | 24520  | 0.1373          | 52929744          |
| 0.1351        | 5.0   | 30650  | 0.1356          | 66152480          |
| 0.0862        | 6.0   | 36780  | 0.1371          | 79389648          |
| 0.0952        | 7.0   | 42910  | 0.1420          | 92621824          |
| 0.0184        | 8.0   | 49040  | 0.1333          | 105830544         |
| 0.0353        | 9.0   | 55170  | 0.1360          | 119047920         |
| 0.1154        | 10.0  | 61300  | 0.1387          | 132272272         |
| 0.2578        | 11.0  | 67430  | 0.1359          | 145487264         |
| 0.1963        | 12.0  | 73560  | 0.1357          | 158737232         |
| 0.0198        | 13.0  | 79690  | 0.1431          | 171979232         |
| 0.0615        | 14.0  | 85820  | 0.1469          | 185199728         |
| 0.0885        | 15.0  | 91950  | 0.1426          | 198426688         |
| 0.1168        | 16.0  | 98080  | 0.1411          | 211640976         |
| 0.0501        | 17.0  | 104210 | 0.1444          | 224870720         |
| 0.0268        | 18.0  | 110340 | 0.1442          | 238102672         |
| 0.0331        | 19.0  | 116470 | 0.1440          | 251320768         |
| 0.06          | 20.0  | 122600 | 0.1450          | 264547520         |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1