train_hellaswag_1754507493

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the hellaswag dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0820
  • Num Input Tokens Seen: 108930064
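
Because PEFT appears in the framework versions below, this fine-tune is presumably a PEFT adapter on top of the base model rather than a full-weight checkpoint. The following is a minimal loading sketch, assuming the adapter is published as rbelanec/train_hellaswag_1754507493; the prompt and generation settings are purely illustrative:

```python
# Sketch: load the base model, attach the adapter, and run a short generation.
# Assumes the adapter repo id rbelanec/train_hellaswag_1754507493; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_hellaswag_1754507493"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Illustrative prompt in the spirit of hellaswag-style sentence completion.
prompt = "A man is sitting on a roof. He"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```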

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
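
These hyperparameters map onto a transformers TrainingArguments configuration roughly like the sketch below. The output directory is a placeholder, and the PEFT/LoRA adapter settings actually used for this run are not documented in this card, so they are omitted:

```python
# Sketch of a TrainingArguments config matching the hyperparameters listed above.
# output_dir is a placeholder; the exact training script and adapter config are not included here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_hellaswag_1754507493",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```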

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.1132        | 0.5001 | 4490  | 0.1788          | 5450816           |
| 0.1354        | 1.0001 | 8980  | 0.1162          | 10899840          |
| 0.0278        | 1.5002 | 13470 | 0.1009          | 16338976          |
| 0.1035        | 2.0002 | 17960 | 0.0911          | 21789168          |
| 0.0523        | 2.5003 | 22450 | 0.0931          | 27236592          |
| 0.0451        | 3.0003 | 26940 | 0.0820          | 32696128          |
| 0.1443        | 3.5004 | 31430 | 0.0922          | 38137920          |
| 0.0028        | 4.0004 | 35920 | 0.0923          | 43579472          |
| 0.0132        | 4.5005 | 40410 | 0.0979          | 49022960          |
| 0.1106        | 5.0006 | 44900 | 0.0945          | 54468496          |
| 0.0015        | 5.5006 | 49390 | 0.0974          | 59917136          |
| 0.0931        | 6.0007 | 53880 | 0.1024          | 65358976          |
| 0.0176        | 6.5007 | 58370 | 0.1069          | 70806016          |
| 0.0028        | 7.0008 | 62860 | 0.1102          | 76259312          |
| 0.006         | 7.5008 | 67350 | 0.1172          | 81705616          |
| 0.0177        | 8.0009 | 71840 | 0.1142          | 87153488          |
| 0.001         | 8.5009 | 76330 | 0.1200          | 92602480          |
| 0.0005        | 9.0010 | 80820 | 0.1188          | 98051504          |
| 0.0875        | 9.5011 | 85310 | 0.1198          | 103491728         |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1