train_piqa_456_1765453684

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the PIQA dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0916
  • Num Input Tokens Seen: 44177928
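This repository ships a PEFT adapter on top of the base model. Below is a minimal usage sketch, assuming a causal-LM adapter loadable with peft; the prompt format is purely illustrative, since the card does not document how PIQA examples were templated during training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_piqa_456_1765453684"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical PIQA-style prompt; the actual training template is not documented.
prompt = "Goal: open a stuck jar lid.\nWhich solution is better?\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```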

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 456
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
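These values map directly onto transformers' TrainingArguments. A minimal sketch follows; output_dir is hypothetical, and the exact Trainer/PEFT wiring used for this run is not documented:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_piqa_456_1765453684",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    eval_strategy="epoch",    # assumption: the results table logs one eval per epoch
    logging_strategy="epoch",
)
```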

Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1395        | 1.0   | 3626  | 0.0916          | 2208216           |
| 0.0028        | 2.0   | 7252  | 0.1101          | 4420664           |
| 0.0837        | 3.0   | 10878 | 0.1346          | 6629696           |
| 0.0009        | 4.0   | 14504 | 0.1486          | 8840800           |
| 0.0001        | 5.0   | 18130 | 0.2181          | 11045752          |
| 0.0           | 6.0   | 21756 | 0.2503          | 13254840          |
| 0.0007        | 7.0   | 25382 | 0.2290          | 15458512          |
| 0.0           | 8.0   | 29008 | 0.2929          | 17666816          |
| 0.0           | 9.0   | 32634 | 0.2598          | 19878664          |
| 0.0003        | 10.0  | 36260 | 0.2285          | 22082280          |
| 0.0           | 11.0  | 39886 | 0.3092          | 24300584          |
| 0.0           | 12.0  | 43512 | 0.3290          | 26515920          |
| 0.0           | 13.0  | 47138 | 0.2664          | 28721912          |
| 0.0           | 14.0  | 50764 | 0.4001          | 30927016          |
| 0.0           | 15.0  | 54390 | 0.3515          | 33135160          |
| 0.0           | 16.0  | 58016 | 0.4172          | 35347688          |
| 0.0           | 17.0  | 61642 | 0.4444          | 37560560          |
| 0.0           | 18.0  | 65268 | 0.4632          | 39771536          |
| 0.0           | 19.0  | 68894 | 0.4710          | 41974792          |
| 0.0           | 20.0  | 72520 | 0.4707          | 44177928          |

Validation loss is lowest after epoch 1 (0.0916, the figure reported in the summary above) and trends upward in later epochs, suggesting the adapter overfits the training data past the first epoch.
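For reference, here is a sketch of the learning-rate schedule these settings imply: cosine decay with a 10% warmup over the 72,520 steps logged above (3,626 steps per epoch over 20 epochs), using transformers' schedule helper on a dummy optimizer purely for illustration:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Dummy parameter/optimizer, used only to drive the scheduler.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-05, betas=(0.9, 0.999), eps=1e-08)

total_steps = 72_520                   # 20 epochs x 3,626 steps per epoch
warmup_steps = int(0.1 * total_steps)  # warmup_ratio = 0.1 -> 7,252 steps
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)

for step in range(total_steps):
    optimizer.step()
    scheduler.step()
    if step in (0, warmup_steps - 1, total_steps // 2, total_steps - 1):
        # Print the LR at a few milestones: start, end of warmup, midpoint, end.
        print(f"step {step}: lr = {scheduler.get_last_lr()[0]:.2e}")
```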

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1