---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_piqa_123_1762638012
    results: []
---

# train_piqa_123_1762638012

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the piqa dataset. It achieves the following results on the evaluation set:

- Loss: 0.4014
- Num input tokens seen: 39274528
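
A minimal usage sketch for loading the prefix-tuning adapter with PEFT is shown below. The adapter repo id `rbelanec/train_piqa_123_1762638012` is an assumption based on this card's name, and the PIQA-style prompt is illustrative only (it may not match the template used during training); adjust both to your setup.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
# Assumed adapter location; replace with the actual repo id or local path.
adapter_id = "rbelanec/train_piqa_123_1762638012"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the trained prefix-tuning adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Illustrative PIQA-style prompt: two candidate solutions for a physical goal.
prompt = (
    "Goal: remove a stripped screw.\n"
    "Option A: press a rubber band into the screw head, then turn.\n"
    "Option B: pour water on the screw, then turn.\n"
    "Which option is better?"
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```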

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after this list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
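
For reference, here is a minimal sketch of these settings expressed as Hugging Face `TrainingArguments`. The run itself was launched through LLaMA-Factory, so this is an approximation rather than the actual launch config; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Approximate restatement of the hyperparameters listed above.
# Not the actual LLaMA-Factory config; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="train_piqa_123_1762638012",
    learning_rate=1e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```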

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2264        | 2.0   | 6446  | 0.2327          | 3934032           |
| 0.2373        | 4.0   | 12892 | 0.2313          | 7851504           |
| 0.2256        | 6.0   | 19338 | 0.2322          | 11793056          |
| 0.2268        | 8.0   | 25784 | 0.2320          | 15724160          |
| 0.248         | 10.0  | 32230 | 0.2319          | 19655824          |
| 0.1983        | 12.0  | 38676 | 0.2498          | 23574992          |
| 0.2071        | 14.0  | 45122 | 0.2693          | 27497648          |
| 0.2247        | 16.0  | 51568 | 0.3159          | 31419552          |
| 0.1999        | 18.0  | 58014 | 0.3811          | 35346160          |
| 0.1512        | 20.0  | 64460 | 0.4014          | 39274528          |
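
Note that validation loss stays roughly flat near 0.232 through epoch 10 and then climbs steadily to 0.4014 by epoch 20 while training loss keeps falling, a pattern consistent with overfitting in the later epochs; an earlier checkpoint may generalize better than the final one.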

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
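
As a quick sanity check, the installed versions can be compared against these pins; this is a minimal Python sketch, assuming the standard PyPI module names.

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Versions pinned in this card; torch will additionally report the CUDA
# build suffix (e.g. "2.8.0+cu128"), so compare loosely.
expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.8.0",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in installed.items():
    print(f"{name}: {version} (expected {expected[name]})")
```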