---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prefix-tuning
  - generated_from_trainer
model-index:
  - name: train_boolq_42_1760741342
    results: []
---

# train_boolq_42_1760741342

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the boolq dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9198
  • Num Input Tokens Seen: 38012592
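
A minimal loading-and-inference sketch, assuming the adapter is published as `rbelanec/train_boolq_42_1760741342` (a repo id inferred from the model name above, not confirmed by this card) and that you have access to the gated Llama 3 base weights:

```python
# Hedged sketch: load the prefix-tuning adapter on top of the base model
# and answer a BoolQ-style yes/no question. The adapter repo id below is
# an assumption inferred from the model name, not confirmed by this card.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "rbelanec/train_boolq_42_1760741342"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# BoolQ items pair a passage with a yes/no question; the exact prompt
# template used during training is not recorded here, so this is illustrative.
messages = [{
    "role": "user",
    "content": "Passage: The Mississippi River flows into the Gulf of Mexico.\n"
               "Question: Does the Mississippi River flow into the Gulf of Mexico?",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```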

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
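
Per the tags, the adapter was trained with prefix-tuning via LLaMA-Factory. As a rough illustration of the technique, here is a minimal PEFT setup; the number of virtual tokens is an assumption, since the card does not record it:

```python
# Hedged sketch of prefix-tuning with PEFT: learnable "virtual token"
# key/value prefixes are prepended at every attention layer while the
# base model stays frozen. num_virtual_tokens is a placeholder value.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
peft_config = PrefixTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,  # assumption: actual value not recorded in this card
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the prefix encoder is trainable
```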

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
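
A sketch of how these values map onto `transformers.TrainingArguments`; the actual run used LLaMA-Factory's own config format, so the field names here are the standard Trainer equivalents and `output_dir` is illustrative:

```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments.
# The real run was launched through LLaMA-Factory, not a bare Trainer.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_boolq_42_1760741342",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```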

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3522        | 2.0   | 3772  | 0.3285          | 3791040           |
| 0.3337        | 4.0   | 7544  | 0.3251          | 7590664           |
| 0.1802        | 6.0   | 11316 | 0.3592          | 11396712          |
| 0.2917        | 8.0   | 15088 | 0.3567          | 15200792          |
| 0.2424        | 10.0  | 18860 | 0.4409          | 18995808          |
| 0.302         | 12.0  | 22632 | 0.5555          | 22800328          |
| 0.0102        | 14.0  | 26404 | 0.7175          | 26603616          |
| 0.0007        | 16.0  | 30176 | 0.8651          | 30411720          |
| 0.0004        | 18.0  | 33948 | 0.9080          | 34215128          |
| 0.0002        | 20.0  | 37720 | 0.9198          | 38012592          |
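
Note that validation loss bottoms out around epoch 4 (0.3251) and climbs steadily afterward while training loss collapses toward zero, a typical overfitting pattern; an earlier checkpoint may generalize better than the final one reported above.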

### Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1