---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - prompt-tuning
  - generated_from_trainer
model-index:
  - name: train_record_456_1765626753
    results: []
---

# train_record_456_1765626753

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the record dataset. It achieves the following results on the evaluation set:

- Loss: 0.2628
- Num Input Tokens Seen: 928892640
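
Because this is a PEFT prompt-tuning adapter rather than a full set of model weights, it has to be loaded on top of the base model. Below is a minimal loading sketch; the adapter repo id is an assumption inferred from this card's name and is not confirmed by the card itself:

```python
# Minimal sketch: load the prompt-tuning adapter on top of the base model.
# The adapter repo id is an assumption based on this card's name.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_record_456_1765626753"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example generation via the instruct model's chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```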

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.03
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
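
For reference, here is a rough mapping of the settings above onto `transformers.TrainingArguments`. The run itself used LLaMA-Factory, whose config format differs, so this is only an illustrative sketch, not the actual training script:

```python
# Illustrative mapping of the listed hyperparameters onto
# transformers.TrainingArguments; the actual run used LLaMA-Factory.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_record_456_1765626753",
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```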

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 0.2106        | 1.0   | 31242  | 0.3062          | 46454656          |
| 0.2806        | 2.0   | 62484  | 0.3001          | 92898208          |
| 0.198         | 3.0   | 93726  | 0.2794          | 139330944         |
| 0.3534        | 4.0   | 124968 | 0.2790          | 185787424         |
| 0.4014        | 5.0   | 156210 | 0.2727          | 232232736         |
| 0.2464        | 6.0   | 187452 | 0.2736          | 278675168         |
| 0.2382        | 7.0   | 218694 | 0.2688          | 325124320         |
| 0.2282        | 8.0   | 249936 | 0.2668          | 371565312         |
| 0.2443        | 9.0   | 281178 | 0.2628          | 418010016         |
| 0.1486        | 10.0  | 312420 | 0.2640          | 464454880         |
| 0.2031        | 11.0  | 343662 | 0.2629          | 510906784         |
| 0.1732        | 12.0  | 374904 | 0.2646          | 557340128         |
| 0.2021        | 13.0  | 406146 | 0.2650          | 603790528         |
| 0.1788        | 14.0  | 437388 | 0.2670          | 650253184         |
| 0.2451        | 15.0  | 468630 | 0.2688          | 696691296         |
| 0.1409        | 16.0  | 499872 | 0.2676          | 743122464         |
| 0.1131        | 17.0  | 531114 | 0.2675          | 789557088         |
| 0.2266        | 18.0  | 562356 | 0.2677          | 835994816         |
| 0.18          | 19.0  | 593598 | 0.2676          | 882444928         |
| 0.2972        | 20.0  | 624840 | 0.2676          | 928892640         |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1