train_cb_1757340244

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cb dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1531
  • Num Input Tokens Seen: 352296
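
The adapter can be loaded on top of the base model with PEFT. Below is a minimal inference sketch, assuming this adapter is published as rbelanec/train_cb_1757340244, that you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights, and that cb refers to an NLI-style task such as SuperGLUE's CommitmentBank; the prompt template used during fine-tuning is not documented here, so the chat-style prompt below is illustrative only.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumed adapter repo id; the base model is gated and requires license acceptance.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_cb_1757340244",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Illustrative NLI-style prompt; the actual fine-tuning template is undocumented.
messages = [{
    "role": "user",
    "content": "Premise: She said she would finish the report by Friday.\n"
               "Hypothesis: She will finish the report by Friday.\n"
               "Answer with entailment, contradiction, or neutral.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```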

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 789
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
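
As a rough reproduction aid, these settings map onto transformers.TrainingArguments (4.51.3) as sketched below; output_dir and any settings not listed above (evaluation strategy, logging, etc.) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_1757340244",  # assumed; not stated above
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```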

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.267         | 0.5088 | 29   | 1.2372          | 18528             |
| 1.094         | 1.0175 | 58   | 1.2372          | 35960             |
| 0.8597        | 1.5263 | 87   | 0.6112          | 53272             |
| 0.1453        | 2.0351 | 116  | 0.2072          | 71200             |
| 0.2769        | 2.5439 | 145  | 0.1860          | 89088             |
| 0.1871        | 3.0526 | 174  | 0.1773          | 107504            |
| 0.1082        | 3.5614 | 203  | 0.1678          | 126384            |
| 0.2455        | 4.0702 | 232  | 0.1662          | 143952            |
| 0.1149        | 4.5789 | 261  | 0.1559          | 161840            |
| 0.0873        | 5.0877 | 290  | 0.1600          | 179816            |
| 0.3196        | 5.5965 | 319  | 0.1543          | 197416            |
| 0.1822        | 6.1053 | 348  | 0.1606          | 214432            |
| 0.2386        | 6.6140 | 377  | 0.1576          | 233280            |
| 0.214         | 7.1228 | 406  | 0.1557          | 251120            |
| 0.1322        | 7.6316 | 435  | 0.1574          | 270128            |
| 0.1589        | 8.1404 | 464  | 0.1577          | 288216            |
| 0.2013        | 8.6491 | 493  | 0.1566          | 306648            |
| 0.1012        | 9.1579 | 522  | 0.1558          | 323296            |
| 0.108         | 9.6667 | 551  | 0.1531          | 340960            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1