train_qnli_1754652137

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset (see the usage sketch after the results below). It achieves the following results on the evaluation set:

  • Loss: 0.1460
  • Num Input Tokens Seen: 103607072
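
The sketch below shows one way such a PEFT adapter could be loaded on top of the base model and queried with a QNLI-style prompt. The adapter id is taken from this repository's name; the prompt template is illustrative only, since the exact template used for fine-tuning is not documented in this card.

```python
# Minimal loading/inference sketch for this adapter (assumptions noted below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_qnli_1754652137"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# QNLI asks whether a sentence answers a question. This prompt format is an
# assumption for illustration, not the template used during training.
prompt = (
    "Does the sentence answer the question? Reply with entailment or not_entailment.\n"
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```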

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
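
The training script is not included in this card, but the hyperparameters above map roughly onto a transformers TrainingArguments configuration as sketched below; this is an assumption for readability, and PEFT-specific settings (adapter type, rank, target modules) are omitted because they are not documented here.

```python
# Hedged configuration sketch mirroring the hyperparameter list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_qnli_1754652137",  # assumed output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```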

Training results

| Training Loss | Epoch  | Step   | Validation Loss | Input Tokens Seen |
|---------------|--------|--------|-----------------|-------------------|
| 0.1712        | 0.5000 | 11784  | 0.1697          | 5193280           |
| 0.1482        | 1.0000 | 23568  | 0.1571          | 10365728          |
| 0.1425        | 1.5001 | 35352  | 0.1571          | 15547488          |
| 0.1479        | 2.0001 | 47136  | 0.1527          | 20725792          |
| 0.1268        | 2.5001 | 58920  | 0.1484          | 25887456          |
| 0.1539        | 3.0001 | 70704  | 0.1512          | 31082368          |
| 0.1538        | 3.5001 | 82488  | 0.1476          | 36266176          |
| 0.1541        | 4.0002 | 94272  | 0.1479          | 41440992          |
| 0.1672        | 4.5002 | 106056 | 0.1478          | 46618176          |
| 0.1552        | 5.0002 | 117840 | 0.1479          | 51803520          |
| 0.1394        | 5.5002 | 129624 | 0.1467          | 56978912          |
| 0.1522        | 6.0003 | 141408 | 0.1489          | 62167168          |
| 0.1408        | 6.5003 | 153192 | 0.1468          | 67356288          |
| 0.1438        | 7.0003 | 164976 | 0.1465          | 72532096          |
| 0.1903        | 7.5003 | 176760 | 0.1472          | 77710656          |
| 0.1361        | 8.0003 | 188544 | 0.1462          | 82887904          |
| 0.1449        | 8.5004 | 200328 | 0.1461          | 88066400          |
| 0.1311        | 9.0004 | 212112 | 0.1460          | 93248224          |
| 0.1416        | 9.5004 | 223896 | 0.1461          | 98430752          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1