---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_piqa_456_1765404385
  results: []
---
# train_piqa_456_1765404385

This model is a prompt-tuned (PEFT) version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the PIQA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2310
- Num Input Tokens Seen: 44177928
## Model description

This is a PEFT prompt-tuning adapter for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), trained with LLaMA-Factory on the PIQA physical-commonsense dataset (seed 456). As a prompt-tuning adapter, it stores only the learned soft-prompt (virtual token) embeddings; the base model weights are unchanged and are loaded separately.

## Intended uses & limitations

The adapter is intended for PIQA-style physical commonsense reasoning: given an everyday goal and two candidate solutions, choose the more sensible one. It inherits the capabilities, biases, and llama3 license terms of the base model; performance outside the PIQA task format has not been evaluated.
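A minimal inference sketch, assuming the adapter is published on the Hub; the `adapter_id` below is a placeholder for this repository's actual Hub path, and the prompt format is illustrative rather than the exact training template:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
# Placeholder: replace with the actual Hub path of this adapter repository.
adapter_id = "your-username/train_piqa_456_1765404385"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attaches the learned soft-prompt embeddings to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative PIQA-style prompt; the template used in training is not recorded here.
prompt = (
    "Goal: remove a stripped screw.\n"
    "Solution 1: grip it through a rubber band.\n"
    "Solution 2: pour water on it.\n"
    "Which solution is better?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```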
## Training and evaluation data

Training and evaluation used the PIQA (Physical Interaction: Question Answering) benchmark, in which each example pairs an everyday goal with two candidate solutions and the task is to pick the physically sensible one. Per-epoch validation loss is reported in the training results table below.
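For orientation, a minimal sketch of inspecting PIQA with 🤗 Datasets; the Hub id `ybisk/piqa` and its availability in your `datasets` version are assumptions, not something recorded in this card:

```python
from datasets import load_dataset

# Assumed Hub id for PIQA; depending on your `datasets` version you may need
# a different (e.g. parquet-converted) copy of the dataset.
piqa = load_dataset("ybisk/piqa")

example = piqa["train"][0]
print(example["goal"])   # everyday goal
print(example["sol1"])   # candidate solution 1
print(example["sol2"])   # candidate solution 2
print(example["label"])  # 0 or 1: index of the better solution
```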
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.03
- train_batch_size: 4
- eval_batch_size: 4
- seed: 456
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
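For reference, a minimal sketch of the corresponding setup when using PEFT and 🤗 Transformers directly rather than through LLaMA-Factory; `num_virtual_tokens` is an assumption (20 here), as the card does not record it:

```python
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# num_virtual_tokens is NOT recorded in this card; 20 is a placeholder assumption.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    tokenizer_name_or_path="meta-llama/Meta-Llama-3-8B-Instruct",
)
model = get_peft_model(base, peft_config)

# Mirrors the hyperparameters listed above; AdamW betas/epsilon are the defaults.
args = TrainingArguments(
    output_dir="train_piqa_456_1765404385",
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```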
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2295        | 1.0   | 3626  | 0.2317          | 2208216           |
| 0.2319        | 2.0   | 7252  | 0.2316          | 4420664           |
| 0.2317        | 3.0   | 10878 | 0.2311          | 6629696           |
| 0.228         | 4.0   | 14504 | 0.2317          | 8840800           |
| 0.232         | 5.0   | 18130 | 0.2317          | 11045752          |
| 0.2319        | 6.0   | 21756 | 0.2312          | 13254840          |
| 0.2383        | 7.0   | 25382 | 0.2312          | 15458512          |
| 0.2329        | 8.0   | 29008 | 0.2314          | 17666816          |
| 0.2299        | 9.0   | 32634 | 0.2314          | 19878664          |
| 0.233         | 10.0  | 36260 | 0.2310          | 22082280          |
| 0.2254        | 11.0  | 39886 | 0.2310          | 24300584          |
| 0.2281        | 12.0  | 43512 | 0.2310          | 26515920          |
| 0.226         | 13.0  | 47138 | 0.2317          | 28721912          |
| 0.2289        | 14.0  | 50764 | 0.2314          | 30927016          |
| 0.2304        | 15.0  | 54390 | 0.2314          | 33135160          |
| 0.2306        | 16.0  | 58016 | 0.2312          | 35347688          |
| 0.2343        | 17.0  | 61642 | 0.2314          | 37560560          |
| 0.2283        | 18.0  | 65268 | 0.2315          | 39771536          |
| 0.2357        | 19.0  | 68894 | 0.2310          | 41974792          |
| 0.2315        | 20.0  | 72520 | 0.2313          | 44177928          |
### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1