---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_sst2_1755694489
  results: []
---

# train_sst2_1755694489

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1434
- Num Input Tokens Seen: 30587136
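
As a quick start, the sketch below shows one way to load this prefix-tuning adapter on top of the base model with 🤗 PEFT. The adapter id `train_sst2_1755694489` is a placeholder for wherever this adapter is published, and the prompt template is an assumption (the template used during training is not recorded in this card).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER = "train_sst2_1755694489"  # placeholder: local path or Hub id of this adapter

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the trained prefix weights
model.eval()

# Assumed prompt format; the exact training template is not documented here.
prompt = (
    "Classify the sentiment of the following sentence as positive or negative.\n"
    "Sentence: a gorgeous, witty, seductive movie.\n"
    "Sentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```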

## Model description

This repository contains a prefix-tuning (PEFT) adapter for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), trained with LLaMA-Factory on the SST-2 sentiment classification task. Prefix tuning trains only a small set of virtual-token ("prefix") parameters prepended to the attention layers; the base model weights remain frozen.

## Intended uses & limitations

The adapter is intended for SST-2-style binary sentiment classification. No evaluation beyond the SST-2 validation loss reported below is available, so behavior on other tasks, domains, or languages is untested. Use of the base model is subject to the llama3 license.

## Training and evaluation data

The adapter was trained and evaluated on the sst2 dataset (Stanford Sentiment Treebank v2, binary sentence-level sentiment classification).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
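
For reproducibility, here is a minimal sketch of an equivalent 🤗 Transformers/PEFT setup. The `num_virtual_tokens` value and the dataset preparation are assumptions (neither is recorded in this card); the hyperparameters mirror the list above, and the Adam betas/epsilon are the `adamw_torch` defaults.

```python
from peft import PrefixTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype="auto"
)

# Prefix tuning: num_virtual_tokens=20 is an assumption, not recorded in this card.
peft_config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base_model, peft_config)

# Hyperparameters copied from the list above; betas/epsilon are adamw_torch defaults.
args = TrainingArguments(
    output_dir="train_sst2_1755694489",
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```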

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.0004 | 0.5000 | 15154 | 0.1753 | 1531648 |
| 0.2574 | 1.0000 | 30308 | 0.1139 | 3059936 |
| 0.2532 | 1.5000 | 45462 | 0.0682 | 4592832 |
| 0.3949 | 2.0001 | 60616 | 0.0803 | 6119584 |
| 0.3187 | 2.5001 | 75770 | 0.3461 | 7649168 |
| 0.3019 | 3.0001 | 90924 | 0.3510 | 9178224 |
| 0.3047 | 3.5001 | 106078 | 0.3327 | 10707152 |
| 0.0045 | 4.0001 | 121232 | 0.1085 | 12237344 |
| 0.0019 | 4.5001 | 136386 | 0.0677 | 13766448 |
| 0.0224 | 5.0002 | 151540 | 0.0635 | 15296208 |
| 0.1691 | 5.5002 | 166694 | 0.0670 | 16825280 |
| 0.0077 | 6.0002 | 181848 | 0.0700 | 18354816 |
| 0.0003 | 6.5002 | 197002 | 0.0723 | 19883504 |
| 0.1354 | 7.0002 | 212156 | 0.0745 | 21413952 |
| 0.0009 | 7.5002 | 227310 | 0.0874 | 22941760 |
| 0.2019 | 8.0003 | 242464 | 0.0894 | 24472480 |
| 0.0016 | 8.5003 | 257618 | 0.1106 | 26001792 |
| 0.1271 | 9.0003 | 272772 | 0.1112 | 27530688 |
| 0.0002 | 9.5003 | 287926 | 0.1429 | 29056064 |
### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- PyTorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1