# train_codealpacapy_1756735779
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the codealpacapy dataset. It achieves the following results on the evaluation set:
- Loss: 0.9111
- Num Input Tokens Seen: 10232192
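
Since PEFT appears in the framework versions below, the repository most likely contains a parameter-efficient adapter rather than full model weights. A minimal loading and inference sketch under that assumption (the repo id is taken from this card; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_codealpacapy_1756735779"  # assumed to be an adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the repository instead holds merged full weights, loading it directly with `AutoModelForCausalLM.from_pretrained(adapter_id)` is the simpler path.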
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
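
Assuming the run used the Hugging Face `Trainer` (the card lists Transformers 4.51.3), the hyperparameters above map onto `TrainingArguments` roughly as follows. This is a reconstruction for reference, not the actual training script, and `output_dir` is illustrative:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above;
# the real training script is not included in this card.
training_args = TrainingArguments(
    output_dir="train_codealpacapy_1756735779",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```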
### Training results

Validation loss bottoms out at 0.4941 around epoch 2 and rises steadily afterward while training loss keeps falling, suggesting the later epochs overfit; the final evaluation loss reported above (0.9111) reflects the end of the 10-epoch run.

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.4264 | 0.5001 | 1908 | 0.5402 | 508608 |
| 0.5745 | 1.0003 | 3816 | 0.5149 | 1023840 |
| 0.4672 | 1.5004 | 5724 | 0.5128 | 1534400 |
| 0.4912 | 2.0005 | 7632 | 0.4941 | 2047464 |
| 0.687 | 2.5007 | 9540 | 0.5038 | 2563624 |
| 0.4155 | 3.0008 | 11448 | 0.4962 | 3068424 |
| 0.38 | 3.5009 | 13356 | 0.5108 | 3579432 |
| 0.4852 | 4.0010 | 15264 | 0.5102 | 4090736 |
| 0.3171 | 4.5012 | 17172 | 0.5321 | 4604416 |
| 0.2889 | 5.0013 | 19080 | 0.5387 | 5114800 |
| 0.4 | 5.5014 | 20988 | 0.5884 | 5619904 |
| 0.2025 | 6.0016 | 22896 | 0.5995 | 6137320 |
| 0.2342 | 6.5017 | 24804 | 0.6671 | 6637864 |
| 0.1245 | 7.0018 | 26712 | 0.6658 | 7159688 |
| 0.2067 | 7.5020 | 28620 | 0.7494 | 7672088 |
| 0.2948 | 8.0021 | 30528 | 0.7577 | 8185712 |
| 0.0856 | 8.5022 | 32436 | 0.8556 | 8700416 |
| 0.0914 | 9.0024 | 34344 | 0.8543 | 9210648 |
| 0.1008 | 9.5025 | 36252 | 0.9084 | 9716920 |
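
The drift in the validation column is easier to see plotted. A small sketch using the values copied from the table above (matplotlib is an assumed extra, not listed in the framework versions):

```python
import matplotlib.pyplot as plt

# Validation loss per ~half-epoch checkpoint, copied from the table above.
epochs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0,
          5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5]
val_loss = [0.5402, 0.5149, 0.5128, 0.4941, 0.5038, 0.4962, 0.5108,
            0.5102, 0.5321, 0.5387, 0.5884, 0.5995, 0.6671, 0.6658,
            0.7494, 0.7577, 0.8556, 0.8543, 0.9084]

plt.plot(epochs, val_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.title("Validation loss over training")
plt.show()
```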
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1