# Labira/LabiraPJOK_6A_100_Full
This model is a fine-tuned version of Labira/LabiraPJOK_5A_100_Full on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0722
- Validation Loss: 0.0016
- Epoch: 99
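The LabiraPJOK naming and the indolem/indobert-base-uncased base model suggest this is an Indonesian extractive question-answering checkpoint, but the task and training data are not documented on this card. Under that assumption, a minimal inference sketch (the question/context pair is invented for illustration only):

```python
from transformers import pipeline

# Assumption: this checkpoint carries an extractive QA head on IndoBERT,
# as the LabiraPJOK naming suggests; the training dataset is undocumented.
qa = pipeline(
    "question-answering",
    model="Labira/LabiraPJOK_6A_100_Full",
    framework="tf",  # the card lists TensorFlow 2.17.0
)

# Hypothetical Indonesian PJOK-style question and context, for illustration.
result = qa(
    question="Apa tujuan utama pemanasan sebelum berolahraga?",
    context=(
        "Pemanasan dilakukan sebelum berolahraga untuk menyiapkan otot "
        "dan sendi serta mengurangi risiko cedera."
    ),
)
print(result["answer"], result["score"])
```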
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
  - learning_rate: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 400, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay: None
  - clipnorm: None, global_clipnorm: None, clipvalue: None
  - use_ema: False (ema_momentum: 0.99)
  - jit_compile: False
- training_precision: float32
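For readability, the optimizer configuration above can be reconstructed in plain Keras: Adam with a linear polynomial-decay schedule from 2e-05 down to 0.0 over 400 steps. A minimal sketch of the documented settings:

```python
import tensorflow as tf

# Rebuild the documented schedule: linear decay (power=1.0) from 2e-05
# to 0.0 over 400 steps, with no cycling.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the listed moments and epsilon; weight decay, gradient
# clipping, EMA, and AMSGrad are all disabled in the card's config.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    amsgrad=False,
)
```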
### Training results
| Train Loss | Validation Loss | Epoch |
|---|---|---|
| 2.0258 | 2.2523 | 0 |
| 1.2808 | 1.3791 | 1 |
| 0.9120 | 0.9660 | 2 |
| 0.6753 | 0.6375 | 3 |
| 0.4471 | 0.5591 | 4 |
| 0.3726 | 0.4426 | 5 |
| 0.2978 | 0.2854 | 6 |
| 0.2381 | 0.1319 | 7 |
| 0.2195 | 0.0788 | 8 |
| 0.1522 | 0.0483 | 9 |
| 0.1850 | 0.0283 | 10 |
| 0.0757 | 0.0216 | 11 |
| 0.0648 | 0.0191 | 12 |
| 0.1023 | 0.0174 | 13 |
| 0.1222 | 0.0153 | 14 |
| 0.0889 | 0.0162 | 15 |
| 0.0899 | 0.0106 | 16 |
| 0.0866 | 0.0076 | 17 |
| 0.0861 | 0.0074 | 18 |
| 0.0277 | 0.0049 | 19 |
| 0.1076 | 0.0044 | 20 |
| 0.0762 | 0.0049 | 21 |
| 0.1464 | 0.0127 | 22 |
| 0.1535 | 0.0320 | 23 |
| 0.1119 | 0.0419 | 24 |
| 0.2640 | 0.0356 | 25 |
| 0.2114 | 0.0199 | 26 |
| 0.0937 | 0.0106 | 27 |
| 0.1126 | 0.0091 | 28 |
| 0.1023 | 0.0089 | 29 |
| 0.0583 | 0.0079 | 30 |
| 0.0477 | 0.0065 | 31 |
| 0.0445 | 0.0055 | 32 |
| 0.0692 | 0.0045 | 33 |
| 0.0400 | 0.0039 | 34 |
| 0.0486 | 0.0035 | 35 |
| 0.0696 | 0.0032 | 36 |
| 0.0550 | 0.0030 | 37 |
| 0.0637 | 0.0027 | 38 |
| 0.0714 | 0.0023 | 39 |
| 0.0348 | 0.0019 | 40 |
| 0.0628 | 0.0017 | 41 |
| 0.0392 | 0.0017 | 42 |
| 0.0407 | 0.0018 | 43 |
| 0.0275 | 0.0019 | 44 |
| 0.0603 | 0.0019 | 45 |
| 0.0404 | 0.0016 | 46 |
| 0.0369 | 0.0014 | 47 |
| 0.0456 | 0.0012 | 48 |
| 0.0314 | 0.0010 | 49 |
| 0.0648 | 0.0010 | 50 |
| 0.0711 | 0.0010 | 51 |
| 0.0514 | 0.0011 | 52 |
| 0.0399 | 0.0012 | 53 |
| 0.0398 | 0.0013 | 54 |
| 0.0228 | 0.0013 | 55 |
| 0.0265 | 0.0013 | 56 |
| 0.0131 | 0.0014 | 57 |
| 0.0461 | 0.0014 | 58 |
| 0.0542 | 0.0014 | 59 |
| 0.0421 | 0.0014 | 60 |
| 0.0393 | 0.0019 | 61 |
| 0.0493 | 0.0023 | 62 |
| 0.0663 | 0.0027 | 63 |
| 0.0312 | 0.0029 | 64 |
| 0.0459 | 0.0031 | 65 |
| 0.0782 | 0.0030 | 66 |
| 0.0560 | 0.0029 | 67 |
| 0.0396 | 0.0028 | 68 |
| 0.0421 | 0.0026 | 69 |
| 0.0495 | 0.0025 | 70 |
| 0.0452 | 0.0024 | 71 |
| 0.0767 | 0.0023 | 72 |
| 0.0501 | 0.0020 | 73 |
| 0.0825 | 0.0019 | 74 |
| 0.0627 | 0.0018 | 75 |
| 0.0559 | 0.0018 | 76 |
| 0.0564 | 0.0017 | 77 |
| 0.0564 | 0.0017 | 78 |
| 0.0413 | 0.0017 | 79 |
| 0.0367 | 0.0017 | 80 |
| 0.0457 | 0.0017 | 81 |
| 0.0337 | 0.0017 | 82 |
| 0.0433 | 0.0017 | 83 |
| 0.0526 | 0.0018 | 84 |
| 0.0425 | 0.0018 | 85 |
| 0.0498 | 0.0018 | 86 |
| 0.0302 | 0.0017 | 87 |
| 0.0590 | 0.0017 | 88 |
| 0.0564 | 0.0017 | 89 |
| 0.0404 | 0.0016 | 90 |
| 0.0604 | 0.0016 | 91 |
| 0.0533 | 0.0016 | 92 |
| 0.0599 | 0.0016 | 93 |
| 0.0384 | 0.0016 | 94 |
| 0.0528 | 0.0016 | 95 |
| 0.0488 | 0.0016 | 96 |
| 0.0665 | 0.0016 | 97 |
| 0.0279 | 0.0016 | 98 |
| 0.0722 | 0.0016 | 99 |
### Framework versions
- Transformers 4.45.2
- TensorFlow 2.17.0
- Datasets 2.20.0
- Tokenizers 0.20.1
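When reproducing these results, it may help to confirm that the local stack matches the versions above. A small optional sanity check (the warning text is illustrative):

```python
import tensorflow as tf
import transformers, datasets, tokenizers

# Versions this card was produced with; warn on any drift.
expected = {
    "transformers": "4.45.2",
    "tensorflow": "2.17.0",
    "datasets": "2.20.0",
    "tokenizers": "0.20.1",
}
actual = {
    "transformers": transformers.__version__,
    "tensorflow": tf.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    if actual[name] != want:
        print(f"warning: {name} is {actual[name]}, card used {want}")
```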
## Model tree for Labira/LabiraPJOK_6A_100_Full

Base model: indolem/indobert-base-uncased

Fine-tuning lineage (each checkpoint fine-tuned from the previous):
1. Labira/LabiraPJOK_1_100_Full
2. Labira/LabiraPJOK_2_100_Full
3. Labira/LabiraPJOK_3_100_Full
4. Labira/LabiraPJOK_4_100_Full
5. Labira/LabiraPJOK_5A_100_Full
6. Labira/LabiraPJOK_6A_100_Full (this model)