Labira/LabiraPJOK_5_100_Group

This model is a fine-tuned version of Labira/LabiraPJOK_4_100_Group on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.0245
  • Validation Loss: 0.0048
  • Epoch: 99
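
The checkpoint was trained and saved with TensorFlow (see the framework versions below). Below is a minimal loading sketch, assuming the model is used for extractive question answering over PJOK material; the task is not stated in this card, so the pipeline task and the example inputs are assumptions.

```python
# Hedged usage sketch: the card does not state the task, so the
# "question-answering" pipeline and the example inputs are assumptions.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Labira/LabiraPJOK_5_100_Group",
    framework="tf",  # the card lists TensorFlow 2.17.0 under framework versions
)

# Hypothetical example in Indonesian (PJOK = physical education).
result = qa(
    question="Apa yang dimaksud dengan kebugaran jasmani?",
    context="Kebugaran jasmani adalah kemampuan tubuh untuk melakukan "
            "aktivitas sehari-hari tanpa merasa lelah yang berarti.",
)
print(result)
```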

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (see the Keras sketch after this list)
      • learning_rate: PolynomialDecay(initial_learning_rate=2e-05, decay_steps=400, end_learning_rate=0.0, power=1.0, cycle=False)
      • beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
      • weight_decay: None, clipnorm: None, global_clipnorm: None, clipvalue: None
      • use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None
      • jit_compile: False, is_legacy_optimizer: False
  • training_precision: float32
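
The optimizer configuration above amounts to Adam with a linear (power 1.0) learning-rate decay from 2e-05 to 0.0 over 400 steps. A minimal sketch of how the same optimizer could be rebuilt in Keras, assuming TensorFlow 2.17.0 as listed under framework versions:

```python
# Sketch (not from the original card): reconstructing the listed optimizer
# and learning-rate schedule with the Keras API.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,   # power 1.0 => linear decay
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```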

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.0299 | 4.2579 | 0 |
| 3.9993 | 3.2863 | 1 |
| 3.5084 | 2.8872 | 2 |
| 3.1257 | 2.4304 | 3 |
| 2.6719 | 1.8348 | 4 |
| 2.0749 | 1.2511 | 5 |
| 1.8650 | 0.8881 | 6 |
| 1.2486 | 0.6654 | 7 |
| 1.1500 | 0.5810 | 8 |
| 0.8070 | 0.4871 | 9 |
| 0.6750 | 0.3621 | 10 |
| 0.5653 | 0.3742 | 11 |
| 0.4448 | 0.3147 | 12 |
| 0.3255 | 0.1682 | 13 |
| 0.2929 | 0.1691 | 14 |
| 0.2513 | 0.1638 | 15 |
| 0.2130 | 0.1555 | 16 |
| 0.2244 | 0.0776 | 17 |
| 0.1301 | 0.0292 | 18 |
| 0.1741 | 0.0287 | 19 |
| 0.1357 | 0.0739 | 20 |
| 0.1466 | 0.0305 | 21 |
| 0.1158 | 0.0236 | 22 |
| 0.2801 | 0.0214 | 23 |
| 0.0510 | 0.0284 | 24 |
| 0.1506 | 0.0392 | 25 |
| 0.1013 | 0.0402 | 26 |
| 0.0567 | 0.0399 | 27 |
| 0.0801 | 0.0260 | 28 |
| 0.1379 | 0.0202 | 29 |
| 0.0538 | 0.0241 | 30 |
| 0.0698 | 0.0216 | 31 |
| 0.0695 | 0.0186 | 32 |
| 0.0481 | 0.0122 | 33 |
| 0.0647 | 0.0108 | 34 |
| 0.0686 | 0.0087 | 35 |
| 0.0741 | 0.0088 | 36 |
| 0.0460 | 0.0095 | 37 |
| 0.0575 | 0.0128 | 38 |
| 0.1026 | 0.0207 | 39 |
| 0.0333 | 0.0315 | 40 |
| 0.0367 | 0.0299 | 41 |
| 0.0731 | 0.0215 | 42 |
| 0.0989 | 0.0162 | 43 |
| 0.0231 | 0.0125 | 44 |
| 0.0309 | 0.0095 | 45 |
| 0.0217 | 0.0087 | 46 |
| 0.0235 | 0.0086 | 47 |
| 0.0239 | 0.0102 | 48 |
| 0.0346 | 0.0117 | 49 |
| 0.0327 | 0.0118 | 50 |
| 0.0217 | 0.0101 | 51 |
| 0.0153 | 0.0080 | 52 |
| 0.0372 | 0.0066 | 53 |
| 0.0321 | 0.0069 | 54 |
| 0.0610 | 0.0076 | 55 |
| 0.0159 | 0.0117 | 56 |
| 0.0378 | 0.0137 | 57 |
| 0.0278 | 0.0118 | 58 |
| 0.0176 | 0.0101 | 59 |
| 0.0233 | 0.0083 | 60 |
| 0.0268 | 0.0081 | 61 |
| 0.0140 | 0.0083 | 62 |
| 0.0130 | 0.0077 | 63 |
| 0.0218 | 0.0072 | 64 |
| 0.0205 | 0.0071 | 65 |
| 0.0177 | 0.0074 | 66 |
| 0.0201 | 0.0078 | 67 |
| 0.0787 | 0.0108 | 68 |
| 0.0193 | 0.0120 | 69 |
| 0.0144 | 0.0115 | 70 |
| 0.0254 | 0.0098 | 71 |
| 0.0195 | 0.0082 | 72 |
| 0.0254 | 0.0069 | 73 |
| 0.0134 | 0.0057 | 74 |
| 0.0176 | 0.0050 | 75 |
| 0.0156 | 0.0048 | 76 |
| 0.0112 | 0.0048 | 77 |
| 0.0181 | 0.0047 | 78 |
| 0.0321 | 0.0046 | 79 |
| 0.0145 | 0.0048 | 80 |
| 0.0148 | 0.0053 | 81 |
| 0.0191 | 0.0059 | 82 |
| 0.0373 | 0.0066 | 83 |
| 0.0128 | 0.0072 | 84 |
| 0.0270 | 0.0073 | 85 |
| 0.0217 | 0.0071 | 86 |
| 0.0119 | 0.0067 | 87 |
| 0.0123 | 0.0063 | 88 |
| 0.0116 | 0.0059 | 89 |
| 0.0107 | 0.0056 | 90 |
| 0.0233 | 0.0054 | 91 |
| 0.0109 | 0.0053 | 92 |
| 0.0145 | 0.0051 | 93 |
| 0.0113 | 0.0050 | 94 |
| 0.0137 | 0.0049 | 95 |
| 0.0127 | 0.0049 | 96 |
| 0.0124 | 0.0048 | 97 |
| 0.0098 | 0.0048 | 98 |
| 0.0245 | 0.0048 | 99 |

Framework versions

  • Transformers 4.45.2
  • TensorFlow 2.17.0
  • Datasets 2.20.0
  • Tokenizers 0.20.1