Labira/LabiraPJOK_4_100_Full

This model is a fine-tuned version of Labira/LabiraPJOK_3_100_Full on an unknown dataset. It achieves the following results at the final training epoch:

  • Train Loss: 0.0114
  • Validation Loss: 0.0019
  • Epoch: 99
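
The card does not document the model's task or expected inputs, so the following is only a minimal loading sketch. It assumes a TensorFlow checkpoint with an extractive question-answering head that can be loaded through the Transformers auto classes; if the checkpoint actually uses a different head, swap in the matching auto class.

```python
# Minimal loading sketch: the task head is not documented in this card, so
# TFAutoModelForQuestionAnswering is an assumption, not a confirmed detail.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "Labira/LabiraPJOK_4_100_Full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

# Placeholder question/context; replace with a real question and context passage.
question = "..."
context = "..."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Take the most likely answer span from the start/end logits.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```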

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam
  • learning_rate: PolynomialDecay schedule (initial_learning_rate: 2e-05, decay_steps: 300, end_learning_rate: 0.0, power: 1.0, cycle: False)
  • beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
  • weight_decay: None, clipnorm: None, global_clipnorm: None, clipvalue: None
  • use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None
  • jit_compile: False, is_legacy_optimizer: False
  • training_precision: float32
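
For reference, the optimizer and learning-rate schedule listed above can be re-created in Keras roughly as follows; this is a sketch of the reported settings, not the original training script.

```python
# Sketch of the reported optimizer settings; not the original training script.
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 2e-05 to 0 over 300 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=300,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```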

Training results

Train Loss Validation Loss Epoch
4.7082 1.0888 0
2.1015 0.9187 1
1.6691 0.7789 2
1.6319 0.6115 3
1.0958 0.5354 4
1.0045 0.4263 5
0.5512 0.2684 6
0.4544 0.2098 7
0.3825 0.1910 8
0.1953 0.1831 9
0.1976 0.1801 10
0.8616 0.1791 11
1.5897 2.3429 12
2.1748 2.0662 13
2.4636 1.7558 14
1.6550 1.4900 15
1.5978 1.3461 16
1.4202 1.2605 17
1.7664 1.1584 18
1.2947 1.0410 19
1.0935 0.9207 20
1.2261 0.7981 21
0.7240 0.6023 22
0.6482 0.4366 23
0.7506 0.3078 24
0.5547 0.2124 25
0.4757 0.1207 26
0.2835 0.0586 27
0.2399 0.0283 28
0.1450 0.0172 29
0.1593 0.0128 30
0.0375 0.0101 31
0.1237 0.0085 32
0.0495 0.0076 33
0.0962 0.0072 34
0.0260 0.0067 35
0.1116 0.0063 36
0.0955 0.0061 37
0.0384 0.0060 38
0.0141 0.0059 39
0.0106 0.0057 40
0.0565 0.0055 41
0.0118 0.0053 42
0.0166 0.0051 43
0.0240 0.0048 44
0.0127 0.0046 45
0.0323 0.0043 46
0.0109 0.0040 47
0.0164 0.0036 48
0.0268 0.0033 49
0.0268 0.0031 50
0.0127 0.0030 51
0.0090 0.0029 52
0.1247 0.0028 53
0.0192 0.0028 54
0.0093 0.0028 55
0.1175 0.0031 56
0.0146 0.0035 57
0.0101 0.0037 58
0.0116 0.0039 59
0.0085 0.0040 60
0.0095 0.0039 61
0.0252 0.0039 62
0.0539 0.0039 63
0.0112 0.0038 64
0.0254 0.0037 65
0.0130 0.0035 66
0.0128 0.0034 67
0.0071 0.0033 68
0.0101 0.0032 69
0.0107 0.0031 70
0.0093 0.0030 71
0.3929 0.0029 72
0.0199 0.0028 73
0.0072 0.0028 74
0.0147 0.0027 75
0.0116 0.0026 76
0.0151 0.0025 77
0.0094 0.0024 78
0.0135 0.0023 79
0.0122 0.0023 80
0.0116 0.0023 81
0.0120 0.0022 82
0.0167 0.0022 83
0.0092 0.0022 84
0.0096 0.0021 85
0.0127 0.0021 86
0.0168 0.0021 87
0.0171 0.0021 88
0.0287 0.0020 89
0.0120 0.0020 90
0.0181 0.0020 91
0.0137 0.0019 92
0.0198 0.0019 93
0.0120 0.0019 94
0.0095 0.0019 95
0.0144 0.0019 96
0.0070 0.0019 97
0.0057 0.0019 98
0.0114 0.0019 99

Framework versions

  • Transformers 4.45.2
  • TensorFlow 2.17.0
  • Datasets 2.20.0
  • Tokenizers 0.20.1
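
To reproduce the training environment, one simple check is to compare the locally installed packages against the versions listed above (assuming they were installed under their usual pip package names):

```python
# Compare installed package versions against the ones listed in this card.
from importlib.metadata import PackageNotFoundError, version

expected = {
    "transformers": "4.45.2",
    "tensorflow": "2.17.0",
    "datasets": "2.20.0",
    "tokenizers": "0.20.1",
}

for package, wanted in expected.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == wanted else f"differs ({installed})"
    print(f"{package}: expected {wanted}, {status}")
```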