Z-Jafari committed (verified)
Commit 0cc0d91 · 1 Parent(s): 3a68bd8

Update README.md

Files changed (1): README.md (+12 -1)
README.md CHANGED

@@ -7,6 +7,15 @@ tags:
 model-index:
 - name: bert-fa-base-uncased-finetuned-DS_Q_N_C_QA
   results: []
+datasets:
+- Z-Jafari/PersianQuAD
+- Z-Jafari/DS_Q_N_C_QA
+language:
+- fa
+metrics:
+- f1
+- exact_match
+pipeline_tag: question-answering
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,6 +26,7 @@ should probably proofread and complete it, then remove this comment. -->
 This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.2991
+- {'exact_match': 77.5, 'f1': 81.09818261829699}
 
 ## Model description
 
@@ -43,6 +53,7 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 3
 - mixed_precision_training: Native AMP
+- fp16 = True
 
 ### Training results
 
@@ -58,4 +69,4 @@ The following hyperparameters were used during training:
 - Transformers 4.57.3
 - Pytorch 2.9.0+cu126
 - Datasets 4.0.0
-- Tokenizers 0.22.1
+- Tokenizers 0.22.1
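
The new `pipeline_tag: question-answering` metadata tells the Hub to treat this as an extractive QA model. A minimal usage sketch with the Transformers pipeline is shown below; the repository id is inferred from the commit author and model name and is an assumption, not something stated in this diff.

```python
from transformers import pipeline

# Assumed repository id (commit author + model name); not confirmed by the diff.
MODEL_ID = "Z-Jafari/bert-fa-base-uncased-finetuned-DS_Q_N_C_QA"

# Extractive question answering: the model returns a span copied from the context.
qa = pipeline("question-answering", model=MODEL_ID)

result = qa(
    question="پایتخت ایران کجاست؟",     # "What is the capital of Iran?"
    context="تهران پایتخت ایران است.",  # "Tehran is the capital of Iran."
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'تهران'}
```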
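The added result line reports `exact_match` and `f1` on a 0–100 scale, which is the shape produced by the SQuAD metric in the `evaluate` library. The card does not say which tool was actually used, so the sketch below only illustrates one common way to obtain scores in that format.

```python
import evaluate

# SQuAD-style QA metrics: exact_match and f1, both reported on a 0-100 scale.
squad_metric = evaluate.load("squad")

# Toy prediction/reference pair; a real run would iterate over the evaluation set.
predictions = [{"id": "q1", "prediction_text": "تهران"}]
references = [{"id": "q1", "answers": {"text": ["تهران"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```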
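The `fp16 = True` line added next to `mixed_precision_training: Native AMP` corresponds to the `fp16` flag of `TrainingArguments`. Below is a minimal sketch of that configuration; the hyperparameters not visible in this diff (learning rate, batch size) are labeled placeholders, not the values actually used.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-fa-base-uncased-finetuned-DS_Q_N_C_QA",
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,  # native PyTorch AMP mixed precision on CUDA
    learning_rate=2e-5,              # placeholder: not shown in this diff
    per_device_train_batch_size=16,  # placeholder: not shown in this diff
)
```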