Rexhaif committed (verified)
Commit f5294bf · 1 Parent(s): ae510b7

Model save

Files changed (2)
  1. README.md +163 -0
  2. generation_config.json +13 -0
README.md ADDED
@@ -0,0 +1,163 @@
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B
tags:
- axolotl
- generated_from_trainer
datasets:
- Rexhaif/wmt23-pairs-sft
model-index:
- name: Qwen3-4B-MTEval-SFT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.9.2`
```yaml
base_model: Qwen/Qwen3-4B
# Automatically upload checkpoint and final model to HF
hub_model_id: Rexhaif/Qwen3-4B-MTEval-SFT
hub_private_repo: false

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: tokenizer_default
datasets:
  - path: Rexhaif/wmt23-pairs-sft
    split: "train"
    type: chat_template
    field_messages: messages
    roles_to_train: ["assistant"]

shuffle_merged_datasets: true

skip_prepare_dataset: false
dataset_prepared_path: ./data/wmt23-pairs-sft
output_dir: /hnvme/workspace/v106be28-outputs/sft-4b

dataloader_prefetch_factor: 32
dataloader_num_workers: 2
dataloader_pin_memory: true

gc_steps: 1

sequence_len: 512
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false

wandb_project: llm-reasoning-mt-eval
wandb_entity:
wandb_name: qwen3-4b-sft

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
gradient_accumulation_steps: 2
micro_batch_size: 32 # should match num_generations / num_gpus

optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 5.0e-5
cosine_min_lr_ratio: 1.0e-7
max_grad_norm: 1.0
weight_decay: 0.1

bf16: true
tf32: true

flash_attention: true
flash_attn_fuse_qkv: true
flash_attn_fuse_mlp: true
auto_resume_from_checkpoints: true

n_epochs: 3
logging_steps: 10
warmup_ratio: 0.1
evals_per_epoch: 10
saves_per_epoch: 10
save_total_limit: 1
#max_steps: 5000
seed: 42
val_set_size: 0.01

gradient_checkpointing: false
gradient_checkpointing_kwargs:
  use_reentrant: false

```

</details><br>

# Qwen3-4B-MTEval-SFT

This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the Rexhaif/wmt23-pairs-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0511

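As a minimal usage sketch (not part of the auto-generated card), the checkpoint can be loaded with `transformers` and prompted through the tokenizer's default chat template, which is the format the axolotl config above trains on (`chat_template: tokenizer_default`). The prompt text below is illustrative only; the exact prompt format of the SFT data is not documented here.

```python
# Minimal usage sketch (not part of the original card). It uses the standard
# transformers chat API; the prompt below is purely illustrative and is NOT the
# prompt format used in Rexhaif/wmt23-pairs-sft.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rexhaif/Qwen3-4B-MTEval-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the saved (bf16) dtype
    device_map="auto",    # requires `accelerate`
)

# Hypothetical pairwise MT-evaluation style prompt (illustrative only).
messages = [
    {
        "role": "user",
        "content": "Source: ...\nTranslation A: ...\nTranslation B: ...\nWhich translation is better?",
    }
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Sampling defaults (temperature 0.6, top_p 0.95, top_k 20) come from the `generation_config.json` added in this commit.
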
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

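What the axolotl config above does document is the data plumbing: `Rexhaif/wmt23-pairs-sft` is read as a `chat_template`-type dataset whose conversations live in a `messages` field, and only assistant turns contribute to the loss (`roles_to_train: ["assistant"]`). A record in that layout generally looks like the sketch below; the field contents are hypothetical, not taken from the dataset.

```python
# Hypothetical record in the chat_template layout the axolotl config expects
# (field_messages: messages, roles_to_train: ["assistant"]). The actual text in
# Rexhaif/wmt23-pairs-sft is not documented in this card.
example = {
    "messages": [
        {
            "role": "user",
            "content": "Source: ...\nTranslation A: ...\nTranslation B: ...\nWhich is better?",
        },
        {"role": "assistant", "content": "..."},  # only this turn is trained on
    ]
}
```
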
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 101
- num_epochs: 1.0

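For reference, the aggregate numbers above follow directly from the per-device settings: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 32 × 2 × 4 = 256, and total_eval_batch_size = eval_batch_size × num_devices = 32 × 4 = 128. The 101 warmup steps are likewise consistent with `warmup_ratio: 0.1` applied to the roughly 1,017 optimizer steps in one epoch (see the step/epoch pairs in the results table below).
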
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0010 | 1 | 19.7622 |
| 0.251 | 0.1003 | 102 | 0.2284 |
| 0.2008 | 0.2007 | 204 | 0.1928 |
| 0.1571 | 0.3010 | 306 | 0.1638 |
| 0.1264 | 0.4014 | 408 | 0.1307 |
| 0.0964 | 0.5017 | 510 | 0.1090 |
| 0.0933 | 0.6021 | 612 | 0.0939 |
| 0.0628 | 0.7024 | 714 | 0.0762 |
| 0.0581 | 0.8028 | 816 | 0.0598 |
| 0.0519 | 0.9031 | 918 | 0.0511 |


### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.51.3"
}
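
These sampling defaults appear to be inherited from the base Qwen/Qwen3-4B generation settings and are applied automatically by `model.generate()`. A small sketch (not part of the original commit) of inspecting or overriding them via the `transformers` `GenerationConfig` API:

```python
# Sketch: the saved generation_config.json ships with the checkpoint and is
# picked up by model.generate() unless overridden per call.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("Rexhaif/Qwen3-4B-MTEval-SFT")
print(gen_cfg.temperature, gen_cfg.top_p, gen_cfg.top_k)  # -> 0.6 0.95 20

# Per-call override example, e.g. greedy decoding for deterministic evaluation:
# model.generate(**inputs, generation_config=GenerationConfig(do_sample=False, max_new_tokens=32))
```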