rshaikh22 committed
Commit 7fa3ef5 · verified · 1 parent: ece235e

Upload folder using huggingface_hub
README.md CHANGED
@@ -2,6 +2,43 @@
  library_name: transformers
  license: apache-2.0
  base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
- language:
- - en
- ---
+ tags:
+ - medical
+ - case-studies
+ - japanese
+ - qwen
+ - merged
+ ---
+
+ # rshaikh22/Qwen3_30B_Instruct_CQA_Medical
+
+ This is a merged model combining Qwen/Qwen3-30B-A3B-Instruct-2507 with a LoRA adapter fine-tuned on Japanese medical case studies.
+
+ ## Model Details
+
+ - **Base Model**: Qwen/Qwen3-30B-A3B-Instruct-2507
+ - **Training Data**: Japanese medical case studies (~93,563 examples)
+ - **Fine-tuning Method**: LoRA (Low-Rank Adaptation), merged into the base weights
+ - **Model Type**: Merged causal LM (no separate adapter needed)
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("rshaikh22/Qwen3_30B_Instruct_CQA_Medical", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("rshaikh22/Qwen3_30B_Instruct_CQA_Medical", trust_remote_code=True)
+
+ # Use the model
+ prompt = "Your prompt here"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=200)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ ## Training Details
+
+ - **Epochs**: 2
+ - **Learning Rate**: 5e-4
+ - **Batch Size**: 24
+ - **Training Examples**: ~93,563
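Taken together, the training figures above imply the following optimizer step count. This is a back-of-the-envelope sketch; it assumes plain mini-batches with no gradient accumulation, which the card does not state:

```python
# Hyperparameters as stated on the model card.
examples = 93_563
batch_size = 24
epochs = 2

# Ceiling division: a final partial batch still costs one optimizer step.
steps_per_epoch = -(-examples // batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 3899 7798
```

If gradient accumulation were used, the effective batch would be larger and the step count proportionally smaller.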
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "temperature": 0.7,
+   "top_k": 20,
+   "top_p": 0.8,
+   "transformers_version": "4.57.3"
+ }
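The sampling fields above compose in a fixed order inside `generate`: logits are divided by `temperature`, restricted to the `top_k` highest-scoring tokens, then to the smallest nucleus whose cumulative mass reaches `top_p`. A dependency-free sketch of that filtering chain (a simplification for illustration, not the transformers implementation):

```python
import math

def filter_logits(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Temperature scaling, then top-k, then top-p (nucleus) filtering.

    Returns a renormalized probability distribution over surviving token ids.
    """
    scaled = [x / temperature for x in logits]
    # Top-k: keep only the k highest-scoring token ids.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)
    kept = order[:top_k]
    # Softmax over the kept ids (max-subtracted for numerical stability).
    m = max(scaled[i] for i in kept)
    exps = {i: math.exp(scaled[i] - m) for i in kept}
    z = sum(exps.values())
    probs = {i: e / z for i, e in exps.items()}
    # Top-p: smallest high-probability prefix whose cumulative mass >= top_p.
    nucleus, cum = {}, 0.0
    for i in sorted(probs, key=probs.get, reverse=True):
        nucleus[i] = probs[i]
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the surviving ids.
    z2 = sum(nucleus.values())
    return {i: p / z2 for i, p in nucleus.items()}
```

With the defaults above, a sharply peaked distribution can collapse to a single candidate, while a flatter one keeps several; `do_sample: true` then draws from whatever survives.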
model-00001-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3e4a3cffc2ed374fac6a958b750306e8b003767286a301f54f4e852f5567749
+ size 4997183592
model-00002-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78cb02e7e24a212768b1fbf94f17f0c2330f068f4d62908c173fbe20016c9b97
+ size 4997740032
model-00003-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a893d26ec79c81ed404821145a85e71d591a587ef92c7e95f3ecdf80539ea97b
+ size 4997740632
model-00004-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:701f19d0271a7371f8ea125ca90501c539f84d62e26c3068948ee07792dbdf93
+ size 4997741608
model-00005-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22e3b45550093bc675021127535b793195f852126073c7b018c1006c2e768a24
+ size 4997741608
model-00006-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e76c7ba1d3275b6d8ad5e420ea78197ca54665689d6a110eeeffec56a4b6c800
+ size 4997741608
model-00007-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ab380172ff8407703bf2dce00a5247309a1339287f6ac7e5b16773c071f0272
+ size 4997741608
model-00008-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:239eea951f96dec66358f1463ce4e1fda7706d3df56a73a5a23d34afa16cff5d
+ size 4997741608
model-00009-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5b6dca9a23ad232348fc0c616a389e2a4838a3484c25267d59d3ca6c63125b0
+ size 4997741608
model-00010-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb93b88c03b1a94c9d7899ff16bd9a51e0650b759d161307e18c0efd4d84167a
+ size 4997741608
model-00011-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e527bf4150dd2fb0e0e0a9826e08703e63dad6b73f4427683cfa0afd6bd45fd1
+ size 4997741608
model-00012-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:182f531b36a29387f1c52114e7630704fc7cc4f9338225f4d6ef3a4a8154caf0
+ size 4997741608
model-00013-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9417a08b130c33c52385cc9613ce82336cd3b9c1e2c785154aef7f56c1901310
+ size 1094220136
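Each shard above is a Git LFS pointer file: three `key value` lines giving the spec version, a `sha256` object id, and the byte size of the real blob stored out of band. A small sketch of parsing such a pointer and checking a downloaded blob against it (helper names are illustrative, not part of any library):

```python
import hashlib

def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_blob(pointer, blob):
    """Check a downloaded blob against the pointer's size and sha256 oid."""
    fields = parse_lfs_pointer(pointer)
    algo, _, digest = fields["oid"].partition(":")
    assert algo == "sha256", "LFS v1 pointers use sha256 object ids"
    return (len(blob) == int(fields["size"])
            and hashlib.sha256(blob).hexdigest() == digest)
```

The same check is what lets a client detect truncated or corrupted shard downloads before loading the weights.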
model.safetensors.index.json ADDED
(diff too large to render)
tokenizer_config.json CHANGED
@@ -231,7 +231,6 @@
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
- "fast": true,
  "model_max_length": 1010000,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,