Commit 02a27c3 (verified) by leran1995 · Parent: 3b91f46

Update README.md

Files changed (1): README.md (+108 −3)

---
license: apache-2.0
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- llm
- nanbeige
---
<div align="center">

<img src="figures/nbg.png" width="220" alt="Nanbeige Logo">
</div>

Nanbeige4-3B-Thinking-2511 is an enhanced iteration of our previous Nanbeige4-3B-Thinking-2510.
Through advanced distillation techniques and reinforcement learning (RL) optimization, we have effectively scaled the model's reasoning capacity, resulting in superior performance across a broad range of benchmarks.
Notably, Nanbeige4-3B-Thinking-2511 achieves state-of-the-art (SOTA) results among models smaller than 32B parameters on Arena-Hard V2 and BFCL-V4.
This marks a major milestone in delivering powerful, efficient reasoning performance at a compact scale.

<div align="center">

<img src="figures/performance_2511.png">
</div>

## <span id="Inference">Quickstart</span>

For the chat scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)
messages = [
    {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
# 166101 is the model-specific end-of-turn token id used to stop generation
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
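
Because the model produces an extended reasoning trace before its final answer, generation can take a while. Below is a minimal sketch of streaming the output with transformers' `TextStreamer`, reusing `tokenizer`, `model`, and `input_ids` from the snippet above; the generation budget and sampling values here are illustrative assumptions, not official recommendations:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output_ids = model.generate(
    input_ids.to('cuda'),
    eos_token_id=166101,
    max_new_tokens=4096,   # assumed budget; reasoning traces can be long
    do_sample=True,
    temperature=0.6,       # assumed sampling settings
    top_p=0.95,
    streamer=streamer
)
```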

For the tool-use scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)
messages = [
    {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]
tools = [{
    'type': 'function',
    'function': {
        'name': 'SearchWeather',
        'description': 'Find out the current weather in a certain place on a certain day.',
        'parameters': {
            'type': 'dict',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'A city in China.'
                }
            },
            # 'required' belongs at the schema level, alongside 'properties'
            'required': ['location']
        }
    }
}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
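
When the model responds with a tool call, a full agent loop executes the tool, feeds the result back, and generates the final answer. The following is a minimal sketch of that second turn, assuming the chat template accepts tool results via the common `role: 'tool'` message convention; the exact message format may differ, so check the template in this repository's tokenizer config. The appended messages and the weather payload are hypothetical:

```python
# Hypothetical continuation: parse the tool call from `resp`, run your own
# SearchWeather implementation, then feed its result back to the model.
messages.append({'role': 'assistant', 'content': resp})
messages.append({
    'role': 'tool',
    'name': 'SearchWeather',
    'content': '{"weather": "sunny", "temperature": "12C"}'  # example tool output
})
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
print(tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True))
```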


# <span id="Limitations">Limitations</span>

While we place great emphasis on safety during training, striving to ensure the model's outputs align with ethical and legal requirements, its small size and probabilistic nature mean it may still generate unexpected outputs. These may include harmful content such as bias or discrimination; please do not propagate such content. We assume no responsibility for any consequences resulting from the dissemination of inappropriate information.
<br>

# <span id="Citation">Citation</span>
If you find our model useful or use it in your projects, please kindly cite this Hugging Face project.
<br>

# <span id="Contact">Contact</span>
If you have any questions, please raise an issue or contact us at nanbeige@126.com.
<br>