---
license: apache-2.0
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- llm
- nanbeige
---
<div align="center">
<img src="figures/nbg.png" width="220" alt="Nanbeige Logo">
</div>
Nanbeige4-3B-Thinking-2511 is an enhanced iteration of our previous Nanbeige4-3B-Thinking-2510.
Through advanced distillation techniques and reinforcement learning (RL) optimization, we have effectively scaled the model’s reasoning capacity, resulting in superior performance across a broad range of benchmarks.
Notably, Nanbeige4-3B-Thinking-2511 achieves state-of-the-art (SOTA) results on Arena-Hard V2 and BFCL-V4 among models smaller than 32B parameters.
This marks a major milestone in delivering powerful, efficient reasoning performance at a compact scale.
<div align="center">
<img src="figures/performance_2511.png">
</div>
## <span id="Inference">Quickstart</span>
For the chat scenario:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (trust_remote_code is required for the custom Nanbeige code)
tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

# Build the prompt with the model's chat template
messages = [
    {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)

# Tokenize, generate, and decode only the newly generated tokens
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
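The snippet above calls `model.generate` with only `eos_token_id` and otherwise relies on the model's default generation settings. This card does not state recommended decoding parameters, so the values below are illustrative assumptions only; reasoning models typically need a generous `max_new_tokens` budget to fit the thinking phase before the final answer.
```
# Hypothetical decoding settings -- not official recommendations for this model.
output_ids = model.generate(
    input_ids.to('cuda'),
    eos_token_id=166101,     # end-of-turn token id from the snippet above
    max_new_tokens=8192,     # leave room for the model's reasoning trace
    do_sample=True,
    temperature=0.6,         # assumed values; tune per task
    top_p=0.95,
)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
```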
For the tool use scenario:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (trust_remote_code is required for the custom Nanbeige code)
tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]

# Declare the tools the model may call; note that 'required' is a sibling of 'properties'
tools = [{
    'type': 'function',
    'function': {
        'name': 'SearchWeather',
        'description': 'Find out the current weather in a certain place on a certain day.',
        'parameters': {
            'type': 'dict',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'A city in China.'
                }
            },
            'required': ['location']
        }
    }
}]

# Render the prompt with the tool definitions included in the chat template
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False
)

# Tokenize, generate, and decode only the newly generated tokens
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
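After the model emits a tool call, a typical loop parses the call, runs the function, and feeds the result back for a final answer. The exact tool-call output format and the role names for tool results depend on this model's chat template, which the card does not document; the sketch below follows the common `transformers` convention (`tool_calls` on the assistant message, a `tool` role message), and `run_search_weather` is a hypothetical stand-in for your own implementation.
```
# Minimal sketch of a tool-result round trip.
# Assumes the chat template accepts 'tool_calls' / 'tool' messages (not confirmed by this card).
import json

def run_search_weather(location):
    # Hypothetical local implementation of the SearchWeather tool
    return {'location': location, 'weather': 'sunny', 'temperature_c': 12}

tool_args = {'location': 'Beijing'}  # in practice, parse these from `resp`
tool_result = run_search_weather(**tool_args)

messages += [
    {'role': 'assistant', 'tool_calls': [
        {'type': 'function', 'function': {'name': 'SearchWeather', 'arguments': tool_args}}
    ]},
    {'role': 'tool', 'name': 'SearchWeather', 'content': json.dumps(tool_result)}
]

prompt = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, tokenize=False)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
print(tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True))
```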
# <span id="Limitations">Limitations</span>
While we place great emphasis on the safety of the model during the training process, striving to ensure that its outputs align with ethical and legal requirements, it may not completely avoid generating unexpected outputs due to the model's size and probabilistic nature. These outputs may include harmful content such as bias or discrimination. Please don't propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information.
<br>
# <span id="Limitations">Citation</span>
If you find our model useful or want to use it in your projects, please kindly cite this Huggingface project.
<br>
# <span id="Limitations">Contact</span>
If you have any questions, please raise an issue or contact us at nanbeige@126.com.
<br>