---
license: apache-2.0
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- llm
- nanbeige
- heretic
- uncensored
- decensored
- abliterated
base_model:
- Nanbeige/Nanbeige4-3B-Base
---

# This is a decensored version of [C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-V2](https://huggingface.co/C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-V2), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0

## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 14.75 |
| **attn.o_proj.max_weight** | 1.05 |
| **attn.o_proj.max_weight_position** | 23.95 |
| **attn.o_proj.min_weight** | 0.73 |
| **attn.o_proj.min_weight_distance** | 18.44 |
| **mlp.down_proj.max_weight** | 1.46 |
| **mlp.down_proj.max_weight_position** | 24.68 |
| **mlp.down_proj.min_weight** | 1.11 |
| **mlp.down_proj.min_weight_distance** | 18.33 |
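
Abliteration works by subtracting a learned "refusal direction" from the model's residual-stream contributions, with a per-layer scale controlled by parameters like the `max_weight`/`min_weight` values above. As a conceptual sketch only (this is not Heretic's actual code, and the function name is illustrative), the core operation on a single vector looks like:

```python
# Conceptual sketch of directional ablation (not Heretic's actual code):
# remove `weight` times the component of a vector along a unit direction.

def ablate(v, direction, weight):
    """Subtract `weight` times the projection of v onto `direction`."""
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    proj = sum(x * d for x, d in zip(v, unit))
    return [x - weight * proj * d for x, d in zip(v, unit)]

# With weight=1.0 the component along the direction is removed entirely.
print(ablate([3.0, 4.0], [1.0, 0.0], 1.0))  # [0.0, 4.0]
```

Heretic applies this kind of transformation to the `attn.o_proj` and `mlp.down_proj` weight matrices, with the weight varying per layer according to the parameters tabulated above.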

## Performance

| Metric | This model | Original model ([C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-V2](https://huggingface.co/C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-V2)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.1122 | 0 *(by definition)* |
| **Refusals** | 7/100 | 94/100 |
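
The KL divergence row measures how far the decensored model's next-token distributions drift from the original model's on harmless prompts (lower means closer; the original model compared against itself is 0 by definition). As an illustration of the metric itself, not of Heretic's measurement code:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
# A distribution compared with itself gives 0, as in the table above.
print(kl_divergence(p, p))  # 0.0
# A slightly shifted distribution gives a small positive divergence.
print(round(kl_divergence(p, [0.6, 0.3, 0.1]), 4))  # 0.0268
```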

-----

<div align="center">

<img src="figures/nbg.png" width="220" alt="Nanbeige Logo">

</div>

# News

🎉 Nanbeige4-3B-Thinking-2511 debuts at #11 on [**WritingBench**](https://huggingface.co/spaces/WritingBench/WritingBench)! Despite having only 3B parameters, its creative-writing chops rival those of hundred-billion-parameter models.

🎉 Nanbeige4-3B-Thinking-2511 ranks #15 on [**EQBench3**](https://eqbench.com/), demonstrating human-preference alignment and emotional intelligence comparable to those of much larger models.

# Introduction

Nanbeige4-3B-Thinking-2511 is an enhanced iteration of our previous Nanbeige4-3B-Thinking-2510.
Through advanced knowledge distillation and targeted reinforcement learning (RL) optimization, we have significantly scaled the model's reasoning capabilities, delivering stronger and more reliable performance on diverse, challenging benchmarks.
This version establishes new state-of-the-art (SOTA) results among open models under 32B parameters on AIME, GPQA-Diamond, Arena-Hard-V2, and BFCL-V4, marking a major milestone in delivering powerful yet efficient reasoning at a compact scale.

* Technical Report: https://arxiv.org/pdf/2512.06266

<div align="center">

<img src="figures/nbg_performance.png">

</div>

## <span id="Inference">Quickstart</span>

For inference, we recommend the following sampling settings:

* Temperature: 0.6
* Top-p: 0.95
* Repeat penalty: 1.0
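
Top-p (nucleus) sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches p, then renormalizes before sampling; `generate()` applies this internally when you pass `top_p`. A minimal sketch of the filtering step (illustrative only, not the Transformers implementation):

```python
def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability
    reaches `top_p`, then renormalize. `probs` maps token -> probability."""
    kept, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    return {tok: p / total for tok, p in kept.items()}

probs = {'a': 0.55, 'b': 0.30, 'c': 0.10, 'd': 0.05}
# With top_p=0.8, only 'a' and 'b' survive (0.55 + 0.30 >= 0.8).
print(top_p_filter(probs, top_p=0.8))
```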

For the chat scenario:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
# 166101 is the model's end-of-turn token id
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```

For the tool use scenario:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4-3B-Thinking-2511',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]
tools = [{
    'type': 'function',
    'function': {
        'name': 'SearchWeather',
        'description': 'Find out the current weather in a certain place on a certain day.',
        'parameters': {
            'type': 'dict',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'A city in China.'
                }
            },
            'required': ['location']
        }
    }
}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
# 166101 is the model's end-of-turn token id
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
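
The model is expected to emit a tool call whose arguments conform to the schema-like `parameters` dict passed in `tools`. The exact text format of the call depends on the model's chat template, but once the arguments are parsed, a minimal check against the declared `required` fields and types might look like this (a hedged sketch with an illustrative helper name, not an official API):

```python
def validate_args(args, parameters):
    """Check proposed tool arguments against a schema-like dict
    (the structure used in `tools` above)."""
    type_map = {'string': str, 'integer': int, 'number': (int, float)}
    # Every required field must be present.
    for name in parameters.get('required', []):
        if name not in args:
            return False
    # Every supplied field must be declared and have the declared type.
    for name, value in args.items():
        spec = parameters['properties'].get(name)
        if spec is None:
            return False
        expected = type_map.get(spec['type'])
        if expected and not isinstance(value, expected):
            return False
    return True

parameters = {
    'type': 'dict',
    'properties': {'location': {'type': 'string', 'description': 'A city in China.'}},
    'required': ['location'],
}
print(validate_args({'location': 'Beijing'}, parameters))  # True
print(validate_args({}, parameters))                       # False
```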


# <span id="Limitations">Limitations</span>

While we place great emphasis on safety during training, striving to ensure that the model's outputs align with ethical and legal requirements, its small size and probabilistic nature mean it cannot completely avoid generating unexpected outputs. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information.
<br>

# <span id="Citation">Citation</span>

If you find our model useful or want to use it in your projects, please cite it as follows:

```
@misc{yang2025nanbeige43btechnicalreportexploring,
      title={Nanbeige4-3B Technical Report: Exploring the Frontier of Small Language Models},
      author={Chen Yang and Guangyue Peng and Jiaying Zhu and Ran Le and Ruixiang Feng and Tao Zhang and Wei Ruan and Xiaoqi Liu and Xiaoxue Cheng and Xiyun Xu and Yang Song and Yanzipeng Gao and Yiming Jia and Yun Xing and Yuntao Wen and Zekai Wang and Zhenwei An and Zhicong Sun and Zongchao Chen},
      year={2025},
      eprint={2512.06266},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.06266},
}
```
<br>

# <span id="Contact">Contact</span>

If you have any questions, please raise an issue or contact us at nanbeige@126.com.
<br>