---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
  - en
tags:
  - nvidia
  - Nemotron-Cascade
  - reasoning
  - general-purpose
  - SFT
  - RL
  - pytorch
base_model: nvidia/Nemotron-Cascade-8B
---


# Nemotron-Cascade-8B

<p align="center">

[![Technical Report](https://img.shields.io/badge/2512.13607-Technical_Report-blue)](https://arxiv.org/abs/2512.13607)
[![SFT Dataset](https://img.shields.io/badge/🤗-SFT_Dataset-blue)](https://huggingface.co/collections/nvidia/nemotron-cascade)
[![RL Dataset](https://img.shields.io/badge/🤗-RL_Dataset-blue)](https://huggingface.co/collections/nvidia/nemotron-cascade)
[![Models](https://img.shields.io/badge/🤗-Models-blue)](https://huggingface.co/collections/nvidia/nemotron-cascade)
</p>

<img src="fig/nemotron-cascade-8b-results.png" alt="main_fig" style="width: 1000px; max-width: 100%;" />


## Introduction

We're excited to introduce [Nemotron-Cascade-8B](https://huggingface.co/nvidia/Nemotron-Cascade-8B), a powerful general-purpose model trained through sequential and domain-wise reinforcement learning. Nemotron-Cascade-8B is post-trained from the [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) model. It operates in both ***thinking*** and ***instruct*** (non-reasoning) modes and delivers best-in-class performance across a wide range of benchmarks.


## Training Pipeline
<img src="fig/pipeline.png" alt="train_pipeline_fig" style="width: 1000px; max-width: 100%;" />

The training pipeline for Nemotron-Cascade begins with a multi-stage SFT phase to equip the model with foundational skills. Subsequently, Cascade RL is applied across multiple domains to further enhance the model’s performance in these areas.

Notably, when RLHF for alignment is used as a preliminary step, it boosts the model's complex reasoning ability well beyond mere preference optimization. Moreover, subsequent domain-wise RLVR stages rarely degrade the benchmark performance attained in earlier domains and may even improve it (see the figure below).

<figure style="margin: 0; padding: 0;">
  <img src="fig/lcb_through_cascade_rl.png" alt="lcb_through_cascade_rl_fig" style="width: 100%; max-width: 100%; margin: 0; padding: 0;">
  <figcaption>The LiveCodeBench v6 (08/24–05/25) performance of the Nemotron-Cascade-14B-Thinking model throughout the Cascade RL process.</figcaption>
</figure>


## Results

- We evaluate our model against competitive reasoning models on a diverse set of benchmarks, covering general-knowledge reasoning, alignment and instruction following, mathematical reasoning, competitive programming, software engineering, and tool-use proficiency.
- For Nemotron-Cascade models, we use a maximum generation length of 64K tokens and set the temperature to 0.6 and top-p to 0.95 for reasoning tasks (a minimal sampling sketch is shown after the table below).
- Our Nemotron-Cascade models achieve best-in-class performance across almost all benchmarks. Remarkably, Nemotron-Cascade-8B and Nemotron-Cascade-8B-Thinking achieve comparable LiveCodeBench (LCB) and LCB Pro scores to DeepSeek-R1-0528 (671B).

| **Benchmark<br>Metric: Pass@1** | **Qwen3-8B** | **Nemotron-Nano-9B-v2** | **DeepSeek-R1-0528 671B** | **Gemini-2.5-Flash-Thinking** | **Nemotron-<br>Cascade-8B-<br>Thinking** | **Nemotron-<br>Cascade-8B** |
| :---- | :---: | :---: | :---: | :---: | :---: | :---: |
| ***Knowledge Reasoning*** |
| MMLU | 83.0 | 82.6 | 89.9 | - | 84.0 | 83.7 |
| MMLU Pro | 75.1 | 73.3 | 85.0 | 81.9 | 75.5 | 75.7 |
| GPQA-Diamond | 62.0 | 64.0 | 81.0 | 82.8 | 66.7 | 66.5 |
| ***Alignment*** |
| ArenaHard | 85.8 | 74.6 | 95.1 | 95.7 | 85.8 | 87.9 |
| IFEval (Strict Prompt) | 85.0 | 86.1 | 84.1 | 89.8 | 83.7 | 90.2 |
| IFBench | 34.4 | 37.4 | 38.0 | 36.1 | 41.4 | 40.8 |
| ***Math*** |
| AIME 2024 | 76.0 | 81.9 | 91.4 | 82.3 | 88.8 | 89.5 |
| AIME 2025 | 67.3 | 72.0 | 87.5 | 72.0 | 81.4 | 80.1 |
| ***Code*** |
| LCB v5 (08/24-02/25) | 61.2 | 68.2 | 74.8 | 63.4 | 74.5 | 74.3 |
| LCB v6 (08/24-05/25) | 58.3 | 65.3 | 73.3 | 61.9 | 71.4 | 71.1 |
| LCB Pro 25Q2 (Easy) | 46.1 | 59.3 | 63.9 | 47.4 | 64.8 | 65.7 |
| LCB Pro 25Q2 (Med) | 2.2 | 4.8 | 7.0 | 1.8 | 6.1 | 6.4 |
| SWE Verified (Agentless) | 20.5 | - | 57.6 | 48.9 | 38.5 | 37.2 |
| ***Tool Calling*** |
| BFCL V3 | 68.1 | 66.9 | 67.9 | 68.6 | 67.0 | 64.4 |
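
For reference, the snippet below expresses the sampling settings described above as a Hugging Face `transformers` `GenerationConfig`. This is an illustrative sketch only; the official evaluation setup is documented in the Evaluation Toolkit section below.

```python
from transformers import GenerationConfig

# Illustrative reasoning-task settings from the Results section:
# 64K-token generation budget, temperature 0.6, top-p 0.95 (nucleus sampling)
reasoning_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    max_new_tokens=65536,
)
```

Such a config can be passed to `model.generate(..., generation_config=reasoning_config)`; a fuller end-to-end example follows the chat-template snippet below.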


## Evaluation Toolkit

To reproduce our results, please refer to the evaluation code, scripts, and cached prediction files at https://huggingface.co/nvidia/Nemotron-Cascade-8B/blob/main/evaluation/README.md

## Chat Template

Nemotron-Cascade-8B follows the Qwen3-style ChatML template and offers both ***thinking*** and ***instruct*** (non-reasoning) modes. To switch between these two modes, simply append the `" /think"` (for ***thinking***) or the `" /no_think"` (for ***instruct***) tag to the end of the user input. Note that a leading space is included in these tags to ensure correct tokenization. 

To reduce the context length in multi-turn conversations, when a previous user turn used thinking mode, only the final summary of the model's response is added to the conversation history, and that user turn's tag is switched from `" /think"` to `" /no_think"`.

A brief example is shown below:

```python
from transformers import AutoTokenizer

model_name = 'nvidia/Nemotron-Cascade-8B'
tokenizer = AutoTokenizer.from_pretrained(model_name)

'''
single-turn example
'''
messages = [
    {"role": "user", "content": "calculate 1+1?"}
]

# thinking mode
prompt_thinking = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)
# prompt_thinking = '<|im_start|>system\nYou are a helpful and harmless assistant.<|im_end|>\n<|im_start|>user\ncalculate 1+1? /think<|im_end|>\n<|im_start|>assistant\n'

# instruct mode
prompt_instruct = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
# prompt_instruct = '<|im_start|>system\nYou are a helpful and harmless assistant.<|im_end|>\n<|im_start|>user\ncalculate 1+1? /no_think<|im_end|>\n<|im_start|>assistant\n'

'''
multi-turn example
'''
messages = [
    {"role": "user", "content": "calculate 1+1?"},
    {"role": "assistant", "content": "<think>THINKING_CONTENT</think>\nTo calculate \\(1 + 1\\):\n\n1. **Identify the operation**: This is a basic addition problem involving two integers.\n2. **Perform the addition**:  \n   \\(1 + 1 = 2\\).\n\n**Result**: \\(\\boxed{2}\\)",},
    {"role": "user", "content": "what about 2+2"}
]

# thinking mode
prompt_thinking = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)
# prompt_thinking = '<|im_start|>system\nYou are a helpful and harmless assistant.<|im_end|>\n<|im_start|>user\ncalculate 1+1? /no_think<|im_end|>\n<|im_start|>assistant\nTo calculate \\(1 + 1\\):\n\n1. **Identify the operation**: This is a basic addition problem involving two integers.\n2. **Perform the addition**:  \n   \\(1 + 1 = 2\\).\n\n**Result**: \\(\\boxed{2}\\)<|im_end|>\n<|im_start|>user\nwhat about 2+2 /think<|im_end|>\n<|im_start|>assistant\n'

# instruct mode
prompt_instruct = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
# prompt_instruct = '<|im_start|>system\nYou are a helpful and harmless assistant.<|im_end|>\n<|im_start|>user\ncalculate 1+1? /no_think<|im_end|>\n<|im_start|>assistant\nTo calculate \\(1 + 1\\):\n\n1. **Identify the operation**: This is a basic addition problem involving two integers.\n2. **Perform the addition**:  \n   \\(1 + 1 = 2\\).\n\n**Result**: \\(\\boxed{2}\\)<|im_end|>\n<|im_start|>user\nwhat about 2+2 /no_think<|im_end|>\n<|im_start|>assistant\n'
```
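
Building on the prompts above, here is a minimal end-to-end sketch that loads the model, generates a thinking-mode response with the sampling settings listed in the Results section, and decodes only the newly generated tokens. The dtype and device placement below are illustrative assumptions, not prescribed settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'nvidia/Nemotron-Cascade-8B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
# dtype/device choices are illustrative, not an official recommendation
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "calculate 1+1?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    max_new_tokens=65536,  # 64K-token budget used for reasoning tasks
)

# Decode only the tokens generated after the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```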


## Release Date
Dec 08, 2025


## License
Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).


## Citation
```
@article{Nemotron_Cascade_Scaling_Cascaded_Reinforcement_Learning,
  title={Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models},
  author={Wang, Boxin and Lee, Chankyu and Lee, Nayeon and Lin, Sheng-Chieh and Dai, Wenliang and Chen, Yang and Chen, Yangyi and Yang, Zhuolin and Liu, Zihan and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2512.13607},
  year={2025}
}
```