sanjeevnv committed
Commit fd23d10 · 1 Parent(s): 2cf4ed4

Update Model card

README.md ADDED
@@ -0,0 +1,518 @@
1
+ ---
2
+ library_name: transformers
3
+ license: other
4
+ license_name: nvidia-open-model-license
5
+ license_link: >-
6
+ https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
7
+ pipeline_tag: text-generation
8
+ language:
9
+ - en
10
+ - es
11
+ - fr
12
+ - de
13
+ - ja
14
+ - it
15
+ - pt
16
+ - zh
17
+ - ar
18
+ - da
19
+ - ko
20
+ - nl
21
+ - pl
22
+ - ru
23
+ - sv
24
+ - th
25
+ tags:
26
+ - nvidia
27
+ - pytorch
28
+ datasets:
29
+ - nvidia/Nemotron-Pretraining-Code-v1
30
+ - nvidia/Nemotron-CC-v2
31
+ - nvidia/Nemotron-Pretraining-SFT-v1
32
+ - nvidia/Nemotron-CC-Math-v1
33
+ - nvidia/Nemotron-Pretraining-Code-v2
34
+ - nvidia/Nemotron-Pretraining-Specialized-v1
35
+ - nvidia/Nemotron-CC-v2.1
36
+ - nvidia/Nemotron-CC-Code-v1
37
+ - nvidia/Nemotron-Pretraining-Dataset-sample
38
+ track_downloads: true
39
+ ---
40
+
41
+ # NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16
42
+ <div align="center" style="line-height: 1;">
43
+ <a href="https://build.nvidia.com/nvidia/nemotron-3-nano-30b-a3b" target="_blank" style="margin: 2px;">
44
+ <img alt="Chat" src="https://img.shields.io/badge/🤖Chat-Nemotron_3_Nano-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
45
+ </a>
46
+ <a href="https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Nano-Technical-Report.pdf" target="_blank" style="margin: 2px;">
47
+ <img alt="Chat" src="https://img.shields.io/badge/📝Paper-Read Now!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
48
+ </a>
49
+ <a href="https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets" target="_blank" style="margin: 2px;">
50
+ <img alt="Pre-Training Datasets" src="https://img.shields.io/badge/🗄️_Pre--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
51
+ </a>
52
+ </div>
53
+ <div align="center" style="line-height: 1;">
54
+ <a href="https://developer.nvidia.com/nemotron" target="_blank" style="margin: 2px;">
55
+ <img alt="Homepage" src="https://img.shields.io/badge/🏠Nemotron Developer Page-Learn More Here!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
56
+ </a>
57
+ <a href="https://discord.gg/9xpKQtVvrk" target="_blank" style="margin: 2px;">
58
+ <img alt="Homepage" src="https://img.shields.io/badge/Discord-NVIDIA%20AI%20Developer-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
59
+ </a>
60
+ </div>
61
+
62
+ <div align="center" style="line-height: 1;">
63
+ <a href="https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/" style="margin: 2px;">
64
+ <img alt="License" src="https://img.shields.io/badge/License-NVIDIA Open Model License-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
65
+ </a>
66
+ </div>
67
+
68
+ ![](./accuracy_chart.svg)
69
+
70
+ ## Model Overview
71
+
72
+ **Model Developer:** NVIDIA Corporation
73
+
74
+ **Model Dates:**
75
+
76
+ September 2025 \- December 2025
77
+
78
+ **Data Freshness:**
79
+
80
+ The pre-training data has a cutoff date of June 25, 2025.
81
+
82
+ ## Description
83
+
84
+ NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is a base large language model (LLM) trained from scratch by NVIDIA using a next-token prediction loss. It provides a good starting point for instruction fine-tuning.
85
+
86
+ This model is ready for commercial use.
87
+
88
+ ### What is Nemotron?
89
+
90
+ NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.
91
+
92
+ ## Feature Voting
93
+
94
+ We want to hear from you! Share your ideas, vote on what matters, and help [shape the future of Nemotron](https://nemotron.ideas.nvidia.com/).
95
+
96
+ ## License/Terms of Use
97
+
98
+ GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
99
+
100
+ ## Base Benchmark Evaluations
101
+
102
+ We evaluated our model on the following benchmarks:
103
+
104
+ | Task | Qwen3 30B-A3B-Base | NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 |
105
+ | :---- | :---- | :---- |
106
+ | **General Knowledge** | | |
107
+ | MMLU (5-shot, acc) | **81.07** | 78.56 |
108
+ | MMLU-Pro (5-shot, CoT EM) | 61.71 | **65.05** |
109
+ | AGIEval-En (3/5-shot, CoT acc) | 63.12 | **68.32** |
110
+ | **Code** | | |
111
+ | HumanEval (0-shot) | 70.73 | **78.05** |
112
+ | MBPP-Sanitized (3-shot) | 73.15 | **75.49** |
113
+ | **Math** | | |
114
+ | GSM8K (8-shot, acc) | 89.01 | **92.34** |
115
+ | MATH (4-shot, acc) | 61.14 | **82.88** |
116
+ | MATH-500 (4-shot, avg@32) | 55.08 | **78.63** |
117
+ | **Commonsense Understanding** | | |
118
+ | ARC-Challenge (25-shot, acc_norm) | **94.45** | 91.89 |
119
+ | HellaSwag (10-shot, acc_norm) | 83.14 | **85.56** |
120
+ | OpenBookQA (0-shot, acc_norm) | 44.80 | **46.20** |
121
+ | PIQA (0-shot, acc_norm) | 81.01 | **84.33** |
122
+ | WinoGrande (5-shot, acc) | 78.22 | **79.64** |
123
+ | **Reading Comprehension** | | |
124
+ | RACE (0-shot, acc) | **90.05** | 88.04 |
125
+ | **Multilingual** | | |
126
+ | MMLU Global Lite (5-shot, avg acc) | **76.84** | 74.47 |
127
+ | MGSM (8-shot, avg acc) | 82.53 | **83.00** |
128
+ | **Long Context** | | |
129
+ | RULER (64K, 0-shot, acc) | 63.55 | **87.50** |
130
+ | RULER (128K, 0-shot, acc) | 60.69 | **82.92** |
131
+ | RULER (256K, 0-shot, acc) | Not Supported | **75.44** |
132
+ | RULER (512K, 0-shot, acc) | Not Supported | **70.56** |
133
+
134
+ All evaluation results were collected via the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator) and the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). The open-source LM Evaluation Harness container, packaged via NVIDIA's NeMo Evaluator SDK and used for these evaluations, can be found [here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/eval-factory/containers/lm-evaluation-harness). A reproducibility tutorial, along with all configs, can be found in the [NeMo Evaluator SDK examples](https://github.com/NVIDIA-NeMo/Evaluator/tree/main/packages/nemo-evaluator-launcher/examples/nemotron/nano-v3-reproducibility.md).
135
+
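+ For a quick local sanity check outside the full launcher setup, the LM Evaluation Harness Python API can also be called directly. The sketch below is illustrative only: the task name, few-shot setting, and model arguments are assumptions and do not correspond to the exact `adlr_*` task configurations used for the table above.
+
+ ```python
+ # Illustrative sketch using the LM Evaluation Harness (lm_eval >= 0.4) Python API.
+ # Requires sufficient GPU memory to load the 30B checkpoint in bfloat16.
+ import lm_eval
+
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     model_args=(
+         "pretrained=nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16,"
+         "dtype=bfloat16,trust_remote_code=True"
+     ),
+     tasks=["gsm8k"],   # example task; not the exact config behind the reported numbers
+     num_fewshot=8,
+     batch_size=1,
+ )
+
+ # Per-task metrics (e.g., exact match for GSM8K) are under the "results" key.
+ print(results["results"])
+ ```
+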
136
+ ### Deployment Geography: Global
137
+
138
+ ### Use Case
139
+
140
+ This model is intended for developers and researchers building instruction-following LLMs.
141
+
142
+ Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.
143
+
144
+ ### Release Date:
145
+
146
+ December 15, 2025 via [Hugging Face](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16)
147
+
148
+ ## Reference(s)
149
+
150
+ * [NVIDIA Nemotron 3 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v3)
151
+ * [NVIDIA Nemotron 2 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v2)
152
+ * [NVIDIA Nemotron 3 White Paper](https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-White-Paper.pdf)
153
+
154
+ ## Model Architecture
155
+
156
+ - **Architecture Type:** Mamba2-Transformer Hybrid Mixture of Experts (MoE)
157
+ - **Network Architecture:** Nemotron Hybrid MoE
158
+
159
+ - **Number of model parameters:** 30B
160
+
161
+ ## Training Methodology
162
+
163
+ Stage 1: Pre-Training
164
+ * The [NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16) model was pre-trained on crawled and synthetic code, math, science, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the pre-training corpus are released in the [Nemotron-Pre-Training-Datasets](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) collection.
165
+ * Software used for pre-training: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
166
+
167
+ The end-to-end training recipe is available in the [NVIDIA Nemotron Developer Repository](https://github.com/NVIDIA-NeMo/Nemotron). Evaluation results can be replicated using the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator). More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Nano-Technical-Report.pdf).
168
+
169
+ ## Input
170
+
171
+ - **Input Type(s):** Text
172
+
173
+ - **Input Format(s):** String
174
+
175
+ - **Input Parameters:** One-Dimensional (1D): Sequences
176
+
177
+ - **Maximum input size:** 128K tokens
178
+
179
+ - **Other Properties Related to Input:**
180
+ Supported languages include: English, Spanish, French, German, Japanese, Italian, Chinese, Arabic, Hebrew, Hindi, Korean, Czech, Danish, Dutch, Finnish, Polish, Portuguese, Thai, Swedish, and Russian.
181
+
182
+
183
+ ## Output
184
+
185
+ - **Output Type(s):** Text
186
+
187
+ - **Output Format:** String
188
+
189
+ - **Output Parameters:** One-Dimensional (1D): Sequences
190
+
191
+ - **Maximum output size:** 128K tokens
192
+
193
+
194
+ Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
195
+
196
+ ## Software Integration
197
+
198
+ - Runtime Engine(s): NeMo 25.11.01
199
+ - Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
200
+ - Operating System(s): Linux
201
+
202
+
203
+ The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
204
+
205
+ ### Use it with Transformers
206
+
207
+ The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.57.3). We recommend using NeMo Framework 25.11.01 to ensure all required libraries are available.
208
+
209
+ Please note that the model supports up to a 1M context size, although the default context size in the Hugging Face configuration is 256k due to higher VRAM requirements.
210
+
211
+ ```python
212
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
213
+
214
+ model_name = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
215
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
216
+ model = AutoModelForCausalLM.from_pretrained(
217
+ model_name,
218
+ torch_dtype=torch.bfloat16,
219
+ trust_remote_code=True,
220
+ device_map="auto"
221
+ )
222
+
223
+ prompt = "The capital of France is"
224
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
225
+
226
+ outputs = model.generate(
227
+ **inputs,
228
+ max_new_tokens=32,
229
+ do_sample=False,
230
+ eos_token_id=tokenizer.eos_token_id
231
+ )
232
+
233
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
234
+ ```
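+
+ Because this is a base checkpoint trained with a next-token prediction objective, it can also be used to score text under that objective, which is a common starting point for instruction fine-tuning or perplexity-style analysis. The snippet below is a minimal sketch that reuses the `model` and `tokenizer` loaded above; the example sentence is arbitrary.
+
+ ```python
+ # Minimal sketch: compute the next-token prediction (cross-entropy) loss for a piece
+ # of text, reusing the `model` and `tokenizer` loaded in the snippet above.
+ text = "Paris is the capital of France."
+ batch = tokenizer(text, return_tensors="pt").to(model.device)
+
+ # Passing labels=input_ids makes Transformers compute the shifted next-token loss internally.
+ with torch.no_grad():
+     scored = model(**batch, labels=batch["input_ids"])
+
+ print(f"Next-token prediction loss: {scored.loss.item():.3f}")
+ ```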
235
+
236
+ ## Model Version(s)
237
+
238
+ - v1.0
239
+
240
+ # Training, Testing, and Evaluation Datasets
241
+
242
+ **Data Modality:** Text
243
+ **Total size:** 10,648,823,153,919 tokens
244
+ **Total number of datasets:** 141
245
+ **Dataset partition:** *Training \[100%\], testing \[0%\], validation \[0%\]*
246
+ **Time period for training data collection:** 2013 to May 1, 2025
247
+ **Time period for testing data collection:** 2013 to May 1, 2025
248
+ **Time period for validation data collection:** 2013 to May 1, 2025
249
+ **Data Collection Method by dataset:** Hybrid: Automated, Human, Synthetic
250
+
251
+ NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 is pre-trained on a large corpus of high-quality curated and synthetically generated data. It is trained on English as well as 19 other natural languages and 43 programming languages. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, and finance. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.
252
+
253
+ Alongside the model, we release our final [pre-training](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) data, as outlined in this section. For ease of analysis, an ungated sample set is available. The remaining code, math, and multilingual data require gating and approval, and the datasets are permissively licensed for model training purposes.
254
+
255
+ More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Nano-Technical-Report.pdf).
256
+
257
+ ## Public dataset
258
+
259
+ | Dataset | Collection Period |
260
+ | :---- | :---- |
261
+ | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 |
262
+ | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 |
263
+ | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 |
264
+ | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 |
265
+ | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 |
266
+ | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 |
267
+ | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 |
268
+ | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 |
269
+ | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 |
270
+ | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 |
271
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 |
272
+ | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 |
273
+ | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download |
274
+ | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download |
275
+ | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download |
276
+ | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download |
277
+ | [PMC Open Access Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download |
278
+ | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download |
279
+ | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download |
280
+ | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download |
281
+ | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download |
282
+ | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download |
283
+ | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download |
284
+ | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download |
285
+ | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download |
286
+ | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download |
287
+ | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download |
288
+ | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download |
289
+ | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | Legacy Download |
290
+ | [FLAN](https://github.com/google-research/FLAN) | Legacy Download |
291
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download |
292
+ | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download |
293
+ | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download |
294
+ | [FinQA](https://finqasite.github.io/) | Legacy Download |
295
+ | [Riddles](https://github.com/crawsome/riddles) | Legacy Download |
296
+ | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download |
297
+ | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download |
298
+ | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download |
299
+ | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download |
300
+ | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download |
301
+ | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download |
302
+ | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download |
303
+ | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download |
304
+ | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download |
305
+ | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download |
306
+ | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download |
307
+ | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download |
309
+ | [MultiverseMathHard](https://huggingface.co/datasets/Nexusflow/MultiverseMathHard) | 10/2/2025 |
310
+ | [News Commentary](https://opus.nlpl.eu/News-Commentary.php) | 10/2/2025 |
311
+ | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | 10/2/2025 |
312
+ | [finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs) | 10/2/2025 |
313
+ | [HotpotQA](https://huggingface.co/hotpot_qa/datasets) | 10/2/2025 |
314
+ | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | 10/2/2025 |
315
+ | [NLTK Words Lists](https://www.nltk.org/nltk_data/) | 10/2/2025 |
316
+
317
+ ## Private Non-publicly Accessible Datasets of Third Parties
318
+
319
+ | Dataset |
320
+ | :---- |
321
+ | Global Regulation |
322
+ | TAUS Translation Memory |
323
+ | Scale HLE |
324
+ | HackerRank Coding |
325
+
326
+ ## Private Non-publicly Accessible Datasets by NVIDIA
327
+
328
+ | Dataset |
329
+ | :---- |
330
+ | Machine Translation of STEM data using [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) |
331
+
332
+ ## Crawled and Scraped from Online Sources by NVIDIA
333
+
334
+ The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively removing filters that did not work well for some languages. Deduplication was done in the same way as for Nemotron-CC.
335
+
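+ The filters actually applied are those described in the Nemotron-CC paper; the snippet below is purely illustrative of this style of rule-based heuristic filtering, and the specific rules and thresholds are hypothetical.
+
+ ```python
+ # Purely illustrative sketch of rule-based (heuristic) document filtering.
+ # The rules and thresholds are hypothetical, not the Nemotron-CC filters.
+ def keep_document(text: str, min_chars: int = 200, max_symbol_ratio: float = 0.10) -> bool:
+     """Keep documents that look like natural prose under simple, hypothetical rules."""
+     if len(text) < min_chars:                          # drop very short documents
+         return False
+     symbols = sum(ch in "{}[]<>|\\#*" for ch in text)
+     if symbols / len(text) > max_symbol_ratio:         # drop markup/boilerplate-heavy pages
+         return False
+     lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
+     if lines and len(set(lines)) / len(lines) < 0.5:   # drop pages dominated by repeated lines
+         return False
+     return True
+
+ corpus = ["An example document long enough to pass the length check. " * 10, "{{}}<<>>##"]
+ kept = [doc for doc in corpus if keep_document(doc)]
+ print(f"kept {len(kept)} of {len(corpus)} documents")
+ ```
+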
336
+ The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any files whose license is not in our permissive-license set (for additional details, refer to the [technical report](https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Nano-Technical-Report.pdf)).
337
+
338
+ | Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation |
339
+ | :---- | :---- | :---- | :---- | :---- |
340
+ | English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
341
+ | English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
342
+ | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
343
+ | GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
344
+
345
+ ## NVIDIA-Sourced Synthetic Datasets
346
+
347
+ | Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
348
+ | :---- | :---- | :---- | :---- | :---- |
349
+ | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
350
+ | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) |
351
+ | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
352
+ | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
353
+ | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
354
+ | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
355
+ | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
356
+ | Synthetic Rephrased [Math Data from Common Crawl](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
357
+ | Synthetic Math Data from Common Crawl 4plus | Text | 52.3B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
358
+ | Synthetic Math Data from Common Crawl 3 | Text | 80.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
359
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) |
360
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
361
+ | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
362
+ | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
363
+ | Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
364
+ | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) |
365
+ | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
366
+ | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
367
+ | Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | | \- | [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct) |
368
+ | Synthetic Common Crawl Code from phi-4 | Text | 427.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
369
+ | Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
370
+ | Tool Calling Data | Text | 26.2B | | [Qwen3-235B-A22B-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
371
+ | Synthetic Essential-Web from QwQ-32B | Text | 28.1B | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
372
+ | Translated Synthetic Crawl | Text | 389.9B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
373
+ | Translated Synthetic Wikipedia | Text | 7.9B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
374
+ | Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
375
+ | Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | - | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
376
+ | Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
377
+ | Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | - | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
378
+ | Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
379
+ | Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | - | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B); [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503); [Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506); [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k); [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k); [Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
380
+ | Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
381
+ | Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
382
+ | Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | [LIMO](https://huggingface.co/datasets/GAIR/LIMO) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
383
+ | Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
384
+ | Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
385
+ | Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
386
+ | Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
387
+ | Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
388
+ | Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/); [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [phi-4](https://huggingface.co/microsoft/phi-4) |
389
+ | Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K); [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2); [TACO](https://huggingface.co/datasets/BAAI/TACO); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning); [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
390
+
391
+
392
+ ## Training Dataset
393
+
394
+ | Dataset | \# of Tokens in Nemotron Nano 2 | \# of Tokens in Nemotron 3 Nano |
395
+ | :---- | :---- | :---- |
396
+ | English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210 |
397
+ | English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920 |
398
+ | Crawl++ | 360,389,153,262 | 360,389,153,262 |
399
+ | Math | 124,606,230,663 | 154,217,502,165 |
400
+ | Synthetic Math | 73,007,767,155 | 73,007,767,155 |
401
+ | Code | 747,409,228,724 | 1,043,856,922,136 |
402
+ | Synthetic Code | 175,067,553,293 | 453,117,917,176 |
403
+ | Common Crawl Code | 0 | 263,072,374,097 |
404
+ | English Wiki | 17,349,266,926 | 17,349,266,926 |
405
+ | Synthetic Wiki | 0 | 7,850,648,552 |
406
+ | Books | 0 | 0 |
407
+ | Papers | 191,586,493,365 | 191,586,493,365 |
408
+ | PDF-to-text | 141,096,578,533 | 141,096,578,533 |
409
+ | Code SFT | 60,025,726,817 | 102,863,752,325 |
410
+ | STEM SFT | 272,680,426,295 | 359,826,214,274 |
411
+ | General SFT | 6,057,478,645 | 6,057,478,645 |
412
+ | Tool-Calling SFT | 0 | 26,244,716,867 |
413
+ | Multilingual | 2,172,261,909,350 | 1,743,892,490,859|
414
+ | Synthetic multilingual | 997,710,364,950 | 595,140,661,135 |
415
+ | **Total** | **10,648,823,153,919** | **13,336,833,827,602** |
416
+
417
+ We use a considerable amount of synthetic data. Out of 10.6 trillion tokens, 3,534,013,958,278 tokens are synthetically generated.
418
+
419
+ We extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively removing filters that did not work well for some languages. Deduplication was done in the same way as for Nemotron-CC. Additionally, we used data from Wikipedia and FineWeb-2 (Penedo et al., 2025) for these fifteen languages.
420
+
421
+ | Language | Total Tokens |
422
+ | :---- | :---- |
423
+ | Arabic | 118,056,362,726 |
424
+ | Danish | 117,747,321,618 |
425
+ | German | 146,613,691,781 |
426
+ | Spanish | 469,156,575,409 |
427
+ | French | 139,982,002,289 |
428
+ | Italian | 298,858,370,174 |
429
+ | Japanese | 682,755,693,336 |
430
+ | Korean | 127,099,747,538 |
431
+ | Dutch | 89,041,592,681 |
432
+ | Polish | 105,356,493,147 |
433
+ | Portuguese | 243,249,275,089 |
434
+ | Russian | 185,314,014,057 |
435
+ | Swedish | 74,954,953,299 |
436
+ | Thai | 160,778,944,467 |
437
+ | Chinese | 211,007,236,689 |
438
+
439
+ We collect a total of 922,476,782,017 tokens of code in 43 different languages.
440
+
441
+ | Language | Tokens |
442
+ | :---- | :---- |
443
+ | Assembly | 750,628,764 |
444
+ | C | 42,657,300,868 |
445
+ | C\# | 56,153,329,307 |
446
+ | C++ | 67,773,701,658 |
447
+ | CommonLisp | 263,234,672 |
448
+ | CSS | 38,848,760,035 |
449
+ | Cuda | 400,222,993 |
450
+ | Dart | 3,816,960,470 |
451
+ | Dockerfile | 474,958,084 |
452
+ | Fortran | 1,105,049,387 |
453
+ | Go | 8,332,419,480 |
454
+ | Haskell | 1,294,613,669 |
455
+ | HTML | 69,082,117,487 |
456
+ | Java | 131,440,465,822 |
457
+ | JavaScript | 75,573,420,861 |
458
+ | JSON | 15,366,881,241 |
459
+ | Julia | 621,046,949 |
460
+ | JupyterNotebook | 2,241,893,197 |
461
+ | Lua | 4,146,420,802 |
462
+ | Makefile | 12,640,010,879 |
463
+ | Markdown | 64,796,743,311 |
464
+ | Mathematica | 320,504,225 |
465
+ | OmniversePython | 26,946,093 |
466
+ | Pascal | 1,625,013,876 |
467
+ | Perl | 1,575,314,434 |
468
+ | PHP | 61,575,339,005 |
469
+ | Python | 126,916,727,384 |
470
+ | R | 19,811,381,935 |
471
+ | reStructuredText | 1,779,876,391 |
472
+ | Ruby | 6,446,962,615 |
473
+ | Rust | 4,438,640,533 |
474
+ | Scala | 3,343,959,154 |
475
+ | Shell | 18,758,779,250 |
476
+ | SQL | 23,205,633,085 |
477
+ | Swift | 5,976,714,881 |
478
+ | SystemVerilog | 233,056,185 |
479
+ | TeX | 7,347,157,527 |
480
+ | TypeScript | 15,657,838,582 |
481
+ | Verilog | 811,884,369 |
482
+ | VHDL | 648,401,444 |
483
+ | VisualBasic.NET | 1,005,680,881 |
484
+ | XML | 12,616,779,741 |
485
+ | YAML | 10,574,010,491 |
486
+
487
+ ## Evaluation Dataset
488
+
489
+ * Data Collection Method by dataset: Hybrid: Human, Synthetic
490
+ * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
491
+
492
+ ## Inference
493
+
494
+ - Engines: HF, vLLM, TRT-LLM
495
+
496
+ - Test Hardware: NVIDIA A100 80GB, H100 80GB
497
+
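+ Of the engines listed above, vLLM offers a convenient path to batched offline inference. The sketch below is illustrative only: engine arguments such as `tensor_parallel_size` and `max_model_len` depend on the available GPUs and are assumptions here, not recommended settings.
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Illustrative settings; adjust tensor_parallel_size and max_model_len to the available hardware.
+ llm = LLM(
+     model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16",
+     trust_remote_code=True,
+     tensor_parallel_size=2,
+     max_model_len=8192,
+ )
+
+ sampling = SamplingParams(temperature=0.0, max_tokens=32)
+ outputs = llm.generate(["The capital of France is"], sampling)
+ print(outputs[0].outputs[0].text)
+ ```
+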
498
+ ## Ethical Considerations
499
+
500
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
501
+
502
+ We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details: [Safety & Security](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/safety.md).
503
+
504
+ For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/bias.md), [Explainability](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/explainability.md), and [Privacy](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16/blob/main/privacy.md) Subcards.
505
+
506
+ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
507
+
508
+ ## Citation
509
+
510
+ ```bibtex
511
+ @misc{nvidia_nemotron_nano_v3_2025,
512
+ title = {{Nemotron 3 Nano}: Open, Efficient Mixture-of-Experts Hybrid {Mamba}-{Transformer} Model for {Agentic} Reasoning},
513
+ author = {{NVIDIA}},
514
+ year = {2025},
515
+ url = {https://research.nvidia.com/labs/nemotron/files/NVIDIA-Nemotron-3-Nano-Technical-Report.pdf},
516
+ note = {Technical report}
517
+ }
518
+ ```
accuracy_chart.svg ADDED
bias.md ADDED
@@ -0,0 +1,9 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
4
+ | Bias Metric (If Measured): | Not Relevant for the Base Model |
5
+ | Which characteristic (feature) show(s) the greatest difference in performance?: | The model shows high variance across characteristics when it is used with a high temperature. |
6
+ | Which feature(s) have the worst performance overall? | Not Relevant for the Base Model |
7
+ | Measures taken to mitigate against unwanted bias: | Not Applicable |
8
+ | If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: | The training datasets contain a large amount of synthetic data generated by LLMs. We manually curated prompts. |
9
+ | Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: | These datasets, such as Common Crawl, CC-News, and Wikimedia, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in over 85% of samples. In the subset where such terms are present, Common Crawl and CC-News contain notable representational skews—for example, references to "male" significantly outnumber those to "female," and mentions of "White" are the most frequent among ethnic identifiers. To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy, and includes outputs from uncalibrated embedders; as such, certain limitations may exist in the reliability of the embedding. |
explainability.md ADDED
@@ -0,0 +1,14 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Intended Task/Domain: | Text generation, reasoning, and chat |
4
+ | Model Type: | Text-to-text Mamba2-Transformer Hybrid |
5
+ | Intended Users: | Generative AI creators working with conversational AI models and image content. |
6
+ | Output: | Text |
7
+ | Tools used to evaluate datasets to identify synthetic data and ensure data authenticity. | We used a Gemma-3 4B-based filtering model fine-tuned on [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) to ensure the quality of synthetic data. |
8
+ | Describe how the model works: | Generates text by predicting the next word or token based on the context provided in the input sequence, using a hybrid stack of Mamba2 (state-space), self-attention, and mixture-of-experts layers. |
9
+ | Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
10
+ | Technical Limitations & Mitigation: | This model performs particularly well in instruction-following regimes and, as such, may be strongly influenced by untrusted inputs; it should be paired with appropriate guardrails and data filtering to better align use-case behaviors when exposed to such data. |
11
+ | Verified to have met prescribed NVIDIA quality standards: | Yes |
12
+ | Performance Metrics: | Accuracy, Throughput, and User-side throughput |
13
+ | Potential Known Risks: | The model was optimized explicitly for instruction following and as such may be influenced by untrusted inputs (prompt injection, indirect prompt injection, jailbreaking, web search, etc.) as a result of its instruction tuning that may degrade safety alignment and other training efforts. This model should be paired with additional guardrails and data filtering to limit exposure to instructions from malicious sources. Bypassing of safety alignment, system guardrails, and filters may allow harmful outcomes up to and including remote code execution in some agentic systems when effective security controls are not in place. The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may generate and amplify harmful, biased, or otherwise unsafe content reinforcing these biases and return toxic responses especially when prompted with toxic prompts. The model may also generate answers that may be inaccurate, omit key information, or include irrelevant or redundant text producing socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. The model may exhibit self-anthropomorphism (e.g., displaying human-like characteristics in dialogue, such as expressing preferences and emotions). In integrated system contexts, the model could potentially be exploited to access or disclose information beyond the model’s intended permissions or scope of operation.|
14
+ | Licensing: | GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
nemo-evaluator-launcher-configs/local_nvidia-nemotron-nano-3-30b-a3b-base.yaml ADDED
@@ -0,0 +1,63 @@
1
+ # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
2
+ # SPDX-License-Identifier: Apache-2.0
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ defaults:
16
+ - execution: local
17
+ - deployment: none
18
+ - _self_
19
+
20
+ execution:
21
+ output_dir: NVIDIA-Nemotron-Nano-3-30B-A3B-Base-BF16
22
+ # mode: sequential # enables sequential execution
23
+
24
+ target:
25
+ api_endpoint:
26
+ model_id: nvidia/NVIDIA-Nemotron-Nano-3-30B-A3B-Base-BF16
27
+ url: http://0.0.0.0:8000/v1/chat/completions # locally hosted endpoint
28
+
29
+ # specify the benchmarks to evaluate
30
+ evaluation:
31
+ env_vars:
32
+ HF_TOKEN: HF_TOKEN
33
+ nemo_evaluator_config: # global config settings that apply to all tasks
34
+ config:
35
+ params:
36
+ max_retries: 5 # number of retries for API requests
37
+ request_timeout: 360 # timeout for API requests in seconds
38
+ parallelism: 1 # number of parallel requests
39
+ extra:
40
+ tokenizer: nvidia/NVIDIA-Nemotron-Nano-3-30B-A3B-Base-BF16
41
+ tokenizer_backend: huggingface
42
+ tasks:
43
+ - name: adlr_mmlu_pro_5_shot_base
44
+ - name: adlr_mmlu
45
+ - name: adlr_agieval_en_cot
46
+ - name: adlr_gpqa_diamond_cot_5_shot
47
+ - name: adlr_humaneval_greedy
48
+ - name: adlr_humaneval_sampled
49
+ - name: adlr_mbpp_sanitized_3_shot_greedy
50
+ - name: adlr_mbpp_sanitized_3_shot_sampled
51
+ - name: adlr_gsm8k_cot_8_shot
52
+ - name: adlr_minerva_math_nemo_4_shot
53
+ - name: adlr_math_500_4_shot_sampled
54
+ - name: adlr_commonsense_qa_7_shot
55
+ - name: adlr_arc_challenge_llama_25_shot
56
+ - name: hellaswag
57
+ - name: openbookqa
58
+ - name: piqa
59
+ - name: adlr_race
60
+ - name: adlr_winogrande_5_shot
61
+ - name: social_iqa
62
+ - name: adlr_global_mmlu_lite_5_shot
63
+ - name: adlr_mgsm_native_cot_8_shot
nemo-evaluator-launcher-configs/local_qwen3-30b-a3b-base.yaml ADDED
@@ -0,0 +1,63 @@
1
+ # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
2
+ # SPDX-License-Identifier: Apache-2.0
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ defaults:
16
+ - execution: local
17
+ - deployment: none
18
+ - _self_
19
+
20
+ execution:
21
+ output_dir: qwen3-30b-a3b-base
22
+ # mode: sequential # enables sequential execution
23
+
24
+ target:
25
+ api_endpoint:
26
+ model_id: Qwen/Qwen3-30B-A3B-Base
27
+ url: http://0.0.0.0:8000/v1/chat/completions # locally hosted endpoint
28
+
29
+ # specify the benchmarks to evaluate
30
+ evaluation:
31
+ env_vars:
32
+ HF_TOKEN: HF_TOKEN
33
+ nemo_evaluator_config: # global config settings that apply to all tasks
34
+ config:
35
+ params:
36
+ max_retries: 5 # number of retries for API requests
37
+ request_timeout: 360 # timeout for API requests in seconds
38
+ parallelism: 1 # number of parallel requests
39
+ extra:
40
+ tokenizer: Qwen/Qwen3-30B-A3B-Base
41
+ tokenizer_backend: huggingface
42
+ tasks:
43
+ - name: adlr_mmlu_pro_5_shot_base
44
+ - name: adlr_mmlu
45
+ - name: adlr_agieval_en_cot
46
+ - name: adlr_gpqa_diamond_cot_5_shot
47
+ - name: adlr_humaneval_greedy
48
+ - name: adlr_humaneval_sampled
49
+ - name: adlr_mbpp_sanitized_3_shot_greedy
50
+ - name: adlr_mbpp_sanitized_3_shot_sampled
51
+ - name: adlr_gsm8k_cot_8_shot
52
+ - name: adlr_minerva_math_nemo_4_shot
53
+ - name: adlr_math_500_4_shot_sampled
54
+ - name: adlr_commonsense_qa_7_shot
55
+ - name: adlr_arc_challenge_llama_25_shot
56
+ - name: hellaswag
57
+ - name: openbookqa
58
+ - name: piqa
59
+ - name: adlr_race
60
+ - name: adlr_winogrande_5_shot
61
+ - name: social_iqa
62
+ - name: adlr_global_mmlu_lite_5_shot
63
+ - name: adlr_mgsm_native_cot_8_shot
privacy.md ADDED
@@ -0,0 +1,13 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Generatable or reverse engineerable personal data? | No |
4
+ | Personal data used to create this model? | No |
5
+ | Was consent obtained for any personal data used? | Not Applicable |
6
+ | A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of personal data in the training data, where relevant and applicable. | We used only prompts that do not contain any personal data for synthetic data generation. |
7
+ | How often is the dataset reviewed? | Before Release |
8
+ | Is there provenance for all datasets used in training? | Yes |
9
+ | Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
10
+ | Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
11
+ | Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/) |
12
+ | During AI model development, strict adherence to copyright policy ensured compliance through risk mitigation and legal reviews. Post-data collection, reserved rights content is identified and removed, with verified opt-out processes for rightsholders. Detailed records document due diligence and transparency. | True |
13
+ | We employ automated tools and data processing techniques during pre-training to identify and filter certain categories of personal information. Scans of training datasets detected no PII. | True. We employ automated tools and data processing techniques to scan for Personally Identifiable Information (PII) during pre-training to identify and filter certain categories of personal information, including public-facing contact details such as email addresses and phone numbers. Scans of Common Crawl, CC-News, and Wikimedia datasets did not detect PII in the majority of samples. However, Microsoft Presidio indicated potential findings including business contact information embedded in natural language, such as email addresses and phone numbers. Verified instances of PII were then removed through a combination of automated filtering and human-in-the-loop validation. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy. |
safety.md ADDED
@@ -0,0 +1,9 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Model Application Field(s): | Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning, Customer Service |
4
+ | Describe the life critical impact (if present). | Not Applicable |
5
+ | Description of methods implemented in data acquisition or processing, if any, to address other types of potentially harmful data in the training, testing, and validation data: | We used a guard model for content safety to exclude potentially harmful data from training. |
6
+ | Description of any methods implemented in data acquisition or processing, if any, to address illegal or harmful content in the training data, including, but not limited to, child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) | We used a Gemma-3 4B-based guard model trained on [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) for content safety to exclude potentially illegal or harmful content from the training. |
7
+ | Use Case Restrictions: | Abide by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
8
+ | Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions are enforced during training, and dataset license constraints are adhered to. |
9
+ | This AI model was developed based on our policies to ensure responsible data handling and risk mitigation. The datasets used for training have been scanned for harmful content and illegal content, consistent with our policies including scanning for Child Sexual Abuse Material (CSAM). Ongoing review and monitoring mechanisms are in place based on our policies and to maintain data integrity. | True. We use [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) and an internal safety dataset specialized for minority sexuality for content safety evaluation to ensure the safety of this model. |