Llama-3_3-Nemotron-Super-49B-v1 GGUF Models
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
Benchmark Context
All tests conducted on Llama-3-8B-Instruct using:
- Standard perplexity evaluation pipeline (a minimal sketch follows this list)
- 2048-token context window
- Same prompt set across all quantizations
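For readers who want to reproduce a comparable measurement, here is a minimal sliding-window perplexity sketch. The model ID, prompt file, and equal weighting of windows are illustrative assumptions, not the exact pipeline used for the numbers below:

```python
# Minimal perplexity sketch: exp(mean NLL) over 2048-token windows.
# Model ID and prompt file are placeholders, not the exact pipeline used here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

text = open("eval_prompts.txt").read()  # placeholder prompt set
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window, nlls = 2048, []
for start in range(0, ids.size(1) - 1, window):
    chunk = ids[:, start : start + window + 1]
    with torch.no_grad():
        # Labels are shifted internally; .loss is the mean NLL per token.
        nlls.append(model(chunk, labels=chunk).loss)

print(f"perplexity: {torch.exp(torch.stack(nlls).mean()).item():.2f}")
```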
Method
- Dynamic Precision Allocation (see the sketch after this list):
  - First/Last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- Critical Component Protection:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs standard 1-2 bit quantization
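To make the layer-wise strategy concrete, here is a toy sketch of depth-based precision assignment. The thresholds and quant-type names mirror the description above, but the function itself is illustrative, not the actual IQ-DynamicGate implementation:

```python
# Toy sketch of depth-based precision allocation (illustrative only; the
# real IQ-DynamicGate logic lives in the quantization tooling).
def assign_quant_type(layer_idx: int, n_layers: int) -> str:
    frac = layer_idx / max(n_layers - 1, 1)
    if frac < 0.25 or frac > 0.75:
        return "IQ4_XS"    # first/last 25% of layers: higher precision
    return "IQ2_XXS"       # middle 50%: more aggressive quantization

# Critical components are protected at higher precision.
PROTECTED = {"token_embd": "Q5_K", "output": "Q5_K"}

n_layers = 32  # e.g. Llama-3-8B
plan = {f"blk.{i}": assign_quant_type(i, n_layers) for i in range(n_layers)}
plan.update(PROTECTED)
print(plan)
```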
Quantization Performance Comparison (Llama-3-8B)
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|---|---|---|---|---|---|---|---|---|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU AVX2, 2048-token context)
- Size differences reflect mixed quantization overhead
Key Improvements:
- 🔥 IQ1_M shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 IQ2_S cuts perplexity by 36.9% while adding only 0.2 GB
- ⚡ IQ1_S maintains 39.7% better accuracy despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3 GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
📌 Fitting models into GPU VRAM
✔ Memory-constrained deployments
✔ CPU and edge devices where 1-2 bit errors can be tolerated
✔ Research into ultra-low-bit quantization
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides a dynamic range similar to FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with a reduced memory footprint compared to FP32.
📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.
📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower); you can verify support with the check shown below.
❌ You need compatibility with older devices that lack BF16 optimization.
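If you are unsure whether your GPU has native BF16 support, a quick check with the standard PyTorch API (nothing here is specific to these models) looks like this:

```python
import torch

# Standard PyTorch check for native BF16 support on the current CUDA device
# (e.g. Ampere or newer).
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 supported - the bf16 GGUF is a good fit")
else:
    print("No native BF16 - consider the f16 or quantized variants")
```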
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.
📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.
Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, but may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, but require more memory.
📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.
📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
- Use case: Best for ultra-low-memory devices where even Q4_K is too large.
- Trade-off: Lower accuracy compared to higher-bit quantizations.
IQ3_S: Small block size for maximum memory efficiency.
- Use case: Best for low-memory devices where IQ3_XS is too aggressive.
IQ3_M: Medium block size for better accuracy than IQ3_S.
- Use case: Suitable for low-memory devices where IQ3_S is too limiting.
Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
- Use case: Best for low-memory devices where Q6_K is too large.
Q4_0: Pure 4-bit quantization, optimized for ARM devices.
- Use case: Best for ARM-based devices or low-memory environments.
Summary Table: Model Format Selection
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
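If you're unsure how to actually run one of these GGUF files, here is a minimal sketch using the llama-cpp-python bindings. The file name, thread count, and context size are placeholders; adjust them to the file you download (see the next section) and the hardware you have:

```python
# Minimal llama-cpp-python sketch for running a quantized GGUF on CPU.
# Path, thread count, and context size are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3_3-Nemotron-Super-49B-v1-q4_k.gguf",
    n_ctx=4096,     # context window; raise it if you have the memory
    n_threads=8,    # CPU threads to use
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "detailed thinking off"},
        {"role": "user", "content": "Summarize what GGUF is in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])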
Included Files & Details
Llama-3_3-Nemotron-Super-49B-v1-bf16.gguf
- Model weights preserved in BF16.
- Use this if you want to requantize the model into a different format.
- Best if your device supports BF16 acceleration.
Llama-3_3-Nemotron-Super-49B-v1-f16.gguf
- Model weights stored in F16.
- Use if your device supports FP16, especially if BF16 is not available.
Llama-3_3-Nemotron-Super-49B-v1-bf16-q8_0.gguf
- Output & embeddings remain in BF16.
- All other layers quantized to Q8_0.
- Use if your device supports BF16 and you want a quantized version.
Llama-3_3-Nemotron-Super-49B-v1-f16-q8_0.gguf
- Output & embeddings remain in F16.
- All other layers quantized to Q8_0.
Llama-3_3-Nemotron-Super-49B-v1-q4_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q4_K.
- Good for CPU inference with limited memory.
Llama-3_3-Nemotron-Super-49B-v1-q4_k_s.gguf
- Smallest Q4_K variant, using less memory at the cost of accuracy.
- Best for very low-memory setups.
Llama-3_3-Nemotron-Super-49B-v1-q6_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q6_K.
Llama-3_3-Nemotron-Super-49B-v1-q8_0.gguf
- Fully Q8 quantized model for better accuracy.
- Requires more memory but offers higher precision.
Llama-3_3-Nemotron-Super-49B-v1-iq3_xs.gguf
- IQ3_XS quantization, optimized for extreme memory efficiency.
- Best for ultra-low-memory devices.
Llama-3_3-Nemotron-Super-49B-v1-iq3_m.gguf
- IQ3_M quantization, offering a medium block size for better accuracy.
- Suitable for low-memory devices.
Llama-3_3-Nemotron-Super-49B-v1-q4_0.gguf
- Pure Q4_0 quantization, optimized for ARM devices.
- Best for low-memory environments.
- Prefer IQ4_NL for better accuracy.
🚀 If you find these models useful
Please click like ❤. Also, I'd really appreciate it if you could test my Network Monitor Assistant at 👉 Network Monitor Assistant.
💬 Click the chat icon (bottom right of the main and dashboard pages). Choose an LLM; toggle between the LLM types TurboLLM -> FreeLLM -> TestLLM.
What I'm Testing
I'm experimenting with function calling against my network monitoring service, using small open-source models. I'm exploring the question: how small can a model be and still function?
🟡 TestLLM – Runs the current testing model using llama.cpp on 6 threads of a CPU VM (it should take about 15s to load; inference is quite slow, and it only processes one user prompt at a time - still working on scaling!). If you're curious, I'd be happy to share how it works!
Other Available AI Assistants
🟢 TurboLLM – Uses gpt-4o-mini. Fast! Note: tokens are limited since OpenAI models are pricey, but you can log in or download the Quantum Network Monitor agent to get more tokens. Alternatively, use the TestLLM.
🔵 HugLLM – Runs open-source Hugging Face models. Fast, but runs small models (≈8B), hence lower quality. Get 2x more tokens (subject to Hugging Face API availability).
Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.
If you appreciate the work, please consider buying me a coffee ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
Llama-3.3-Nemotron-Super-49B-v1
Model Overview
Llama-3.3-Nemotron-Super-49B-v1 is a large language model (LLM) which is a derivative of Meta Llama-3.3-70B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling. The model supports a context length of 128K tokens.
Llama-3.3-Nemotron-Super-49B-v1 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model's memory footprint, enabling larger workloads, as well as fitting the model on a single GPU at high workloads (H200). This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. For more information on the NAS approach, please refer to this paper.
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints. For more details on how the model was trained, please see this blog.
This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
This model is ready for commercial use.
License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the NVIDIA Open Model License.
Additional Information: Llama 3.3 Community License Agreement. Built with Llama.
Model Developer: NVIDIA
Model Dates: Trained between November 2024 and February 2025
Data Freshness: The pretraining data has a cutoff of 2023 per Meta Llama 3.3 70B
Use Case:
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks.
Release Date:
3/18/2025
References
- [2411.19146] Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
- [2502.00203] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment
Model Architecture
Architecture Type: Dense decoder-only Transformer model
Network Architecture: Llama 3.3 70B Instruct, customized through Neural Architecture Search (NAS)
The model is a derivative of Meta's Llama-3.3-70B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks. This includes the following:
- Skip attention: In some blocks, the attention is skipped entirely, or replaced with a single linear layer.
- Variable FFN: The expansion/compression ratio in the FFN layer is different between blocks.
We utilize a block-wise distillation of the reference model, where for each block we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory (optimized for a single H100-80GB GPU) while minimizing the quality degradation. The model then undergoes knowledge distillation (KD), with a focus on English single- and multi-turn chat use cases. The KD step included 40 billion tokens consisting of a mixture of three datasets: FineWeb, Buzz-V1.2, and Dolma.
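As a rough illustration of the block-wise selection idea described above, here is a toy sketch. The variant names, costs, and quality-loss numbers are invented for illustration; the real Puzzle NAS procedure is described in the referenced paper:

```python
# Invented numbers for illustration: each block variant trades compute cost
# (relative to the full block) against an estimated quality loss. The real
# Puzzle NAS procedure (arXiv:2411.19146) is far more involved.
variants = {
    "full_attn_ffn": {"cost": 1.00, "quality_loss": 0.00},
    "linear_attn":   {"cost": 0.70, "quality_loss": 0.02},  # attention -> linear layer
    "no_attn":       {"cost": 0.55, "quality_loss": 0.05},  # skip attention entirely
    "narrow_ffn":    {"cost": 0.60, "quality_loss": 0.03},  # smaller FFN ratio
}

def pick_block(budget_per_block: float) -> str:
    """Pick the lowest-quality-loss variant that fits the per-block budget."""
    affordable = {k: v for k, v in variants.items() if v["cost"] <= budget_per_block}
    return min(affordable, key=lambda k: affordable[k]["quality_loss"])

n_blocks, budget = 80, 0.65
plan = [pick_block(budget) for _ in range(n_blocks)]
print(plan[:4], "...")
```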
Intended use
Llama-3.3-Nemotron-Super-49B-v1 is a general-purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
Input
- Input Type: Text
- Input Format: String
- Input Parameters: One-Dimensional (1D)
- Other Properties Related to Input: Context length up to 131,072 tokens
Output
- Output Type: Text
- Output Format: String
- Output Parameters: One-Dimensional (1D)
- Other Properties Related to Output: Context length up to 131,072 tokens
Model Version
1.0 (3/18/2025)
Software Integration
- Runtime Engine: Transformers
- Recommended Hardware Microarchitecture Compatibility:
- NVIDIA Hopper
- NVIDIA Ampere
Quick Start and Usage Recommendations:
- Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt.
- We recommend setting temperature to 0.6 and top_p to 0.95 for Reasoning ON mode.
- We recommend using greedy decoding for Reasoning OFF mode.
- We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required.
You can try this model out through the preview API, using this link: Llama-3_3-Nemotron-Super-49B-v1.
See the snippet below for usage with the Hugging Face Transformers library. Reasoning mode (ON/OFF) is controlled via the system prompt. Please see the example below.
We recommend using the transformers package with version 4.48.3.
Example of reasoning on:
```python
import torch
import transformers

model_id = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    max_new_tokens=32768,
    do_sample=True,   # enables sampling so temperature/top_p take effect
    temperature=0.6,
    top_p=0.95,
    **model_kwargs
)

# Reasoning mode is toggled through the system prompt.
thinking = "on"

print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
Example of reasoning off:
```python
import torch
import transformers

model_id = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    max_new_tokens=32768,
    do_sample=False,  # greedy decoding, as recommended for Reasoning OFF
    **model_kwargs
)

# Thinking can be "on" or "off"
thinking = "off"

print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
Inference:
Engine:
- Transformers
Test Hardware:
- FP8: 1x NVIDIA H100-80GB GPU (Coming Soon!)
- BF16:
- 2x NVIDIA H100-80GB
- 2x NVIDIA A100-80GB GPUs
[Preferred/Supported] Operating System(s): Linux
Training Datasets
A large variety of training data was used for the knowledge distillation phase that precedes the post-training pipeline; three of the datasets included were FineWeb, Buzz-V1.2, and Dolma.
The data for the multi-stage post-training phases for improvements in Code, Math, and Reasoning is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model.
In conjunction with this model release, NVIDIA has released 30M samples of post-training data as public and permissively licensed. Please see Llama-Nemotron-Postraining-Dataset-v1.
Distribution of the domains is as follows:
| Category | Samples |
|---|---|
| math | 19,840,970 |
| code | 9,612,677 |
| science | 708,920 |
| instruction following | 56,339 |
| chat | 39,792 |
| safety | 31,426 |
Prompts were sourced from public and open corpora or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning-on and reasoning-off modes, to train the model to distinguish between the two modes.
Data Collection for Training Datasets:
- Hybrid: Automated, Human, Synthetic
Data Labeling for Training Datasets:
- Hybrid: Automated, Human, Synthetic
Evaluation Datasets
We used the datasets listed below to evaluate Llama-3.3-Nemotron-Super-49B-v1.
Data Collection for Evaluation Datasets:
- Hybrid: Human/Synthetic
Data Labeling for Evaluation Datasets:
- Hybrid: Human/Synthetic/Automatic
Evaluation Results
These results contain both "Reasoning On" and "Reasoning Off" modes. We recommend using temperature=0.6, top_p=0.95 for "Reasoning On" mode, and greedy decoding for "Reasoning Off" mode. All evaluations are done with a 32k sequence length. We run the benchmarks up to 16 times and average the scores for greater accuracy.
NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below.
Arena-Hard
| Reasoning Mode | Score |
|---|---|
| Reasoning Off | 88.3 |
MATH500
| Reasoning Mode | pass@1 |
|---|---|
| Reasoning Off | 74.0 |
| Reasoning On | 96.6 |
User Prompt Template:
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
AIME25
| Reasoning Mode | pass@1 |
|---|---|
| Reasoning Off | 13.33 |
| Reasoning On | 58.4 |
User Prompt Template:
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
GPQA
| Reasoning Mode | pass@1 |
|---|---|
| Reasoning Off | 50 |
| Reasoning On | 66.67 |
User Prompt Template:
"What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}"
IFEval
| Reasoning Mode | Strict:Instruction |
|---|---|
| Reasoning Off | 89.21 |
BFCL V2 Live
| Reasoning Mode | Score |
|---|---|
| Reasoning Off | 73.7 |
User Prompt Template:
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of <TOOLCALL>[func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]</TOOLCALL>
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
<AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS>
{user_prompt}
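For illustration, here is a small sketch of how a client might extract calls from that <TOOLCALL>...</TOOLCALL> format. The regex and splitting heuristic are assumptions about well-formed output, not part of the benchmark:

```python
import re

# Sketch: extract function-call strings from a <TOOLCALL>[...]</TOOLCALL>
# block. Assumes well-formed output; a robust parser would handle nested
# brackets and quoted commas properly.
def extract_tool_calls(response: str) -> list[str]:
    m = re.search(r"<TOOLCALL>\[(.*)\]</TOOLCALL>", response, re.DOTALL)
    if not m:
        return []
    # Split top-level calls like 'f(a=1), g(b=2)' on '), ' boundaries.
    return [c if c.endswith(")") else c + ")" for c in m.group(1).split("), ")]

print(extract_tool_calls(
    '<TOOLCALL>[get_weather(city="Paris"), ping(host="8.8.8.8")]</TOOLCALL>'
))
```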
MBPP 0-shot
| Reasoning Mode | pass@1 |
|---|---|
| Reasoning Off | 84.9 |
| Reasoning On | 91.3 |
User Prompt Template:
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
Here is the given problem and test examples:
{prompt}
Please use the python programming language to solve this problem.
Please make sure that your code includes the functions from the test samples and that the input and output formats of these functions match the test samples.
Please return all completed codes in one code block.
This code block should be in the following format:
```python
# Your codes here
```
MT-Bench
| Reasoning Mode | Score |
|---|---|
| Reasoning Off | 9.17 |
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns here.