
Cosmos-Transfer-LidarGen: A diffusion model for Lidar generation

Model overview

Description:

Cosmos-Transfer-LidarGen is a diffusion model that generates LiDAR range maps from multi-view RGB images, bridging the gap between visual perception and 3D spatial understanding. Given three synchronized RGB camera views, the model produces a LiDAR range map with spatial correspondence to the input, which can then be converted into a point cloud. Cosmos-Transfer-LidarGen was developed by NVIDIA as part of the Cosmos foundation model family.

This model is ready for commercial/non-commercial use.

License/Terms of Use

This model is released under the NVIDIA Open Model License. For a custom license, please contact cosmos-license@nvidia.com.

Under the NVIDIA Open Model License, NVIDIA confirms:

  • Models are commercially usable.
  • You are free to create and distribute Derivative Models.
  • NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.

Important Note: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively "Guardrail") contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under this NVIDIA Open Model License Agreement will automatically terminate.

Deployment Geography:

Global

Use Case:

Research related to autonomous driving; enabling users to generate LiDAR point clouds from multi-view RGB images for 3D spatial understanding and perception tasks.

Release Date:

Hugging Face: 10/23/2025 via https://huggingface.co/nvidia/Cosmos-Transfer-LidarGen

Reference(s):

Cosmos-Dream-Drive

Model Architecture:

Architecture Type: Transformer

Number of model parameters: 7.0 × 10⁹ (7B)

Network Architecture: This model was developed based on Cosmos-Predict1 Single2MultiView foundation model. The diffusion transformer model is designed for multi-view video generation in the latent space. The network is composed of interleaved self-attention, cross-attention and feedforward layers as its building blocks. The cross-attention layers allow the model to condition on input images throughout the denoising process. Before each layer, adaptive layer normalization is applied to embed the time information for denoising. When multi-view RGB images are provided as input, their latent frames are concatenated with the generated LiDAR frames along the spatial dimension.
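
The following PyTorch sketch illustrates one such transformer block: self-attention, cross-attention over the image latents, and a feedforward layer, each preceded by adaptive layer normalization conditioned on the denoising timestep. The dimensions and the exact adaLN formulation here are assumptions for illustration, not the released model's configuration.

```python
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    """LayerNorm whose scale/shift are predicted from the timestep embedding."""
    def __init__(self, dim: int, t_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.proj = nn.Linear(t_dim, 2 * dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        scale, shift = self.proj(t_emb).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

class DiTBlock(nn.Module):
    """One interleaved self-attention / cross-attention / feedforward block."""
    def __init__(self, dim: int = 1024, t_dim: int = 256, heads: int = 16):
        super().__init__()
        self.norm1 = AdaLN(dim, t_dim)
        self.norm2 = AdaLN(dim, t_dim)
        self.norm3 = AdaLN(dim, t_dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, cond, t_emb):
        # x: latent LiDAR tokens; cond: latent tokens of the input RGB views.
        h = self.norm1(x, t_emb)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x, t_emb)
        x = x + self.cross_attn(h, cond, cond, need_weights=False)[0]
        return x + self.ffn(self.norm3(x, t_emb))
```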

Input:

Input Type(s): Image
Input Format: Red, Green, Blue (RGB)
Input Parameters: Two-Dimensional (2D)
Other Properties Related to Input: Multi-view RGB images from 3 f-theta cameras (front wide 120° FOV, rear left 70° FOV, rear right 70° FOV). Input resolution of [720, 1280], cropped to [704, 1280]. Synchronized camera views are required. Note that this model only works with the NVIDIA camera-LiDAR rig configuration; users with different data will need to post-train their own LidarGen model following the provided recipe. A preprocessing sketch follows below.
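
A minimal sketch of the stated input preprocessing, assuming a symmetric height crop from [720, 1280] to [704, 1280] and normalization to [-1, 1]; the released pipeline may crop and scale differently.

```python
import torch

def preprocess_views(views: torch.Tensor) -> torch.Tensor:
    """views: (3, 3, 720, 1280) uint8 RGB tensor holding the 3 camera views."""
    assert views.shape[-2:] == (720, 1280)
    top = (720 - 704) // 2                  # assumed: 8-pixel margin top and bottom
    cropped = views[..., top:top + 704, :]  # -> (3, 3, 704, 1280)
    return cropped.float() / 127.5 - 1.0    # assumed scaling to [-1, 1]
```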

Output:

Output Type(s): LiDAR
Output Format: LiDAR range map, 1 × H × W
Output Parameters: Two-Dimensional (2D)
Other Properties Related to Output: LiDAR range maps with resolution [512, 1792]. Range coverage of 5-100 meters. Point cloud format with optional statistical outlier filtering.
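
A hedged sketch of converting a generated 512 × 1792 range map into a point cloud with the card's 5-100 m coverage and the optional statistical outlier filtering via Open3D. The row-to-elevation and column-to-azimuth mapping and the vertical FOV are assumptions; they depend on the NVIDIA rig configuration and are not specified on this card.

```python
import numpy as np
import open3d as o3d

def range_map_to_points(rng: np.ndarray, v_fov=(-25.0, 15.0)) -> np.ndarray:
    """rng: (512, 1792) array of ranges in meters; invalid pixels are <= 0."""
    h, w = rng.shape
    az = np.linspace(np.pi, -np.pi, w, endpoint=False)   # azimuth per column (assumed)
    el = np.deg2rad(np.linspace(v_fov[1], v_fov[0], h))  # elevation per row (assumed)
    az, el = np.meshgrid(az, el)
    valid = (rng >= 5.0) & (rng <= 100.0)                # card's 5-100 m coverage
    r = rng[valid]
    x = r * np.cos(el[valid]) * np.cos(az[valid])
    y = r * np.cos(el[valid]) * np.sin(az[valid])
    z = r * np.sin(el[valid])
    return np.stack([x, y, z], axis=-1)

pts = range_map_to_points(np.load("range_map.npy"))      # hypothetical output file
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # optional
```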

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Hopper

Preferred/Supported Operating System(s):

  • Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Usage

Note: We have only tested inference with BF16 precision. On a single NVIDIA A100-SXM4-80GB GPU, the diffusion model takes around 160 seconds to generate a single LiDAR range map from multi-view images, as sketched below.
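
A minimal timing sketch for BF16 inference under torch.autocast. The actual model entrypoint is not documented on this card, so a stand-in module is used; substitute the repository's real loading and generation calls.

```python
import time
import torch
import torch.nn as nn

model = nn.Identity().cuda().eval()  # stand-in for the actual LidarGen model
views = torch.rand(1, 3, 3, 704, 1280, device="cuda")  # 3 RGB views (assumed layout)

start = time.time()
with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):
    out = model(views)               # replace with the repository's generate call
torch.cuda.synchronize()
print(f"Elapsed: {time.time() - start:.1f}s (card reports ~160 s on one A100-SXM4-80GB)")
```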

Model Version(s):

Cosmos LidarGen

Training, Testing, and Evaluation Datasets:

Data Collection Method:

  • AV: Automatic/Sensors

Labeling Method:

  • AV: Hybrid: Human, Automated

Inference Acceleration Engine: None
Test Hardware: NVIDIA A100-SXM4-80GB

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if the input image or video includes people, personal health information, or intellectual property, the generated output will not blur the subjects included, nor will it maintain their proportions.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.

Bias

Participation considerations from adversely impacted groups (protected classes) in model design and testing: None
Measures taken to mitigate against unwanted bias: None

Explainability

Intended Task/Domain: Multi-view RGB Image to LiDAR Point Cloud Generation
Model Type: Diffusion Transformer
Intended Users: Researchers and developers working with autonomous driving, robotics, and 3D perception systems.
Output: LiDAR range maps and point clouds (3D spatial data from multi-view RGB images)
Describe how the model works: Multi-view RGB images are encoded into latent tokens and concatenated with generated LiDAR frames. The diffusion transformer uses interleaved self-attention, cross-attention, and feedforward layers to condition on input images throughout the denoising process, generating corresponding LiDAR range maps with spatial correspondence.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: Not Applicable
Technical Limitations & Mitigation: This model may not work well under poor lighting conditions or with a different camera-LiDAR setup. Performance may degrade with unsynchronized camera views or images outside the expected field of view (front wide 120° FOV, rear left/right 70° FOV). Users should validate model performance on their specific camera setup and environmental conditions.
Verified to have met prescribed NVIDIA quality standards: Yes
Performance Metrics: Object detection mAP
Potential Known Risks: If the model does not work as intended, it may produce inaccurate 3D spatial representations that could lead to incorrect depth perception, navigation errors, or collision risks in autonomous systems.
Licensing: NVIDIA Open Model License

Privacy

Generatable or reverse engineerable personal data? No
Personal data used to create this model? No
How often is dataset reviewed? Before Release
Is there provenance for all datasets used in training? Yes
Does data labeling (annotation, metadata) comply with privacy laws? Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? Yes
Applicable Privacy Policy: https://www.nvidia.com/en-us/about-nvidia/privacy-policy/

Safety

Model Application Field(s): Autonomous Vehicles, Industrial/Machinery and Robotics
Describe the life critical impact (if present): This model could be incorporated into autonomous machines. As a tool for generating synthetic environments, caution should be used when testing AI models on data generated by this model.
Use Case Restrictions: Abide by NVIDIA Open Model License
Model and dataset restrictions: The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions were enforced during training, and dataset license constraints were adhered to.