DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation
Abstract
DualCamCtrl is a diffusion model for camera-controlled video generation that uses a dual-branch framework and Semantic Guided Mutual Alignment to improve consistency and disentangle appearance and geometry modeling.
This paper presents DualCamCtrl, a novel end-to-end diffusion model for camera-controlled video generation. Recent works have advanced this field by representing camera poses as ray-based conditions, yet they often lack sufficient scene understanding and geometric awareness. DualCamCtrl specifically targets this limitation by introducing a dual-branch framework that mutually generates camera-consistent RGB and depth sequences. To harmonize these two modalities, we further propose the Semantic Guided Mutual Alignment (SIGMA) mechanism, which performs RGB-depth fusion in a semantics-guided and mutually reinforced manner. These designs collectively enable DualCamCtrl to better disentangle appearance and geometry modeling, generating videos that more faithfully adhere to the specified camera trajectories. Additionally, we analyze and reveal the distinct influence of depth and camera poses across denoising stages and further demonstrate that early and late stages play complementary roles in forming global structure and refining local details. Extensive experiments demonstrate that DualCamCtrl achieves more consistent camera-controlled video generation, with over 40% reduction in camera motion errors compared with prior methods. Our project page: https://soyouthinkyoucantell.github.io/dualcamctrl-page/
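To make the dual-branch idea concrete, here is a minimal NumPy sketch of semantics-gated cross-modal fusion between RGB and depth token features. This is an illustration of the general concept only, not the authors' SIGMA implementation (see the linked code for that); the function name `sigma_fusion`, the gating scheme, and all shapes are assumptions made for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigma_fusion(rgb_feats, depth_feats, sem_weights):
    """Hypothetical semantics-guided mutual RGB-depth fusion (sketch).

    rgb_feats, depth_feats: (tokens, dim) features from the two branches.
    sem_weights: (tokens,) semantic saliency in [0, 1] gating the fusion.
    Each branch cross-attends to the other, and the semantic weights scale
    how much cross-modal information is mixed back in, so appearance (RGB)
    and geometry (depth) can reinforce each other.
    """
    d = rgb_feats.shape[-1]
    # RGB queries attend over depth keys/values, and vice versa.
    attn_r2d = softmax(rgb_feats @ depth_feats.T / np.sqrt(d))
    attn_d2r = softmax(depth_feats @ rgb_feats.T / np.sqrt(d))
    gate = sem_weights[:, None]  # (tokens, 1), broadcast over feature dim
    rgb_out = rgb_feats + gate * (attn_r2d @ depth_feats)
    depth_out = depth_feats + gate * (attn_d2r @ rgb_feats)
    return rgb_out, depth_out

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 64))
depth = rng.standard_normal((16, 64))
sem = rng.uniform(size=16)
rgb_fused, depth_fused = sigma_fusion(rgb, depth, sem)
print(rgb_fused.shape, depth_fused.shape)  # (16, 64) (16, 64)
```

The residual form means that where semantic weights are near zero, each branch keeps its own features untouched, while salient regions exchange information between the appearance and geometry streams.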
Community
Paper: https://arxiv.org/pdf/2511.23127
arXiv: https://arxiv.org/abs/2511.23127
Code: https://github.com/EnVision-Research/DualCamCtrl
HuggingFace: https://huggingface.co/FayeHongfeiZhang/DualCamCtrl
This is an automated message from the Librarian Bot, which found the following similar papers via the Semantic Scholar API:
- PostCam: Camera-Controllable Novel-View Video Generation with Query-Shared Cross-Attention (2025)
- CtrlVDiff: Controllable Video Generation via Unified Multimodal Video Diffusion (2025)
- Enhancing Video Inpainting with Aligned Frame Interval Guidance (2025)
- AutoScape: Geometry-Consistent Long-Horizon Scene Generation (2025)
- One4D: Unified 4D Generation and Reconstruction via Decoupled LoRA Control (2025)
- UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback (2025)
- MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion (2025)
Models citing this paper: 1