Flowception: Temporally Expansive Flow Matching for Video Generation
Abstract
Flowception, a non-autoregressive video generation framework, interleaves discrete frame insertions with continuous denoising, improving efficiency and performance over existing methods.
We present Flowception, a novel non-autoregressive, variable-length video generation framework. Flowception learns a probability path that interleaves discrete frame insertions with continuous frame denoising. Compared to autoregressive methods, Flowception alleviates error accumulation and drift, since the frame insertion mechanism during sampling serves as an efficient compression mechanism for handling long-term context. Compared to full-sequence flows, our method reduces training FLOPs three-fold, is more amenable to local attention variants, and allows the length of videos to be learned jointly with their content. Quantitative experiments show improved FVD and VBench metrics over autoregressive and full-sequence baselines, which is further validated by qualitative results. Finally, by learning to insert and denoise frames in a sequence, Flowception seamlessly integrates different tasks such as image-to-video generation and video interpolation.
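To make the interleaving idea concrete, here is a minimal toy sketch of a sampler that alternates discrete frame insertions with continuous denoising steps, where each frame carries its own noise level so newly inserted frames are noisier than older ones. This is an illustration of the general pattern, not the paper's actual algorithm: `toy_flowception_sample` and `denoise_step` are hypothetical names, and the denoising update is a placeholder standing in for a learned flow-matching velocity field.

```python
import numpy as np

def toy_flowception_sample(n_frames=8, n_steps=4, frame_dim=16, rng=None):
    """Toy sketch: interleave discrete frame insertion with continuous
    denoising. Each frame tracks its own noise level t in [0, 1]
    (1 = pure noise, 0 = fully denoised)."""
    rng = rng or np.random.default_rng(0)
    frames = []  # current variable-length sequence of frames
    noise = []   # per-frame noise levels

    def denoise_step(x, t, dt):
        # Placeholder for a learned flow-matching update x += v(x, t) * dt.
        # Here we simply shrink toward zero so the loop terminates cleanly.
        return x * (1.0 - dt), max(t - dt, 0.0)

    for _ in range(n_frames):
        # Discrete event: insert a new pure-noise frame at the end.
        frames.append(rng.standard_normal(frame_dim))
        noise.append(1.0)
        # Continuous phase: partially denoise every frame in the sequence,
        # so earlier frames end up cleaner than newly inserted ones.
        for i in range(len(frames)):
            frames[i], noise[i] = denoise_step(frames[i], noise[i], 1.0 / n_steps)

    # Finish denoising any frames that still carry noise.
    while any(t > 0 for t in noise):
        for i in range(len(frames)):
            frames[i], noise[i] = denoise_step(frames[i], noise[i], 1.0 / n_steps)

    return np.stack(frames), noise

video, final_noise = toy_flowception_sample()
```

The per-frame noise levels are what distinguish this pattern from a full-sequence flow (where all frames share one noise level) and from pure autoregression (where each frame is fully denoised before the next is started).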
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- TempoMaster: Efficient Long Video Generation via Next-Frame-Rate Prediction (2025)
- VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory (2025)
- Uniform Discrete Diffusion with Metric Path for Video Generation (2025)
- Autoregressive Video Autoencoder with Decoupled Temporal and Spatial Context (2025)
- FilmWeaver: Weaving Consistent Multi-Shot Videos with Cache-Guided Autoregressive Diffusion (2025)
- Generative Neural Video Compression via Video Diffusion Prior (2025)
- JoyAvatar: Real-time and Infinite Audio-Driven Avatar Generation with Autoregressive Diffusion (2025)