Wan in Rust (Candle)

This repository provides a high-performance, native Rust implementation of Wan2.1 using the Candle ML framework.

Features

  • 🦀 Native Rust: No Python dependency required for inference.
  • 🚀 Performance: Optimized for NVIDIA GPUs with Flash Attention v2 and cuDNN.
  • 💾 Memory Efficient: Supports GGUF quantization for the UMT5-XXL text encoder, plus VAE tiling/slicing, so videos can be generated on consumer GPUs (see the GGUF sketch after this list).
  • 🛠 Flexible: Easy-to-use CLI for video generation and a library for custom integration.
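
Because the text encoder ships as a GGUF file, its metadata and tensor layout can be inspected directly with candle's gguf reader. Below is a minimal sketch using candle_core's gguf_file module; the file name is a placeholder and the metadata keys follow the usual GGUF conventions, so adjust both to the actual checkpoint.

use candle_core::quantized::gguf_file;
use std::fs::File;

fn main() -> candle_core::Result<()> {
    // Hypothetical path: point this at the quantized UMT5-XXL encoder file.
    let path = "./models/wan-video/umt5-xxl-encoder-q5.gguf";
    let mut reader = File::open(path)?;
    let content = gguf_file::Content::read(&mut reader)?;

    // General metadata, e.g. "general.architecture" (expected: t5encoder).
    if let Some(arch) = content.metadata.get("general.architecture") {
        println!("architecture: {arch:?}");
    }

    // Per-tensor info: name, shape, and GGML quantization type.
    for (name, info) in content.tensor_infos.iter().take(5) {
        println!("{name}: shape={:?} dtype={:?}", info.shape, info.ggml_dtype);
    }
    Ok(())
}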

Quick Start

Installation

Ensure you have Rust and the CUDA Toolkit installed, then:

git clone https://github.com/FerrisMind/candle-video
cd candle-video
cargo build --release --features flash-attn,cudnn
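
To confirm the CUDA build works before downloading any weights, a quick sanity check with candle's device API looks like the sketch below. This is not part of the repository's examples, just a standalone test program.

use candle_core::Device;

fn main() -> candle_core::Result<()> {
    // Picks GPU 0 when CUDA is available, otherwise falls back to the CPU.
    let device = Device::cuda_if_available(0)?;
    println!("running on: {device:?}");
    Ok(())
}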

Video Generation

cargo run --example wan --release -- \
    --local-weights ./models/wan-video \
    --prompt "A serene mountain lake at sunset, photorealistic, 4k" 

Credits

For more details, visit the main GitHub repository: https://github.com/FerrisMind/candle-video

Model Details

Format: GGUF, 5-bit quantization
Model size: 6B params
Architecture: t5encoder