Add model card for Kiwi-Edit
#1 by nielsr (HF Staff) - opened

README.md ADDED
---
pipeline_tag: image-to-video
library_name: diffusers
---

# Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance

Kiwi-Edit is a versatile video editing framework built on an MLLM encoder and a video Diffusion Transformer (DiT). It supports both instruction-based video editing driven by natural language and reference-guided editing that combines a reference image with a text instruction.

[[Paper](https://huggingface.co/papers/2603.02175)] [[Project Page](https://showlab.github.io/Kiwi-Edit)] [[GitHub](https://github.com/showlab/Kiwi-Edit)]

## Introduction

Instruction-based video editing has progressed rapidly, yet current methods often struggle with precise visual control. Kiwi-Edit introduces a unified editing architecture that combines learnable queries with latent visual features to provide reference-based semantic guidance. By leveraging a scalable data generation pipeline and the RefVIE dataset, the model achieves significant gains in instruction following and reference fidelity, establishing a new state of the art in controllable video editing.

## Quick Start

### Installation (Diffusers Environment)

```bash
# Create and activate a conda environment
conda create -n diffusers python=3.10 -y
conda activate diffusers

# Install PyTorch 2.7 built against CUDA 12.8
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128

# Install the remaining dependencies
pip install diffusers decord einops accelerate transformers==4.57.0 opencv-python av
```

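To confirm the environment resolved correctly before downloading any weights, you can run a quick sanity check. This only verifies the installed packages and GPU visibility; it does not exercise Kiwi-Edit itself:

```python
# Environment sanity check: prints installed versions and CUDA visibility.
# It does not load or run Kiwi-Edit.
import diffusers
import torch
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
```
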
### Inference Sample

You can run a quick test on a demo video using the `diffusers_demo.py` script provided in the official repository:

```bash
python diffusers_demo.py \
    --video_path ./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4 \
    --prompt "Remove the monkey." \
    --save_path output.mp4 \
    --model_path linyq/kiwi-edit-5b-instruct-only-diffusers
```

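The script above wraps the model behind a single command. If you prefer to work with the checkpoint directly from Python, the sketch below shows one plausible way to load it; this is an illustration under stated assumptions, not the documented API. It assumes the repository ships a standard `model_index.json` so that the generic `DiffusionPipeline` loader can dispatch to the correct pipeline class, and it deliberately stops short of the editing call itself, whose arguments (source video, instruction) are defined by `diffusers_demo.py`:

```python
# Hypothetical loading sketch. Assumes the checkpoint is loadable through the
# generic diffusers entry point; the actual editing call is not shown here.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-instruct-only-diffusers",
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce GPU memory
)
pipe.to("cuda")  # move all pipeline components to the GPU
```

If loading fails with an unrecognized pipeline class, fall back to `diffusers_demo.py` from the official repository.
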
## Citation

If you use Kiwi-Edit in your research, please cite the following paper:

```bibtex
@misc{kiwiedit,
      title={Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance},
      author={Yiqi Lin and Guoqiang Liang and Ziyun Zeng and Zechen Bai and Yanzhe Chen and Mike Zheng Shou},
      year={2026},
      eprint={2603.02175},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.02175},
}
```