---
license: apache-2.0
datasets:
- MCG-NJU/X-Dance
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
pipeline_tag: image-to-video
library_name: diffusers
---

<p align="center">

  <h2 align="center">SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation</h2>
  <p align="center">
    <a href="https://scholar.google.com/citations?hl=en&user=0lLB3fsAAAAJ"><strong>Jiaming Zhang</strong></a>
    ·
    <a href="https://dblp.org/pid/316/8117.html"><strong>Shengming Cao</strong></a>
    ·
    <a href="https://qianduoduolr.github.io/"><strong>Rui Li</strong></a>
    ·
    <a href="https://openreview.net/profile?id=~Xiaotong_Zhao1"><strong>Xiaotong Zhao</strong></a>
    ·
    <a href="https://scholar.google.com/citations?user=TSMchWcAAAAJ&hl=en&oi=ao"><strong>Yutao Cui</strong></a>
    <br>
    <a href=""><strong>Xinglin Hou</strong></a>
    ·
    <a href="https://mcg.nju.edu.cn/member/gswu/en/index.html"><strong>Gangshan Wu</strong></a>
    ·
    <a href="https://openreview.net/profile?id=~Haolan_Chen1"><strong>Haolan Chen</strong></a>
    ·
    <a href="https://scholar.google.com/citations?user=FHvejDIAAAAJ"><strong>Yu Xu</strong></a>
    ·
    <a href="https://scholar.google.com/citations?user=TSMchWcAAAAJ&hl=en&oi=ao"><strong>Limin Wang</strong></a>
    ·
    <a href="https://openreview.net/profile?id=~Kai_Ma4"><strong>Kai Ma</strong></a>
    <br>
    <br>
    <a href="https://arxiv.org/abs/TODO"><img src='https://img.shields.io/badge/arXiv-TODO-red' alt='Paper PDF'></a>
    <a href='https://mcg-nju.github.io/steadydancer-web'><img src='https://img.shields.io/badge/Project-Page-blue' alt='Project Page'></a>
    <a href='https://huggingface.co/MCG-NJU/SteadyDancer-14B'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
    <a href='https://huggingface.co/datasets/MCG-NJU/X-Dance'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-X--Dance-green'></a>
    <br>
    <b>Multimedia Computing Group, Nanjing University | Platform and Content Group (PCG), Tencent</b>
    <br>
  </p>
</p>

This repository hosts the `checkpoint` for the paper "SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation". SteadyDancer is a human image animation framework built on the **Image-to-Video paradigm**, which ensures **robust first-frame preservation**. In contrast to prior *Reference-to-Video* approaches, which often suffer from identity drift under the **spatio-temporal misalignments** common in real-world inputs, SteadyDancer generates **high-fidelity and temporally coherent** human animations, outperforming existing methods in visual quality and control while **requiring significantly fewer training resources**.
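
Since the card lists `library_name: diffusers` with `Wan-AI/Wan2.1-I2V-14B-480P` as the base model, the sketch below shows how the checkpoint could be loaded through the standard Wan2.1 image-to-video pipeline in `diffusers`. This is a minimal, assumption-laden example: it presumes the repository ships diffusers-format weights loadable by `WanImageToVideoPipeline`, and it drives generation with a plain text prompt rather than SteadyDancer's pose-sequence conditioning; refer to the project page for the official inference code.

```python
# Minimal inference sketch (assumption: the checkpoint follows the diffusers
# layout of Wan2.1-I2V-14B-480P and loads with the stock Wan I2V pipeline).
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

model_id = "MCG-NJU/SteadyDancer-14B"  # assumed repo layout
pipe = WanImageToVideoPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# In the I2V paradigm the reference image is also the first frame of the
# generated clip, which is what first-frame preservation refers to.
image = load_image("reference.png")
video = pipe(
    image=image,
    prompt="a dancer performing on stage",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(video, "steadydancer_sample.mp4", fps=16)
```

On GPUs that cannot hold the full 14B model, `pipe.enable_model_cpu_offload()` can be used in place of `pipe.to("cuda")` at the cost of slower inference.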