Improve dataset card: Add task categories, HF paper link, sample usage, and dataset details

#2 opened by nielsr (HF Staff)
Files changed (1): README.md (+181, -3)

README.md (updated):
---
license: apache-2.0
task_categories:
- image-to-video
tags:
- human-image-animation
- video-generation
- pose-guided
---

<p align="center">
<br>
<br>
<a href="https://arxiv.org/abs/2511.19320"><img src='https://img.shields.io/badge/arXiv-2511.19320-red' alt='Paper PDF'></a>
<a href="https://huggingface.co/papers/2511.19320"><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Paper-orange' alt='Hugging Face Paper'></a>
<a href='https://mcg-nju.github.io/steadydancer-web'><img src='https://img.shields.io/badge/Project-Page-blue' alt='Project Page'></a>
<a href='https://github.com/MCG-NJU/SteadyDancer'><img src='https://img.shields.io/badge/Github-SteadyDancer-orange'></a>
<a href='https://huggingface.co/MCG-NJU/SteadyDancer-14B'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a>
<a href='https://huggingface.co/datasets/MCG-NJU/X-Dance'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-X--Dance-green'></a>
<br>
<b>Multimedia Computing Group, Nanjing University &nbsp; | &nbsp; Platform and Content Group (PCG), Tencent</b>
<br>
</p>
</p>

This repository hosts **X-Dance**, the test dataset of the paper "[SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation](https://huggingface.co/papers/2511.19320)".

SteadyDancer is a strong animation framework built on the **Image-to-Video paradigm**, ensuring **robust first-frame preservation**. In contrast to prior *Reference-to-Video* approaches, which often suffer from identity drift due to the **spatio-temporal misalignments** common in real-world applications, SteadyDancer generates **high-fidelity and temporally coherent** human animations, outperforming existing methods in visual quality and control while **requiring significantly fewer training resources**.

We first collected 12 distinct driving videos, comprising 8 sequences of intricate…

Tailored to these motions, **we specifically curated a diverse set of reference images to simulate real-world misalignments**. This specially designed collection contains: (1) anime characters, to introduce stylistic domain gaps; (2) half-body shots, to represent compositional inconsistencies; (3) cross-gender or anime characters, to simulate significant skeletal structural discrepancies; and (4) subjects in distinct postures, to maximize the initial action gap.

By systematically pairing these reference images with the 12 driving videos, we simulate two critical real-world challenges: (1) spatial pose-structure inconsistency (e.g., an anime character driven by a real-world pose); and (2) temporal discontinuity, i.e., a significant gap between the reference pose and the initial driving pose.

![X-Dance](https://cdn-uploads.huggingface.co/production/uploads/6667e3d60a7f1d1cbb63cf4d/y0853pM3f0-4NqR0G2i_Q.png)
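
The concrete image-video pairings are distributed with the dataset itself; purely as an illustrative sketch (the `data/images/XXXXX.png` and `data/videos/XXXXX/` layout is assumed from the inference commands below), candidate pairs could be enumerated like this:

```bash
# Purely illustrative: enumerate candidate reference-image / driving-video
# pairs, assuming the data/images/XXXXX.png and data/videos/XXXXX/ layout
# used in the inference commands below. The benchmark's curated pairing
# list should take precedence over this full cross product.
for img in data/images/*.png; do
  img_id=$(basename "$img" .png)
  for vid in data/videos/*/; do
    vid_id=$(basename "$vid")
    echo "pair_id=video${vid_id}_img${img_id}"
  done
done
```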

## Sample Usage

To generate a dance video from a source image and a driving video using the SteadyDancer model with this dataset, follow the steps below from the [official GitHub repository](https://github.com/MCG-NJU/SteadyDancer).

### 🛠️ Installation
```bash
# Clone this repository
git clone https://github.com/MCG-NJU/SteadyDancer.git
cd SteadyDancer

# Create and activate the conda environment
conda create -n steadydancer python=3.10 -y
conda activate steadydancer

# Install animation generation dependencies (PyTorch 2.5.1 with CUDA 12.1, for example)
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl && python -c "import flash_attn"
pip install xformers==0.0.29.post1
pip install "xfuser[diffusers,flash-attn]"
pip install -r requirements.txt

# Install pose extraction dependencies
pip install moviepy decord             # moviepy-2.2.1, decord-0.6.0
pip install --no-cache-dir -U openmim  # openmim-0.3.9
mim install mmengine                   # mmengine-0.10.7
mim install "mmcv==2.1.0"              # mmcv-2.1.0
mim install "mmdet>=3.1.0"             # mmdet-3.3.0
pip install mmpose                     # mmpose-1.3.2
```

- Installation of the mmcv and mmpose packages is error-prone, so verify that both packages were installed successfully:
```bash
python -c "import mmcv"
python -c "import mmpose"
python -c "from mmpose.apis import inference_topdown"
python -c "from mmpose.apis import init_model as init_pose_estimator"
python -c "from mmpose.evaluation.functional import nms"
python -c "from mmpose.utils import adapt_mmdet_pipeline"
python -c "from mmpose.structures import merge_data_samples"
```

- If you encounter the "*ModuleNotFoundError: No module named 'mmcv._ext'*" error during installation, please re-install mmcv manually (we have not found a more convenient and stable method; if you have a better one, please submit a pull request to help us, we would greatly appreciate it 😊):
```bash
mim uninstall mmcv -y
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv && git checkout v2.1.0
pip install -r requirements/optional.txt
gcc --version               # check the gcc version (5.4+ required)
python setup.py build_ext   # build the C++ and CUDA extensions; this may take a while
python setup.py develop
pip install -e . -v         # install mmcv in editable mode
python .dev_scripts/check_installation.py  # verify the installation; the final check in this script can be ignored
cd ../
```

### 📥 Download Checkpoints
```bash
# Download the DWPose pretrained weights
mkdir -p ./preprocess/pretrained_weights/dwpose
huggingface-cli download yzd-v/DWPose --local-dir ./preprocess/pretrained_weights/dwpose --include "dw-ll_ucoco_384.pth"
wget https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth -O ./preprocess/pretrained_weights/dwpose/yolox_l_8x8_300e_coco.pth

# Download the SteadyDancer-14B model weights
huggingface-cli download jiamingZ/SteadyDancer-14B --local-dir ./SteadyDancer-14B
```
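
As an optional sanity check (file names as used in the commands above), you can confirm that the weights landed where the preprocessing and generation scripts expect them:

```bash
# Optional: verify the downloaded weights are in the expected locations
ls -lh ./preprocess/pretrained_weights/dwpose/dw-ll_ucoco_384.pth \
       ./preprocess/pretrained_weights/dwpose/yolox_l_8x8_300e_coco.pth
ls ./SteadyDancer-14B
```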

### 🚀 Inference

To generate a dance video from a source image and a driving video, follow the steps below (pose examples are provided in `preprocess/output/video00001_img00001/example` and `preprocess/output/video00002_img00002/example` so you can try the model quickly):

- Pose extraction and alignment:
```bash
ref_image_path="data/images/00001.png"
driving_video_path="data/videos/00001"
pair_id="video00001_img00001"
output=./preprocess/output/${pair_id}/$(date +"%Y%m%d%H%M%S")

## Extract and align the pose (positive condition)
outfn=$output/positive/all.mp4
outfn_align_pose_video=$output/positive/single.mp4
python preprocess/pose_align.py \
    --imgfn_refer "$ref_image_path" \
    --vidfn "${driving_video_path}/video.mp4" \
    --outfn "$outfn" \
    --outfn_align_pose_video "$outfn_align_pose_video"

outfn_align_pose_video=$output/positive/single.mp4
python preprocess/dump_video_images.py "$outfn_align_pose_video" "$(dirname "$outfn_align_pose_video")"

## Extract and align the pose (negative condition)
outfn=$output/negative/all.mp4
outfn_align_pose_video=$output/negative/single.mp4
python preprocess/pose_align_withdiffaug.py \
    --imgfn_refer "$ref_image_path" \
    --vidfn "${driving_video_path}/video.mp4" \
    --outfn "$outfn" \
    --outfn_align_pose_video "$outfn_align_pose_video"

outfn_align_pose_video=$output/negative/single_aug.mp4
python preprocess/dump_video_images.py "$outfn_align_pose_video" "$(dirname "$outfn_align_pose_video")"

## Copy the remaining input files
cp "$ref_image_path" "$output/ref_image.png"
cp "${driving_video_path}/video.mp4" "$output/driving_video.mp4"
cp "${driving_video_path}/prompt.txt" "$output/prompt.txt"

## (Optional) Visualize the original pose without alignment
driving_video_path="data/videos/00001"
python preprocess/pose_extra.py \
    --vidfn $driving_video_path/video.mp4 \
    --outfn_all $driving_video_path/pose_ori_all.mp4 \
    --outfn_single $driving_video_path/pose_ori_single.mp4
```

- Generate the animation video with SteadyDancer:
```bash
ckpt_dir="./SteadyDancer-14B"

input_dir="preprocess/output/video00001_img00001/example"  # a </path/to/preprocess/output/> directory containing ref_image.png, driving_video.mp4, prompt.txt, and the positive/ and negative/ folders, e.g. the ./preprocess/output/${pair_id}/$(date +"%Y%m%d%H%M%S") directory created above
image="$input_dir/ref_image.png"        # reference image path
cond_pos_folder="$input_dir/positive/"  # positive condition pose folder
cond_neg_folder="$input_dir/negative/"  # negative condition pose folder
prompt=$(cat $input_dir/prompt.txt)     # read the prompt from file
save_file="$(basename "$(dirname "$input_dir")")---$(basename "$input_dir").mp4"  # output file name

cfg_scale=5.0
condition_guide_scale=1.0
pro=0.4
base_seed=106060

CUDA_VISIBLE_DEVICES=0 python generate_dancer.py \
    --task i2v-14B --size 1024*576 \
    --ckpt_dir $ckpt_dir \
    --prompt "$prompt" \
    --image "$image" \
    --cond_pos_folder "$cond_pos_folder" \
    --cond_neg_folder "$cond_neg_folder" \
    --sample_guide_scale $cfg_scale \
    --condition_guide_scale $condition_guide_scale \
    --end_cond_cfg $pro \
    --base_seed $base_seed \
    --save_file "$save_file"
```

## 🎥 X-Dance Benchmark

To fill the void left by existing same-source benchmarks (such as TikTok), which fail to evaluate spatio-temporal misalignments, we propose **X-Dance**, a new benchmark focused on these challenges. X-Dance is constructed from diverse image categories (male, female, and cartoon subjects in upper- and full-body shots) and challenging driving videos (complex motions with blur and occlusion). Its curated set of pairings intentionally introduces spatial-structural inconsistencies and temporal start-gaps, enabling a more robust evaluation of model generalization in the real world.

You can download the X-Dance benchmark from [Hugging Face](https://huggingface.co/datasets/MCG-NJU/X-Dance).
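
For example, with the same `huggingface-cli` tool used for the checkpoints above (the `./data` target directory is an assumption chosen to match the `data/images` and `data/videos` paths in the inference commands; adjust it to your layout):

```bash
# Fetch the X-Dance benchmark; --repo-type dataset is required because
# MCG-NJU/X-Dance is a dataset repository rather than a model repository.
huggingface-cli download MCG-NJU/X-Dance --repo-type dataset --local-dir ./data
```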

## ❤️ Acknowledgements

Our implementation is based on [Wan 2.1](https://github.com/Wan-Video/Wan2.1). We modify [MusePose](https://github.com/TMElyralab/MusePose/tree/main) to generate and align the pose videos. Thanks for their remarkable contributions and released code!

## 📚 Citation

If you find our paper or this codebase useful for your research, please cite us:
```BibTeX
@misc{zhang2025steadydancer,
  title={SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation},
  author={Jiaming Zhang and Shengming Cao and Rui Li and Xiaotong Zhao and Yutao Cui and Xinglin Hou and Gangshan Wu and Haolan Chen and Yu Xu and Limin Wang and Kai Ma},
  year={2025},
  eprint={2511.19320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.19320},
}
```

## 📄 License

This repository is released under the Apache-2.0 license, as found in the [LICENSE](LICENSE) file.