Add dataset card for SynHairMan (#1)
- Add dataset card for SynHairMan (efd627b1be9e3ea0ef0fb522c5af9df09b801ad2)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md (added)
---
task_categories:
- image-segmentation
license: bsd-2-clause
tags:
- video-matting
- synthetic-data
- human-body
- hair-segmentation
---

# SynHairMan: Synthetic Video Matting Dataset

This repository contains the `SynHairMan` dataset, introduced in the paper [Generative Video Matting](https://huggingface.co/papers/2508.07905).

**Project Page:** [https://yongtaoge.github.io/project/gvm](https://yongtaoge.github.io/project/gvm)
**GitHub Repository:** [https://github.com/aim-uofa/GVM](https://github.com/aim-uofa/GVM)

## Dataset Description

The `SynHairMan` dataset addresses the shortage of high-quality ground-truth data in video matting. It is a large-scale synthetic and pseudo-labeled segmentation dataset built with a scalable data-generation pipeline that renders diverse human bodies and fine-grained hair, yielding approximately 200 video clips of 3 seconds each.

The dataset is designed for pre-training and fine-tuning video matting models: by bridging the domain gap between synthetic and real-world scenes, it improves generalization to real-world footage while maintaining strong temporal consistency.
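For context, video matting models trained on data like this predict a per-pixel alpha matte, under the standard compositing model `I = alpha * F + (1 - alpha) * B`. A minimal NumPy sketch of that equation follows; the arrays are small hypothetical examples, not drawn from the dataset itself:

```python
import numpy as np

# Standard alpha compositing: each frame I is modeled as
#   I = alpha * F + (1 - alpha) * B
# where F is the foreground, B the background, and alpha the per-pixel
# opacity that a matting model predicts. Arrays here are hypothetical.
rng = np.random.default_rng(0)
fg = rng.random((4, 4, 3))     # foreground colors, shape (H, W, 3)
bg = rng.random((4, 4, 3))     # background colors, shape (H, W, 3)
alpha = rng.random((4, 4, 1))  # alpha matte in [0, 1], shape (H, W, 1)

# Broadcasting blends each pixel: alpha=1 keeps the foreground, alpha=0 the background.
composite = alpha * fg + (1.0 - alpha) * bg
```

Synthetic data generation inverts this step: with known `F`, `B`, and rendered `alpha`, the composite frames and their ground-truth mattes come for free, which is what makes a pipeline like SynHairMan's scalable.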
## License

For academic use, this project is licensed under the [2-clause BSD License](https://github.com/aim-uofa/GVM/blob/main/LICENSE). For commercial inquiries, please contact Chunhua Shen (chhshen@gmail.com).

## Citation

If you find this dataset helpful for your research, please cite the original paper:

```bibtex
@inproceedings{ge2025gvm,
  author    = {Ge, Yongtao and Xie, Kangyang and Xu, Guangkai and Ke, Li and Liu, Mingyu and Huang, Longtao and Xue, Hui and Chen, Hao and Shen, Chunhua},
  title     = {Generative Video Matting},
  booktitle = {Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
  series    = {SIGGRAPH Conference Papers '25},
  publisher = {Association for Computing Machinery},
  url       = {https://doi.org/10.1145/3721238.3730642},
  doi       = {10.1145/3721238.3730642}
}
```