---
language:
- en
pretty_name: WoW-1 Benchmark Samples
tags:
- robotics
- physical-reasoning
- causal-reasoning
- action-understanding
- video-understanding
- embodied-ai
- wow
- arxiv:2509.22642
license: mit
task_categories:
- video-classification
- action-generation
dataset_type: benchmark
size_categories:
- 1K<n<10K
---
# WoW-1 Benchmark Samples
**WoW-1 Benchmark Samples** is the official evaluation dataset released as part of the WoW (World-Omniscient World Model) project. The benchmark is designed to assess the physical consistency and causal-reasoning capabilities of generative world models for robotics and embodied AI.
## Dataset Overview
This dataset contains 612 natural-language prompts representing real-world robot interaction tasks. These instructions are used to evaluate world models on their ability to understand and generate plausible, physically grounded responses in video or action space.
Each sample describes a short-term or long-horizon task involving:
- Object manipulation (e.g., "Put the screw driver into the drawer")
- Physical causality (e.g., "Pick up an egg and crack it into the bowl")
- Spatial reasoning (e.g., "Move the lid from the black pot to the blue pan")
- State transitions (e.g., "Turn off the light switch")
## Use Cases
This dataset is intended for:
- Evaluating generative video models on physical realism
- Testing embodied agents on causal reasoning
- Benchmarking language-to-action and planning models
- Training or fine-tuning robotic manipulation systems
## Format

- **Modality:** Text (natural-language commands)
- **Format:** Plain text / JSON / Parquet
- **Example:**

```json
{
  "text": "Put the apples on the table into the basket."
}
```
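Assuming each record follows the single-field JSON schema shown above (this is a sketch, not an official loader), a sample can be read with Python's standard library alone:

```python
import json

# One record in the schema shown above: a single "text" field
# holding the natural-language instruction for the robot.
raw = '{"text": "Put the apples on the table into the basket."}'
sample = json.loads(raw)

print(sample["text"])  # the instruction string
```

For the Parquet variant, the same `text` column can be read with any Parquet-capable library (e.g. pandas or pyarrow).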
## Dataset Stats
- Number of samples: 612
- Text lengths: 11 to 230 characters
- Language: English
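The character-length statistic above is straightforward to reproduce over the prompt strings. A minimal sketch, using a few of the example samples from this card in place of the full 612-prompt set:

```python
# Compute min/max prompt length in characters, as in the stats above.
# These three prompts are examples from this card, not the full dataset.
prompts = [
    "Clean the table surface",
    "Open the door of the red microwave",
    "Place the tennis ball in the brown object",
]
lengths = [len(p) for p in prompts]

print(min(lengths), max(lengths))  # shortest and longest prompt, in characters
```

Over the full dataset, the same computation yields the 11–230 character range reported above.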
## Example Samples

- Clean the table surface
- Use the right arm to grab the pearl and give it to the left arm
- Open the door of the red microwave
- Place the tennis ball in the brown object
## Related Models
This dataset is used for evaluating models such as:
- WoW-1-DiT-2B
- WoW-1-DiT-7B
- WoW-1-Wan-14B
- SOPHIA-guided generative models
## Related Paper

**WoW: Towards a World omniscient World model Through Embodied Interaction**
Xiaowei Chi et al., 2025. arXiv:2509.22642
Please cite this paper if you use the dataset:
```bibtex
@article{chi2025wow,
  title={WoW: Towards a World omniscient World model Through Embodied Interaction},
  author={Chi, Xiaowei and Jia, Peidong and Fan, Chun-Kai and Ju, Xiaozhu and Mi, Weishi and Qin, Zhiyuan and Zhang, Kevin and Tian, Wanxin and Ge, Kuangzhi and Li, Hao and others},
  journal={arXiv preprint arXiv:2509.22642},
  year={2025}
}
```
## Project Links

- Project site: [wow-world-model.github.io](https://wow-world-model.github.io)
- GitHub: [github.com/wow-world-model/wow-world-model](https://github.com/wow-world-model/wow-world-model)
- arXiv: [arxiv.org/abs/2509.22642](https://arxiv.org/abs/2509.22642)
## License
This dataset is released under the MIT License.
We encourage the community to explore, evaluate, and extend this benchmark. Contributions and feedback are welcome via GitHub or the project website.