---
language:
- en
pretty_name: WoW-1 Benchmark Samples
tags:
- robotics
- physical-reasoning
- causal-reasoning
- action-understanding
- video-understanding
- embodied-ai
- wow
- arxiv:2509.22642
license: mit
task_categories:
- video-classification
- action-generation
dataset_type: benchmark
size_categories:
- 1K<n<10K
---

# WoW-1 Benchmark Samples

Benchmark samples accompanying the paper:

> **[WoW: Towards a World omniscient World model Through Embodied Interaction](https://arxiv.org/abs/2509.22642)**
> *Xiaowei Chi et al., 2025 · arXiv:2509.22642*

Please cite this paper if you use the dataset:

```bibtex
@article{chi2025wow,
  title={WoW: Towards a World omniscient World model Through Embodied Interaction},
  author={Chi, Xiaowei and Jia, Peidong and Fan, Chun-Kai and Ju, Xiaozhu and Mi, Weishi and Qin, Zhiyuan and Zhang, Kevin and Tian, Wanxin and Ge, Kuangzhi and Li, Hao and others},
  journal={arXiv preprint arXiv:2509.22642},
  year={2025}
}
```

## 🌐 Project Links

- 🔬 Project site: [wow-world-model.github.io](https://wow-world-model.github.io/)
- 💻 GitHub: [github.com/wow-world-model/wow-world-model](https://github.com/wow-world-model/wow-world-model)
- 📜 ArXiv: [arxiv.org/abs/2509.22642](https://arxiv.org/abs/2509.22642)

## 🪪 License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).

---

🤗 We encourage the community to explore, evaluate, and extend this benchmark. Contributions and feedback are welcome via GitHub or the project website.
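
## 📥 Loading the Samples (example)

As a convenience, below is a minimal sketch of loading the samples with the 🤗 `datasets` library. The repository id and field names are assumptions, not confirmed by the paper or project page; check this dataset's files for the actual id and schema. Only `load_dataset` itself is the standard API.

```python
from datasets import load_dataset

# Minimal loading sketch. NOTE: the repo id below is a hypothetical
# placeholder; replace it with this dataset's actual Hugging Face id.
ds = load_dataset("wow-world-model/wow-1-benchmark-samples", split="train")

# Inspect one sample; the available fields depend on the actual schema,
# so print the keys rather than assuming specific column names.
sample = ds[0]
print(sample.keys())
```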