---
license: mit
task_categories:
  - visual-question-answering
  - video-classification
tags:
  - spatial-reasoning
  - vision-language
  - video-generation
size_categories:
  - 10K<n<100K
---

# VR-Bench Dataset

VR-Bench is a benchmark dataset for evaluating spatial reasoning capabilities of Vision-Language Models (VLMs) and Video Generation Models.

## Dataset Structure

The dataset is split into two subsets:

```
dataset_VR_split/
├── train/          # Training set (96 cases)
│   ├── maze/
│   ├── maze3d/
│   ├── pathfinder/
│   ├── sokoban/
│   └── trapfield/
└── eval/           # Evaluation set (24 cases)
    ├── maze/
    ├── maze3d/
    ├── pathfinder/
    ├── sokoban/
    └── trapfield/
```

Each game directory contains (see the loading sketch below):

- `images/`: Initial state images (PNG)
- `states/`: Game state metadata (JSON)
- `videos/`: Solution trajectory videos (MP4)
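
As a concrete illustration, here is a minimal sketch of pairing the files that belong to one case in a game directory. It assumes each case uses the same file stem across `images/`, `states/`, and `videos/` (e.g. a shared `case_001` stem); the actual naming scheme in the release may differ, so treat this as a starting point rather than the canonical loader.

```python
import json
from pathlib import Path


def load_cases(game_dir: str):
    """Yield (image_path, state_dict, video_path) triples for one game directory.

    Assumes images/, states/, and videos/ share file stems per case;
    adjust the globbing if the dataset uses a different convention.
    """
    game = Path(game_dir)
    for state_file in sorted((game / "states").glob("*.json")):
        stem = state_file.stem
        image_path = game / "images" / f"{stem}.png"
        video_path = game / "videos" / f"{stem}.mp4"
        with open(state_file) as f:
            state = json.load(f)
        yield image_path, state, video_path


# Example: walk the maze training cases and print what each one provides
for image_path, state, video_path in load_cases("dataset_VR_split/train/maze"):
    print(image_path.name, sorted(state.keys()), video_path.name)
```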

## Games

- **Maze**: 2D grid-based navigation with walls
- **TrapField**: 2D grid-based navigation with traps
- **Sokoban**: Box-pushing puzzle game
- **PathFinder**: Irregular maze with curved paths
- **Maze3D**: 3D maze with vertical navigation

## Usage

### For Video Model Evaluation

```python
from datasets import load_dataset

dataset = load_dataset("your-username/VR-Bench")
train_data = dataset["train"]
eval_data = dataset["eval"]
```

Each video file shows the optimal solution trajectory for the corresponding game state.
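
Before relying on specific column names, it is worth inspecting the schema of the loaded splits. The snippet below is a sketch, the field names it mentions (`image`, `state`, `video`) are assumptions and not guaranteed by this README; check `features` for the real layout.

```python
# Inspect the actual schema of the evaluation split; the field names
# "image", "state", and "video" mentioned in comments are assumptions.
print(eval_data.features)

example = eval_data[0]
print(example.keys())          # columns available for this case
if "state" in example:
    print(example["state"])    # game state metadata, if exposed as a column
```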

## Citation

If you use this dataset, please cite:

```bibtex
@article{yang2025vrbench,
  title   = {Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks},
  author  = {Cheng Yang and Haiyuan Wan and Yiran Peng and Xin Cheng and Zhaoyang Yu and Jiayi Zhang and Junchi Yu and Xinlei Yu and Xiawu Zheng and Dongzhan Zhou and Chenglin Wu},
  journal = {arXiv preprint arXiv:2511.15065},
  year    = {2025}
}
```

## License

This dataset is released under the MIT License.