VLM-Gym Inference Dataset
This dataset contains pre-defined test episodes and initial states for evaluating Vision-Language Models (VLMs) on the VLM-Gym benchmark.
Dataset Structure
inference-dataset/
├── test_set_easy/ # Easy difficulty test episodes (JSONL)
├── test_set_hard/ # Hard difficulty test episodes (JSONL)
├── initial_states_easy/ # Initial environment states for easy episodes (JSON)
├── initial_states_hard/ # Initial environment states for hard episodes (JSON)
└── partial_datasets/ # Assets required by some environments
    ├── objaverse/          # 3D models for mental rotation tasks
    ├── counting/           # Images for counting tasks
    ├── refcoco+/           # Images for referring expression tasks
    └── ...
Tasks Included
| Task | Description |
|---|---|
| `maze_2d` | 2D maze navigation |
| `maze_3d` | 3D maze navigation |
| `mental_rotation_2d` | 2D shape rotation matching |
| `mental_rotation_3d_cube` | 3D cube rotation matching |
| `mental_rotation_3d_objaverse` | 3D object rotation matching |
| `jigsaw` | Jigsaw puzzle solving |
| `sliding_block` | Sliding block puzzle |
| `colorization` | Image colorization |
| `counting` | Object counting |
| `patch_reassembly` | Image patch reassembly |
| `matchstick_equation` | Matchstick equation solving |
| `matchstick_rotation` | Matchstick rotation |
| `video_unshuffle` | Video frame ordering |
| `zoom_in_puzzle` | Zoom-in puzzle solving |
| `fetch_reach` | Robotic reaching (easy only) |
| `fetch_pick_and_place` | Robotic manipulation (hard only) |
| `referring_dot_pointing` | Referring expression grounding (easy only) |
Quick Start
Installation
pip install huggingface_hub
Download Full Dataset
from huggingface_hub import snapshot_download

dataset_path = snapshot_download(
    repo_id="VisGym/inference-dataset",
    repo_type="dataset",
)
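`snapshot_download` returns the local path of the cached snapshot. As an optional sanity check (not part of the dataset's own tooling), you can list the top-level directories and compare them against the layout shown in Dataset Structure:

from pathlib import Path

# Optional check: list the top-level directories of the downloaded snapshot.
# Expected names follow the "Dataset Structure" section above.
for entry in sorted(Path(dataset_path).iterdir()):
    if entry.is_dir():
        print(entry.name)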
Download Specific Subsets
from huggingface_hub import snapshot_download

# Download only test sets (small, no large assets)
dataset_path = snapshot_download(
    repo_id="VisGym/inference-dataset",
    repo_type="dataset",
    allow_patterns=["test_set_easy/**", "test_set_hard/**"],
)

# Download only easy difficulty
dataset_path = snapshot_download(
    repo_id="VisGym/inference-dataset",
    repo_type="dataset",
    allow_patterns=["*_easy/**"],
)
Using the Loader Script
# Download everything
python load_from_hf.py --output_dir ./inference_dataset
# Download only test sets (no large assets)
python load_from_hf.py --output_dir ./inference_dataset --subset test_sets
# Download only easy difficulty
python load_from_hf.py --output_dir ./inference_dataset --subset easy
File Formats
Test Set Files (JSONL)
Each line in the JSONL files contains an episode specification:
{"seed": 1803372, "env_id": "maze_2d/hard", "episode_seed": 1052368083, "extra_state": null}
Initial State Files (JSON)
JSON files containing the initial state for reproducible episode starts:
{
  "object_path": "000-156/fa3dad5169784cec85b96682231e3f44.glb",
  "secret_yaw": 1.098,
  "secret_pitch": 0.487,
  ...
}
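A minimal sketch for loading these files, assuming `dataset_path` comes from `snapshot_download` as in the Quick Start section; the exact fields vary per task (the example above is from a 3D rotation task):

import json
from pathlib import Path

# Iterate over all initial-state files for the easy split.
for state_file in (Path(dataset_path) / "initial_states_easy").rglob("*.json"):
    initial_state = json.loads(state_file.read_text())
    # Assuming each file holds a JSON object like the example above;
    # fields are task-specific ("object_path" points into partial_datasets/).
    print(state_file.name, list(initial_state)[:5])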
Usage with VLM-Gym
from pathlib import Path
import json

# Load test episodes for one task directory
test_dir = Path(dataset_path) / "test_set_easy" / "maze_2d__easy"
for jsonl_file in test_dir.glob("*.jsonl"):
    with open(jsonl_file) as f:
        for line in f:
            episode = json.loads(line)
            env_id = episode["env_id"]
            seed = episode["seed"]
            episode_seed = episode["episode_seed"]
            # Use with the VLM-Gym inference runner
Citation
If you use this dataset, please cite:
@misc{vlmgym2024,
  title={VLM-Gym: A Benchmark for Vision-Language Models in Interactive Environments},
  author={VLM-Gym Team},
  year={2024},
  url={https://huggingface.co/datasets/VisGym/inference-dataset}
}
License
MIT License