# VLA-Arena Dataset (L0 - Large Variant)

## About VLA-Arena
VLA-Arena is an open-source benchmark designed for the systematic evaluation of Vision-Language-Action (VLA) models. It provides a complete and unified toolchain covering scene modeling, demonstration collection, model training, and evaluation. Featuring 150+ tasks across 11 specialized suites, VLA-Arena assesses models through hierarchical difficulty levels (L0-L2) to ensure comprehensive metrics for safety, generalization, and efficiency.
## Key Evaluation Domains

VLA-Arena focuses on four critical dimensions to ensure robotic agents can operate effectively in the real world:
- Safety: Evaluate the ability to operate reliably in the physical world while avoiding static/dynamic obstacles and hazards.
- Distractor: Assess performance stability when facing environmental unpredictability and visual clutter.
- Extrapolation: Test the ability to generalize learned knowledge to novel situations, unseen objects, and new workflows.
- Long Horizon: Challenge agents to combine long sequences of actions to achieve complex, multi-step goals.
## Highlights
- End-to-End Toolchain: From scene construction to final evaluation metrics.
- Systematic Difficulty Scaling: Tasks range from basic object manipulation (L0) to complex, constraint-heavy scenarios (L2).
- Flexible Customization: Powered by CBDDL (Constrained Behavior Domain Definition Language) for easy task definition.
## Resources
- Project Homepage: VLA-Arena Website
- GitHub Repository: PKU-Alignment/VLA-Arena
- Documentation: Read the Docs
## Dataset Description
This dataset is the Level 0 (L0) - Large (L) variant of the VLA-Arena benchmark data. It contains a balanced set of human demonstrations suitable for standard training scenarios.
- Tasks Covered: 60 distinct tasks at Difficulty Level 0.
- Total Trajectories: 3,000 (50 trajectories per task).
- Task Suites: Covers Safety, Distractor, Extrapolation, and Long Horizon domains.
## Format and Compatibility
This dataset follows the RLDS (Reinforcement Learning Datasets) format.
The data structure includes standardized features for:
- Observation: High-resolution RGB images (256x256) and robot state vectors.
- Action: 7-DoF continuous control signals (End-effector pose + Gripper).
- Language: Natural language task instructions.
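To make the feature layout above concrete, here is a minimal sketch of what a single RLDS step from this dataset might look like. The exact field names and the state-vector size are assumptions; consult the RLDS specification and the VLA-Arena repository for the authoritative schema.

```python
import numpy as np

# Hypothetical sketch of one RLDS step; field names and the state
# dimension are assumptions, not the confirmed VLA-Arena schema.
step = {
    "observation": {
        "image": np.zeros((256, 256, 3), dtype=np.uint8),  # static third-person RGB
        "state": np.zeros(8, dtype=np.float32),            # robot state vector (size assumed)
    },
    "action": np.zeros(7, dtype=np.float32),  # end-effector pose (6) + gripper (1)
    "language_instruction": "put the red block in the bowl",  # illustrative only
    "is_first": True,    # RLDS episode-boundary flags
    "is_last": False,
    "is_terminal": False,
}

assert step["observation"]["image"].shape == (256, 256, 3)
assert step["action"].shape == (7,)
```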
## Dataset Construction and Preprocessing
To ensure high data quality and fair comparison, the dataset underwent several rigorous construction and quality control steps:
1. **High-Resolution Regeneration**: Simple upscaling of the original 128 x 128 benchmark images resulted in poor visual fidelity, so we re-executed the recorded action trajectories in the simulator and re-rendered all demonstrations at 256 x 256, capturing visual observations suitable for modern VLA backbones.
2. **Camera Selection and Rotation**
- Viewpoint: Only the static third-person camera images are used. Wrist-camera images were discarded to ensure a fair comparison across baselines.
- Rotation: All third-person camera images are rotated by 180 degrees at both train and test time to correct the visual inversion observed in the simulation environment.
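The 180-degree correction described above is equivalent to reversing both image axes. A minimal sketch (the function name is ours, not from the VLA-Arena codebase):

```python
import numpy as np

def correct_inversion(img: np.ndarray) -> np.ndarray:
    """Rotate an HxWxC image by 180 degrees, i.e. flip both spatial axes.

    Must be applied identically at train and test time so the model
    always sees upright observations.
    """
    return np.rot90(img, k=2).copy()
```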
3. **Success Filtering**: All demonstrations were replayed in the simulation environments. Any trajectory that failed to meet the task's success criteria during replay was filtered out.
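In outline, the success-filtering pass replays each recorded action sequence and keeps only trajectories that still succeed. The sketch below is hedged: `make_env` and `check_success` are hypothetical stand-ins for the actual VLA-Arena simulator API.

```python
# Hedged sketch of the replay-and-filter pass. `make_env` and
# `check_success` are hypothetical placeholders, not VLA-Arena APIs.
def filter_successful(trajectories, make_env, check_success):
    kept = []
    for traj in trajectories:
        env = make_env(traj["task_id"])
        env.reset(seed=traj.get("seed"))  # reproduce the recorded initial state
        for action in traj["actions"]:
            env.step(action)             # replay the demonstration verbatim
        if check_success(env):           # keep only trajectories that still succeed
            kept.append(traj)
    return kept
```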
4. **Action Filtering (Iterative Optimization)**: Standard data cleaning often removes all no-operation (no-op) actions. However, we found that completely removing no-ops significantly decreased the trajectory success rate upon playback in the VLA-Arena setup. To address this, we adopted an iterative optimization strategy:
- Instead of removing all no-ops, we sequentially attempted to preserve N no-operation actions (N = 4, 8, 12, 16), specifically around critical state transition points (e.g., gripper closure and opening).
- Only trajectories that remained successful during validation playback were retained.
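The iterative strategy above can be sketched as follows. The helpers `is_noop`, `near_transition`, and `replay_succeeds` are hypothetical placeholders for the pipeline's actual checks (no-op detection, proximity to gripper open/close events, and validation playback).

```python
# Hedged sketch of the iterative no-op filtering; helper functions are
# hypothetical stand-ins, not the actual VLA-Arena implementation.
def prune_noops(actions, is_noop, near_transition, replay_succeeds):
    for n_keep in (4, 8, 12, 16):  # preserve progressively more no-ops
        kept = 0
        filtered = []
        for i, a in enumerate(actions):
            # Drop a no-op unless it sits near a critical transition
            # (e.g. gripper closure/opening) and the budget allows it.
            if is_noop(a) and not (near_transition(i) and kept < n_keep):
                continue
            if is_noop(a):
                kept += 1
            filtered.append(a)
        if replay_succeeds(filtered):  # validate via playback
            return filtered
    return actions  # fall back to the unfiltered trajectory
```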
## Evaluation & Usage
This dataset is designed for use within the VLA-Arena benchmark ecosystem: models trained on it are subsequently tested across 11 specialized suites with difficulty levels ranging from L0 (Basic) to L2 (Advanced).
For detailed evaluation instructions, metrics, and scripts, please refer to the VLA-Arena repository.