---
pretty_name: language_table_train_55000_60000_augmented
license: cc-by-4.0
tags:
  - robotics
  - lerobot
  - oxe-auge
  - dataset
task_categories:
  - robotics
oxe_aug:
  codebase_version: v3.0
  robots:
    - google_robot
    - images
    - jaco
    - kinova3
    - kuka_iiwa
    - panda
    - sawyer
    - ur5e
  fps: 10
  total_episodes: 5000
  total_frames: 79295
  total_videos: null
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - data/chunk-*/file-*.parquet
---

language_table_train_55000_60000_augmented

Overview

  • Codebase version: v3.0
  • Robots: google_robot, images, jaco, kinova3, kuka_iiwa, panda, sawyer, ur5e
  • FPS: 10
  • Episodes: 5,000
  • Frames: 79,295
  • Splits:
    • train: 0:5000

Data Layout

data_path : data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet
video_path: videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4
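
As a quick illustration, here is a minimal sketch of how these templates expand into concrete relative paths. The helper names are hypothetical, and the example assumes the video key is the full feature name (e.g. observation.images.panda):

```python
# Illustrative helpers (not part of the dataset tooling) showing how the
# chunk/file templates above expand into concrete relative paths.
def data_file(chunk_index: int, file_index: int) -> str:
    return f"data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"

def video_file(video_key: str, chunk_index: int, file_index: int) -> str:
    # Assumes the video key is the full feature name, e.g. "observation.images.panda".
    return f"videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

print(data_file(0, 0))
# data/chunk-000/file-000.parquet
print(video_file("observation.images.panda", 0, 0))
# videos/observation.images.panda/chunk-000/file-000.mp4
```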

Features

| Feature | dtype | shape | description |
| --- | --- | --- | --- |
| observation.images.google_robot | video | 360×640×3 | Augmented image for the google_robot robot |
| observation.images.image | video | 360×640×3 | Source robot's image from the original dataset |
| observation.images.jaco | video | 360×640×3 | Augmented image for the jaco robot |
| observation.images.kinova3 | video | 360×640×3 | Augmented image for the kinova3 robot |
| observation.images.kuka_iiwa | video | 360×640×3 | Augmented image for the kuka_iiwa robot |
| observation.images.panda | video | 360×640×3 | Augmented image for the panda robot |
| observation.images.sawyer | video | 360×640×3 | Augmented image for the sawyer robot |
| observation.images.ur5e | video | 360×640×3 | Augmented image for the ur5e robot |
| episode_index | int64 | 1 | Index of the current episode within the dataset |
| frame_index | int64 | 1 | Index of the current frame within its episode |
| index | int64 | 1 | Global frame index across the whole dataset |
| natural_language_instruction | int32 | 512 | Natural language command describing the task |
| observation.ee_pose | float32 | 7 | End-effector pose of the source robot |
| observation.google_robot.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.google_robot.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.google_robot.ee_error | float32 | 7 | End-effector difference between the augmented google_robot and the original robot |
| observation.google_robot.ee_pose | float32 | 7 | End-effector pose of the google_robot robot |
| observation.google_robot.joints | float32 | 8 | Joint positions of the google_robot robot |
| observation.jaco.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.jaco.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.jaco.ee_error | float32 | 7 | End-effector difference between the augmented jaco and the original robot |
| observation.jaco.ee_pose | float32 | 7 | End-effector pose of the jaco robot |
| observation.jaco.joints | float32 | 7 | Joint positions of the jaco robot |
| observation.joints | float32 | 8 | Joint angles of the source robot |
| observation.kinova3.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.kinova3.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.kinova3.ee_error | float32 | 7 | End-effector difference between the augmented kinova3 and the original robot |
| observation.kinova3.ee_pose | float32 | 7 | End-effector pose of the kinova3 robot |
| observation.kinova3.joints | float32 | 8 | Joint positions of the kinova3 robot |
| observation.kuka_iiwa.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.kuka_iiwa.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.kuka_iiwa.ee_error | float32 | 7 | End-effector difference between the augmented kuka_iiwa and the original robot |
| observation.kuka_iiwa.ee_pose | float32 | 7 | End-effector pose of the kuka_iiwa robot |
| observation.kuka_iiwa.joints | float32 | 8 | Joint positions of the kuka_iiwa robot |
| observation.panda.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.panda.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.panda.ee_error | float32 | 7 | End-effector difference between the augmented panda and the original robot |
| observation.panda.ee_pose | float32 | 7 | End-effector pose of the panda robot |
| observation.panda.joints | float32 | 8 | Joint positions of the panda robot |
| observation.sawyer.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.sawyer.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.sawyer.ee_error | float32 | 7 | End-effector difference between the augmented sawyer and the original robot |
| observation.sawyer.ee_pose | float32 | 7 | End-effector pose of the sawyer robot |
| observation.sawyer.joints | float32 | 8 | Joint positions of the sawyer robot |
| observation.state | float32 | 2 | Copy of the state field from the source robot's RLDS dataset |
| observation.ur5e.base_orientation | float32 | 1 | Counter-clockwise rotation about the z-axis applied so the robot does not block the camera (mostly 0) |
| observation.ur5e.base_position | float32 | 3 | Base translation applied so the trajectory remains achievable |
| observation.ur5e.ee_error | float32 | 7 | End-effector difference between the augmented ur5e and the original robot |
| observation.ur5e.ee_pose | float32 | 7 | End-effector pose of the ur5e robot |
| observation.ur5e.joints | float32 | 7 | Joint positions of the ur5e robot |
| task_index | int64 | 1 | Integer ID of the high-level task this episode/frame belongs to |
| timestamp | float32 | 1 | Timestamp of the current frame within the episode (in seconds) |
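
As a rough usage sketch, the tabular features above can be read with the Hugging Face `datasets` library. The repo id below is assumed from this card's dataset name, and the comments on the pose layout are assumptions rather than documented facts:

```python
# Minimal inspection sketch using the Hugging Face `datasets` library.
from datasets import load_dataset

# Assumed repo id, derived from the dataset name on this card.
REPO_ID = "GuanhuaJi/language_table_train_55000_60000_augmented"

# The default config maps data/chunk-*/file-*.parquet to the "train" split,
# so this loads the tabular features; videos are stored separately as MP4s.
ds = load_dataset(REPO_ID, split="train")
frame = ds[0]

print(frame["episode_index"], frame["frame_index"], frame["timestamp"])

# 7-D end-effector pose of the augmented Panda (assumed to be a 3-D position
# plus a 4-D quaternion) and its per-dimension gap to the source robot.
panda_pose = frame["observation.panda.ee_pose"]
panda_error = frame["observation.panda.ee_error"]
print(len(panda_pose), len(panda_error))  # 7 7
```

Note that the video features (observation.images.*) are not materialized by this call; they live under videos/ as MP4 files addressed by the video_path template in Data Layout.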

Website

Paper

Citation Policy

If you use OXE-AugE datasets, please cite both our dataset and the upstream datasets.

Upstream Dataset Citation (original dataset)

@article{lynch2022interactive,
  title   = {Interactive Language: Talking to Robots in Real Time},
  author  = {Corey Lynch and Ayzaan Wahid and Jonathan Tompson and Tianli Ding and James Betker and Robert Baruch and Travis Armstrong and Pete Florence},
  journal = {arXiv preprint arXiv:2210.06407},
  year    = {2022},
  url     = {https://arxiv.org/abs/2210.06407}
}

OXE-AugE Dataset Citation (ours)

@misc{ji2025oxeaug,
  title  = {OXE-AugE: A Large-Scale Robot Augmentation of OXE for Scaling Cross-Embodiment Policy Learning},
  author = {Ji, Guanhua and Polavaram, Harsha and Chen, Lawrence Yunliang and Bajamahal, Sandeep and Ma, Zehan and Adebola, Simeon and Xu, Chenfeng and Goldberg, Ken},
  year   = {2025},
  note   = {Manuscript}
}