shauryadewan committed · verified · Commit 78f1365 · 1 Parent(s): 1ef079f

Update README.md

Files changed (1):
  1. README.md +76 -16

README.md CHANGED
tags:
- robotics
---

## Dataset Description:
 
This is a fully annotated, synthetically generated dataset consisting of 1,000 demonstrations of a single Franka Panda robot arm performing a fixed-order three-cube stacking task in Isaac Lab. The robot consistently stacks cubes in the order: blue (bottom) → red (middle) → green (top).

The dataset was produced using the following pipeline:
- Collected 10 human teleoperation demonstrations of the stacking task.
- Used Isaac Lab’s **Mimic** tool [1] to simulate 1,000 high-quality trajectories in Isaac Sim.
- Applied the **Cosmos Transfer1** model [2] to augment the RGB visuals from the table camera with photorealistic domain adaptation.

Each demonstration includes synchronized multimodal data:
- RGB videos from both a table-mounted and a wrist-mounted camera.
- Full low-level robot and object states (joints, end-effector, gripper, cube poses).
- Action sequences executed by the robot.

This dataset is ideal for behavior cloning, policy learning, and generalist robotic manipulation research.

This dataset is ready for commercial use.

## Dataset Owner(s):

NVIDIA Corporation

## Dataset Creation Date:

06/04/2025

## License/Terms of Use:

CC BY 4.0

## Intended Usage:

This dataset is intended for:
- Training robot manipulation policies using behavior cloning.
- Research in generalist robotics and task-conditioned agents.
- Sim-to-real transfer studies and visual domain adaptation.

## Dataset Characterization:

**Data Collection Method**
* Automated
* Automatic/Sensors
* Synthetic

10 human teleoperated demonstrations were used to bootstrap a Mimic-based simulation [1] in Isaac Sim. All 1,000 demos were then generated automatically, followed by domain-randomized visual augmentation using Cosmos Transfer1 [2].

**Labeling Method**
* Not Applicable

## Dataset Format:

We provide the 1,000 Mimic-generated demonstrations and the 1,000 Cosmos-augmented demonstrations in separate HDF5 dataset files (`mimic_dataset_1k.hdf5` and `cosmos_dataset_1k.hdf5`, respectively). Each demo in each file consists of a time-indexed sequence of the following modalities:

**Actions**
- 7D vector: 6D relative end-effector motion + 1D gripper action

**Observations**
- Robot states: Joint positions, velocities, and gripper open/close state
- EEF states: End-effector 6-DOF pose
- Cube states: Poses (positions + orientations) for blue, red, and green cubes
 
- Table camera visuals:
  - 200×200 RGB
  - 200×200 Depth map
  - 200×200 Segmentation mask
  - 200×200 Surface normal map
- Wrist camera visuals:
  - 200×200 RGB
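
As a minimal sketch of reading one demonstration, the snippet below uses `h5py`. Note that the group/key layout (`data/demo_0/actions`) is an assumption modeled on the common robomimic-style HDF5 convention, not confirmed by this card; list the actual file's keys (e.g. `f.visit(print)`) to verify its structure. To keep the sketch runnable anywhere, it first writes a tiny stand-in file with the assumed layout.

```python
import h5py
import numpy as np

def action_shape(path, demo="demo_0"):
    """Return the (timesteps, action_dim) shape of one demo's actions."""
    with h5py.File(path, "r") as f:
        # Assumed robomimic-style path; verify against the real file.
        return f[f"data/{demo}/actions"].shape

# Build a tiny stand-in file with the assumed layout so the sketch runs.
with h5py.File("toy_demo.hdf5", "w") as f:
    demo = f.create_group("data/demo_0")
    # 7D actions: 6D relative end-effector motion + 1D gripper command
    demo.create_dataset("actions", data=np.zeros((50, 7), dtype=np.float32))

print(action_shape("toy_demo.hdf5"))  # (50, 7)
```

The same pattern extends to the observation streams (robot, EEF, and cube states) once the real key names are known.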

## Dataset Quantification:

**Record Count**
* `mimic_dataset_1k`
  * Number of demonstrations/trajectories: 1,000
  * Number of RGB videos: 2,000 (1,000 table camera + 1,000 wrist camera)
  * Number of depth videos: 1,000 (table camera)
  * Number of segmentation videos: 1,000 (table camera)
  * Number of normal map videos: 1,000 (table camera)
* `cosmos_dataset_1k`
  * Number of demonstrations/trajectories: 1,000
  * Number of RGB videos: 2,000 (1,000 table camera + 1,000 wrist camera)
  * Number of depth videos: 1,000 (table camera)
  * Number of segmentation videos: 1,000 (table camera)
  * Number of normal map videos: 1,000 (table camera)

**Total Storage**
* 69.4 GB
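
As a back-of-envelope check on the storage figure (assuming the 69.4 GB total spans both HDF5 files, i.e. all 2,000 demonstrations):

```python
# Average footprint per demonstration, assuming 69.4 GB covers both files.
total_gb = 69.4
num_demos = 1000 + 1000  # mimic + cosmos demonstrations
mb_per_demo = total_gb * 1024 / num_demos
print(f"~{mb_per_demo:.1f} MB per demo")  # ~35.5 MB per demo
```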

## Reference(s):

```
[1] @inproceedings{mandlekar2023mimicgen,
      title = {MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations},
      author = {Mandlekar, Ajay and Nasiriany, Soroush and Wen, Bowen and Akinola, Iretiayo and Narang, Yashraj and Fan, Linxi and Zhu, Yuke and Fox, Dieter},
      booktitle = {7th Annual Conference on Robot Learning},
      year = {2023}
    }
[2] @misc{nvidia2025cosmostransfer1conditionalworldgeneration,
      title = {Cosmos-Transfer1: Conditional World Generation with Adaptive Multimodal Control},
      author = {NVIDIA and Abu Alhaija, Hassan and Alvarez, Jose and Bala, Maciej and Cai, Tiffany and Cao, Tianshi and Cha, Liz and Chen, Joshua and Chen, Mike and Ferroni, Francesco and Fidler, Sanja and Fox, Dieter and Ge, Yunhao and Gu, Jinwei and Hassani, Ali and Isaev, Michael and Jannaty, Pooya and Lan, Shiyi and Lasser, Tobias and Ling, Huan and Liu, Ming-Yu and Liu, Xian and Lu, Yifan and Luo, Alice and Ma, Qianli and Mao, Hanzi and Ramos, Fabio and Ren, Xuanchi and Shen, Tianchang and Tang, Shitao and Wang, Ting-Chun and Wu, Jay and Xu, Jiashu and Xu, Stella and Xie, Kevin and Ye, Yuchong and Yang, Xiaodong and Zeng, Xiaohui and Zeng, Yu},
      journal = {arXiv preprint arXiv:2503.14492},
      year = {2025},
      url = {https://arxiv.org/abs/2503.14492}
    }
```

## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this dataset in accordance with our terms of service, developers should work with their internal team to ensure it meets requirements for the relevant industry and use case and addresses unforeseen product misuse.