---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- language_table
- openx
- google_robot
configs:
- config_name: default
  data_files: data/shard-*.parquet
---
# Language Table (LeRobot) — Embedding-Only Release (DINOv3 + SigLIP2 image features; EmbeddingGemma task-text features)
This repository packages a re-encoded variant of IPEC-COMMUNITY/fractal20220817_data_lerobot in which raw videos are replaced by fixed-length image embeddings and each task string is paired with a text embedding. All indices, splits, and semantics remain consistent with the source dataset, while storage and I/O are substantially lighter. To make the dataset practical to upload, download, and stream from the Hub, the tiny per-episode Parquet files were also consolidated into N large Parquet shards under a single `data/` folder. The file `meta/sharded_index.json` preserves a precise mapping from each original episode (referenced by a normalized identifier of the form `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`) to its shard path and row range, so you keep the original addressing without paying the small-file tax.
- Robot: `google_robot`
- Modalities kept: states, actions, timestamps, frame/episode indices, image embeddings, task-text embeddings
- Removed:
  - `observation.images.image`
- License: apache-2.0 (inherits from the source)
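The shards can be loaded or streamed directly with the `datasets` library. A minimal sketch; the repo id below is a placeholder for this repository's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id; substitute this repository's actual path on the Hub.
ds = load_dataset("ORG/REPO", split="train", streaming=True)

row = next(iter(ds))
print(len(row["observation.images.image_dinov3"]))  # 1024
print(len(row["observation.images.image_siglip2"]))  # 768
```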
## Quick Stats
From `meta/info.json` and `meta/task_text_embeddings_info.json`:
- Episodes: 87,212
- Frames: 3,786,400
- Tasks (unique): 599
- Chunks (original layout): 88 (chunks_size=1000)
- Shards (this release): 64 Parquet files under data/ (see meta/sharded_index.json)
- FPS: 3
- Image embeddings (per frame):
  - `observation.images.image_dinov3` → float32 [1024] (DINOv3 ViT-L/16 CLS)
  - `observation.images.image_siglip2` → float32 [768] (SigLIP2-base)
- Task-text embeddings (per unique task):
  - `embedding` → float32 [768] from `google/embeddinggemma-300m`
  - Count: 599 rows (one per task)
Note: This is an embedding-only package. The original pixel arrays listed under “Removed” are dropped.
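The embedding columns can also be inspected shard by shard with pyarrow and joined to the per-task text table through `task_index`. A minimal sketch (the shard filename follows the `data_path` template shown below; the `embedding` column name is an assumption):

```python
import numpy as np
import pyarrow.parquet as pq

# Per-task text embeddings: one row per unique task, indexed by task_index.
task_tbl = pq.read_table("data/task_text_embeddings.parquet")
task_emb = np.asarray(task_tbl["embedding"].to_pylist(), dtype=np.float32)  # (599, 768)

# One frame shard: per-frame image embeddings live in the listed columns.
shard = pq.read_table("data/shard-00000-of-00064.parquet")
dino = np.asarray(shard["observation.images.image_dinov3"].to_pylist(), dtype=np.float32)
task_idx = np.asarray(shard["task_index"].to_pylist(), dtype=np.int64).reshape(-1)

frame_text_emb = task_emb[task_idx]  # (num_rows, 768): task-text embedding per frame
print(dino.shape, frame_text_emb.shape)
```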
## Contents

```
.
|-- meta/
|   |-- info.json
|   |-- sharded_index.json
|   |-- tasks.jsonl
|   |-- episodes.jsonl
|   `-- task_text_embeddings_info.json
|-- data/
|   |-- shard-00000-of-000NN.parquet
|   |-- shard-00001-of-000NN.parquet
|   |-- ...
|   `-- task_text_embeddings.parquet
`-- README.md
```
## How This Was Generated (Reproducible Pipeline)
- Episode → Image Embeddings (drop pixels): `convert_lerobot_to_embeddings_mono.py` (GPU-accelerated preprocessing); see the first sketch after this list.
  - Adds: `observation.images.image_dinov3` (float32[1024]), `observation.images.image_siglip2` (float32[768])
  - Removes: `observation.images.image`
- Task-Text Embeddings (one row per unique task): `build_task_text_embeddings.py` with `SentenceTransformer("google/embeddinggemma-300m")` → `data/task_text_embeddings.parquet` + `meta/task_text_embeddings_info.json` (second sketch below).
- Data Consolidation (this release): all per-episode Parquet files were consolidated into N large Parquet shards in a single `data/` folder (third sketch below).
  - The index `meta/sharded_index.json` records, for each episode, its normalized source identifier `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`, the destination shard path, and the `(row_offset, num_rows)` range inside that shard.
  - This preserves the original addressing while making Hub sync/clone/stream far faster and more reliable.
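First, a minimal sketch of the image-encoding step, assuming both checkpoints load through `transformers`' Auto classes (the actual script also handles video decoding and batching, which is omitted here):

```python
import torch
from transformers import AutoImageProcessor, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: both encoders are loadable via AutoModel/AutoImageProcessor.
dino_proc = AutoImageProcessor.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m")
dino = AutoModel.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m").to(device).eval()
sig_proc = AutoImageProcessor.from_pretrained("google/siglip2-base-patch16-384")
sig = AutoModel.from_pretrained("google/siglip2-base-patch16-384").to(device).eval()

@torch.no_grad()
def encode_frames(frames):
    """Encode a batch of PIL frames into the two per-frame feature vectors."""
    d_in = dino_proc(images=frames, return_tensors="pt").to(device)
    dinov3 = dino(**d_in).last_hidden_state[:, 0]   # CLS token -> float32[1024]
    s_in = sig_proc(images=frames, return_tensors="pt").to(device)
    siglip2 = sig.get_image_features(**s_in)        # pooled feature -> float32[768]
    return dinov3.cpu().numpy(), siglip2.cpu().numpy()
```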
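Second, a sketch of the task-text step; the `meta/tasks.jsonl` line format follows the usual LeRobot convention, and the output column names are assumptions:

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq
from sentence_transformers import SentenceTransformer

# One JSON object per line: {"task_index": ..., "task": ...} (LeRobot convention).
with open("meta/tasks.jsonl") as f:
    tasks = [json.loads(line)["task"] for line in f]

model = SentenceTransformer("google/embeddinggemma-300m")
emb = model.encode(tasks, normalize_embeddings=True)  # (599, 768), L2-normalized

pq.write_table(
    pa.table({
        "task_index": list(range(len(tasks))),
        "task": tasks,
        "embedding": [row for row in emb],
    }),
    "data/task_text_embeddings.parquet",
)
```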
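Third, a sketch of how the consolidation index can be used to recover a single original episode; the entry key names (`shard`, `row_offset`, `num_rows`) are assumptions based on the description above:

```python
import json
import pyarrow.parquet as pq

with open("meta/sharded_index.json") as f:
    index = json.load(f)

# Normalized identifier of the original per-episode file.
episode_id = "data/chunk-000/episode_000042.parquet"
entry = index[episode_id]  # hypothetical: {"shard": ..., "row_offset": ..., "num_rows": ...}

shard = pq.read_table(entry["shard"])
episode = shard.slice(entry["row_offset"], entry["num_rows"])  # just that episode's rows
print(episode.num_rows)
```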
## Metadata (Excerpts)

### `meta/task_text_embeddings_info.json`

```json
{
"model": "google/embeddinggemma-300m",
"dimension": 768,
"normalized": true,
"count": 599,
"file": "task_text_embeddings.parquet"
}
```
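The `"normalized": true` field can be verified directly against the table (assuming the `embedding` column name):

```python
import numpy as np
import pyarrow.parquet as pq

emb = np.asarray(pq.read_table("data/task_text_embeddings.parquet")["embedding"].to_pylist())
print(emb.shape)                                                  # (599, 768)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0, atol=1e-3))   # True: rows are L2-normalized
```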
### `meta/info.json` (embedding-only + shards)

```json
{
"codebase_version": "v2.0-embeddings-sharded",
"robot_type": "google_robot",
"total_episodes": 87212,
"total_frames": 3786400,
"total_tasks": 599,
"total_videos": 87212,
"total_chunks": 88,
"chunks_size": 1000,
"fps": 3,
"splits": {
"train": "0:87212"
},
"data_path": "data/shard-{shard_id:05d}-of-{num_shards:05d}.parquet",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"observation.images.image_dinov3": {
"dtype": "float32",
"shape": [
1024
],
"names": null
},
"observation.images.image_siglip2": {
"dtype": "float32",
"shape": [
768
],
"names": null
}
},
"video_keys": [
"observation.images.image"
],
"num_shards": 64,
"index_path": "meta/sharded_index.json"
}
```
## Environment & Dependencies
Python ≥ 3.9 • PyTorch ≥ 2.1 • transformers • sentence-transformers • pyarrow • tqdm • decord (and optionally av)
## Provenance, License, and Citation
- Source dataset: `IPEC-COMMUNITY/fractal20220817_data_lerobot`
- License: apache-2.0 (inherits from the source)
- Encoders to cite:
  - `facebook/dinov3-vitl16-pretrain-lvd1689m`
  - `google/siglip2-base-patch16-384`
  - `google/embeddinggemma-300m`
## Changelog

- `v2.0-embeddings-sharded`: replaced video tensors with DINOv3 + SigLIP2 features; added EmbeddingGemma task-text embeddings; consolidated per-episode Parquet files into N shards with a repo-local index; preserved original indexing/splits via normalized episode identifiers.