---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- language_table
- openx
- xarm
configs:
- config_name: default
  data_files: data/*.parquet
---
# Language Table (LeRobot) — Embedding-Only Release (DINOv3 + SigLIP2 image features; EmbeddingGemma task-text features)
This repository packages a re-encoded variant of IPEC-COMMUNITY/language_table_lerobot in which raw videos are replaced by fixed-length image embeddings and task strings are augmented with text embeddings. All indices, splits, and semantics remain consistent with the source dataset, while storage and I/O are substantially lighter.

To make the dataset practical to upload, download, and stream from the Hub, we also consolidated ~0.5M tiny per-episode Parquet files into N large Parquet shards under a single `data/` folder. The file `meta/sharded_index.json` preserves a precise mapping from each original episode (referenced by a normalized identifier of the form `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`) to its shard path and row range, so you keep the original addressing without paying the small-file tax.
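As a sketch of how that mapping can be consumed (the entry shape below is hypothetical; consult `meta/sharded_index.json` for the real field names):

```python
# Hypothetical shape of one meta/sharded_index.json entry; the real file
# (loaded with json.load) stores the same three facts per episode:
# destination shard, row offset, and row count.
index = {
    "data/chunk-000/episode_000007.parquet": {
        "shard": "data/shard-00000-of-00064.parquet",
        "row_offset": 112,
        "num_rows": 16,
    }
}

def resolve(episode_chunk: int, episode_index: int):
    """Map an original episode id to (shard_path, row_offset, num_rows)."""
    key = f"data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
    entry = index[key]
    return entry["shard"], entry["row_offset"], entry["num_rows"]

shard, offset, n = resolve(0, 7)
# The episode's frames are then rows [offset, offset + n) of that shard, e.g.
# pyarrow.parquet.read_table(shard).slice(offset, n)
```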
- Robot: xArm
- Modalities kept: states, actions, timestamps, frame/episode indices, image embeddings, task-text embeddings
- Removed: raw video tensors (column `observation.images.rgb`)
- License: apache-2.0 (inherits from source)
## Quick Stats

From `meta/info.json` and `meta/task_text_embeddings_info.json`:
- Episodes: 442,226
- Frames: 7,045,476
- Tasks (unique): 127,605
- Chunks (original layout): 443 (chunks_size=1000)
- Shards (this release): N Parquet files under data/ (see meta/sharded_index.json)
- FPS: 10
- Image embeddings (per frame):
  - `observation.images.rgb_dinov3` → float32 `[1024]` (DINOv3 ViT-L/16 CLS)
  - `observation.images.rgb_siglip2` → float32 `[768]` (SigLIP2-base)
- Task-text embeddings (per unique task):
  - `embedding` → float32 `[768]` from `google/embeddinggemma-300m`
  - Count: 127,605 rows (one per task)
Note: This is an embedding-only package. `video_path` is omitted and the original `observation.images.rgb` pixels are dropped.
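Since the task-text vectors are stored unnormalized (`"normalized": false` in the metadata), a typical use is cosine-similarity task retrieval after normalizing. A minimal sketch with synthetic vectors standing in for `data/task_text_embeddings.parquet` rows:

```python
import numpy as np

# Stand-in for data/task_text_embeddings.parquet: a few synthetic 768-D
# task vectors (the real file has 127,605 rows, one per unique task).
rng = np.random.default_rng(0)
tasks = ["push the red block", "slide the blue cube left", "separate the blocks"]
emb = rng.standard_normal((len(tasks), 768)).astype(np.float32)

# Normalize once so a dot product equals cosine similarity.
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def nearest_task(query_vec: np.ndarray) -> str:
    """Return the task string whose embedding is closest in cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    return tasks[int(np.argmax(unit @ q))]

# A query identical to a stored vector retrieves that vector's task.
assert nearest_task(emb[1]) == "slide the blue cube left"
```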
## Contents

```
.
├─ meta/
│  ├─ info.json                        # dataset overview & feature schema (updated to shards)
│  ├─ sharded_index.json               # mapping: original-episode-id → shard path + row range
│  ├─ tasks.jsonl                      # {"task_index": int, "task": str}
│  ├─ episodes.jsonl                   # {"task_index": int, "task": str, "length": int}
│  └─ task_text_embeddings_info.json   # model/dim/normalized/count/file for task embeddings
├─ data/
│  ├─ shard-00000-of-000NN.parquet
│  ├─ shard-00001-of-000NN.parquet
│  ├─ …                                # N large Parquet shards for fast HF upload/streaming
│  └─ task_text_embeddings.parquet     # task_index, task, 768-D EmbeddingGemma vector
└─ README.md
```
## How This Was Generated (Reproducible Pipeline)
- Episode → Image Embeddings (drop pixels): `convert_lerobot_to_embeddings_mono.py` (GPU-accelerated preprocessing).
  Adds:
  - `observation.images.rgb_dinov3` (float32 `[1024]`)
  - `observation.images.rgb_siglip2` (float32 `[768]`)

  Removes:
  - `observation.images.rgb` (raw frames)
- Task-Text Embeddings (one row per unique task): `build_task_text_embeddings.py` with `SentenceTransformer("google/embeddinggemma-300m")` → `data/task_text_embeddings.parquet` + `meta/task_text_embeddings_info.json`.
- Data Consolidation (this release): all per-episode Parquet files were consolidated into N large Parquet shards in one `data/` folder.
  - The index `meta/sharded_index.json` records, for each episode, its normalized source identifier `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`, the destination shard path, and the `(row_offset, num_rows)` range inside that shard.
  - This preserves the original addressing while making Hub sync/clone/stream far faster and more reliable.
## Metadata (Excerpts)

### `meta/task_text_embeddings_info.json`

```json
{
  "model": "google/embeddinggemma-300m",
  "dimension": 768,
  "normalized": false,
  "count": 127605,
  "file": "task_text_embeddings.parquet"
}
```
### `meta/info.json` (embedding-only + shards)

```json
{
  "codebase_version": "v2.0-embeddings-sharded",
  "robot_type": "xarm",
  "total_episodes": 442226,
  "total_frames": 7045476,
  "total_tasks": 127605,
  "total_videos": 442226,
  "total_chunks": 443,
  "chunks_size": 1000,
  "fps": 10,
  "splits": { "train": "0:442226" },
  "data_path": "data/shard-{shard_id:05d}-of-{num_shards:05d}.parquet",
  "features": {
    "observation.state": {
      "dtype": "float32",
      "shape": [8],
      "names": { "motors": ["x", "y", "z", "roll", "pitch", "yaw", "pad", "gripper"] }
    },
    "action": {
      "dtype": "float32",
      "shape": [7],
      "names": { "motors": ["x", "y", "z", "roll", "pitch", "yaw", "gripper"] }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null },
    "observation.images.rgb_dinov3": { "dtype": "float32", "shape": [1024], "names": null },
    "observation.images.rgb_siglip2": { "dtype": "float32", "shape": [768], "names": null }
  },
  "num_shards": 64,
  "index_path": "meta/sharded_index.json"
}
```
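The `data_path` value is a Python `str.format` template, so a shard file name resolves directly from a shard id and the shard count (the shard id below is just an example):

```python
# data_path template copied from meta/info.json
data_path = "data/shard-{shard_id:05d}-of-{num_shards:05d}.parquet"

# Expand it for, say, shard 3 of the 64 shards this release declares:
print(data_path.format(shard_id=3, num_shards=64))
# → data/shard-00003-of-00064.parquet
```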
## Environment & Dependencies

Python ≥ 3.9 • PyTorch ≥ 2.1 • `transformers` • `sentence-transformers` • `pyarrow` • `tqdm` • `decord` (and optionally `av`)
## Provenance, License, and Citation

- Source dataset: IPEC-COMMUNITY/language_table_lerobot
- License: apache-2.0 (inherits from the source)
- Encoders to cite:
  - `facebook/dinov3-vitl16-pretrain-lvd1689m`
  - `google/siglip2-base-patch16-384`
  - `google/embeddinggemma-300m`
## Changelog

- `v2.0-embeddings-sharded` — Replaced video tensors with DINOv3 + SigLIP2 features; added EmbeddingGemma task-text embeddings; consolidated per-episode Parquets into N shards with a repo-local index; preserved original indexing/splits via normalized episode identifiers.