MeshCoderDataset

This is the dataset of the paper MeshCoder.

MeshCoder

Project Website · Hugging Face Paper · Hugging Face Model · Hugging Face Dataset

Bingquan Dai*, Li Luo*, Qihong Tang, Jie Wang, Xinyu Lian, Hao Xu, Minghan Qin, Xudong Xu, Bo Dai, Haoqian Wang, Zhaoyang Lyu, Jiangmiao Pang

* Equal contribution
Corresponding author
Project lead: Zhaoyang Lyu

Overview

MeshCoder is a framework that converts 3D point clouds into editable Blender Python scripts, enabling programmatic reconstruction and editing of complex human-made objects. It overcomes prior limitations by developing expressive APIs for modeling intricate geometries, building a large-scale dataset of 1 million object-code pairs across 41 categories, and training a multimodal LLM to generate accurate, part-segmented code from point clouds. The approach outperforms existing methods in reconstruction quality, supports intuitive shape and topology editing via code modifications, and enhances 3D reasoning capabilities in LLMs.

Usage

For the model checkpoint, please see https://huggingface.co/InternRobotics/MeshCoder/ for more details. For model usage, please see the GitHub repository https://github.com/InternRobotics/MeshCoder for installation, training, and inference instructions.

🔑 Key Features

MeshCoder brings together a wide variety of complex objects, each paired with Blender Python code that describes the object in semantic parts. The dataset provides:

  • 📊 Large scale: 100 thousand object-code pairs spanning 40 categories, with objects composed of up to 100+ parts.
  • 💻 Structured & editable code: human-readable Blender Python scripts decomposed into distinct semantic parts, enabling intuitive geometric and topological editing.
  • 🧊 Intricate geometries: built on a comprehensive set of expressive Blender Python APIs capable of synthesizing complex shapes (e.g., via translation, bridge loops, booleans, and arrays) far beyond simple primitives; see the excerpt after this list.
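For concreteness, the snippet below is a short excerpt in the style of the scripts in this dataset, assembled from calls that appear verbatim in the script fields further down this page (create_primitive, boolean_operation, bevel). It is only a sketch: it must be run inside Blender with MeshCoder's bpy_lib helpers available (see the GitHub repository), and the exact parameter semantics are defined there.

import bpy
from bpy_lib import *  # MeshCoder's Blender helper APIs

delete_all()

# part_1: body -- a cube with a cavity carved out via a boolean difference
create_primitive(name='body_1', primitive_type='cube', location=[-0.05, 0.0, 0.0], scale=[1.0, 0.5, 0.78], rotation=[0.0, 0.71, -0.0, 0.71])
create_primitive(name='Bool_1', primitive_type='cube', location=[-0.05, 0.0, 0.25], scale=[0.69, 0.43, 0.79], rotation=[0.0, 0.71, -0.0, 0.71])
boolean_operation(name1='body_1', name2='Bool_1', operation='DIFFERENCE')

# part_2: top -- a bevelled cube
create_primitive(name='top_2', primitive_type='cube', location=[-0.0, 0.97, -0.0], scale=[0.4, 0.4, 0.03], rotation=[0.5, -0.5, 0.5, 0.5])
bevel(name='top_2', width=0.09, segments=8)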

⚙️ Getting Started

Download the Dataset

To download the full dataset, you can use the following code. If you encounter any issues, please refer to the official Hugging Face documentation.

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with write permissions.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/InternRobotics/MeshCoderDataset

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/InternRobotics/MeshCoderDataset
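
Alternatively, you can download the dataset with the huggingface_hub Python API. A minimal sketch, assuming you have accepted the access conditions and are logged in (e.g., via huggingface-cli login):

# pip install huggingface_hub
from huggingface_hub import snapshot_download

# Download the full dataset repository into the local MeshCoderDataset/ directory
snapshot_download(
    repo_id="InternRobotics/MeshCoderDataset",
    repo_type="dataset",
    local_dir="MeshCoderDataset",
)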

Dataset Structure

To make uploading easier and save storage, the point clouds are zipped by object class. An example of the directory tree is shown below.

MeshCoderDataset/
|-- dataset_train/              # training set of point clouds
  |-- ArmChairFactory.zip       # zip file for each object class
  |-- BarChairFactory.zip  
  |-- .../                      
|-- dataset_val/                # validation set of point clouds
  |-- ArmChairFactory.zip
  |-- BarChairFactory.zip     
  |-- .../                      
|-- dataset_test/               # test set of point clouds
  |-- ArmChairFactory.zip
  |-- BarChairFactory.zip     
  |-- .../      
|-- train.json
|-- val.json
|-- test.json
|-- train_show.jsonl            # for the Hugging Face dataset viewer rendering
|-- val_show.jsonl
|-- test_show.jsonl
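
To extract every class archive at once, here is a minimal sketch using Python's standard zipfile module; it assumes each zip unpacks into its class folder directly under the split directory, matching the tree shown after this step.

import zipfile
from pathlib import Path

root = Path("MeshCoderDataset")
for split in ["dataset_train", "dataset_val", "dataset_test"]:
    for zip_path in sorted((root / split).glob("*.zip")):
        # Extract each class archive (e.g. ArmChairFactory.zip) in place
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(root / split)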

After unzipping, you will obtain a directory tree like the one below, which matches the pcd_path entries in the JSON files:

MeshCoderDataset/
|-- dataset_train/              # training set of point clouds
  |-- ArmChairFactory_1/        # point clouds for ArmChair
  |-- .../                      # many other object classes
|-- dataset_val/                # validation set of point clouds
  |-- ArmChairFactory_1/        # point clouds for ArmChair   
  |-- .../                      
|-- dataset_test/               # test set of point clouds
  |-- ArmChairFactory_1/        # point clouds for ArmChair   
  |-- .../                      
|-- train.json
|-- val.json
|-- test.json

Note that pcd_path is a relative path. The layout of the training-set JSON (train.json) is as follows:

[
    {
        "id": "10879",
        "name": "cocktail table",
        "category": "TableCocktailFactory",
        "script": "import bpy\nfrom math import radians, pi\nfrom bpy_lib import *\n\ndelete_all()\n\n# object name: cocktail table\n# part_1: leg\ncreate_polygon(name='hexagon_1', sides=6, radius=0.02)\ncreate_curve(name='leg_1', profile_name='hexagon_1', control_points=[[-0.32, -1.0, -0.32], [-0.29, 0.28, -0.29], [-0.28, 0.94, -0.28]], points_radius=[1.0, 2.07, 2.07], handle_type=[1, 1, 1, 1, 1, 1], thickness=0.0, fill_caps='both')\n\n# part_2: leg\ncreate_polygon(name='hexagon_2', sides=6, radius=0.02)\ncreate_curve(name='leg_2', profile_name='hexagon_2', control_points=[[-0.32, -1.0, 0.32], [-0.29, 0.26, 0.29], [-0.28, 0.94, 0.28]], points_radius=[1.0, 2.22, 2.22], handle_type=[1, 1, 1, 1, 1, 1], thickness=0.0, fill_caps='both')\n\n# part_3: leg\ncreate_polygon(name='hexagon_3', sides=6, radius=0.02)\ncreate_curve(name='leg_3', profile_name='hexagon_3', control_points=[[0.32, -1.0, -0.32], [0.29, 0.28, -0.29], [0.28, 0.94, -0.28]], points_radius=[1.0, 2.07, 2.07], handle_type=[1, 1, 1, 1, 1, 1], thickness=0.0, fill_caps='both')\n\n# part_4: leg\ncreate_polygon(name='hexagon_4', sides=6, radius=0.02)\ncreate_curve(name='leg_4', profile_name='hexagon_4', control_points=[[0.32, -1.0, 0.32], [0.29, 0.29, 0.29], [0.28, 0.94, 0.28]], points_radius=[1.0, 2.22, 2.22], handle_type=[1, 1, 1, 1, 1, 1], thickness=0.0, fill_caps='both')\n\n# part_5: strecher\ncreate_primitive(name='strecher_5', primitive_type='cylinder', location=[-0.0, -0.09, -0.0], scale=[0.02, 0.02, 0.42], rotation=[0.01, -0.38, -0.0, 0.92])\n\n# part_6: strecher\ncreate_primitive(name='strecher_6', primitive_type='cylinder', location=[0.0, -0.09, 0.0], scale=[0.02, 0.02, 0.42], rotation=[0.66, 0.27, 0.27, 0.65])\n\n# part_7: table top\ncreate_primitive(name='table top_7', primitive_type='cube', location=[-0.0, 0.97, -0.0], scale=[0.4, 0.4, 0.03], rotation=[0.5, -0.5, 0.5, 0.5])\nbevel(name='table top_7', width=0.09, segments=8)",
        "pcd_path": [
            "dataset_train/TableCocktailFactory_1_other/point_cloud/10879.npz",
            "dataset_train/TableCocktailFactory_1_other/point_cloud_gt/10879.npz"
        ],
        "cd_loss": [
            1e-08,
            0.00012
        ]
    }
...
]

The layout of the validation/test JSON (val.json / test.json) is as follows:

[
    {
        "id": 41627,
        "name": "Microwave",
        "category": "MicrowaveFactory",
        "script": "import bpy\nfrom math import radians, pi\nfrom bpy_lib import *\n\ndelete_all()\n\n# object name: Microwave\n# part_1: door\ncreate_primitive(name='door_1', primitive_type='cube', location=[0.78, -0.0, 0.22], scale=[0.78, 0.51, 0.05], rotation=[0.71, 0.0, 0.7, -0.0])\n\n# part_2: body\ncreate_primitive(name='body_2', primitive_type='cube', location=[-0.05, 0.0, 0.0], scale=[1.0, 0.5, 0.78], rotation=[0.0, 0.71, -0.0, 0.71])\ncreate_primitive(name='Bool2_2', primitive_type='cube', location=[-0.05, 0.0, 0.25], scale=[0.69, 0.43, 0.79], rotation=[0.0, 0.71, -0.0, 0.71])\nboolean_operation(name1='body_2', name2='Bool2_2', operation='DIFFERENCE')\n\n# part_3: sidedoor\ncreate_primitive(name='sidedoor_3', primitive_type='cube', location=[0.78, -0.0, -0.78], scale=[0.5, 0.22, 0.05], rotation=[0.5, 0.5, -0.5, -0.5])\n\n# part_4: plate\ncreate_curve(name='curve_4', control_points=[[0.0, 0.0, 0.0], [0.235, 0.0, 0.0], [0.372, 0.059, 0.0], [0.489, 0.117, 0.0]], handle_type=[0, 3, 0, 0, 0, 0])\nbezier_rotation(name='plate_4', profile_name='curve_4', location=[0.02, -0.43, 0.24], rotation=[0.54, -0.55, 0.46, 0.45], thickness=0.0)",
        "completeness": 1.0,
        "rec_parts": [
            "0.obj",
            "1.obj",
            "2.obj",
            "3.obj"
        ],
        "ori_parts": [
            "0.obj",
            "1.obj",
            "2.obj",
            "3.obj"
        ],
        "cd_loss": [
            0.00469
        ],
        "pcd_path": [
            "dataset_val/MicrowaveFactory_3/point_cloud_gt/41627.npz"
        ]
    }
...
]

Note that for the train split, pcd_path contains two entries: a point cloud sampled from the mesh generated by the code (e.g., xxx/point_cloud/10879.npz) and a point cloud sampled from the mesh generated by Infinigen indoor objects (e.g., xxx/point_cloud_gt/10879.npz). All pcd_path entries are relative paths. In the cd_loss list, the first value is the Chamfer distance of the point cloud with itself, set to 1e-8 by default, whereas the second value is the Chamfer distance between the point cloud sampled from the code-generated mesh and the point cloud sampled from the Infinigen-generated mesh.
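
As a quick sanity check, here is a minimal sketch for loading the annotations and one point cloud; the array keys inside the .npz files are not documented here, so the snippet simply lists them and treats the first array as the point coordinates (an assumption).

import json
import numpy as np
from pathlib import Path

root = Path("MeshCoderDataset")

# train.json is a list of dicts with id, name, category, script, pcd_path, cd_loss
with open(root / "train.json") as f:
    train = json.load(f)

sample = train[0]
print(sample["category"], sample["name"])
print(sample["script"][:200])        # the paired Blender Python script

# pcd_path entries are relative to the dataset root
pcd = np.load(root / sample["pcd_path"][0])
print(pcd.files)                     # inspect the array keys stored in the .npz
points = pcd[pcd.files[0]]           # assumption: first array holds point coordinates
print(points.shape)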

Join Us

We are seeking engineers, interns, researchers, and PhD candidates. If you have an interest in 3D content generation, please send your resume to [email protected].

🧷 Citation

@article{dai2025meshcoder,
  title={Meshcoder: Llm-powered structured mesh code generation from point clouds},
  author={Dai, Bingquan and Luo, Li Ray and Tang, Qihong and Wang, Jie and Lian, Xinyu and Xu, Hao and Qin, Minghan and Xu, Xudong and Dai, Bo and Wang, Haoqian and others},
  journal={arXiv preprint arXiv:2508.14879},
  year={2025}
}

📄 License


This work is released under the MeshCoder Community License Agreement (release date: November 3, 2025); all data and code in this repository are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
