---
license: cc-by-nc-sa-4.0
tags:
  - robotics
  - vision-language-action-model
  - vision-language-model
---

# Model Card for InternVLA-M1_object

InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies.
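
This card does not ship a usage snippet, so the following is only a minimal inference sketch: the repo id, the `internvla_m1` module, the `InternVLAM1Policy` class, and the `predict_action` method are all assumed placeholders (only `huggingface_hub.snapshot_download` is a real call). Consult the InternVLA-M1 project repository for the actual supported interface.

```python
# Minimal inference sketch -- NOT the official API. Everything except
# huggingface_hub.snapshot_download is an assumed placeholder.
import numpy as np
from huggingface_hub import snapshot_download

ckpt_dir = snapshot_download("InternRobotics/InternVLA-M1_object")  # repo id assumed

# Hypothetical policy wrapper; replace with the loader from the project repo.
from internvla_m1 import InternVLAM1Policy  # assumed module/class names

policy = InternVLAM1Policy.from_pretrained(ckpt_dir)

obs = {
    "image": np.zeros((224, 224, 3), dtype=np.uint8),  # dummy camera frame
    "instruction": "pick up the red block",
}

# The checkpoint was trained with action_chunk = 8, so one forward pass
# should yield a chunk of 8 consecutive low-level actions.
actions = policy.predict_action(obs)  # expected shape: (8, action_dim)
```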

## Training Details

- `action_chunk`: 8
- `batch_size`: 128
- `training_steps`: 30k
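
Because the policy emits 8-step action chunks, a common control pattern is to execute the whole chunk open-loop and only then re-query the policy. Below is a sketch under that assumption; `env` and `policy` are hypothetical stand-ins, not interfaces defined by this repo.

```python
def run_chunked_episode(env, policy, max_steps=240, chunk=8):
    """Re-plan every `chunk` steps: one policy call, then `chunk` env steps."""
    obs = env.reset()
    for _ in range(max_steps // chunk):
        actions = policy.predict_action(obs)  # shape: (chunk, action_dim)
        for action in actions:
            obs, done = env.step(action)  # placeholder (obs, done) interface
            if done:
                return obs
    return obs
```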

## Citation

```bibtex
@misc{internvla2025,
  title        = {InternVLA-M1: Latent Spatial Grounding for Instruction-Following Robotic Manipulation},
  author       = {InternVLA-M1 Contributors},
  year         = {2025},
  howpublished = {arXiv},
}
```