---
pipeline_tag: robotics
library_name: transformers
license: cc-by-nc-sa-4.0
tags:
- vision-language-model
- video-language-model
- navigation
---
# InternVLA-N1: An Open Dual-System Navigation Foundation Model with Learned Latent Plans
[💻 GitHub: InternNav](https://github.com/InternRobotics/InternNav)
The technical report will be made public during the upcoming open-source week. Please stay tuned!
## ⚠️ Important Notice
* This repository was previously named **InternVLA-N1**, but is now renamed to **InternVLA-N1-Preview**.
* The **official and latest release** is available at 👉 [InternVLA-N1](https://huggingface.co/InternRobotics/InternVLA-N1).
* We recommend using the official release for new research and deployment, while this preview version is kept for **reproducibility and reference**.
## Highlights
- Dual-System Framework
The first navigation foundation model to achieve joint tuning and asynchronous inference of System-2 reasoning and System-1 action, enabling smooth and efficient execution during instruction-following navigation (a minimal sketch of this asynchronous loop follows this list).
- State-of-the-art Performance
Both the full navigation foundation model and each of its systems achieve state-of-the-art performance on mainstream benchmarks as well as our newly established challenging ones, including VLN-CE R2R & RxR, GRScenes-100, and VLN-PE.
- Sim2Real Zero-shot Generalization
Training relies solely on the InternData-N1 simulation data, with diverse scenes, embodiments, and other randomization, yet the model achieves strong zero-shot generalization in the real world.
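
The asynchronous dual-system design can be pictured as a slow planner and a fast policy sharing the most recent latent plan. The sketch below is illustrative only: the `DualSystemNavigator` class and the `system2.reason` / `system1.step` interfaces are assumptions, not the released API; the supported implementation lives in the InternNav repository.

```python
# Illustrative sketch of asynchronous dual-system inference, assuming a slow
# System-2 reasoner that emits latent plans and a fast System-1 policy that
# always consumes the newest plan. All names here are hypothetical.
import threading
import time

class DualSystemNavigator:
    def __init__(self, system2, system1, plan_hz=1.0):
        self.system2 = system2            # slow VLM reasoner -> latent plan
        self.system1 = system1            # fast policy -> low-level action
        self.plan_period = 1.0 / plan_hz  # System-2 refresh interval
        self.latest_plan = None
        self.lock = threading.Lock()

    def _plan_loop(self, get_observation, instruction):
        # System-2 runs at a low rate, refreshing the shared latent plan.
        while True:
            plan = self.system2.reason(get_observation(), instruction)
            with self.lock:
                self.latest_plan = plan
            time.sleep(self.plan_period)

    def run(self, get_observation, instruction, act):
        # Launch System-2 in the background, then let System-1 act at
        # a high rate without blocking on the slow reasoner.
        threading.Thread(
            target=self._plan_loop,
            args=(get_observation, instruction),
            daemon=True,
        ).start()
        while True:
            with self.lock:
                plan = self.latest_plan
            if plan is not None:
                act(self.system1.step(get_observation(), plan))
```

Decoupling the two rates is what keeps execution smooth: the policy never stalls waiting for reasoning, and the reasoner never throttles the control loop.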
## Usage
Please refer to [InternNav](https://github.com/InternRobotics/InternNav) for inference, evaluation, and the Gradio demo.
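
Since this model card lists `transformers` as the library, the checkpoint can presumably be loaded as below. This is a minimal, untested sketch: the auto classes and the `trust_remote_code` requirement are assumptions, and the InternNav repository remains the supported entry point.

```python
# Hypothetical sketch of loading the checkpoint with Hugging Face transformers.
import torch
from transformers import AutoModel, AutoProcessor

# Official release; this repository hosts the preview checkpoint.
model_id = "InternRobotics/InternVLA-N1"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()
```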
## Citation
If you find our work helpful, please consider starring this repo 🌟 and citing:
```bibtex
@misc{internvla-n1,
  title = {{InternVLA-N1}: An Open Dual-System Navigation Foundation Model with Learned Latent Plans},
  author = {{InternVLA-N1 Team}},
  year = {2025},
  howpublished = {arXiv},
}
```
## License
This work is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
## Acknowledgements
This repository is based on [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL).