Tags: behavior cloning, gaming, agent

Model Overview

Description:

NitroGen is a unified vision-to-action model designed to play video games directly from raw frames. It takes video game footage as input and outputs gamepad actions. Unlike models trained with rewards or task objectives, NitroGen is trained purely through large-scale imitation learning on videos of human gameplay. NitroGen works best on games designed for gamepad controls (e.g., action, platformer, and racing games) and is less effective on games that rely heavily on mouse and keyboard (e.g., RTS, MOBA).

The goal of the NitroGen project is to explore whether large-scale training on diverse human gameplay leads to emergent, general-purpose embodied abilities, similar to how scaling has unlocked emergent behaviors in large language models.

Potential applications include next-generation game AI, automated QA for video games, and advancing research in general embodied AI.

NitroGen 1 was developed by NVIDIA and is the first model of the series. This model is for research and development only.

License/Terms of Use:

Governing Terms: NVIDIA License.

Additional Information: Apache License for https://huggingface.co/google/siglip2-base-patch16-224.

Deployment Geography:

Global

Use Case:

Researchers, engineers, the open-source community, companies, and gamers. Potential applications include next-generation game AI, automated testing for video games, and advancing research in embodied AI.

Release Date:

Hugging Face, 12/19/2025, via https://huggingface.co/nvidia/NitroGen

References:

  • VPT, a Minecraft agent trained from internet videos.
  • SIMA, a multi-game agent trained to follow text instructions.
  • GR00T N1, an open foundation model for generalist humanoid robots.

Model Architecture:

Architecture Type: Vision Transformer, Diffusion Transformer

Network Architecture:

  • RGB frames are processed through a pre-trained vision transformer (SigLIP 2).
  • A diffusion transformer (DiT) then generates actions, conditioned on the SigLIP 2 output.

This model was developed based on SigLIP 2.
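
To make the wiring concrete, below is a minimal PyTorch sketch of this two-stage design. It is an illustration under stated assumptions, not NitroGen's actual implementation: `DiTActionHead` is a hypothetical stand-in for the diffusion transformer, and only the SigLIP 2 checkpoint (loaded via `transformers`) corresponds to a real public artifact.

```python
# Minimal sketch of the two-stage architecture described above.
# Assumption: DiTActionHead is a hypothetical stand-in for NitroGen's
# diffusion transformer; the real model's internals are not shown here.
import torch
import torch.nn as nn
from transformers import AutoModel

class DiTActionHead(nn.Module):
    """Hypothetical denoiser: refines a noisy action chunk while attending
    to visual tokens via cross-attention (a common DiT-style conditioning)."""
    def __init__(self, cond_dim, action_dim=21, horizon=16, width=512, depth=4):
        super().__init__()
        self.in_proj = nn.Linear(action_dim, width)
        self.cond_proj = nn.Linear(cond_dim, width)
        self.time_emb = nn.Sequential(
            nn.Linear(1, width), nn.SiLU(), nn.Linear(width, width)
        )
        layer = nn.TransformerDecoderLayer(d_model=width, nhead=8, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers=depth)
        self.out_proj = nn.Linear(width, action_dim)

    def forward(self, noisy_actions, timestep, cond):
        # noisy_actions: (B, horizon, action_dim); timestep: (B,) diffusion step.
        t = self.time_emb(timestep.float()[:, None])[:, None, :]  # (B, 1, width)
        x = self.in_proj(noisy_actions) + t
        x = self.blocks(tgt=x, memory=self.cond_proj(cond))
        return self.out_proj(x)

class VisionToActionPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Pre-trained vision transformer producing per-patch visual tokens.
        self.encoder = AutoModel.from_pretrained(
            "google/siglip2-base-patch16-224"
        ).vision_model
        self.action_head = DiTActionHead(cond_dim=self.encoder.config.hidden_size)

    def forward(self, frames, noisy_actions, timestep):
        # frames: (B, 3, H, W) RGB game footage, resized upstream to the
        # resolution the encoder expects.
        vision_tokens = self.encoder(pixel_values=frames).last_hidden_state
        return self.action_head(noisy_actions, timestep, cond=vision_tokens)
```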

Number of model parameters: $4.93 \times 10^8$ (≈493 million)

Input(s):

Input Type(s): Image

Input Format(s): Red, Green, Blue (RGB)

Input Parameters: Two-Dimensional (2D)

Other Properties Related to Input: 256×256 images
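
A minimal preprocessing sketch consistent with this spec follows; the resize and the [-1, 1] normalization are assumptions for illustration, not NitroGen's confirmed pipeline.

```python
# Sketch: convert a raw game frame into the 256x256 RGB input described above.
# The [-1, 1] normalization is an assumption (SigLIP-style), not confirmed.
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                        # 2D, 256x256
    transforms.ToTensor(),                                # RGB, CHW floats in [0, 1]
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # scale to [-1, 1]
])

frame = Image.open("frame.png").convert("RGB")            # any captured game frame
pixel_values = preprocess(frame).unsqueeze(0)             # (1, 3, 256, 256)
```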

Output(s):

Output Type(s): Actions for gamepad/game controllers

Output Format(s): Tabular

Output Parameters: Two-Dimensional (2D): one action dimension and one temporal dimension

Other Properties Related to Output: The output has shape 21×16 (21 action dimensions × 16 timesteps). Each 21-dimensional action comprises one 2D continuous-valued vector per joystick (4 values) and 17 binary values, one per button.
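
To make that layout concrete, here is a hedged sketch of unpacking a single 21×16 output into controller commands; the channel ordering (sticks before buttons) and the 0.5 button threshold are assumptions for illustration.

```python
# Sketch: unpack one 21x16 output (21 action channels x 16 timesteps) into
# gamepad commands. Channel order and the button threshold are assumptions.
import torch

actions = torch.rand(21, 16)              # placeholder for a model output

left_stick  = actions[0:2]                # 2D continuous vector per timestep
right_stick = actions[2:4]                # 2D continuous vector per timestep
buttons     = actions[4:21] > 0.5         # 17 binary button states

for t in range(actions.shape[1]):
    lx, ly = left_stick[:, t].tolist()
    rx, ry = right_stick[:, t].tolist()
    pressed = buttons[:, t].nonzero().flatten().tolist()
    print(f"t={t}: L=({lx:+.2f},{ly:+.2f}) R=({rx:+.2f},{ry:+.2f}) buttons={pressed}")
```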

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s): No runtime engine was used.

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell
  • NVIDIA Hopper

Preferred/Supported Operating System(s):

  • Linux
  • Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s):

V1

Training, Testing, and Evaluation Datasets:

Training Dataset:

Data Modality

  • Image
  • Video

Image Training Data Size

  • More than 1 Billion Images

Video Training Data Size

  • 10,000 to 1 Million Hours

Data Collection Method by dataset

  • Automated

Labeling Method by dataset

  • Synthetic

Properties: 40,000 publicly available videos, labeled with frame-wise actions
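
As a rough illustration of how such frame-wise action labels could drive imitation learning with a diffusion head, here is a hedged single training step. It reuses the hypothetical `VisionToActionPolicy` from the architecture sketch above; the noise schedule and loss are generic diffusion-style behavior cloning, not NitroGen's published recipe.

```python
# Sketch: one diffusion-style behavior-cloning step on a (frames, actions)
# pair from the labeled videos. Reuses the hypothetical VisionToActionPolicy
# from the architecture sketch; schedule and loss are illustrative only.
import torch
import torch.nn.functional as F

def training_step(policy, optimizer, frames, actions, num_steps=10):
    # frames: (B, 3, 256, 256); actions: (B, 16, 21) frame-wise human labels.
    t = torch.randint(0, num_steps, (frames.shape[0],))
    alpha = 1.0 - t.float() / num_steps               # crude linear schedule
    noise = torch.randn_like(actions)
    noisy = alpha[:, None, None] * actions + (1 - alpha[:, None, None]) * noise
    pred = policy(frames, noisy, t)                   # denoised estimate
    loss = F.mse_loss(pred, actions)                  # imitate the human action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```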

Testing Dataset:

Data Collection Method by dataset

  • Automated

Labeling Method by dataset

  • Synthetic

Properties: 40,000 publicly available videos, labeled with frame-wise actions

Evaluation Dataset:

Data Collection Method by dataset

  • Automated

Labeling Method by dataset

  • Synthetic

Properties: 40,000 publicly available videos, labeled with frame-wise actions

Inference:

Acceleration Engine: None
Test Hardware: H100
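
For reference, inference with a diffusion action head typically runs a short iterative denoising loop per observation. The sketch below reuses the hypothetical `VisionToActionPolicy` from the architecture sketch; the step count and update rule are illustrative assumptions, not the released sampler.

```python
# Sketch: naive iterative denoising at inference time. Real samplers (e.g.,
# DDIM or flow-matching integrators) use a principled update rule; this
# simple blend toward the model's estimate is for illustration only.
import torch

@torch.no_grad()
def sample_actions(policy, frames, num_steps=10, horizon=16, action_dim=21):
    x = torch.randn(frames.shape[0], horizon, action_dim)  # start from noise
    for step in reversed(range(num_steps)):
        t = torch.full((frames.shape[0],), step)
        pred = policy(frames, x, t)            # model's denoised estimate
        x = x + (pred - x) / (step + 1)        # move toward the estimate
    # Per-sample layout here is (horizon, action_dim); transpose each sample
    # for the 21x16 (action x time) view described in the output section.
    return x
```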

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
