---
library_name: hermes
license: apache-2.0
tags:
- ICCV 25
- Driving World Model
- Unified Understanding and Generation
pipeline_tag: image-to-3d
---

<div  align="center">    
 <img src="./figures/logo.jpg" width = "150"  align=center />
</div>


<div align="center">
<h3>HERMES: A Unified Self-Driving World Model for Simultaneous <br>3D Scene Understanding and Generation</h3>




[Xin Zhou](https://lmd0311.github.io/)<sup>1\*</sup>, [Dingkang Liang](https://dk-liang.github.io/)<sup>1\*</sup>, Sifan Tu<sup>1</sup>, [Xiwu Chen](https://scholar.google.com/citations?user=PVMQa-IAAAAJ&hl=en)<sup>3</sup>, [Yikang Ding](https://scholar.google.com/citations?user=gdP9StQAAAAJ&hl=en)<sup>2†</sup>, Dingyuan Zhang<sup>1</sup>, Feiyang Tan<sup>3</sup>,<br> [Hengshuang Zhao](https://scholar.google.com/citations?user=4uE10I0AAAAJ&hl=en)<sup>4</sup>, [Xiang Bai](https://scholar.google.com/citations?user=UeltiQ4AAAAJ&hl=en)<sup>1</sup>

<sup>1</sup>  Huazhong University of Science & Technology, <sup>2</sup>  MEGVII Technology, <br><sup>3</sup>  Mach Drive, <sup>4</sup>  The University of Hong Kong

(\*) Equal contribution. (†) Project leader.

[![arXiv](https://img.shields.io/badge/Arxiv-2501.14729-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2501.14729)
[![Project](https://img.shields.io/badge/Homepage-project-orange.svg?logo=googlehome)](https://lmd0311.github.io/HERMES/)
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)

Check out our *awesome list* for the latest World Models! [![Awesome World Model](https://img.shields.io/badge/GitHub-awesome_world_model-blue?logo=github)](https://github.com/LMD0311/Awesome-World-Model)
![Stars](https://img.shields.io/github/stars/LMD0311/Awesome-World-Model)



</div>

## 📣 News
- **[2025.07.14]** Code, pretrained weights, and the processed data used in our experiments are now open-sourced.
- **[2025.06.26]**  HERMES is accepted to **ICCV 2025**! 🥳
- **[2025.01.24]** Released the demo. Check it out and give it a star 🌟!
- **[2025.01.24]** Released the [paper](https://arxiv.org/abs/2501.14729).

 <div  align="center">    
 <img src="./figures/intro.png" width = "888"  align=center />
</div>

## Abstract

Driving World Models (DWMs) have become essential for autonomous driving by enabling future scene prediction. However, existing DWMs are limited to scene generation and fail to incorporate scene understanding, which involves interpreting and reasoning about the driving environment. In this paper, we present a unified Driving World Model named **HERMES**. Through a unified framework, we seamlessly integrate scene understanding and future scene evolution (generation) in driving scenarios. Specifically, **HERMES** leverages a Bird's-Eye View (BEV) representation to consolidate multi-view spatial information while preserving geometric relationships and interactions. Additionally, we introduce world queries, which incorporate world knowledge into BEV features via causal attention in the Large Language Model (LLM), enabling contextual enrichment for both understanding and generation tasks. We conduct comprehensive studies on nuScenes and OmniDrive-nuScenes datasets to validate the effectiveness of our method. **HERMES** achieves state-of-the-art performance, reducing generation error by 32.4% and improving understanding metrics such as CIDEr by 8.0%.
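The world-query idea above can be illustrated with a minimal, hypothetical PyTorch sketch (names like `WorldQueryMixer` are ours, not from the HERMES codebase): a set of learnable queries gathers scene context from flattened BEV features via attention, and the enriched queries are then placed in the token sequence consumed by the causal LLM, so that later (text and generation) tokens can attend to them.

```python
import torch
import torch.nn as nn

class WorldQueryMixer(nn.Module):
    """Toy illustration of world queries (hypothetical module, not the
    official HERMES implementation): learnable queries cross-attend to
    BEV features, then join the LLM input sequence."""

    def __init__(self, num_queries: int = 32, dim: int = 256, heads: int = 8):
        super().__init__()
        self.world_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, bev_feats: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # bev_feats:   (B, H*W, dim) flattened Bird's-Eye View grid
        # text_tokens: (B, T, dim)   embedded language tokens
        B = bev_feats.size(0)
        queries = self.world_queries.unsqueeze(0).expand(B, -1, -1)
        # World queries pool spatial/scene context from the BEV features.
        enriched, _ = self.cross_attn(queries, bev_feats, bev_feats)
        # Sequence handed to the causal LLM: [BEV, text, world queries];
        # causal attention lets downstream tokens condition on all of them.
        return torch.cat([bev_feats, text_tokens, enriched], dim=1)
```

Under causal masking inside the LLM, placing the enriched world queries in the sequence is what lets a single model serve both understanding (captioning/QA over the scene) and generation (future scene evolution) from shared context.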

## Overview

<div  align="center">    
 <img src="./figures/pipeline.jpg" width = "888"  align=center />
</div>



## Demo

<div  align="center">    
 <img src="./figures/scene1.gif" width = "999"  align=center />
 <center> Example 1 </center> <br>
</div>

<div  align="center">    
 <img src="./figures/scene2.gif" width = "999"  align=center />
 <center> Example 2 </center> <br>
</div>

<div  align="center">    
 <img src="./figures/scene3.gif" width = "999"  align=center />
 <center> Example 3 </center> <br>
</div>


## Main Results

<div  align="center">    
 <img src="./figures/main_results.png" width = "888"  align=center />
</div>

## Acknowledgement

This project is based on BEVFormer v2 ([code](https://github.com/fundamentalvision/BEVFormer)), InternVL ([code](https://github.com/OpenGVLab/InternVL)), UniPAD ([code](https://github.com/Nightmare-n/UniPAD)), OmniDrive ([code](https://github.com/NVlabs/OmniDrive)), and DriveMonkey ([code](https://github.com/zc-zhao/DriveMonkey)). Thanks for their wonderful work.

## Citation

If you find this repository useful in your research, please consider giving a star ⭐ and a citation.
```bibtex
@inproceedings{zhou2025hermes,
  title={HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation},
  author={Zhou, Xin and Liang, Dingkang and Tu, Sifan and Chen, Xiwu and Ding, Yikang and Zhang, Dingyuan and Tan, Feiyang and Zhao, Hengshuang and Bai, Xiang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}
```