---
license: cc-by-nc-4.0
task_categories:
- text-generation
- image-to-text
- summarization
- question-answering
language:
- en
---

# 🎨 Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want

Interaction between humans and artificial intelligence (AI) is a crucial measure of the effectiveness of multimodal large language models (MLLMs). However, current MLLMs focus primarily on image-level comprehension and restrict interaction to textual instructions, limiting their flexibility of use and depth of response. We therefore introduce the **Draw-and-Understand project**: a new model, a multi-domain dataset, and a challenging benchmark for visual prompting.


## Training and Evaluation Dataset Card

- MDVP-Data is a comprehensive dataset for multi-domain visual-prompt instruction tuning. It encompasses data for both point-level and region-level understanding, designed to enhance a model's comprehension ability and robustness (see the illustrative record sketch after this list).

- We also introduce MDVP-Bench, a challenging benchmark designed to evaluate tasks that require a combination of detailed description referrals, inter-relationship analysis, and complex reasoning.
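To make the point-level vs. region-level distinction concrete, here is a minimal sketch of what a visual-prompt instruction record might look like. It is purely illustrative: the image path, field names (`visual_prompts`, `conversations`, `coords`, `bbox`), and conversation format are assumptions, not the dataset's actual schema; consult the released annotation files for the real keys.

```python
import json

# Hypothetical MDVP-Data record layout -- field names are illustrative,
# not the dataset's actual schema; check the released files for real keys.
record = json.loads("""
{
  "image": "images/coco/000000123456.jpg",
  "visual_prompts": [
    {"type": "point",  "coords": [320, 180]},
    {"type": "region", "bbox": [100, 50, 260, 210]}
  ],
  "conversations": [
    {"from": "human", "value": "Describe the object at <prompt_1>."},
    {"from": "gpt",   "value": "A tabby cat sitting on a windowsill."}
  ]
}
""")

# Point-level prompts carry a single (x, y) coordinate; region-level
# prompts carry an (x1, y1, x2, y2) bounding box.
for p in record["visual_prompts"]:
    if p["type"] == "point":
        print("point prompt at", p["coords"])
    else:
        print("region prompt with box", p["bbox"])
```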


## Paper and Code
Project Page: [Draw-and-Understand](https://draw-and-understand.github.io/) \
Paper: [https://arxiv.org/abs/2403.20271](https://arxiv.org/abs/2403.20271) \
Code: [https://github.com/AFeng-x/Draw-and-Understand](https://github.com/AFeng-x/Draw-and-Understand)


## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) \
Use of this dataset should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use.


## Citations
```
@misc{lin2024drawandunderstand,
      title={Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want}, 
      author={Weifeng Lin and Xinyu Wei and Ruichuan An and Peng Gao and Bocheng Zou and Yulin Luo and Siyuan Huang and Shanghang Zhang and Hongsheng Li},
      year={2024},
      eprint={2403.20271},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```