|
--- |
|
license: cc-by-nc-4.0 |
|
task_categories: |
|
- object-detection |
|
language: |
|
- en |
|
pretty_name: Griffon V2 12M Dataset Card |
|
--- |
|
|
|
# Griffon v2 12M Dataset Card |
|
|
|
## News |
|
|
|
**[2025/08/10]** We are happy to announce that [Griffon v2](https://arxiv.org/abs/2403.09333) has been accepted to ICCV 2025.
|
|
|
## Dataset details |
|
|
|
We provide the 12M samples used in the stage 2 training of Griffon v2. This repo contains the processed annotation files for the Object Detection, REC/REG, Visual Grounding, and Non-existence Judging tasks described in the paper, as well as our self-collected object counting data.
|
|
|
### Self-Counting Data |
|
|
|
The counting data consists of three parts: `CT-datasets-new.tar.gz`, `CountAnythingV1_clean.tar.gz`, and `train_visual_openimages_cocostyle_cls601.json`. Each file ending in `.tar.gz` contains both the annotation file and the images. For the data collected from OpenImages, please download the OpenImages 2019 train images separately.
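Judging from the file name, `train_visual_openimages_cocostyle_cls601.json` follows the COCO annotation format. Below is a minimal sketch of loading such a file and indexing its annotations by image id; the `sample.json` written here is a tiny illustrative stand-in, not part of the dataset, and the commented extraction step assumes the tarball layout described above.

```python
import json
import tarfile
from collections import defaultdict

def load_coco_style(path):
    """Load a COCO-style annotation file and index annotations by image_id."""
    with open(path) as f:
        data = json.load(f)
    by_image = defaultdict(list)
    for ann in data.get("annotations", []):
        by_image[ann["image_id"]].append(ann)
    return data, by_image

# The tar.gz parts bundle images and annotations together; extract them first:
# with tarfile.open("CT-datasets-new.tar.gz") as tar:
#     tar.extractall("CT-datasets-new")

# Hypothetical minimal example showing the expected COCO-style structure:
sample = {
    "images": [{"id": 1, "file_name": "000001.jpg"}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 3, "bbox": [10, 20, 50, 60]},
        {"id": 11, "image_id": 1, "category_id": 3, "bbox": [80, 30, 40, 40]},
    ],
    "categories": [{"id": 3, "name": "apple"}],
}
with open("sample.json", "w") as f:
    json.dump(sample, f)

data, by_image = load_coco_style("sample.json")
print(len(by_image[1]))  # number of boxes annotated on image 1
```

Grouping boxes per image is convenient for counting tasks, where the per-image box count is itself the supervision signal.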
|
|
|
### Other Data |
|
|
|
For the other annotations, please download the images from the following datasets: COCO (train2014 & train2017), Visual Genome, Objects365-2023, V3Det, and Flickr30K Entities. If you encounter any problems, such as missing images, please contact us via GitHub issues.
|
|
|
|
|
## License |
|
|
|
Attribution-NonCommercial 4.0 International. |
|
Use of this dataset must also abide by the policies of the original data sources.
|
|
|
## Citation |
|
|
|
If you use this data, please cite:
|
```bibtex |
|
@article{zhan2024griffon, |
|
title={Griffon v2: Advancing multimodal perception with high-resolution scaling and visual-language co-referring}, |
|
author={Zhan, Yufei and Zhu, Yousong and Zhao, Hongyin and Yang, Fan and Tang, Ming and Wang, Jinqiao}, |
|
journal={arXiv preprint arXiv:2403.09333}, |
|
year={2024} |
|
} |
|
``` |