JefferyZhan committed on
Commit ac3e546 · verified · 1 Parent(s): d8b39ec

Update Readme

Files changed (1)
  1. README.md +44 -0
README.md ADDED
@@ -0,0 +1,44 @@
---
license: cc-by-nc-4.0
task_categories:
- object-detection
language:
- en
pretty_name: Griffon V2 12M Dataset Card
---

# Griffon v2 12M Dataset Card

## News

**[2025/08/10]** We are happy to announce that [Griffon v2](https://arxiv.org/abs/2403.09333) has been accepted to ICCV 2025.

## Dataset details

We provide the 12M samples used in the stage 2 training of Griffon v2. This repo contains the processed annotation files for the Object Detection, REC/REG, Visual Grounding, and Non-existing Judging tasks described in the paper, as well as the self-collected object counting data.

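The annotation files can be pulled with `huggingface_hub`. The snippet below is a minimal sketch; the `repo_id` is an assumption and should be replaced with this dataset's actual id.

```python
# Minimal sketch: download the annotation files from the Hugging Face Hub.
# The repo_id is hypothetical; replace it with this dataset's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="JefferyZhan/Griffon-v2-12M",  # assumed id, adjust as needed
    repo_type="dataset",
)
print("Downloaded to:", local_dir)
```
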
### Self-Counting Data

The counting data consists of three parts: ```CT-datasets-new.tar.gz```, ```CountAnythingV1_clean.tar.gz```, and ```train_visual_openimages_cocostyle_cls601.json```. The tar.gz files contain both the annotation files and the images. For the data collected from OpenImages, please download the OpenImages 2019 train images separately.

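As a minimal sketch, the archives can be unpacked with the standard library and the OpenImages annotation file inspected with `json`. The output path below is a placeholder, and the JSON is assumed to follow a COCO-style layout, as its name suggests.

```python
# Sketch: unpack the self-counting archives and peek at the annotation file.
# The output path is a placeholder; the JSON is assumed to be COCO-style.
import json
import tarfile

for archive in ["CT-datasets-new.tar.gz", "CountAnythingV1_clean.tar.gz"]:
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path="counting_data")  # images + annotation files

with open("train_visual_openimages_cocostyle_cls601.json") as f:
    ann = json.load(f)
print(list(ann.keys()))  # expected keys such as images / annotations / categories if COCO-style
```
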
### Other Data

For the other annotations, please download the images from the following datasets: COCO (train2014 & train2017), Visual Genome, Objects365-2023, V3Det, and Flickr30K Entities. If you run into any problems, such as missing images, please contact us via GitHub issues; a quick local check is sketched below.

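Assuming the annotation files follow the COCO format and the images have been placed under a local root (both paths below are hypothetical), missing files can be listed before training:

```python
# Sketch: list images referenced by a COCO-style annotation file that are
# missing from a local image directory. Both paths are placeholders.
import json
from pathlib import Path

ann_path = "annotations/detection_train2017.json"  # hypothetical annotation file
image_root = Path("images/coco/train2017")         # hypothetical image root

with open(ann_path) as f:
    ann = json.load(f)

missing = [img["file_name"] for img in ann.get("images", [])
           if not (image_root / img["file_name"]).exists()]
print(f"{len(missing)} referenced images are missing")
```
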
## License

Attribution-NonCommercial 4.0 International.
Use of the data should also abide by the policies of the original data sources.

## Citation

If you use this data, please cite:
```bibtex
@article{zhan2024griffon,
  title={Griffon v2: Advancing multimodal perception with high-resolution scaling and visual-language co-referring},
  author={Zhan, Yufei and Zhu, Yousong and Zhao, Hongyin and Yang, Fan and Tang, Ming and Wang, Jinqiao},
  journal={arXiv preprint arXiv:2403.09333},
  year={2024}
}
```