---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- visual-question-answering
- image-text-to-text
pretty_name: ABC-Pretraining-Data
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: caption
    dtype: string
  - name: url
    dtype: string
  - name: id
    dtype: int64
  - name: image
    dtype: string
  - name: negatives
    sequence: int64
  splits:
  - name: train
    num_bytes: 2289772991
    num_examples: 2252041
  download_size: 1855548818
  dataset_size: 2289772991
tags:
- visual
---
## ABC Pretraining Data
This is the pretraining data for ABC. The dataset is derived from Google's [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) dataset.
Each item in the dataset contains a URL from which the corresponding image can be downloaded. The full set of images is ~300 GB.
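Below is a minimal sketch of how one might stream the metadata table and fetch a single image from its URL. The repository id `TIGER-Lab/ABC-Pretraining-Data` is an assumption based on the project name, and the field names follow the features listed in the card above.

```python
from io import BytesIO

import requests
from datasets import load_dataset
from PIL import Image

# Assumed repository id -- adjust to wherever this dataset is actually hosted.
REPO_ID = "TIGER-Lab/ABC-Pretraining-Data"

# Stream the metadata (~2.3 GB) instead of materializing it locally.
ds = load_dataset(REPO_ID, split="train", streaming=True)

example = next(iter(ds))
# Each record carries: caption (str), url (str), id (int), image (str),
# and negatives (a list of ids for hard-negative examples).
print(example["id"], example["caption"])

# The image files themselves are not stored in this repo; download from the URL.
resp = requests.get(example["url"], timeout=10)
resp.raise_for_status()
Image.open(BytesIO(resp.content)).convert("RGB").save(f"{example['id']}.jpg")
```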
## Paper and Website
For more information, please refer to the [paper](https://arxiv.org/abs/2503.00329) and the project [website](https://tiger-ai-lab.github.io/ABC/).
## Citation
```
@misc{schneider2025abcachievingbettercontrol,
  title={ABC: Achieving Better Control of Multimodal Embeddings using VLMs},
  author={Benjamin Schneider and Florian Kerschbaum and Wenhu Chen},
  year={2025},
  eprint={2503.00329},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.00329},
}
```