---
dataset_info:
  features:
  - name: images
    list: image
  - name: texts
    list:
    - name: assistant
      dtype: string
    - name: source
      dtype: string
    - name: user
      dtype: string
  splits:
  - name: train
    num_bytes: 11515853
    num_examples: 50
  download_size: 11496341
  dataset_size: 11515853
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
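Each row in the schema above pairs a list of images with a list of text turns, where every turn carries a `user` prompt, an `assistant` answer, and a `source` tag. Below is a minimal sketch of loading the slice and inspecting one row; the field access follows the metadata and the exact nesting may vary with your `datasets` version.

```python
from datasets import load_dataset

# Load the 50-example slice from the Hub
ds = load_dataset("ariG23498/the_cauldron_ai2d_sliced", split="train")
print(ds)  # Dataset with 'images' and 'texts' columns, 50 rows

row = ds[0]
print(len(row["images"]))            # images attached to this example
print(row["texts"][0]["user"])       # the question
print(row["texts"][0]["assistant"])  # the reference answer
```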
## How was it built?
```python
from datasets import Dataset, load_dataset

# Stream the ai2d subset of the_cauldron so the full dataset is not downloaded
dataset = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train", streaming=True)
dataset_iter = iter(dataset)

# Collect the first 50 examples from the stream
sliced_dataset = []
for i in range(50):
    sliced_dataset.append(next(dataset_iter))

# Materialize the slice as a regular Dataset and push it to the Hub
ds = Dataset.from_list(sliced_dataset)
ds.push_to_hub("ariG23498/the_cauldron_ai2d_sliced")
```
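The manual `next()` loop is equivalent to slicing the stream with `IterableDataset.take`; a minimal sketch of the same build under that assumption:

```python
from datasets import Dataset, load_dataset

# Same slice, expressed with take() on the streamed split
streamed = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train", streaming=True)
ds = Dataset.from_list(list(streamed.take(50)))
ds.push_to_hub("ariG23498/the_cauldron_ai2d_sliced")
```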