---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: markdown
    dtype: string
  - name: html
    dtype: string
  - name: file_name
    dtype: string
  - name: ocr
    dtype: string
  splits:
  - name: train
    num_bytes: 27893994569.17
    num_examples: 113091
  download_size: 25517477643
  dataset_size: 27893994569.17
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# HuCCPDF
Approximately 113k pages of Hungarian PDFs from Common Crawl, as featured in our paper "Synthetic Document Question Answering in Hungarian". The `text` field is extracted with PyMuPDF, and the `ocr` field is extracted with pytesseract.
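
The exact extraction pipeline is not reproduced here; the snippet below is only a minimal sketch of how the two fields can be produced per page with PyMuPDF and pytesseract. The 200 DPI rendering and the Hungarian `hun` traineddata are illustrative assumptions, not confirmed settings.

```python
import io

import fitz  # PyMuPDF
import pytesseract
from PIL import Image


def extract_page_fields(pdf_path: str, page_index: int = 0) -> tuple[str, str]:
    """Return (text, ocr) for one page: the embedded text layer and a Tesseract transcription."""
    doc = fitz.open(pdf_path)
    page = doc[page_index]

    # Embedded text layer, as PyMuPDF reports it
    text = page.get_text()

    # Render the page to an image and OCR it; dpi and lang are illustrative choices
    pix = page.get_pixmap(dpi=200)
    image = Image.open(io.BytesIO(pix.tobytes("png")))
    ocr = pytesseract.image_to_string(image, lang="hun")

    doc.close()
    return text, ocr
```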
See other datasets from the paper:
- HuDocVQA (https://huggingface.co/datasets/jlli/HuDocVQA)
- HuDocVQA-manual (https://huggingface.co/datasets/jlli/HuDocVQA-manual)
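
A quick way to inspect the data is with the `datasets` library; streaming avoids the full ~25 GB download up front. The repo id `jlli/HuCCPDF` is assumed from the sibling datasets above.

```python
from datasets import load_dataset

# Stream the train split instead of downloading all Parquet shards
ds = load_dataset("jlli/HuCCPDF", split="train", streaming=True)

example = next(iter(ds))
print(example["file_name"])
print(example["text"][:300])  # PyMuPDF text layer
print(example["ocr"][:300])   # pytesseract output
# example["image"] is the decoded page image (a PIL.Image)
```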