---
pretty_name: MultiCaRe Case Images (representative)
license: cc-by-4.0
task_categories:
- image-to-text
- document-question-answering
language:
- en
size_categories:
- 10K<n<100K
---

# MultiCaRe Case Images (representative)

A representative-image view of the MultiCaRe dataset of open-access (OA) clinical case reports from PubMed Central.

- Labels: image labels (>140 classes).
- Scale: 85k+ OA case reports, 160k+ images/subimages (v2.0).
- Tasks enabled: image-text retrieval, caption grounding, VQA/doc-QA, image classification, multimodal modeling.
- Citation: Data journal paper (https://www.mdpi.com/2306-5729/10/8/123); Zenodo record (https://zenodo.org/records/13936721).

## This repository: one representative image per figure

One row per figure (image_id) with a representative processed image and textual context that mentions the figure in the case narrative.

## Highlights

- Representative image chosen per figure (preference: undivided > 'a' > first available); a selection sketch appears at the end of this card.
- Includes text_references snippets to ground the figure in the case text (a pairing sketch appears at the end of this card).
- Join to image-, case-, and article-level datasets using stable keys.

## Schema

- image: datasets.Image (PIL-compatible)
- image_id: original figure identifier (groups subimages)
- file: processed image filename used as the representative
- caption: figure caption (original)
- text_references: newline-joined excerpts that mention this figure
- tag: PubMed file tag
- case_id: equals cases.case_id
- article_id: PMCID
- file_id, patient_id, license: cross-links and license string

## Quick start

```python
from datasets import load_dataset

ds = load_dataset("openmed-community/multicare-case-images", split="train")
row = ds[0]
row["image"].show()               # representative processed image (PIL)
print(row["caption"])             # caption of the figure
print(row["text_references"])     # where it appears in the case text
```

## Joins

- image_id ↔ images.main_image
- case_id ↔ cases.case_id
- article_id ↔ articles.article_id

A join sketch appears at the end of this card.

## Notes

- Prefer patient/article-level splits downstream (see the grouped-split sketch at the end of this card).
- Per-item OA licenses are preserved.
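
## Sketch: representative-image selection rule

The representative image per figure follows the preference undivided > 'a' > first available. The function below is a minimal sketch of that rule, not the code used to build the dataset; the filename convention '<image_id>_<panel letter>.<ext>' is assumed purely for illustration.

```python
def pick_representative(image_id: str, files: list[str]) -> str:
    """Sketch: prefer the undivided figure, then subimage 'a', then the first file.

    Assumes (hypothetically) 'PMC123_fig1.webp' for an undivided figure and
    'PMC123_fig1_a.webp', 'PMC123_fig1_b.webp', ... for its subimages.
    """
    stems = {f: f.rsplit(".", 1)[0] for f in files}
    undivided = [f for f, stem in stems.items() if stem == image_id]
    if undivided:
        return undivided[0]          # whole, undivided figure
    panel_a = [f for f, stem in stems.items() if stem.endswith("_a")]
    if panel_a:
        return panel_a[0]            # subimage 'a'
    return files[0]                  # fall back to first available


print(pick_representative("PMC123_fig1", ["PMC123_fig1_b.webp", "PMC123_fig1_a.webp"]))
# -> 'PMC123_fig1_a.webp'
```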
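
## Sketch: joining case-level metadata

case_id and article_id are stable keys into the companion case- and article-level tables. The snippet below is a sketch using pandas; the companion repository name ("openmed-community/multicare-cases") and its column layout are assumptions, so substitute the table you actually join against.

```python
from datasets import load_dataset  # pandas is also required (used by .to_pandas())

figures = load_dataset("openmed-community/multicare-case-images", split="train")
# Drop the image column for the join itself; keep only the textual fields.
fig_df = figures.remove_columns("image").to_pandas()

# Hypothetical companion table with one row per case (case_id, case text, ...).
cases_df = load_dataset("openmed-community/multicare-cases", split="train").to_pandas()

# case_id is shared by both tables; a left join keeps every figure row.
merged = fig_df.merge(cases_df, on="case_id", how="left", suffixes=("", "_case"))
print(merged.columns.tolist())
```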
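
## Sketch: (image, text) pairs from captions and text_references

text_references is a newline-joined string of excerpts that mention the figure, so it can be split back into snippets and paired with the image alongside the caption, e.g. for retrieval or caption-grounding experiments. A minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("openmed-community/multicare-case-images", split="train")


def to_pairs(row):
    """Return (image, text) pairs: the caption plus each referencing excerpt."""
    texts = [row["caption"]] if row["caption"] else []
    if row["text_references"]:
        # text_references is newline-joined; split it back into individual snippets.
        texts += [t.strip() for t in row["text_references"].split("\n") if t.strip()]
    return [(row["image"], t) for t in texts]


pairs = to_pairs(ds[0])
print(len(pairs), pairs[0][1][:80] if pairs else "no text")
```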
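
## Sketch: patient/article-level split

Several figures can come from the same case report or patient, so row-level random splits leak information between train and test. Below is a sketch of a grouped split on article_id using scikit-learn (an extra dependency assumed here); grouping on patient_id works the same way.

```python
from datasets import load_dataset
from sklearn.model_selection import GroupShuffleSplit

ds = load_dataset("openmed-community/multicare-case-images", split="train")

# All figures from one article (case report) end up on the same side of the split.
groups = ds["article_id"]
splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
train_idx, test_idx = next(splitter.split(X=list(range(len(ds))), groups=groups))

train_ds = ds.select(train_idx)
test_ds = ds.select(test_idx)
print(len(train_ds), len(test_ds))
```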