---
license: cc-by-4.0
task_categories:
- image-text-to-text
language:
- en
tags:
- medical
- multimodal
- in-context-learning
- vqa
- benchmark
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image
    dtype: image
  - name: image_url
    dtype: string
  - name: problem_id
    dtype: string
  - name: order
    dtype: int64
  - name: parquet_path
    dtype: string
  - name: speciality
    dtype: string
  - name: flag_answer_format
    dtype: string
  - name: flag_image_type
    dtype: string
  - name: flag_cognitive_process
    dtype: string
  - name: flag_rarity
    dtype: string
  - name: flag_difficulty_llms
    dtype: string
  splits:
  - name: train
    num_bytes: 94510405.0
    num_examples: 517
  download_size: 90895608
  dataset_size: 94510405.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning

[Paper](https://huggingface.co/papers/2506.21355) | [Project page](https://smmile-benchmark.github.io) | [Code](https://github.com/eth-medical-ai-lab/smmile)
<div align="center">
  <img src="./logo_final.png" alt="SMMILE Logo" width="400">
</div>

## Introduction

Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from only a few examples, such as a handful of relevant prior cases or candidate differential diagnoses. While multimodal large language models (MLLMs) have shown impressive advances in medical visual question answering (VQA) and multi-turn chat, their ability to learn multimodal tasks from context remains largely unknown.

We introduce **SMMILE** (Stanford Multimodal Medical In-context Learning Evaluation), the first multimodal medical ICL benchmark. A team of clinical experts curated each ICL problem to scrutinize MLLMs' ability to learn multimodal tasks from context at inference time.
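
Concretely, each ICL problem interleaves a few solved example pairs with a final query that the model must answer from that context alone. As a rough illustration (the field values below are hypothetical, not drawn from the dataset):

```python
# Hypothetical illustration of a multimodal ICL problem's structure:
# a few solved (image, question, answer) examples, then a query to answer.
icl_problem = {
    "context": [
        {"image": "example_1.png", "question": "...", "answer": "..."},
        {"image": "example_2.png", "question": "...", "answer": "..."},
    ],
    "query": {"image": "query.png", "question": "..."},  # model must produce the answer
}
```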

## Dataset Access

The SMMILE and SMMILE++ datasets are available on HuggingFace:

```python
from datasets import load_dataset

# Load the SMMILE benchmark and its SMMILE++ variant
smmile = load_dataset('smmile/SMMILE', token=YOUR_HF_TOKEN)
smmile_plusplus = load_dataset('smmile/SMMILE-plusplus', token=YOUR_HF_TOKEN)
```

Note: instead of passing `token` explicitly, you can also provide your HuggingFace token via an environment variable:
```bash
export HF_TOKEN=your_token_here
```
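
The schema in the dataset card above groups rows into ICL problems via `problem_id`, with `order` giving each row's position within a problem. A minimal sketch for reconstructing problems, assuming `HF_TOKEN` is set in the environment and that the last row of each problem (by `order`) is the query to be answered:

```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset('smmile/SMMILE', split='train')

# Group rows into ICL problems by `problem_id`, then sort each group by `order`.
# Assumption: within a problem, earlier rows are in-context examples and the
# final row is the query the model must answer.
problems = defaultdict(list)
for row in ds:
    problems[row['problem_id']].append(row)

for pid, rows in problems.items():
    rows.sort(key=lambda r: r['order'])
    *context_examples, query = rows
    print(f"{pid}: {len(context_examples)} context example(s), "
          f"query: {query['question'][:60]!r}")
```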

## License

This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).

## Citation

If you find our dataset useful for your research, please cite the following paper:

```bibtex
@article{rieff2025smmile,
      title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
      author={Melanie Rieff and Maya Varma and Ossian Rabow and Subathra Adithan and Julie Kim and Ken Chang and Hannah Lee and Nidhi Rohatgi and Christian Bluethgen and Mohamed S. Muneer and Jean-Benoit Delbrouck and Michael Moor},
      year={2025},
      eprint={2506.21355},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2506.21355},
}
```

## Acknowledgments

We thank the clinical experts who contributed to curating the benchmark dataset.