---
language:
- en
- hi
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
---

# MMCricBench

> **Note:** If you keep a single JSONL (e.g., `test_all.jsonl`), use a **list** for `images` in every row. Single-image rows should have a one-element list (a loading sketch appears at the end of this card). On the Hub, we expose two test splits.

---

## Data Schema

Each row is a JSON object:

| Field      | Type                        | Description                            |
|------------|-----------------------------|----------------------------------------|
| `id`       | `string`                    | Unique identifier                      |
| `images`   | `list[string]`              | Paths to one or more scorecard images  |
| `question` | `string`                    | Question text (English)                |
| `answer`   | `string`                    | Ground-truth answer (canonicalized)    |
| `category` | `string` (`C1`/`C2`/`C3`)   | Reasoning category                     |
| `subset`*  | `string` (`single`/`multi`) | Optional convenience field             |

**Example (single-image):**

```json
{"id": "english-single-9", "images": ["English-apr/single_image/1198246_2innings_with_color1.png"], "question": "Which bowler has conceded the most extras?", "answer": "Wahab Riaz", "category": "C2", "subset": "single"}
```

## Loading & Preview

### Load from the Hub (two-split layout)

```python
from datasets import load_dataset

# Loads: DatasetDict({'test_single': ..., 'test_multi': ...})
ds = load_dataset("DIALab/MMCricBench")
print(ds)

# Peek a single-image example
ex = ds["test_single"][0]
print(ex["id"])
print(ex["question"], "->", ex["answer"])

# Preview images (each example stores a list of PIL images)
from IPython.display import display

for img in ex["images"]:
    display(img)
```

## Baseline Results (from the paper)

Accuracy (%) on MMCricBench by split and language.

| Model           | #Params | Single-EN (Avg) | Single-HI (Avg) | Multi-EN (Avg) | Multi-HI (Avg) |
|-----------------|:-------:|:---------------:|:---------------:|:--------------:|:--------------:|
| SmolVLM         | 500M    | 19.2            | 19.0            | 11.8           | 11.6           |
| Qwen2.5VL       | 3B      | 40.2            | 33.3            | 31.2           | 22.0           |
| LLaVA-NeXT      | 7B      | 28.3            | 26.6            | 16.2           | 14.8           |
| mPLUG-DocOwl2   | 8B      | 20.7            | 19.9            | 15.2           | 14.4           |
| Qwen2.5VL       | 7B      | 49.1            | 42.6            | 37.0           | 32.2           |
| InternVL-2      | 8B      | 29.4            | 23.4            | 18.6           | 18.2           |
| Llama-3.2-V     | 11B     | 27.3            | 24.8            | 26.2           | 20.4           |
| **GPT-4o**      | —       | **57.3**        | **45.1**        | **50.6**       | **43.6**       |

*Numbers are exact-match accuracy (higher is better). For C1/C2/C3 breakdowns, see Table 3 (single-image) and Table 5 (multi-image) in the paper.* A minimal scoring sketch appears at the end of this card.

## Contact

For questions or issues, please open a discussion on the dataset page or email **Abhirama Subramanyam** at penamakuri.1@iitj.ac.in.
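---

## Example: loading a local combined JSONL (sketch)

If you keep a single combined file as described in the note at the top of this card, the snippet below shows one way to read it. This is a minimal sketch, not part of the official release: `DATA_ROOT`, the local directory layout, and the presence of `test_all.jsonl` are assumptions; only the field names follow the schema above.

```python
import json
from pathlib import Path

from PIL import Image

DATA_ROOT = Path("MMCricBench")            # hypothetical local root; adjust to your copy
JSONL_PATH = DATA_ROOT / "test_all.jsonl"  # combined file mentioned in the note above

# Read one JSON object per line.
with JSONL_PATH.open(encoding="utf-8") as f:
    rows = [json.loads(line) for line in f if line.strip()]

# Every row keeps `images` as a list, so single- and multi-image examples
# go through the same code path.
ex = rows[0]
print(ex["id"], ex["category"], ex["question"], "->", ex["answer"])

# Resolve the relative paths and open the scorecard image(s).
imgs = [Image.open(DATA_ROOT / p) for p in ex["images"]]
print([im.size for im in imgs])
```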
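## Example: exact-match scoring (sketch)

The baseline table reports exact-match accuracy. The paper's exact answer canonicalization is not reproduced on this card, so the sketch below uses a simple lowercasing and punctuation-stripping normalization as a rough proxy; `predict_fn` is a placeholder for whatever model you want to evaluate.

```python
import re

from datasets import load_dataset


def normalize(text: str) -> str:
    """Light canonicalization: lowercase, strip punctuation, collapse whitespace.

    The paper's canonicalization may differ; treat this as a rough proxy.
    """
    text = re.sub(r"[^\w\s]", "", text.lower().strip())
    return re.sub(r"\s+", " ", text)


def exact_match_accuracy(split, predict_fn):
    """`predict_fn(images, question) -> str` is a placeholder for your VLM."""
    correct = 0
    for ex in split:
        pred = predict_fn(ex["images"], ex["question"])
        correct += int(normalize(pred) == normalize(ex["answer"]))
    return 100.0 * correct / len(split)


ds = load_dataset("DIALab/MMCricBench")
# acc = exact_match_accuracy(ds["test_single"], my_model_predict)  # plug in your model
# print(f"Single-image exact-match accuracy: {acc:.1f}%")
```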