---
license: cc
language:
- en
tags:
- medical
- spelling
- counting
- qa
- grpo
task_categories:
- question-answering
pretty_name: MedSpellCount-QA
---
# MedSpellCount-QA

## Dataset Summary

**MedSpellCount-QA** is a lightweight dataset for **orthographic counting** framed as question-answering over medical terms.

Each `input` is a short natural-language question like:

> *“How many **r** are in **warfarine**?”*
The `output` is the **correct count** as an integer (e.g., `2`). This format is convenient for **GRPO** (Group Relative Policy Optimization) or other RL-style post-training, where a simple correctness reward compares multiple candidates per prompt.

*Why a distinct dataset?* Counting letters in real medical vocabulary is a simple, objective task that stresses **spelling attention** and **string reasoning** without requiring external knowledge.
## Use Cases

- **GRPO training**: generate K candidates per prompt and reward exact correctness (see the sketch after this list).
- **Instruction/QA fine-tuning** for robustness to orthographic queries.
- **Eval** of character-level attention and tokenization effects on medical terms.
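A minimal sketch of such a correctness reward, assuming answers are free-form text (the helper name, the integer-extraction regex, and the reward values are illustrative assumptions, not part of the dataset):

```python
import re

def correctness_reward(candidate: str, target: int) -> float:
    # Hypothetical reward: extract the first integer from the candidate text
    # and compare it to the ground-truth count; exact match -> 1.0, else 0.0.
    match = re.search(r"-?\d+", candidate)
    return 1.0 if match and int(match.group()) == target else 0.0

# Score a group of K sampled candidates for one prompt (as in GRPO).
candidates = ["2", "There are 2 r's.", "I count 3."]
print([correctness_reward(c, target=2) for c in candidates])  # [1.0, 1.0, 0.0]
```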
## Languages

- **English** prompts; terms are predominantly **medical**.

## Dataset Structure

### Data Fields

- **input** *(string, required)*: the question, e.g.,
  `How many 'r' in 'warfarine'?`
- **output** *(integer, required)*: the correct count, e.g., `2`.
### Data Instances

```json
{
  "input": "How many 'r' in 'warfarine'?",
  "output": 2
}
```
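Because each label is recomputable from the `input` string itself, answers can be verified programmatically. A minimal sketch, assuming prompts follow the single-quoting convention shown above (the parsing regex is an assumption, not a published spec):

```python
import re

def recompute_count(question: str) -> int:
    # Assumes the letter and term are single-quoted, e.g. "How many 'r' in 'warfarine'?"
    letter, term = re.findall(r"'([^']+)'", question)[:2]
    return term.lower().count(letter.lower())

assert recompute_count("How many 'r' in 'warfarine'?") == 2
```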
## Loading

```python
from datasets import load_dataset

ds = load_dataset("mkurman/MedSpellCount-QA", split="train")
print(ds)
print(ds[0])
```
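For the evaluation use case above, a simple exact-match scorer over the split could look like this (`model_answer` is a placeholder for whatever prediction function is being evaluated; it is not part of the dataset):

```python
def exact_match_accuracy(dataset, model_answer) -> float:
    # model_answer: callable mapping a question string to an integer prediction (placeholder).
    # int(...) coercion guards against counts stored as numeric strings.
    correct = sum(int(model_answer(ex["input"]) == int(ex["output"])) for ex in dataset)
    return correct / len(dataset)

# Trivial baseline for illustration: always answer 1.
print(exact_match_accuracy(ds, lambda q: 1))
```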