---
license: apache-2.0
language:
  - en
pretty_name: Tiny QA Benchmark (Original EN Core for TQB++)
size_categories:
  - n<1K
tags:
  - question-answering
  - evaluation
  - benchmark
  - toy-dataset
  - tqb++-core
task_categories:
  - question-answering
task_ids:
  - extractive-qa
  - closed-book-qa
arxiv: 2505.12058
datasets:
  - vincentkoc/tiny_qa_benchmark_pp
---

# Tiny QA Benchmark (Original English Core for TQB++)

**This dataset (`vincentkoc/tiny_qa_benchmark`) is the original 52-item English Question-Answering set. It now serves as the immutable "gold standard" core for the expanded [Tiny QA Benchmark++ (TQB++)](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp) project.**

The TQB++ project builds upon this core dataset by introducing a powerful synthetic generation toolkit, pre-built multilingual datasets, and a comprehensive framework for rapid LLM smoke testing.

**For the full TQB++ toolkit, the latest research paper, multilingual datasets, and the synthetic generator, please visit:**
*   **TQB++ Hugging Face Dataset Collection & Toolkit:** [vincentkoc/tiny_qa_benchmark_pp](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp)
*   **TQB++ GitHub Repository (Code, Paper & Toolkit):** [vincentkoc/tiny_qa_benchmark_pp](https://github.com/vincentkoc/tiny_qa_benchmark_pp)

This original dataset (`vincentkoc/tiny_qa_benchmark`) contains 52 hand-crafted general-knowledge QA pairs covering geography, history, math, science, literature, and more. It remains ideal for quick sanity checks and pipeline smoke-tests, and it serves as a foundational component of TQB++. Each example includes:

- **text**: the question prompt  
- **label**: the “gold” answer  
- **metadata.context**: a one-sentence fact  
- **tags**: additional annotations (`category`, `difficulty`)

It’s intentionally tiny (<100 KB) so you can iterate on data loading, evaluation scripts, or CI steps in under a second using these specific 52 items.
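
For example, here is a minimal sketch of a pytest-style CI smoke test (the file name and assertions are illustrative, not shipped with the dataset):

```python
# test_tiny_qa_benchmark.py — illustrative CI smoke test
from datasets import load_dataset

def test_core_dataset_loads():
    ds = load_dataset("vincentkoc/tiny_qa_benchmark", split="train")
    # The core set is fixed at 52 items.
    assert len(ds) == 52
    # Every row should expose the documented fields.
    for row in ds:
        assert row["text"] and row["label"]
        assert "context" in row["metadata"]
```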

## Supported Tasks and Formats (for this core dataset)

- **Tasks**:  
  - Extractive QA  
  - Generative QA  
- **Format**: JSON  
- **Splits**:  
  - `train` (all 52 examples)  

## Languages (for this core dataset)

- English (`en`)

## Dataset Structure

### Data Fields

Each example in `data/train.json` (as loaded by `datasets`) has:

| field               | type   | description                                  |
|---------------------|--------|----------------------------------------------|
| `text`              | string | The question prompt.                         |
| `label`             | string | The correct answer.                          |
| `metadata`          | object | Additional info.                             |
| `metadata.context`  | string | A one-sentence fact supporting the answer.   |
| `tags.category`     | string | Broad question category (e.g. `geography`).  |
| `tags.difficulty`   | string | Rough difficulty level (e.g. `easy`).        |
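
As a quick sketch of working with this schema (the category names in the comment are examples, not guaranteed counts), you can tally examples per `tags.category`:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("vincentkoc/tiny_qa_benchmark", split="train")

# tags is a nested dict, so the category lives at row["tags"]["category"].
categories = Counter(row["tags"]["category"] for row in ds)
print(categories.most_common())  # e.g. [('geography', ...), ('math', ...)]
```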

## Data Example

```json
[
  {
    "text": "What is the capital of France?",
    "label": "Paris",
    "metadata": {
      "context": "France is a country in Europe. Its capital is Paris."
    },
    "tags": {
      "category": "geography",
      "difficulty": "easy"
    }
  },
  {
    "text": "What is 2 + 2?",
    "label": "4",
    "metadata": {
      "context": "Basic arithmetic: 2 + 2 equals 4."
    },
    "tags": {
      "category": "math",
      "difficulty": "easy"
    }
  }
]
```
*(Note: The actual file on the Hub might be a `.jsonl` file where each line is a JSON object, but `load_dataset` handles this.)*
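
If you work with the raw file directly rather than through `datasets`, a minimal sketch (the local path is hypothetical; adjust it to whatever you downloaded) that handles both a JSON array and JSON Lines:

```python
import json

# Hypothetical local path; the Hub file may be a JSON array
# or JSON Lines, as noted above.
path = "data/train.json"

with open(path, encoding="utf-8") as f:
    first = f.read(1)
    f.seek(0)
    if first == "[":
        examples = json.load(f)  # single JSON array
    else:
        # JSON Lines: one object per non-empty line
        examples = [json.loads(line) for line in f if line.strip()]

print(len(examples), examples[0]["text"])
```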

## Data Splits

Only one split for this core dataset:

- **train**: 52 examples, used for development, quick evaluation, and as the TQB++ core.

## Data Creation

### Curation Rationale

The "Tiny QA Benchmark" (this 52-item set) was originally created to:

1. Smoke-test QA pipelines (loading, preprocessing, evaluation).  
2. Demo Hugging Face Datasets integration in tutorials.  
3. Verify model–eval loops run without downloading large corpora.
4. **Serve as the immutable "gold standard" English core for the [Tiny QA Benchmark++ (TQB++)](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp) project.**

### Source Data

Hand-crafted by the dataset creator from well-known, public-domain facts.
It was initially developed as a dataset for sample projects to demonstrate [Opik](https://github.com/comet-ml/opik/) and now forms the foundational English core of TQB++.

### Annotations

Self-annotated. Each `metadata.context` and `tags` field is manually created for these 52 items.

## Usage

Load this specific 52-item core dataset with:

```python
from datasets import load_dataset

ds = load_dataset("vincentkoc/tiny_qa_benchmark")
print(ds["train"][0])
# Expected output:
# {
#   "text": "What is the capital of France?",
#   "label": "Paris",
#   "metadata": {
#     "context": "France is a country in Europe. Its capital is Paris."
#   },
#   "tags": {
#     "category": "geography",
#     "difficulty": "easy"
#   }
# }
```
For accessing the full TQB++ suite, including multilingual packs and the synthetic generator, refer to the [TQB++ Hugging Face Dataset Collection](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp).
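
As a sketch of the kind of smoke-test evaluation this dataset is designed for (`answer_fn` is a hypothetical stand-in for whatever model or pipeline you are checking):

```python
from datasets import load_dataset

def answer_fn(question: str) -> str:
    """Hypothetical placeholder; swap in your model or pipeline call."""
    return ""

ds = load_dataset("vincentkoc/tiny_qa_benchmark", split="train")

correct = 0
for row in ds:
    prediction = answer_fn(row["text"])
    # Case-insensitive exact match against the gold label.
    if prediction.strip().lower() == row["label"].strip().lower():
        correct += 1

print(f"Exact match: {correct}/{len(ds)} ({correct / len(ds):.1%})")
```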

## Considerations for Use

*   **Immutable Core for TQB++:** This dataset is the stable, hand-curated English core of the TQB++ project. Its 52 items are not intended to change.
*   **Not a Comprehensive Benchmark (on its own):** While excellent for quick checks, these 52 items are too few for statistically significant model ranking. For broader evaluation, use in conjunction with the TQB++ synthetic generator and its multilingual capabilities found at [vincentkoc/tiny_qa_benchmark_pp](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp).
*   **Do Not Train:** Primarily intended for evaluation, smoke-tests, or demos.
*   **No Sensitive Data:** All facts are public domain.

## Licensing

Apache-2.0. See the `LICENSE` file in the [TQB++ GitHub repository](https://github.com/vincentkoc/tiny_qa_benchmark_pp) for details (as this dataset is now part of that larger project).

## Citation

If you use this specific 52-item core English dataset, please cite it. You can use the following BibTeX entry, which has been updated to reflect its role:

```bibtex
@misc{koctinyqabenchmark_original_core,
  author       = {Vincent Koc},
  title        = {Tiny QA Benchmark (Original 52-item English Core for TQB++)},
  year         = {2025},
  url          = {https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark},
  doi          = {10.57967/hf/5417},
  publisher    = {Hugging Face}
}
```

For the complete **Tiny QA Benchmark++ (TQB++)** project (which includes this core set, the synthetic generator, multilingual packs, and the associated research paper), please refer to and cite the TQB++ project directly (the DOI below covers the full TQB++ collection):

```bibtex
@misc{koctinyqabenchmark_pp_dataset,
  author       = {Vincent Koc},
  title        = {Tiny QA Benchmark++ (TQB++) Datasets and Toolkit},
  year         = {2025},
  publisher    = {Hugging Face \& GitHub},
  doi          = {10.57967/hf/5531},
  howpublished = {\url{https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp}},
  note         = {See also: \url{https://github.com/vincentkoc/tiny_qa_benchmark_pp}}
}
```