---
license: apache-2.0
language:
- en
pretty_name: Tiny QA Benchmark (Original EN Core for TQB++)
size_categories:
- n<1K
tags:
- question-answering
- evaluation
- benchmark
- toy-dataset
- tqb++-core
task_categories:
- question-answering
task_ids:
- extractive-qa
- closed-book-qa
arxiv: 2505.12058
datasets:
- vincentkoc/tiny_qa_benchmark_pp
---
# Tiny QA Benchmark (Original English Core for TQB++)

This dataset (`vincentkoc/tiny_qa_benchmark`) is the original 52-item English question-answering set. It now serves as the immutable "gold standard" core for the expanded Tiny QA Benchmark++ (TQB++) project.
The TQB++ project builds upon this core dataset by introducing a powerful synthetic generation toolkit, pre-built multilingual datasets, and a comprehensive framework for rapid LLM smoke testing.
For the full TQB++ toolkit, the latest research paper, multilingual datasets, and the synthetic generator, please visit:
- TQB++ Hugging Face Dataset Collection & Toolkit: [vincentkoc/tiny_qa_benchmark_pp](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp)
- TQB++ GitHub Repository (Code, Paper & Toolkit): [vincentkoc/tiny_qa_benchmark_pp](https://github.com/vincentkoc/tiny_qa_benchmark_pp)
This original dataset (`vincentkoc/tiny_qa_benchmark`) contains 52 hand-crafted general-knowledge QA pairs covering geography, history, math, science, literature, and more. It remains ideal for quick sanity checks, pipeline smoke tests, and as a foundational component of TQB++. Each example includes:
- `text`: the question prompt
- `label`: the "gold" answer
- `metadata.context`: a one-sentence fact
- `tags`: additional annotations (`category`, `difficulty`)
It’s intentionally tiny (<100 KB) so you can iterate on data loading, evaluation scripts, or CI steps in under a second using these specific 52 items.
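As a concrete illustration, a minimal CI-style check is sketched below; it assumes the nested `metadata`/`tags` layout described above and can run as a plain script or under `pytest`.

```python
from datasets import load_dataset

def test_tiny_qa_core_schema():
    # Load only the train split; the whole set is 52 rows and well under 100 KB.
    ds = load_dataset("vincentkoc/tiny_qa_benchmark", split="train")
    assert len(ds) == 52
    for row in ds:
        # Every item carries a prompt, a gold answer, and its annotations.
        assert row["text"] and row["label"]
        assert "context" in row["metadata"]
        assert {"category", "difficulty"} <= set(row["tags"])

if __name__ == "__main__":
    test_tiny_qa_core_schema()
    print("Tiny QA Benchmark core looks good.")
```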
## Supported Tasks and Formats (for this core dataset)

- Tasks:
  - Extractive QA
  - Generative QA
- Format: JSON
- Splits: `train` (all 52 examples)
## Languages (for this core dataset)

- English (`en`)
## Dataset Structure

### Data Fields

Each example in `data/train.json` (as loaded by `datasets`) has:
| field | type | description |
|---|---|---|
| `text` | string | The question prompt. |
| `label` | string | The correct answer. |
| `metadata` | object | Additional info. |
| `metadata.context` | string | A one-sentence fact supporting the answer. |
| `tags.category` | string | Broad question category (e.g. `geography`). |
| `tags.difficulty` | string | Rough difficulty level (e.g. `easy`). |
### Data Example

```json
[
  {
    "text": "What is the capital of France?",
    "label": "Paris",
    "metadata": {
      "context": "France is a country in Europe. Its capital is Paris."
    },
    "tags": {
      "category": "geography",
      "difficulty": "easy"
    }
  },
  {
    "text": "What is 2 + 2?",
    "label": "4",
    "metadata": {
      "context": "Basic arithmetic: 2 + 2 equals 4."
    },
    "tags": {
      "category": "math",
      "difficulty": "easy"
    }
  }
]
```
(Note: the actual file on the Hub may be a `.jsonl` file where each line is a JSON object, but `load_dataset` handles this.)
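If you would rather avoid the `datasets` dependency (for example in a minimal CI job), a plain-Python read works too. The sketch below assumes you have downloaded the raw file locally as `train.jsonl`; adjust the path to match the actual file name on the Hub.

```python
import json

path = "train.jsonl"  # hypothetical local copy of the raw file

examples = []
with open(path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank lines
            examples.append(json.loads(line))

print(f"{len(examples)} examples loaded")
print(examples[0]["text"], "->", examples[0]["label"])
```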
### Data Splits
Only one split for this core dataset:
- train: 52 examples, used for development, quick evaluation, and as the TQB++ core.
## Data Creation

### Curation Rationale
The "Tiny QA Benchmark" (this 52-item set) was originally created to:
- Smoke-test QA pipelines (loading, preprocessing, evaluation).
- Demo Hugging Face Datasets integration in tutorials.
- Verify model–eval loops run without downloading large corpora.
- Serve as the immutable "gold standard" English core for the Tiny QA Benchmark++ (TQB++) project.
### Source Data
Hand-crafted by the dataset creator from well-known, public-domain facts. It was initially developed as a dataset for sample projects to demonstrate Opik and now forms the foundational English core of TQB++.
### Annotations

Self-annotated. Each `metadata.context` and `tags` field was manually created for these 52 items.
## Usage
Load this specific 52-item core dataset with:
```python
from datasets import load_dataset

ds = load_dataset("vincentkoc/tiny_qa_benchmark")
print(ds["train"][0])
# Expected output:
# {
#   "text": "What is the capital of France?",
#   "label": "Paris",
#   "metadata": {
#     "context": "France is a country in Europe. Its capital is Paris."
#   },
#   "tags": {
#     "category": "geography",
#     "difficulty": "easy"
#   }
# }
```
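The same `ds` object is enough for a quick end-to-end smoke test. The sketch below assumes a `predict(question: str) -> str` callable wrapping your own model (not part of this dataset) and scores it with case-insensitive exact match against `label`:

```python
def exact_match_rate(predict, dataset) -> float:
    """Fraction of items whose prediction matches the gold answer exactly (case-insensitive)."""
    hits = 0
    for row in dataset:
        hits += predict(row["text"]).strip().lower() == row["label"].strip().lower()
    return hits / len(dataset)

# Trivial baseline that always answers "Paris" -- useful only to confirm the loop runs.
print(exact_match_rate(lambda question: "Paris", ds["train"]))
```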
For accessing the full TQB++ suite, including multilingual packs and the synthetic generator, refer to the TQB++ Hugging Face Dataset Collection.
## Considerations for Use
- Immutable Core for TQB++: This dataset is the stable, hand-curated English core of the TQB++ project. Its 52 items are not intended to change.
- Not a Comprehensive Benchmark (on its own): While excellent for quick checks, these 52 items are too few for statistically significant model ranking. For broader evaluation, use in conjunction with the TQB++ synthetic generator and its multilingual capabilities found at vincentkoc/tiny_qa_benchmark_pp.
- Do Not Train: Primarily intended for evaluation, smoke-tests, or demos.
- No Sensitive Data: All facts are public domain.
## Licensing

Apache-2.0. See the `LICENSE` file in the TQB++ GitHub repository for details (as this dataset is now part of that larger project).
## Citation
If you use this specific 52-item core English dataset, please cite it. You can use the following BibTeX entry, which has been updated to reflect its role:
```bibtex
@misc{koctinyqabenchmark_original_core,
  author    = {Vincent Koc},
  title     = {Tiny QA Benchmark (Original 52-item English Core for TQB++)},
  year      = {2025},
  url       = {https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark},
  doi       = {10.57967/hf/5417},
  publisher = {Hugging Face}
}
```
For the complete Tiny QA Benchmark++ (TQB++) project (which includes this core set, the synthetic generator, multilingual packs, and the associated research paper), please refer to and cite the TQB++ project directly:
```bibtex
@misc{koctinyqabenchmark_pp_dataset,
  author       = {Vincent Koc},
  title        = {Tiny QA Benchmark++ (TQB++) Datasets and Toolkit},
  year         = {2025},
  publisher    = {Hugging Face & GitHub},
  doi          = {10.57967/hf/5531},
  howpublished = {\url{https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp}},
  note         = {See also: \url{https://github.com/vincentkoc/tiny_qa_benchmark_pp}}
}
```