---
license: cc-by-sa-4.0
task_categories:
- question-answering
- multiple-choice
language:
- ja
configs:
- config_name: v1.0
  data_files:
  - split: test
    path: v1.0/test-*
  - split: dev
    path: v1.0/dev-*
dataset_info:
  config_name: v1.0
  features:
  - name: qid
    dtype: string
  - name: category
    dtype: string
  - name: question
    dtype: string
  - name: choice0
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: choice3
    dtype: string
  - name: answer_index
    dtype: int64
  splits:
  - name: dev
    num_bytes: 7097
    num_examples: 32
  - name: test
    num_bytes: 516185
    num_examples: 2309
  download_size: 588759
  dataset_size: 523282
---
# Dataset Card for JamC-QA
## Dataset Summary
JamC-QA is a benchmark that evaluates knowledge specific to Japan through multiple-choice questions. It covers eight categories: culture, custom, regional identity, geography, history, government, law, and healthcare. Achieving high performance requires a broad and detailed understanding of Japan across these domains.
## Leaderboard
Exact match score:
| Model | Micro-average | culture | custom | regional identity | geography | history | government | law | healthcare |
|---|---|---|---|---|---|---|---|---|---|
| sarashina2-8x70b | 0.7364 | 0.7220 | 0.8088 | 0.7855 | 0.6522 | 0.7839 | 0.7719 | 0.6436 | 0.8462 |
| sarashina2-70b | 0.7245 | 0.6988 | 0.7892 | 0.7556 | 0.6558 | 0.7781 | 0.7544 | 0.6733 | 0.7885 |
| Llama-3.3-Swallow-70B-v0.4 | 0.6950 | 0.6894 | 0.7353 | 0.6185 | 0.5688 | 0.7781 | 0.7719 | 0.7459 | 0.8462 |
| RakutenAI-2.0-8x7B | 0.6160 | 0.6056 | 0.6814 | 0.6160 | 0.4855 | 0.6888 | 0.6754 | 0.5941 | 0.6923 |
| Mixtral-8x7B-v0.1-japanese | 0.5950 | 0.5885 | 0.7500 | 0.5985 | 0.4601 | 0.6052 | 0.6404 | 0.5710 | 0.7308 |
| plamo-100b | 0.5908 | 0.6102 | 0.6422 | 0.6384 | 0.4565 | 0.6398 | 0.5526 | 0.5182 | 0.6731 |
| llm-jp-3.1-8x13b | 0.5737 | 0.5839 | 0.6275 | 0.6060 | 0.4674 | 0.6110 | 0.6404 | 0.4884 | 0.6538 |
| Meta-Llama-3.1-405B | 0.5724 | 0.5699 | 0.5245 | 0.4688 | 0.5435 | 0.6571 | 0.6579 | 0.6403 | 0.5962 |
| Nemotron-4-340B-Base | 0.5600 | 0.5761 | 0.6176 | 0.5062 | 0.4601 | 0.5821 | 0.6491 | 0.5776 | 0.6346 |
| Qwen2.5-72B | 0.5421 | 0.5419 | 0.6324 | 0.4763 | 0.4746 | 0.5677 | 0.6053 | 0.5644 | 0.6154 |
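The table reports exact match accuracy per category, and the micro-average is the accuracy over all test questions pooled together. The official evaluation harness is not included in this card; the sketch below only illustrates how such scores could be computed, assuming model predictions are available as a parallel list of answer indices.

```python
from collections import defaultdict

def exact_match_scores(examples, predicted_indices):
    """Illustrative scoring sketch (not the official evaluation script).

    `examples` is an iterable of JamC-QA records (dicts with "category" and
    "answer_index"); `predicted_indices` is a parallel sequence of predicted
    answer indices (0-3) from the model under evaluation.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for example, prediction in zip(examples, predicted_indices):
        category = example["category"]
        total[category] += 1
        correct[category] += int(prediction == example["answer_index"])
    per_category = {c: correct[c] / total[c] for c in total}
    micro_average = sum(correct.values()) / sum(total.values())
    return micro_average, per_category
```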
## Languages
Japanese
## Dataset Structure
### Data Instances
An example from the culture category looks as follows:
```json
{
  "qid": "jamcqa_test_culture_00001",
  "category": "culture",
  "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?",
  "choice0": "乱",
  "choice1": "羅生門",
  "choice2": "隠し砦の三悪人",
  "choice3": "影武者",
  "answer_index": 0
}
```
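To evaluate a model on such an instance, the question and the four choices have to be combined into a single prompt. The exact template used for the leaderboard results is not documented here; the snippet below is only a minimal sketch of one possible formatting.

```python
def build_prompt(example: dict) -> str:
    """Format a JamC-QA record as a numbered multiple-choice prompt.

    The template (numeric labels and the trailing "答え:" cue) is an
    assumption made for illustration, not the leaderboard's prompt format.
    """
    choices = "\n".join(f"{i}. {example[f'choice{i}']}" for i in range(4))
    return f"質問: {example['question']}\n{choices}\n答え:"
```

The model's predicted choice index can then be compared against `answer_index`, which is what the exact match score above measures.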
### Data Fields
- `qid` (str): A unique identifier for each question.
- `category` (str): The category of the question.
  - One of: culture, custom, regional identity, geography, history, government, law, healthcare.
- `question` (str): The question text.
  - Full-width characters are converted to half-width, excluding katakana.
  - Does not contain any line breaks (`\n`).
  - Leading and trailing whitespace is removed.
- `choice{0..3}` (str): The four answer options (`choice0` to `choice3`), normalized in the same way as `question` (a minimal consistency check is sketched after this list).
- `answer_index` (int): The index of the correct answer among `choice0` to `choice3` (0–3).
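The whitespace-related properties above can be spot-checked per record. The helper below is a small sanity check written for this card, not part of the dataset's build pipeline; the full-width to half-width conversion itself is not re-verified here.

```python
def check_text_normalization(example: dict) -> None:
    """Assert the documented whitespace properties for one JamC-QA record."""
    for field in ("question", "choice0", "choice1", "choice2", "choice3"):
        value = example[field]
        assert "\n" not in value, f"{field} contains a line break"
        assert value == value.strip(), f"{field} has surrounding whitespace"
```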
### Data Splits
- `dev`: 7 examples, meant for the few-shot setting
- `test`: 2,341 examples, distributed across categories as follows:
| Category | Number of Questions |
|---|---|
| culture | 644 |
| custom | 204 |
| regional identity | 401 |
| geography | 276 |
| history | 347 |
| government | 114 |
| law | 303 |
| healthcare | 52 |
| total | 2,341 |
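The per-category counts can be reproduced directly from the test split (a sketch assuming the dataset loads as shown in "How to use" below):

```python
from collections import Counter

import datasets

jamcqa_test = datasets.load_dataset("sbintuitions/JamC-QA", "v1.0", split="test")

# Tally how many test questions fall into each category.
category_counts = Counter(jamcqa_test["category"])
for category, count in category_counts.most_common():
    print(f"{category}\t{count}")
print(f"total\t{sum(category_counts.values())}")
```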
## Licensing Information
This dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
## How to use
```python
$ python
>>> import datasets
>>> jamcqa_test = datasets.load_dataset('sbintuitions/JamC-QA', 'v1.0', split='test')
>>> print(jamcqa_test)
Dataset({
    features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
    num_rows: 2341
})
>>> print(jamcqa_test[0])
{'qid': 'jamcqa_test_culture_00001', 'category': '文化', 'question': '「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?', 'choice0': '乱', 'choice1': '羅生門', 'choice2': '隠し砦の三悪人', 'choice3': '影武者', 'answer_index': 0}
>>>
```
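Since the dev split is meant for few-shot prompting, it can be combined with a prompt template such as the one sketched under Data Instances. The shot count and template below are assumptions made for illustration, not the leaderboard configuration.

```python
import datasets

def build_prompt(example: dict) -> str:
    # Same illustrative template as in the Data Instances sketch above.
    choices = "\n".join(f"{i}. {example[f'choice{i}']}" for i in range(4))
    return f"質問: {example['question']}\n{choices}\n答え:"

# The dev split supplies the in-context demonstrations.
jamcqa = datasets.load_dataset("sbintuitions/JamC-QA", "v1.0")
dev, test = jamcqa["dev"], jamcqa["test"]

# Build a 4-shot prompt for the first test question (the shot count is arbitrary here).
shots = "\n\n".join(
    build_prompt(dev[i]) + f" {dev[i]['answer_index']}" for i in range(4)
)
prompt = shots + "\n\n" + build_prompt(test[0])
print(prompt)
```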
## Citation Information
```bibtex
@inproceedings{Oka2025,
    author    = {岡 照晃 and 柴田 知秀 and 吉田 奈央},
    title     = {JamC-QA: 日本固有の知識を問う多肢選択式質問応答ベンチマークの構築},
    year      = {2025},
    month     = {March},
    booktitle = {言語処理学会第31回年次大会 (NLP2025)},
    pages     = {839--844},
}
```