# WideSearch: Benchmarking Agentic Broad Info-Seeking

## Dataset Summary

WideSearch is a benchmark designed to evaluate the capabilities of Large Language Model (LLM) driven agents in **broad information-seeking** tasks. Unlike existing benchmarks that focus on finding a single, hard-to-find fact, WideSearch assesses an agent's ability to handle tasks that require gathering a large amount of scattered, yet easy-to-find, information.

The challenge in these tasks lies not in cognitive difficulty but in operational scale, repetitiveness, and the need for **Completeness** and **Factual Fidelity** in the final result. Typical examples are a financial analyst gathering key metrics for every company in a sector, or a job seeker collecting every vacancy that meets their criteria.

The benchmark, introduced in the research paper "WideSearch: Benchmarking Agentic Broad Info-Seeking," contains 200 meticulously designed tasks (100 in English, 100 in Chinese).
## Dataset Structure

The dataset consists of three components: an English task file, a Chinese task file, and a directory containing the ground-truth answers.

```
/
├── widesearch_en.jsonl
├── widesearch_zh.jsonl
└── widesearch_gold/
    ├── ws_001.csv
    ├── ws_002.csv
    └── ...
```
### Data Instances

`widesearch_en.jsonl` and `widesearch_zh.jsonl` are JSON Lines files, where each line represents a single task.

**Example:**
```json
{
  "instance_id": "ws_en_001",
  "query": "My son is about to start his university applications but he’s still uncertain about both his major and which universities to apply to. Could you help me find the top five universities in each of the five broad subjects from the QS World University Rankings by Subject 2025, and also check their standings in the QS World University Rankings 2025 and the Times Higher Education World University Rankings 2025? And I need the home page of the university's official website, standard application deadline for regular decison as well as the application fee without the fee wavier.Please organize the results in one Markdown table with the following columns:\nSubject, University, QS World University Rankings by Subject 2025, QS World University Rankings 2025, Times Higher Education World University Rankings 2025, Home Page, Application Deadline, Application Fee\nPlease use the universities’ full official names in English. \nUse only Arabic numerals in the ranking, for example: 1.\n\nThe output format is ```markdown\n{data_content}\n```.",
  "evaluation": {
    "unique_columns": ["subject", "university"],
    "required": ["subject", "university", "qsworlduniversityrankingsbysubject2025", "qsworlduniversityrankings2025", "timeshighereducationworlduniversityrankings2025", "homepage", "applicationdeadline", "applicationfee"],
    "eval_pipeline": {
      "applicationdeadline": {"preprocess": ["norm_str"], "metric": ["llm_judge"], "criterion": "It is sufficient if the semantics are approximately the same as the reference answer or if they point to the same entity. There is no need for a word-for-word correspondence.\nThe month and day must be correct"},
      "applicationfee": {"preprocess": ["norm_str"], "metric": ["llm_judge"], "criterion": "It is sufficient if the semantics are approximately the same as the reference answer or if they point to the same entity. There is no need for a word-for-word correspondence.\nIf there are multiple fees in the reference answer, all must be included."},
      "homepage": {"preprocess": ["norm_str"], "metric": ["url_match"]},
      "subject": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "university": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "qsworlduniversityrankingsbysubject2025": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "qsworlduniversityrankings2025": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "timeshighereducationworlduniversityrankings2025": {"preprocess": ["norm_str"], "metric": ["exact_match"]}
    }
  },
  "language": "en"
}
```
```json
{
  "instance_id": "ws_001",
  "query": "我要做电影研究,需要你列出来2020年-2024年每年中国、美国本国票房前五的电影,表头需要包括年份、国家(如中国、美国)、电影名、导演、本国整体累计票房收益(不局限于当年,以亿为单位,保留到小数点后一位,例如7.9亿元,需要带上各国货币单位,中国电影以亿元为单位,美国电影为亿美元为单位)、电影类型。请以Markdown表格的格式输出整理后的数据,全部输出采用中文。请注意,对于当年12月末上映的电影、大部分票房收益落在下一年的,视为下一年的电影。请以Markdown表格的格式输出整理后的数据。\n表格中的列名依次为:\n年份、国家、电影名、导演、本国累计票房收益、电影类型\n\n格式为```markdown\n{数据内容}\n```。",
  "evaluation": {
    "unique_columns": ["国家", "电影名"],
    "required": ["年份", "国家", "电影名", "导演", "本国累计票房收益", "电影类型"],
    "eval_pipeline": {
      "国家": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "年份": {"preprocess": ["norm_str"], "metric": ["exact_match"]},
      "本国累计票房收益": {"preprocess": ["extract_number"], "metric": ["number_near"], "criterion": 0.1},
      "导演": {"preprocess": ["norm_str"], "metric": ["llm_judge"], "criterion": "和参考答案语义相同大致、或者指向的实体一致即可,不需要字字对应。\n答出子集且未答出参考答案以外的内容时可算正确"},
      "电影类型": {"preprocess": ["norm_str"], "metric": ["llm_judge"], "criterion": "和参考答案语义相同大致、或者指向的实体一致即可,不需要字字对应。\n答出参考答案中的部分类型(即子集)即视为正确、基于权威来源及官方依据的类型标注同样正确、答出其中一个子集其他类型内容合理也视为正确。"},
      "电影名": {"preprocess": ["norm_str"], "metric": ["llm_judge"], "criterion": "和参考答案语义相同大致、或者指向的实体一致即可,不需要字字对应。"}
    }
  },
  "language": "zh"
}
```
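Because the task files are plain JSON Lines, they can be read without any special tooling. A minimal sketch (the file path is illustrative; field names follow the examples above):

```python
import json

def load_tasks(path):
    """Read a WideSearch task file: one JSON object per non-empty line."""
    tasks = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # tolerate trailing blank lines
                tasks.append(json.loads(line))
    return tasks

# e.g. tasks = load_tasks("widesearch_en.jsonl")
```

Each returned dict then exposes `instance_id`, `query`, `evaluation`, and `language` as described below.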
### Data Fields

* `instance_id` (string): A unique identifier for the task. This ID corresponds to the filename of the ground-truth CSV file in the `widesearch_gold` directory (e.g., `ws_001` corresponds to `ws_001.csv`).
* `query` (string): The natural language instruction given to the AI agent. It details the task requirements, the data columns to be collected, and the final Markdown table format.
* `evaluation` (dict): An object containing all the information necessary for automated evaluation.
  * `unique_columns` (list): The primary key column(s) used to uniquely identify a row in the table.
  * `required` (list): All column names that must be present in the agent's generated response.
  * `eval_pipeline` (dict): Defines the evaluation method for each column.
    * `preprocess` (list): Preprocessing steps applied to the cell data before evaluation (e.g., `norm_str` to normalize strings, `extract_number` to extract numbers).
    * `metric` (list): The metric used to compare the predicted value with the ground truth (e.g., `exact_match`; `number_near` for numerical approximation; `llm_judge` for judgment by an LLM).
    * `criterion` (float or string): Metric-specific criteria. For `number_near`, this is the allowed relative tolerance; for `llm_judge`, it is the scoring guide for the "judge" LLM.
* `language` (string): The language of the task (`en` or `zh`).
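As an illustration only (the official evaluation harness is not part of this card), the deterministic parts of an `eval_pipeline` entry could be applied roughly as follows; the function names mirror the config keys, and `url_match`/`llm_judge` are deliberately omitted:

```python
import re

def norm_str(value):
    """`norm_str` preprocess: lowercase and collapse whitespace."""
    return " ".join(str(value).lower().split())

def extract_number(value):
    """`extract_number` preprocess: pull the first number from a cell like '7.9亿元'."""
    match = re.search(r"-?\d+(?:\.\d+)?", str(value))
    return float(match.group()) if match else None

def score_cell(pred, gold, spec):
    """Apply one eval_pipeline entry to a predicted/gold cell pair."""
    steps = {"norm_str": norm_str, "extract_number": extract_number}
    for step in spec.get("preprocess", []):
        pred, gold = steps[step](pred), steps[step](gold)
    metric = spec["metric"][0]
    if metric == "exact_match":
        return pred == gold
    if metric == "number_near":
        # criterion is the allowed relative tolerance
        return (pred is not None and gold is not None
                and abs(pred - gold) <= spec["criterion"] * abs(gold))
    raise NotImplementedError(metric)  # url_match / llm_judge not sketched here
```

For example, `score_cell("7.9亿元", "7.85亿元", {"preprocess": ["extract_number"], "metric": ["number_near"], "criterion": 0.1})` accepts the prediction, since 7.9 is within 10% of 7.85.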
### Ground Truth Data

The `widesearch_gold/` directory contains the ground-truth answers for each task, stored in CSV format. Filenames correspond to the `instance_id`. These files were created by human experts through exhaustive web searches and cross-validation, representing a high-quality "gold standard".
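Pairing a task with its gold table then comes down to a lookup on `instance_id`; a stdlib-only sketch (directory layout as in the tree above, paths assumed):

```python
import csv
import os

def load_gold(gold_dir, instance_id):
    """Read the gold CSV for one task as a list of row dicts."""
    path = os.path.join(gold_dir, f"{instance_id}.csv")
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# e.g. rows = load_gold("widesearch_gold", "ws_001")
```

The `unique_columns` of the task's `evaluation` block identify which of the returned columns form the row key when comparing agent output against these rows.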
## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{wong2025widesearch,
  title={WideSearch: Benchmarking Agentic Broad Info-Seeking},
  author={Wong, Ryan and Wang, Jiawei and Zhao, Junjie and Chen, Li and Gao, Yan and Zhang, Long and Zhou, Xuan and Wang, Zuo and Xiang, Kai and Wang, Yang and Wang, Ke},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025},
  note={Project Page: https://github.com/xx/WideSearch}
}
```