amitbcp committed · Commit 912d176 · 1 Parent(s): b156b41

repo updated

Files changed (2):
  1. README.md +121 -3
  2. nomiracl.py +220 -0
README.md CHANGED
@@ -1,3 +1,121 @@
- ---
- license: apache-2.0
- ---
+ ---
+ annotations_creators:
+ - expert-generated
+ language:
+ - ar
+ - bn
+ - de
+ - en
+ - es
+ - fa
+ - fi
+ - fr
+ - hi
+ - id
+ - ja
+ - ko
+ - ru
+ - sw
+ - te
+ - th
+ - yo
+ - zh
+ multilinguality:
+ - multilingual
+ pretty_name: NoMIRACL
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - miracl/miracl
+ task_categories:
+ - text-classification
+ license:
+ - apache-2.0
+ ---
+
+ # Dataset Card for NoMIRACL
+
+ Retrieval-augmented generation (RAG) is a powerful approach for incorporating external knowledge into large language models (LLMs) to improve the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leaving gaps in our understanding of how models behave when the retrieved external knowledge contains errors. To address this, we present NoMIRACL, a human-annotated dataset for evaluating LLM robustness in RAG across 18 diverse languages.
+
+ NoMIRACL includes both a `non-relevant` and a `relevant` subset. The `non-relevant` subset contains queries for which all retrieved passages were manually judged non-relevant or noisy, while the `relevant` subset contains queries with at least one passage judged relevant. LLM robustness is measured using two key metrics: hallucination rate and error rate.
+
+ All topics were generated by native speakers of each language as part of our work on [MIRACL](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), who also labeled the relevance between each topic and a given list of documents. Queries with no relevant documents are used to create the `non-relevant` subset, whereas queries with at least one relevant document (i.e., queries in the MIRACL dev and test splits) are used to create the `relevant` subset.
+
+ This repository contains the topics, qrels, and the (at most) top-10 annotated documents of NoMIRACL. The full document collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
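The two robustness metrics can be sketched as follows. This is a paraphrased illustration, not the paper's exact evaluation procedure (the precise definitions and the answer-matching setup are in the publication); the abstention string and sample answers are made up.

```
# Paraphrased sketch of the NoMIRACL robustness metrics:
# - hallucination rate: on the non-relevant subset, the fraction of queries
#   where the model produces an answer instead of abstaining ("I don't know").
# - error rate: on the relevant subset, the fraction of answers judged
#   incorrect (e.g., abstaining despite a relevant passage being present).

def hallucination_rate(answers_non_relevant):
    """Fraction of non-relevant-subset answers that are not an abstention."""
    hallucinated = sum(1 for a in answers_non_relevant if a != "I don't know")
    return hallucinated / len(answers_non_relevant)

def error_rate(is_correct_relevant):
    """Fraction of relevant-subset answers judged incorrect."""
    return sum(1 for ok in is_correct_relevant if not ok) / len(is_correct_relevant)

# Toy example with made-up model outputs:
print(hallucination_rate(["I don't know", "Berlin", "I don't know", "42"]))  # 0.5
print(error_rate([True, True, False, True]))  # 0.25
```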
+
+ ## Quickstart
+
+ ```
+ import datasets
+
+ language = 'german'  # or any of the 18 languages
+ subset = 'relevant'  # or 'non_relevant'
+ split = 'test'       # or 'dev' for the development split
+
+ # four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
+ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+ ```
+
+
+ ## Dataset Description
+ * **Repository:** https://github.com/project-miracl/nomiracl
+ * **Paper:** https://arxiv.org/abs/2312.11361
+
+ ## Dataset Structure
+ 1. To download the files:
+
+ Under the `data/{lang}` folders,
+ the corpus subset is saved in `.jsonl.gz` format, with one JSON object per line:
+ ```
+ {"docid": "28742#27",
+  "title": "Supercontinent",
+  "text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
+ ```
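Reading such a file can be sketched as follows. The sample record matches the docid shown above; the temporary file stands in for the downloaded `data/{lang}/corpus.jsonl.gz` so the sketch is self-contained.

```
import gzip
import json
import os
import tempfile

# Write a one-line sample corpus; in practice you would open the downloaded
# data/{lang}/corpus.jsonl.gz file instead.
sample = {"docid": "28742#27", "title": "Supercontinent",
          "text": "Oxygen levels of the Archaean Eon were negligible."}
path = os.path.join(tempfile.mkdtemp(), "corpus.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps(sample) + "\n")

# Load the corpus into a docid -> {"title": ..., "text": ...} dictionary.
corpus = {}
with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        corpus[doc["docid"]] = {"title": doc["title"], "text": doc["text"]}

print(corpus["28742#27"]["title"])  # Supercontinent
```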
+
+ Under the `data/{lang}/topics` folders,
+ the topics are saved in `.tsv` format, with one query per line:
+ ```
+ qid\tquery
+ ```
+
+ Under the `data/{lang}/qrels` folders,
+ the qrels are saved in the standard TREC format, with one judgment per line:
+ ```
+ qid Q0 docid relevance
+ ```
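Parsing these judgments can be sketched as below, mirroring what the loader script does internally; the qids, docids, and relevance labels are made up for illustration.

```
from collections import defaultdict

# Made-up qrels lines in TREC format: qid Q0 docid relevance
# (the released files are tab-separated; .split() handles tabs and spaces alike).
lines = [
    "q1\tQ0\tdoc1\t1",
    "q1\tQ0\tdoc2\t0",
    "q2\tQ0\tdoc3\t0",
]

qrels = defaultdict(dict)
for line in lines:
    qid, _, docid, rel = line.split()
    qrels[qid][docid] = int(rel)

# Split each query's judged documents into positives (rel == 1) and negatives.
positives = {q: [d for d, r in docs.items() if r == 1] for q, docs in qrels.items()}
negatives = {q: [d for d, r in docs.items() if r == 0] for q, docs in qrels.items()}
print(positives["q1"], negatives["q2"])  # ['doc1'] ['doc3']
```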
+
+ 2. To access the data using HuggingFace `datasets`:
+ ```
+ import datasets
+
+ language = 'german'  # or any of the 18 languages
+ subset = 'relevant'  # or 'non_relevant'
+ split = 'test'       # or 'dev' for the development split
+
+ # four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
+ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+
+ # iterate over the loaded split:
+ for data in nomiracl:
+     query_id = data['query_id']
+     query = data['query']
+     positive_passages = data['positive_passages']
+     negative_passages = data['negative_passages']
+
+     for entry in positive_passages:  # or 'negative_passages'
+         docid = entry['docid']
+         title = entry['title']
+         text = entry['text']
+ ```
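One common use of these fields is assembling a RAG-style prompt for the LLM under evaluation. The sketch below uses a made-up example that matches the dataset's feature schema; the prompt template is a hypothetical illustration, not the exact prompt used in the paper.

```
# A made-up example shaped like one NoMIRACL record.
example = {
    "query_id": "q1",
    "query": "What are today's atmospheric oxygen levels?",
    "positive_passages": [
        {"docid": "28742#27", "title": "Supercontinent",
         "text": "Oxygen levels ... today they are roughly 21 percent."},
    ],
    "negative_passages": [],
}

# Number the passages and join them into a context block.
passages = example["positive_passages"] + example["negative_passages"]
context = "\n".join(
    f"[{i + 1}] {p['title']}: {p['text']}" for i, p in enumerate(passages)
)

# Hypothetical prompt template: answer from the passages or abstain.
prompt = (
    "Answer using only the passages below, or say \"I don't know\".\n\n"
    f"{context}\n\nQuestion: {example['query']}\nAnswer:"
)
print(prompt)
```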
+
+ ## Dataset Statistics
+ For NoMIRACL dataset statistics, please refer to our publication [here](https://arxiv.org/abs/2312.11361).
+
+
+ ## Citation Information
+ ```
+ @article{thakur2023nomiracl,
+   title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
+   author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
+   journal={ArXiv},
+   year={2023},
+   volume={abs/2312.11361}
+ }
+ ```
nomiracl.py ADDED
@@ -0,0 +1,220 @@
+ # coding=utf-8
+ # Copyright 2022 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """NoMIRACL: A dataset for evaluating LLM robustness across 18 languages."""
+
+ import json
+ import csv
+ import datasets
+
+ from collections import defaultdict
+
+
+ _CITATION = """\
+ @article{thakur2023nomiracl,
+   title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
+   author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
+   journal={ArXiv},
+   year={2023},
+   volume={abs/2312.11361}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Data loader for the NoMIRACL dataset.
+ """
+
+ _URL = "https://github.com/project-miracl/nomiracl"
+
+ _DL_URL_FORMAT = "data/{name}"
+
+
+ def load_topics(filepath: str):
+     """Loads queries from a TSV file into a qid -> query dictionary."""
+     queries = {}
+     with open(filepath, 'r', encoding='utf-8') as f:
+         reader = csv.reader(f, delimiter='\t', quoting=csv.QUOTE_NONE)
+         for row in reader:
+             queries[row[0]] = row[1]
+     return queries
+
+
+ def load_corpus(filepath: str):
+     """Loads the JSONL corpus file into a docid -> {text, title} dictionary."""
+     corpus = {}
+     with open(filepath, encoding='utf8') as fIn:
+         for line in fIn:
+             line = json.loads(line)
+             corpus[line.get("docid")] = {
+                 "text": line.get("text", "").strip(),
+                 "title": line.get("title", "").strip(),
+             }
+     return corpus
+
+
+ def load_qrels(filepath: str):
+     """Loads TREC-style qrels into a qid -> {docid: relevance} dictionary."""
+     if filepath is None:
+         return None
+
+     qrels = defaultdict(dict)
+     with open(filepath, encoding="utf-8") as f:
+         for line in f:
+             qid, _, docid, rel = line.strip().split('\t')
+             qrels[qid][docid] = int(rel)
+     return qrels
+
+
+ class NoMIRACLConfig(datasets.BuilderConfig):
+     """BuilderConfig for NoMIRACL."""
+
+     def __init__(self, name, **kwargs):
+         """
+         Args:
+             name: `string`, name of the dataset config (= language)
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(NoMIRACLConfig, self).__init__(
+             version=datasets.Version("1.0.0", ""), name=name.lower(), **kwargs
+         )
+         # relative path to the full data inside the repo (for example `data/german`)
+         self.data_root_url = _DL_URL_FORMAT.format(name=name)
+
+
+ class NoMIRACL(datasets.GeneratorBasedBuilder):
+     """Multilingual NoMIRACL dataset."""
+
+     BUILDER_CONFIGS = [
+         NoMIRACLConfig(name="arabic", description="Arabic NoMIRACL dataset"),
+         NoMIRACLConfig(name="chinese", description="Chinese NoMIRACL dataset"),
+         NoMIRACLConfig(name="finnish", description="Finnish NoMIRACL dataset"),
+         NoMIRACLConfig(name="german", description="German NoMIRACL dataset"),
+         NoMIRACLConfig(name="indonesian", description="Indonesian NoMIRACL dataset"),
+         NoMIRACLConfig(name="korean", description="Korean NoMIRACL dataset"),
+         NoMIRACLConfig(name="russian", description="Russian NoMIRACL dataset"),
+         NoMIRACLConfig(name="swahili", description="Swahili NoMIRACL dataset"),
+         NoMIRACLConfig(name="thai", description="Thai NoMIRACL dataset"),
+         NoMIRACLConfig(name="bengali", description="Bengali NoMIRACL dataset"),
+         NoMIRACLConfig(name="english", description="English NoMIRACL dataset"),
+         NoMIRACLConfig(name="french", description="French NoMIRACL dataset"),
+         NoMIRACLConfig(name="hindi", description="Hindi NoMIRACL dataset"),
+         NoMIRACLConfig(name="japanese", description="Japanese NoMIRACL dataset"),
+         NoMIRACLConfig(name="persian", description="Persian NoMIRACL dataset"),
+         NoMIRACLConfig(name="spanish", description="Spanish NoMIRACL dataset"),
+         NoMIRACLConfig(name="telugu", description="Telugu NoMIRACL dataset"),
+         NoMIRACLConfig(name="yoruba", description="Yoruba NoMIRACL dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features({
+                 'query_id': datasets.Value('string'),
+                 'query': datasets.Value('string'),
+                 'positive_passages': [{
+                     'docid': datasets.Value('string'),
+                     'text': datasets.Value('string'),
+                     'title': datasets.Value('string'),
+                 }],
+                 'negative_passages': [{
+                     'docid': datasets.Value('string'),
+                     'text': datasets.Value('string'),
+                     'title': datasets.Value('string'),
+                 }],
+             }),
+             supervised_keys=None,
+             homepage=_URL,
+             citation=_CITATION,
+             task_templates=None,
+         )
+
+     def _split_generators(self, dl_manager):
+
+         # Download the corpus plus the qrels/topics for each split-subset pair.
+         downloaded_files = dl_manager.download_and_extract({
+             "corpus": self.config.data_root_url + "/corpus.jsonl.gz",
+             "dev": {"qrels": {"relevant": self.config.data_root_url + "/qrels/dev.relevant.tsv",
+                               "non_relevant": self.config.data_root_url + "/qrels/dev.non_relevant.tsv"},
+                     "topics": {"relevant": self.config.data_root_url + "/topics/dev.relevant.tsv",
+                                "non_relevant": self.config.data_root_url + "/topics/dev.non_relevant.tsv"}},
+             "test": {"qrels": {"relevant": self.config.data_root_url + "/qrels/test.relevant.tsv",
+                                "non_relevant": self.config.data_root_url + "/qrels/test.non_relevant.tsv"},
+                      "topics": {"relevant": self.config.data_root_url + "/topics/test.relevant.tsv",
+                                 "non_relevant": self.config.data_root_url + "/topics/test.non_relevant.tsv"}},
+         })
+
+         splits = [
+             datasets.SplitGenerator(
+                 name="dev.relevant",
+                 gen_kwargs={
+                     "corpus_path": downloaded_files["corpus"],
+                     "qrels_path": downloaded_files["dev"]["qrels"]["relevant"],
+                     "topics_path": downloaded_files["dev"]["topics"]["relevant"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name="dev.non_relevant",
+                 gen_kwargs={
+                     "corpus_path": downloaded_files["corpus"],
+                     "qrels_path": downloaded_files["dev"]["qrels"]["non_relevant"],
+                     "topics_path": downloaded_files["dev"]["topics"]["non_relevant"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name="test.relevant",
+                 gen_kwargs={
+                     "corpus_path": downloaded_files["corpus"],
+                     "qrels_path": downloaded_files["test"]["qrels"]["relevant"],
+                     "topics_path": downloaded_files["test"]["topics"]["relevant"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name="test.non_relevant",
+                 gen_kwargs={
+                     "corpus_path": downloaded_files["corpus"],
+                     "qrels_path": downloaded_files["test"]["qrels"]["non_relevant"],
+                     "topics_path": downloaded_files["test"]["topics"]["non_relevant"],
+                 },
+             ),
+         ]
+
+         return splits
+
+     def _generate_examples(self, corpus_path, qrels_path, topics_path):
+
+         corpus = load_corpus(corpus_path)
+         qrels = load_qrels(qrels_path)
+         topics = load_topics(topics_path)
+
+         for qid in topics:
+             data = {}
+             data['query_id'] = qid
+             data['query'] = topics[qid]
+
+             # Partition the judged documents into positives (rel == 1) and negatives (rel == 0).
+             pos_docids = [docid for docid, rel in qrels[qid].items() if rel == 1] if qrels is not None else []
+             neg_docids = [docid for docid, rel in qrels[qid].items() if rel == 0] if qrels is not None else []
+             data['positive_passages'] = [{
+                 'docid': docid,
+                 **corpus[docid]
+             } for docid in pos_docids if docid in corpus]
+             data['negative_passages'] = [{
+                 'docid': docid,
+                 **corpus[docid]
+             } for docid in neg_docids if docid in corpus]
+             yield qid, data