soerenray committed
Commit 6b7fb51 · 1 Parent(s): 5e0dc6e

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +306 -37

README.md CHANGED
@@ -1,40 +1,309 @@
  ---
- license: openrail
- dataset_info:
-   features:
-   - name: label_string
-     dtype: string
-   - name: probability
-     dtype: float64
-   - name: probability_vector
-     sequence: float32
-   - name: prediction
-     dtype: int64
-   - name: prediction_string
-     dtype: string
-   - name: embedding_reduced
-     sequence: float32
-   - name: __index_level_0__
-     dtype: int64
-   splits:
-   - name: train
-     num_bytes: 9172467
-     num_examples: 51093
-   - name: validation
-     num_bytes: 1220334
-     num_examples: 6799
-   - name: test
-     num_bytes: 552941
-     num_examples: 3081
-   download_size: 0
-   dataset_size: 10945742
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: validation
-     path: data/validation-*
-   - split: test
-     path: data/test-*
- ---
+ annotations_creators:
+ - other
+ language_creators:
+ - crowdsourced
+ language:
+ - en
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ - 10K<n<100K
+ source_datasets:
+ - extended|speech_commands
+ task_categories:
+ - audio-classification
+ task_ids:
+ - keyword-spotting
+ pretty_name: SpeechCommands
+ config_names:
+ - v0.01
+ - v0.02
+ tags:
+ - spotlight
+ - enriched
+ - renumics
+ - enhanced
+ - audio
+ - classification
+ - extended
+ ---
+
+ # Dataset Card for SpeechCommands
+
+ ## Dataset Description
+
+ - **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enrichment_only)
+ - **GitHub:** [Spotlight](https://github.com/Renumics/spotlight)
+ - **Dataset Homepage:** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
+ - **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
+ - **Leaderboard:** [More Information Needed]
+
+ ### Dataset Summary
+
+ 📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
+ At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
+
+ 🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
+ 1. Enable new researchers to quickly develop a profound understanding of the dataset.
+ 2. Popularize data-centric AI principles and tooling in the ML community.
+ 3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
+
+ 📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).
+
+
+ ### Explore the Dataset
+
+ The enrichments allow you to quickly gain insights into the dataset. The open-source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables this with just a few lines of code:
+
+ Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
+
+ ```python
+ !pip install renumics-spotlight datasets[audio]
+ ```
+ > **_Notice:_** On Linux, the non-Python dependency `libsndfile` must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
+
+ Load the dataset from the Hugging Face Hub in your notebook and start exploring with a simple view:
+
+ ```python
+ from renumics import spotlight
+ import datasets
+
+ # Load a single configuration and split; `spotlight.show` expects one dataset.
+ dataset = datasets.load_dataset("renumics/speech_commands_enriched", "v0.01", split="train")
+ spotlight.show(dataset, port=8000, dtype={"file": spotlight.Audio})
+ ```
+ You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
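+
+ The enrichment columns (e.g. `prediction`, `probability`, `embedding_reduced`) can be surfaced in the same way. A minimal sketch, assuming the column names from this dataset's enrichments:
+
+ ```python
+ # Hedged sketch: declare types for the enrichment columns so Spotlight
+ # renders them appropriately (column names are assumptions from this card).
+ spotlight.show(
+     dataset,
+     port=8000,
+     dtype={
+         "file": spotlight.Audio,
+         "embedding_reduced": spotlight.Embedding,  # reduced embedding per example
+     },
+ )
+ ```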
+
+
+ ### SpeechCommands Dataset
+
+ This is a set of one-second .wav audio files, each containing a single spoken
+ English word or background noise. These words are from a small set of commands and are spoken by a
+ variety of different speakers. The dataset is designed to help train simple
+ machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
+
+ Version 0.01 of the dataset (configuration `"v0.01"`) was released on August 3rd 2017 and contains
+ 64,727 audio files.
+
+ Version 0.02 of the dataset (configuration `"v0.02"`) was released on April 11th 2018 and
+ contains 105,829 audio files.
+
+
+ ### Supported Tasks and Leaderboards
+
+ * `keyword-spotting`: the dataset can be used to train and evaluate keyword
+ spotting systems. The task is to detect preregistered keywords by classifying utterances
+ into a predefined set of words. The task is usually performed on-device for
+ fast response time. Thus, accuracy, model size, and inference time are all crucial. A short inference sketch follows below.
+
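+ As a quick way to try keyword spotting on this data, a pretrained checkpoint from the Hub can be applied to a single example. This is a hedged sketch: the checkpoint `superb/wav2vec2-base-superb-ks` is an assumption, not part of this dataset's tooling.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed checkpoint: a wav2vec2 model fine-tuned for keyword spotting (SUPERB KS).
+ classifier = pipeline("audio-classification", model="superb/wav2vec2-base-superb-ks")
+
+ example = dataset[0]  # `dataset` as loaded above (v0.01, train split)
+ # The pipeline accepts a raw waveform array at the model's sampling rate (16 kHz here).
+ print(classifier(example["audio"]["array"]))
+ ```
+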
+ ### Languages
+
+ The language data in SpeechCommands is in English (BCP-47 `en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
+ ```python
+ {
+     "file": "no/7846fd85_nohash_0.wav",
+     "audio": {
+         "path": "no/7846fd85_nohash_0.wav",
+         "array": array([-0.00021362, -0.00027466, -0.00036621, ..., 0.00079346,
+                          0.00091553, 0.00079346]),
+         "sampling_rate": 16000
+     },
+     "label": 1,  # "no"
+     "is_unknown": False,
+     "speaker_id": "7846fd85",
+     "utterance_id": 0
+ }
+ ```
+
+ Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`):
+ ```python
+ {
+     "file": "tree/8b775397_nohash_0.wav",
+     "audio": {
+         "path": "tree/8b775397_nohash_0.wav",
+         "array": array([-0.00854492, -0.01339722, -0.02026367, ..., 0.00274658,
+                          0.00335693, 0.0005188]),
+         "sampling_rate": 16000
+     },
+     "label": 28,  # "tree"
+     "is_unknown": True,
+     "speaker_id": "1b88bf70",
+     "utterance_id": 0
+ }
+ ```
+
+ Example of the background noise (`_silence_`) class:
+
+ ```python
+ {
+     "file": "_silence_/doing_the_dishes.wav",
+     "audio": {
+         "path": "_silence_/doing_the_dishes.wav",
+         "array": array([ 0.        ,  0.        ,  0.        , ..., -0.00592041,
+                         -0.00405884, -0.00253296]),
+         "sampling_rate": 16000
+     },
+     "label": 30,  # "_silence_"
+     "is_unknown": False,
+     "speaker_id": "None",
+     "utterance_id": 0  # does not apply to _silence_
+ }
+ ```
+
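+ Since `is_unknown` separates core command words from auxiliary ones, the two groups can be split directly. A minimal sketch using `datasets.Dataset.filter` (with `dataset` as the train split loaded above):
+
+ ```python
+ # Split core command words from auxiliary words via the `is_unknown` flag.
+ core_words = dataset.filter(lambda example: not example["is_unknown"])
+ auxiliary_words = dataset.filter(lambda example: example["is_unknown"])
+ print(len(core_words), len(auxiliary_words))  # counts of core vs. auxiliary examples
+ ```
+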
+ ### Data Fields
+
+ * `file`: relative audio filename inside the original archive.
+ * `audio`: dictionary containing a relative audio filename,
+ a decoded audio array, and the sampling rate. Note that when accessing
+ the audio column (`dataset[0]["audio"]`), the audio is automatically decoded
+ and resampled to `dataset.features["audio"].sampling_rate`.
+ Decoding and resampling a large number of audio files can take a significant
+ amount of time. Thus, it is important to query the sample index before
+ the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred
+ over `dataset["audio"][0]` (see the sketch below).
+ * `label`: either the word pronounced in an audio sample or the background noise (`_silence_`) class.
+ Note that it is an integer value corresponding to the class name.
+ * `is_unknown`: whether a word is auxiliary. Equal to `False` if a word is a core word or `_silence_`,
+ `True` if a word is an auxiliary word.
+ * `speaker_id`: unique id of a speaker. Equal to `None` if the label is `_silence_`.
+ * `utterance_id`: incremental id of a word utterance within the same speaker.
+
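+ A short illustration of both points, assuming the `label` column is a `ClassLabel` feature as in the source dataset:
+
+ ```python
+ # Index the row first, then the column: only one audio file is decoded.
+ sample = dataset[0]["audio"]    # fast; `dataset["audio"][0]` would decode the whole column
+ print(sample["sampling_rate"])  # 16000
+
+ # Map the integer label back to its class name.
+ print(dataset.features["label"].int2str(dataset[0]["label"]))  # e.g. "no"
+ ```
+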
+ ### Data Splits
+
+ The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
+ contains more words (see section [Source Data](#source-data) for more details).
+
+ |       | train | validation | test |
+ |-------|------:|-----------:|-----:|
+ | v0.01 | 51093 |       6799 | 3081 |
+ | v0.02 | 84848 |       9982 | 4890 |
+
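+ The split sizes can be verified directly from the Hub. A small sketch:
+
+ ```python
+ # Load all splits of one configuration and print their sizes.
+ ds = datasets.load_dataset("renumics/speech_commands_enriched", "v0.01")
+ print({split: ds[split].num_rows for split in ds})
+ # expected: {'train': 51093, 'validation': 6799, 'test': 3081}
+ ```
+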
+ Note that in the train and validation sets, examples of the `_silence_` class are longer than one second.
+ You can use the following code to sample 1-second examples from the longer ones:
+
+ ```python
+ def sample_noise(example, silence_id=30):
+     # Use this function to extract random 1-second slices of each `_silence_`
+     # utterance, e.g. inside `torch.utils.data.Dataset.__getitem__()`.
+     # `silence_id` assumes the integer class id 30 for `_silence_`, as shown
+     # in the v0.01 data instance above.
+     from random import randint
+
+     if example["label"] == silence_id:
+         audio = example["audio"]
+         sampling_rate = audio["sampling_rate"]
+         random_offset = randint(0, len(audio["array"]) - sampling_rate - 1)
+         audio["array"] = audio["array"][random_offset : random_offset + sampling_rate]
+
+     return example
+ ```
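+
+ A hedged usage sketch: draw a fresh random slice each time an example is fetched, e.g. in a training loop:
+
+ ```python
+ # Each access draws a new random 1-second slice for _silence_ examples.
+ for example in dataset:
+     example = sample_noise(example)
+     ...  # feed example["audio"]["array"] to your model
+ ```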
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The primary goal of the dataset is to provide a way to build and test small
+ models that can detect a single word from a set of target words and differentiate it
+ from background noise or unrelated speech with as few false positives as possible.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The audio files were collected using crowdsourcing; see
+ [aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section)
+ for some of the open-source audio collection code that was used. The goal was to gather examples of
+ people speaking single-word commands, rather than conversational sentences, so
+ they were prompted for individual words over the course of a five-minute
+ session.
+
+ In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
+ "Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
+ "Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
+
+ In version 0.02, more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
+
+ In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
+ "Right", "On", "Off", "Stop", "Go". The other words are considered auxiliary (in the current implementation
+ they are marked by a `True` value of the `"is_unknown"` feature). Their function is to teach a model to distinguish core words
+ from unrecognized ones.
+
+ The `_silence_` label contains a set of longer audio clips that are either recordings or
+ a mathematical simulation of noise.
+
+ #### Who are the source language producers?
+
+ The audio files were collected using crowdsourcing.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Labels are drawn from a list of words prepared in advance.
+ Speakers were prompted for individual words over the course of a five-minute
+ session.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ The dataset consists of recordings from people who have donated their voices online. By using it, you agree not to attempt to determine the identity of speakers in this dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).
+
+ ### Citation Information
+
+ ```
+ @article{speechcommandsv2,
+     author = {{Warden}, P.},
+     title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
+     journal = {ArXiv e-prints},
+     archivePrefix = "arXiv",
+     eprint = {1804.03209},
+     primaryClass = "cs.CL",
+     keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
+     year = 2018,
+     month = apr,
+     url = {https://arxiv.org/abs/1804.03209},
+ }
+ ```
+
+ ### Contributions
+
+ [More Information Needed]