Samoed committed 66b8b27 (verified; 1 parent: 9a43f43)

Add dataset card

Files changed (1): README.md (+245, -0)
README.md CHANGED
---
annotations_creators:
- expert-annotated
language:
- afr
- amh
- arb
- aze
- bak
- bel
- bem
- ben
- bod
- bos
- bul
- cat
- ces
- ckb
- cym
- dan
- deu
- div
- dzo
- ell
- eng
- eus
- ewe
- fao
- fas
- fij
- fil
- fin
- fra
- fuc
- gle
- glg
- guj
- hau
- heb
- hin
- hmn
- hrv
- hun
- hye
- ibo
- ind
- isl
- ita
- jpn
- kan
- kat
- kaz
- khm
- kin
- kir
- kmr
- kor
- lao
- lav
- lit
- ltz
- mal
- mar
- mey
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nde
- nep
- nld
- nno
- nob
- nso
- nya
- orm
- pan
- pol
- por
- prs
- pus
- ron
- rus
- shi
- sin
- slk
- slv
- smo
- sna
- snd
- som
- spa
- sqi
- srp
- ssw
- swa
- swe
- tah
- tam
- tat
- tel
- tgk
- tha
- tir
- ton
- tsn
- tuk
- tur
- uig
- ukr
- urd
- uzb
- ven
- vie
- wol
- xho
- yor
- yue
- zho
- zul
license: cc-by-sa-4.0
multilinguality: translated
source_datasets:
- mteb/NTREX
task_categories:
- translation
task_ids: []
dataset_info:
  features:
  - name: afr_Latn
  # ... (features for the remaining languages and the full config list are truncated in this view)
configs:
  data_files:
  - split: test
    path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NTREXBitextMining</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

NTREX (News Test References for MT Evaluation) covers translation from English into 128 languages. Language pairs are selected following the M2M-100 language-grouping strategy, resulting in 1,916 translation directions.

|               |                                                 |
|---------------|-------------------------------------------------|
| Task category | t2t (text-to-text)                              |
| Domains       | News, Written                                   |
| Reference     | https://huggingface.co/datasets/davidstap/NTREX |

Source datasets:
- [mteb/NTREX](https://huggingface.co/datasets/mteb/NTREX)
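
If you want to inspect the aligned sentences directly rather than through `mteb`, the parquet files can be read with the `datasets` library. This is only a minimal sketch: it assumes the source repository `mteb/NTREX` exposes a default config with a `test` split and one text column per language (following the `afr_Latn`-style feature names above); adjust the repository id and column names to the dataset you actually load.

```python
from datasets import load_dataset

# Minimal sketch; repository id, split, and column names are assumptions
# based on the card metadata above and may need adjusting.
ds = load_dataset("mteb/NTREX", split="test")

# Each row is expected to hold parallel sentences keyed by language column.
row = ds[0]
print(row.get("eng_Latn"), "->", row.get("afr_Latn"))
```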

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

# Load the task and wrap it in an evaluation harness.
task = mteb.get_task("NTREXBitextMining")
evaluator = mteb.MTEB([task])

# YOUR_MODEL is a placeholder for a model name or instance supported by mteb.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
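
Beyond the generated snippet above, it can be useful to restrict the evaluation to a handful of the 128 languages and to persist the scores. The sketch below assumes that the `languages` filter of `mteb.get_tasks` narrows this task's language pairs accordingly; the model name is only an example of something `mteb.get_model` accepts.

```python
import mteb

# Select the task and (assumed) restrict it to a few languages
# instead of all 1,916 translation directions.
tasks = mteb.get_tasks(tasks=["NTREXBitextMining"], languages=["eng", "deu", "fra"])
evaluator = mteb.MTEB(tasks)

# Example model name; any embedding model supported by mteb works here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
evaluator.run(model, output_folder="results/NTREXBitextMining")
```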

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing done as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@inproceedings{federmann-etal-2022-ntrex,
  address = {Online},
  author = {Federmann, Christian and Kocmi, Tom and Xin, Ying},
  booktitle = {Proceedings of the First Workshop on Scaling Up Multilingual Evaluation},
  month = {nov},
  pages = {21--24},
  publisher = {Association for Computational Linguistics},
  title = {{NTREX}-128 {--} News Test References for {MT} Evaluation of 128 Languages},
  url = {https://aclanthology.org/2022.sumeval-1.4},
  year = {2022},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary>Dataset Statistics</summary>

The descriptive statistics for this task are shown below. They can also be obtained programmatically:

```python
import mteb

# Descriptive statistics are stored in the task metadata.
task = mteb.get_task("NTREXBitextMining")
desc_stats = task.metadata.descriptive_stats
```

```json
{
  "test": {
    "num_samples": 3826252,
    "number_of_characters": 988355274,
    "unique_pairs": 3820263,
    "min_sentence1_length": 1,
    "average_sentence1_length": 129.15449296073547,
    "max_sentence1_length": 773,
    "unique_sentence1": 241259,
    "min_sentence2_length": 1,
    "average_sentence2_length": 129.15449296073547,
    "max_sentence2_length": 773,
    "unique_sentence2": 241259
  }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*