gsaltintas committed
Commit e93089c · verified · 1 Parent(s): fd7e438

Uploading tokenizer_robustness_completion_stem_canonical subset

README.md CHANGED
@@ -7,6 +7,136 @@ pretty_name: Tokenization Robustness
 tags:
 - multilingual
 - tokenization
+dataset_info:
+  config_name: tokenizer_robustness_completion_stem_canonical
+  features:
+  - name: question
+    dtype: string
+  - name: choices
+    list: string
+  - name: answer
+    dtype: int64
+  - name: answer_label
+    dtype: string
+  - name: split
+    dtype: string
+  - name: subcategories
+    dtype: string
+  - name: lang
+    dtype: string
+  - name: second_lang
+    dtype: string
+  - name: notes
+    dtype: string
+  - name: id
+    dtype: string
+  - name: set_id
+    dtype: string
+  - name: variation_id
+    dtype: string
+  - name: question_general_category
+    dtype: string
+  - name: vanilla_cos_sim_to_canonical
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: float64
+    - name: Qwen/Qwen3-8B
+      dtype: float64
+    - name: bigscience/bloom
+      dtype: float64
+    - name: common-pile/comma-v0.1-1t
+      dtype: float64
+    - name: facebook/xglm-564M
+      dtype: float64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: float64
+    - name: google/byt5-small
+      dtype: float64
+    - name: google/gemma-2-2b
+      dtype: float64
+    - name: gpt2
+      dtype: float64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: float64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: float64
+    - name: mistralai/tekken
+      dtype: float64
+    - name: tiktoken/gpt-4o
+      dtype: float64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: float64
+  - name: trimmed_cos_sim_to_canonical
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: float64
+    - name: Qwen/Qwen3-8B
+      dtype: float64
+    - name: bigscience/bloom
+      dtype: float64
+    - name: common-pile/comma-v0.1-1t
+      dtype: float64
+    - name: facebook/xglm-564M
+      dtype: float64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: float64
+    - name: google/byt5-small
+      dtype: float64
+    - name: google/gemma-2-2b
+      dtype: float64
+    - name: gpt2
+      dtype: float64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: float64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: float64
+    - name: mistralai/tekken
+      dtype: float64
+    - name: tiktoken/gpt-4o
+      dtype: float64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: float64
+  - name: token_counts
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: int64
+    - name: Qwen/Qwen3-8B
+      dtype: int64
+    - name: bigscience/bloom
+      dtype: int64
+    - name: common-pile/comma-v0.1-1t
+      dtype: int64
+    - name: facebook/xglm-564M
+      dtype: int64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: int64
+    - name: google/byt5-small
+      dtype: int64
+    - name: google/gemma-2-2b
+      dtype: int64
+    - name: gpt2
+      dtype: int64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: int64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: int64
+    - name: mistralai/tekken
+      dtype: int64
+    - name: tiktoken/gpt-4o
+      dtype: int64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: int64
+  splits:
+  - name: test
+    num_bytes: 23517
+    num_examples: 44
+  download_size: 32406
+  dataset_size: 23517
+configs:
+- config_name: tokenizer_robustness_completion_stem_canonical
+  data_files:
+  - split: test
+    path: tokenizer_robustness_completion_stem_canonical/test-*
 ---
 
 # Dataset Card for Tokenization Robustness
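
For reference, the config added above can be loaded directly with the `datasets` library. This is a minimal sketch; the repository id used below is an assumption inferred from the committer and dataset name, not something stated in this commit.

```python
from datasets import load_dataset

# NOTE: the repo id is an assumption; substitute the actual Hub path of this dataset.
ds = load_dataset(
    "gsaltintas/tokenization-robustness",
    "tokenizer_robustness_completion_stem_canonical",
    split="test",
)

print(ds.num_rows)  # 44 examples, per the splits metadata above

example = ds[0]
print(example["question"], example["choices"], example["answer_label"])

# Per-tokenizer metrics are structs keyed by tokenizer name.
print(example["token_counts"]["gpt2"])
print(example["vanilla_cos_sim_to_canonical"]["google/gemma-2-2b"])
```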
tokenizer_robustness_completion_stem_canonical/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1eb194cd6be84a58225522bd80d32bffe66e56b02c57ce7dfcde00c43aa1ac93
+size 32406
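
The shard is committed as a Git LFS pointer, so the `oid` and `size` fields above describe the Parquet blob itself. A small sketch (the local path is an assumption) that checks a downloaded copy against the pointer and then reads it as ordinary Parquet:

```python
import hashlib
import pandas as pd

# Local path to the downloaded shard (assumption; adjust to where the file lands).
path = "tokenizer_robustness_completion_stem_canonical/test-00000-of-00001.parquet"

# The LFS pointer records the blob's byte size and its sha256 digest (the oid).
data = open(path, "rb").read()
assert len(data) == 32406
assert hashlib.sha256(data).hexdigest() == (
    "1eb194cd6be84a58225522bd80d32bffe66e56b02c57ce7dfcde00c43aa1ac93"
)

# The resolved file is a regular Parquet table.
df = pd.read_parquet(path)
print(len(df))               # 44 rows, matching num_examples in the README
print(df.columns.tolist())   # question, choices, answer, ... token_counts
```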