Datasets:
- tokenizer_robustness_completion_stem_canonical
- tokenizer_robustness_completion_stem_character_deletion
- tokenizer_robustness_completion_stem_colloquial
- tokenizer_robustness_completion_stem_compounds
- tokenizer_robustness_completion_stem_diacriticized_styling
- tokenizer_robustness_completion_stem_double_struck
- tokenizer_robustness_completion_stem_enclosed_characters