Upload tokenizer files (vocab, config, README)
README.md +1 -1
vocab.json +0 -0
README.md CHANGED
@@ -15,6 +15,6 @@ This tokenizer was trained using a Python-only pipeline (no `transformers` or `t
 
 ```python
 from transformers import PreTrainedTokenizerFast
-tokenizer = PreTrainedTokenizerFast.from_pretrained("
+tokenizer = PreTrainedTokenizerFast.from_pretrained("goabonga/wikitext-2-raw-v1")
 ```
 
vocab.json CHANGED
The diff for this file is too large to render. See raw diff.
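For context, a `vocab.json` in a tokenizer repository like this one typically holds a flat token-to-id JSON mapping. A minimal sketch of reading and using such a file with only the standard library, assuming that format (the vocabulary contents below are illustrative, since the actual diff is too large to render):

```python
import json
from pathlib import Path

# Illustrative vocabulary in the flat token-to-id layout that vocab.json
# files commonly use; the real file's contents are not shown in this diff.
vocab = {"<unk>": 0, "the": 1, "of": 2, "and": 3}
Path("vocab.json").write_text(json.dumps(vocab))

# Load the mapping back and build the inverse id-to-token table.
loaded = json.loads(Path("vocab.json").read_text())
id_to_token = {idx: tok for tok, idx in loaded.items()}

# Toy whitespace-split lookup with an <unk> fallback -- a stand-in for
# real tokenization, just to show how the mapping is consumed.
def encode(text: str) -> list[int]:
    return [loaded.get(tok, loaded["<unk>"]) for tok in text.split()]

print(encode("the cat and the dog"))
```

The `PreTrainedTokenizerFast.from_pretrained` call shown in the README diff would load the full vocabulary and merge rules from the hub repository instead of this hand-built mapping.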