Datasets · Tasks: Image-to-Text · Modalities: Text · Formats: webdataset · Languages: English · Size: 1K - 10K
Update README.md
README.md CHANGED
@@ -224,7 +224,7 @@ Estimating the number of tokens is done using a `LlamaTokenizer` from `tokenizer
 #### Train
 * `pdfa-eng-train-*.tar`
   * Downloaded on 2024/01/22
-  * 1800 shards, 2,159,432 samples,
+  * 1800 shards, 2,159,432 samples, 18M pages, 9.7 billion tokens (around 5 billion words)

 ## Additional Information
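The hunk context above notes that token counts are estimated with a `LlamaTokenizer`. As a rough illustration of how a figure like the 9.7 billion tokens added here could be reproduced from the train shards, below is a minimal sketch; the tokenizer checkpoint, the shard path pattern, and the per-sample `json` layout (a `pages` list with per-page `text` lines) are assumptions for illustration, not details taken from this commit.

```python
# Minimal sketch, not the dataset authors' script. Assumed: tokenizer
# checkpoint, local shard paths, and the per-sample `json` annotation layout.
import json

import webdataset as wds
from transformers import AutoTokenizer

# Any Llama-family tokenizer gives a comparable rough count; this checkpoint is an assumption.
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

# Stream the downloaded train shards; values arrive as raw bytes per key.
dataset = wds.WebDataset("pdfa-eng-train-{0000..1799}.tar")

total_pages = 0
total_tokens = 0
for sample in dataset:
    meta = json.loads(sample["json"])          # assumed: per-document text annotations
    for page in meta.get("pages", []):         # assumed: one entry per rendered page
        text = " ".join(page.get("text", []))  # assumed: list of text lines per page
        total_tokens += len(tokenizer.encode(text, add_special_tokens=False))
        total_pages += 1

print(f"pages={total_pages:,}  tokens={total_tokens:,}")
```

Streaming shard by shard keeps memory flat, so a count like this can run over all 1800 tars without unpacking them first.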