Commit 4a88c50
Parent(s): 706126a

Update README.md

README.md CHANGED
@@ -25,6 +25,8 @@ To train ElhBERTeu, we collected different corpora sources from several domains:
 
 ElhBERTeu is a base, cased monolingual BERT model for Basque, with a vocab size of 50K, which has 124M parameters in total.
 
+There is a medium-size model available here: [ElhBERTeu-medium](https://huggingface.co/orai-nlp/ElhBERTeu-medium)
+
 ElhBERTeu was trained following the design decisions for [BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased). The tokenizer and the hyper-parameter settings remained the same (batch_size=256), with the only difference being that the full pre-training of the model (1M steps) was performed with a sequence length of 512 on a v3-8 TPU.
 
 The model has been evaluated on the recently created [BasqueGLUE](https://github.com/Elhuyar/BasqueGLUE) NLU benchmark:
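For context, a minimal usage sketch (not part of this commit) showing how a cased Basque BERT model like this can be loaded for masked-token prediction with Hugging Face transformers. The hub id `orai-nlp/ElhBERTeu` is an assumption inferred from the ElhBERTeu-medium link above, and the Basque probe sentence is a hypothetical example; the medium variant would load the same way with `orai-nlp/ElhBERTeu-medium`.

```python
# Sketch: load ElhBERTeu and predict a masked token.
# Assumed hub id "orai-nlp/ElhBERTeu" (inferred from the
# ElhBERTeu-medium link above, not confirmed by this commit).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("orai-nlp/ElhBERTeu")
model = AutoModelForMaskedLM.from_pretrained("orai-nlp/ElhBERTeu")

# Hypothetical Basque probe: "Euskara [MASK] hizkuntza da."
inputs = tokenizer("Euskara [MASK] hizkuntza da.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and show the top-5 candidate tokens.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```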