Update README.md
README.md CHANGED

@@ -1,5 +1,10 @@
 ---
 library_name: keras-hub
+license: apache-2.0
+language:
+- en
+tags:
+- text-classification
 ---
 ## Model Overview
 BERT (Bidirectional Encoder Representations from Transformers) is a set of language models published by Google. They are intended for classification and embedding of text, not for text-generation. See the model card below for benchmarks, data sources, and intended use cases.

@@ -140,4 +145,4 @@ classifier = keras_hub.models.BertClassifier.from_preset(
     preprocessor=None,
 )
 classifier.fit(x=features, y=labels, batch_size=2)
-```
+```
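
For context on the snippet touched by the second hunk, here is a minimal, self-contained sketch of fine-tuning the classifier with KerasHub's default preprocessing. The preset name and toy data are assumptions for illustration, not part of this change:

```python
import keras_hub

# Toy data for illustration only.
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 1]

# Preset name is an assumption; see the model card for the presets this repo ships.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_en_uncased",
    num_classes=2,
)

# With the default preprocessor attached, raw strings are tokenized automatically.
classifier.fit(x=features, y=labels, batch_size=2)
```

Note that the README snippet in the hunk passes `preprocessor=None`, which means inputs must already be tokenized; the sketch above keeps the default preprocessor so plain strings can be passed directly to `fit`.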