doberst committed · verified
Commit 2c7f027 · Parent: 58dfbe1

Upload 11 files

README.md CHANGED
@@ -1,3 +1,217 @@
- ---
- license: apache-2.0
- ---
+ ---
+ library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - reranker
+ - cross-encoder
+ - transformers.js
+ pipeline_tag: text-classification
+ ---
+
+ <br><br>
+
+ <p align="center">
+ <img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
+ </p>
+
+ <p align="center">
+ <b>Trained by <a href="https://jina.ai/">Jina AI</a>.</b>
+ </p>
+
+ # jina-reranker-v1-tiny-en
+
+ This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. It builds on our [JinaBERT](https://arxiv.org/abs/2310.19923) model, a variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-tiny-en` to process significantly longer sequences of text than most other reranking models, up to **8,192** tokens.
+
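For intuition, here is a minimal sketch of how a symmetric (bidirectional) ALiBi bias can be constructed: each attention head gets a fixed slope, and the bias added to the attention logits is the negative slope times the absolute token distance, so no learned position embeddings cap the sequence length. This is an illustration only, not the JinaBERT code in this repository, and it uses the power-of-two slope formula from the ALiBi paper for simplicity.

```python
# Illustrative sketch of a symmetric (bidirectional) ALiBi bias -- not the actual JinaBERT implementation.
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric sequence of per-head slopes (power-of-two head counts, as in the ALiBi paper).
    start = 2 ** (-8 / num_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(num_heads)])
    # Symmetric distance matrix |i - j|: the bidirectional variant penalizes by distance in both directions.
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).abs()
    # Bias of shape (num_heads, seq_len, seq_len), added to the attention logits.
    return -slopes[:, None, None] * distance[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=16)  # works for any length; the model uses this idea up to 8,192 tokens
```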
+ To achieve this remarkable speed, `jina-reranker-v1-tiny-en` employs a technique called knowledge distillation: a larger but slower model (such as our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. The student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time.
+
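As an illustration of the idea only (this is not Jina's training code; the teacher checkpoint, loss, data, and hyperparameters here are assumptions), a distillation step for a cross-encoder reranker can look like this: the teacher scores query-document pairs, and the student is trained to reproduce those scores.

```python
# Hypothetical distillation step for a cross-encoder student -- a sketch, not Jina's actual recipe.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

teacher_id = "jinaai/jina-reranker-v1-turbo-en"  # stand-in teacher; the real teacher (base-en) is API-only
student_id = "jinaai/jina-reranker-v1-tiny-en"

tokenizer = AutoTokenizer.from_pretrained(student_id)
teacher = AutoModelForSequenceClassification.from_pretrained(teacher_id, num_labels=1, trust_remote_code=True).eval()
student = AutoModelForSequenceClassification.from_pretrained(student_id, num_labels=1, trust_remote_code=True)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

pairs = [("organic skincare for sensitive skin", "Natural organic skincare range for sensitive skin")]
inputs = tokenizer([q for q, _ in pairs], [d for _, d in pairs],
                   padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    teacher_scores = teacher(**inputs).logits.squeeze(-1)  # teacher relevance scores
student_scores = student(**inputs).logits.squeeze(-1)

optimizer.zero_grad()
loss = F.mse_loss(student_scores, teacher_scores)  # student learns to mimic the teacher
loss.backward()
optimizer.step()
```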
+ Here's a breakdown of the reranker models we provide:
+
+ | Model Name | Layers | Hidden Size | Parameters (Millions) |
+ | ------------------------------------------------------------------------------------ | ------ | ----------- | --------------------- |
+ | [jina-reranker-v1-base-en](https://jina.ai/reranker/) | 12 | 768 | 137.0 |
+ | [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 |
+ | [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 |
+
+ > Currently, the `jina-reranker-v1-base-en` model is not available on Hugging Face. You can access it via the [Jina AI Reranker API](https://jina.ai/reranker/).
+
+ As you can see, the `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. The `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference speeds with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial.
+
+ # Usage
+
+ 1. The easiest way to start using `jina-reranker-v1-tiny-en` is through Jina AI's [Reranker API](https://jina.ai/reranker/), as in the `curl` call below (a Python equivalent is sketched right after it).
+
+ ```bash
+ curl https://api.jina.ai/v1/rerank \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer YOUR_API_KEY" \
+   -d '{
+     "model": "jina-reranker-v1-tiny-en",
+     "query": "Organic skincare products for sensitive skin",
+     "documents": [
+       "Eco-friendly kitchenware for modern homes",
+       "Biodegradable cleaning supplies for eco-conscious consumers",
+       "Organic cotton baby clothes for sensitive skin",
+       "Natural organic skincare range for sensitive skin",
+       "Tech gadgets for smart homes: 2024 edition",
+       "Sustainable gardening tools and compost solutions",
+       "Sensitive skin-friendly facial cleansers and toners",
+       "Organic food wraps and storage solutions",
+       "All-natural pet food for dogs with allergies",
+       "Yoga mats made from recycled materials"
+     ],
+     "top_n": 3
+   }'
+ ```
+
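For reference, the same Reranker API call can be made from Python. This is a minimal sketch using the standard `requests` library; the endpoint and payload mirror the `curl` example above, and the response handling is kept generic rather than assuming a specific response schema.

```python
# Minimal sketch: calling the Jina Reranker API from Python with `requests`.
import requests

response = requests.post(
    "https://api.jina.ai/v1/rerank",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # replace with your Jina AI API key
    },
    json={
        "model": "jina-reranker-v1-tiny-en",
        "query": "Organic skincare products for sensitive skin",
        "documents": [
            "Natural organic skincare range for sensitive skin",
            "Eco-friendly kitchenware for modern homes",
            "Sensitive skin-friendly facial cleansers and toners",
        ],
        "top_n": 3,
    },
)
response.raise_for_status()
print(response.json())  # inspect the ranked results returned by the API
```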
+ 2. Alternatively, you can use the `sentence-transformers` library (`sentence-transformers>=0.27.0`). You can install it via pip:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then, you can use the following code to interact with the model:
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ # Load the model; here we use our tiny-sized model
+ model = CrossEncoder("jinaai/jina-reranker-v1-tiny-en", trust_remote_code=True)
+
+ # Example query and documents
+ query = "Organic skincare products for sensitive skin"
+ documents = [
+     "Eco-friendly kitchenware for modern homes",
+     "Biodegradable cleaning supplies for eco-conscious consumers",
+     "Organic cotton baby clothes for sensitive skin",
+     "Natural organic skincare range for sensitive skin",
+     "Tech gadgets for smart homes: 2024 edition",
+     "Sustainable gardening tools and compost solutions",
+     "Sensitive skin-friendly facial cleansers and toners",
+     "Organic food wraps and storage solutions",
+     "All-natural pet food for dogs with allergies",
+     "Yoga mats made from recycled materials"
+ ]
+
+ results = model.rank(query, documents, return_documents=True, top_k=3)
+ ```
+
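The `rank` call returns one entry per document, sorted by relevance; with `return_documents=True` each entry also carries the document text. A quick way to inspect the top results (the `corpus_id` / `score` / `text` field names mirror the structure used in the Transformers.js example below):

```python
# Print the top-ranked results from CrossEncoder.rank.
for hit in results:
    print(f"{hit['score']:.4f}  (doc {hit['corpus_id']})  {hit['text']}")
```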
+ 3. You can also use the `transformers` library to interact with the model programmatically. Make sure `transformers` is installed (`pip install transformers`) first:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+
+ model = AutoModelForSequenceClassification.from_pretrained(
+     'jinaai/jina-reranker-v1-tiny-en', num_labels=1, trust_remote_code=True
+ )
+
+ # Example query and documents
+ query = "Organic skincare products for sensitive skin"
+ documents = [
+     "Eco-friendly kitchenware for modern homes",
+     "Biodegradable cleaning supplies for eco-conscious consumers",
+     "Organic cotton baby clothes for sensitive skin",
+     "Natural organic skincare range for sensitive skin",
+     "Tech gadgets for smart homes: 2024 edition",
+     "Sustainable gardening tools and compost solutions",
+     "Sensitive skin-friendly facial cleansers and toners",
+     "Organic food wraps and storage solutions",
+     "All-natural pet food for dogs with allergies",
+     "Yoga mats made from recycled materials"
+ ]
+
+ # Construct sentence pairs
+ sentence_pairs = [[query, doc] for doc in documents]
+
+ scores = model.compute_score(sentence_pairs)
+ ```
+
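The `compute_score` helper comes from the model's remote code. If you prefer to stay with the plain `transformers` API, a roughly equivalent sketch looks like the following; it reuses `query` and `documents` from the snippet above and assumes the relevance score is `sigmoid(logit)`, mirroring what the Transformers.js example in the next step does. Because the model accepts sequences up to 8,192 tokens, `max_length` can be raised for long documents.

```python
# Sketch: scoring query-document pairs with the plain transformers API (assumption: score = sigmoid(logit)).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = 'jinaai/jina-reranker-v1-tiny-en'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=1, trust_remote_code=True
).eval()

inputs = tokenizer(
    [query] * len(documents),   # repeat the query for each document
    documents,                  # text pairs: (query, document)
    padding=True,
    truncation=True,
    max_length=8192,            # the model supports up to 8,192 tokens
    return_tensors='pt',
)

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (num_documents, 1)
scores = torch.sigmoid(logits).squeeze(-1).tolist()
```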
+ 4. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)!
+
+ If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
+ ```bash
+ npm i @xenova/transformers
+ ```
+
+ Then, you can use the following code to interact with the model:
+ ```js
+ import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';
+
+ const model_id = 'jinaai/jina-reranker-v1-tiny-en';
+ const model = await AutoModelForSequenceClassification.from_pretrained(model_id, { quantized: false });
+ const tokenizer = await AutoTokenizer.from_pretrained(model_id);
+
+ /**
+  * Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
+  * @param {string} query A single query
+  * @param {string[]} documents A list of documents
+  * @param {Object} options Options for ranking
+  * @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
+  * @param {boolean} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
+  */
+ async function rank(query, documents, {
+     top_k = undefined,
+     return_documents = false,
+ } = {}) {
+     const inputs = tokenizer(
+         new Array(documents.length).fill(query),
+         { text_pair: documents, padding: true, truncation: true }
+     )
+     const { logits } = await model(inputs);
+     return logits.sigmoid().tolist()
+         .map(([score], i) => ({
+             corpus_id: i,
+             score,
+             ...(return_documents ? { text: documents[i] } : {})
+         })).sort((a, b) => b.score - a.score).slice(0, top_k);
+ }
+
+ // Example usage:
+ const query = "Organic skincare products for sensitive skin"
+ const documents = [
+     "Eco-friendly kitchenware for modern homes",
+     "Biodegradable cleaning supplies for eco-conscious consumers",
+     "Organic cotton baby clothes for sensitive skin",
+     "Natural organic skincare range for sensitive skin",
+     "Tech gadgets for smart homes: 2024 edition",
+     "Sustainable gardening tools and compost solutions",
+     "Sensitive skin-friendly facial cleansers and toners",
+     "Organic food wraps and storage solutions",
+     "All-natural pet food for dogs with allergies",
+     "Yoga mats made from recycled materials",
+ ]
+
+ const results = await rank(query, documents, { return_documents: true, top_k: 3 });
+ console.log(results);
+ ```
+
+ That's it! You can now use the `jina-reranker-v1-tiny-en` model in your projects.
+
+ # Evaluation
+
+ We evaluated Jina Reranker on three key benchmarks to ensure top-tier performance and search relevance.
+
+ | Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) |
+ | ------------------------------------------ | -------------------------- | ------------------------- | ------------------------- |
+ | `jina-reranker-v1-base-en` | **52.45** | **87.31** | **85.53** |
+ | `jina-reranker-v1-turbo-en` | **49.60** | **69.21** | **85.13** |
+ | `jina-reranker-v1-tiny-en` (you are here) | **48.54** | **70.29** | **85.00** |
+ | `mxbai-rerank-base-v1` | 49.19 | - | 82.50 |
+ | `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 |
+ | `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 |
+ | `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 |
+ | `bge-reranker-base` | 47.89 | - | 83.03 |
+
+ **Note:**
+
+ - `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results (a small worked example is shown below). `Hit Rate` measures the percentage of queries for which a relevant document appears in the top 10 search results.
+ - LoCo results are not reported for the other models because they **do not support** documents longer than 512 tokens.
+
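As a quick illustration of the metric (toy relevance labels, not taken from the benchmark), NDCG@10 compares the discounted gain of the predicted ranking against the ideal ranking of the same documents:

```python
# Toy NDCG@10 computation -- illustrative only, not the benchmark code.
import math

def dcg(relevances):
    # Discounted cumulative gain: relevant items ranked higher contribute more.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_10(predicted_relevances):
    ideal = sorted(predicted_relevances, reverse=True)
    return dcg(predicted_relevances[:10]) / dcg(ideal[:10])

# Relevance labels of the documents in the order the reranker returned them:
print(round(ndcg_at_10([3, 2, 0, 1, 0, 0, 0, 0, 0, 0]), 4))  # 1.0 would be a perfect ranking
```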
+ For more details, please refer to our [benchmarking sheets](https://docs.google.com/spreadsheets/d/1V8pZjENdBBqrKMzZzOWc2aL60wtnR0yrEBY3urfO5P4/edit?usp=sharing).
+
+ # Contact
+
+ Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas.
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "jinaai/jina-bert-implementation",
+   "architectures": ["JinaBertModel"],
+   "attention_probs_dropout_prob": 0.1,
+   "attn_implementation": "torch",
+   "auto_map": {
+     "AutoConfig": "configuration_bert.JinaBertConfig",
+     "AutoModel": "modeling_bert.JinaBertModel",
+     "AutoModelForMaskedLM": "modeling_bert.JinaBertForMaskedLM",
+     "AutoModelForQuestionAnswering": "modeling_bert.JinaBertForQuestionAnswering",
+     "AutoModelForSequenceClassification": "modeling_bert.JinaBertForSequenceClassification",
+     "AutoModelForTokenClassification": "modeling_bert.JinaBertForTokenClassification"
+   },
+   "classifier_dropout": null,
+   "emb_pooler": "mean",
+   "feed_forward_type": "geglu",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 8192,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 4,
+   "pad_token_id": 0,
+   "position_embedding_type": "alibi",
+   "torch_dtype": "float16",
+   "transformers_version": "4.30.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 61056
+ }
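The config wires the custom `JinaBert*` classes in via `auto_map`, so loading it requires `trust_remote_code=True`. A small sketch (field names taken from the config above) to confirm the architecture parameters:

```python
# Sketch: load the custom config above and print the key architecture fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("jinaai/jina-reranker-v1-tiny-en", trust_remote_code=True)
print(config.num_hidden_layers)        # 4 layers
print(config.hidden_size)              # 384
print(config.max_position_embeddings)  # 8192 -- the long-context limit described in the README
print(config.position_embedding_type)  # "alibi"
```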
configuration_bert.py ADDED
@@ -0,0 +1,168 @@
+ # coding=utf-8
+ # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
+ # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
+ # Copyright (c) 2023 Jina AI GmbH. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ BERT model configuration"""
+ from collections import OrderedDict
+ from typing import Mapping
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.onnx import OnnxConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class JinaBertConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`JinaBertModel`]. It is used to
+     instantiate a BERT model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a similar configuration to that of the BERT
+     [bert-base-uncased](https://huggingface.co/bert-base-uncased) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 30522):
+             Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`BertModel`] or [`TFBertModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the encoder layers and the pooler layer.
+         num_hidden_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 12):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         intermediate_size (`int`, *optional*, defaults to 3072):
+             Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
+         hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
+             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+             `"relu"`, `"silu"` and `"gelu_new"` are supported.
+         hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+         attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout ratio for the attention probabilities.
+         max_position_embeddings (`int`, *optional*, defaults to 512):
+             The maximum sequence length that this model might ever be used with. Typically set this to something large
+             just in case (e.g., 512 or 1024 or 2048).
+         type_vocab_size (`int`, *optional*, defaults to 2):
+             The vocabulary size of the `token_type_ids` passed when calling [`BertModel`] or [`TFBertModel`].
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_eps (`float`, *optional*, defaults to 1e-12):
+             The epsilon used by the layer normalization layers.
+         position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
+             Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
+             positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
+             [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
+             For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
+             with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
+         is_decoder (`bool`, *optional*, defaults to `False`):
+             Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         classifier_dropout (`float`, *optional*):
+             The dropout ratio for the classification head.
+         feed_forward_type (`str`, *optional*, defaults to `"original"`):
+             The type of feed forward layer to use in the bert layers.
+             Can be one of GLU variants, e.g. `"reglu"`, `"geglu"`
+         emb_pooler (`str`, *optional*, defaults to `None`):
+             The function to use for pooling the last layer embeddings to get the sentence embeddings.
+             Should be one of `None`, `"mean"`.
+         attn_implementation (`str`, *optional*, defaults to `"torch"`):
+             The implementation of the self-attention layer. Can be one of:
+             - `None` for the original implementation,
+             - `torch` for the PyTorch SDPA implementation,
+
+     Examples:
+
+     ```python
+     >>> from transformers import JinaBertConfig, JinaBertModel
+
+     >>> # Initializing a JinaBert configuration
+     >>> configuration = JinaBertConfig()
+
+     >>> # Initializing a model (with random weights) from the configuration
+     >>> model = JinaBertModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+
+     >>> # Encode text inputs
+     >>> embeddings = model.encode(text_inputs)
+     ```"""
+     model_type = "bert"
+
+     def __init__(
+         self,
+         vocab_size=30522,
+         hidden_size=768,
+         num_hidden_layers=12,
+         num_attention_heads=12,
+         intermediate_size=3072,
+         hidden_act="gelu",
+         hidden_dropout_prob=0.1,
+         attention_probs_dropout_prob=0.1,
+         max_position_embeddings=512,
+         type_vocab_size=2,
+         initializer_range=0.02,
+         layer_norm_eps=1e-12,
+         pad_token_id=0,
+         position_embedding_type="absolute",
+         use_cache=True,
+         classifier_dropout=None,
+         feed_forward_type="original",
+         emb_pooler=None,
+         attn_implementation='torch',
+         **kwargs,
+     ):
+         super().__init__(pad_token_id=pad_token_id, **kwargs)
+
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.hidden_act = hidden_act
+         self.intermediate_size = intermediate_size
+         self.hidden_dropout_prob = hidden_dropout_prob
+         self.attention_probs_dropout_prob = attention_probs_dropout_prob
+         self.max_position_embeddings = max_position_embeddings
+         self.type_vocab_size = type_vocab_size
+         self.initializer_range = initializer_range
+         self.layer_norm_eps = layer_norm_eps
+         self.position_embedding_type = position_embedding_type
+         self.use_cache = use_cache
+         self.classifier_dropout = classifier_dropout
+         self.feed_forward_type = feed_forward_type
+         self.emb_pooler = emb_pooler
+         self.attn_implementation = attn_implementation
+
+ class JinaBertOnnxConfig(OnnxConfig):
+     @property
+     def inputs(self) -> Mapping[str, Mapping[int, str]]:
+         if self.task == "multiple-choice":
+             dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
+         else:
+             dynamic_axis = {0: "batch", 1: "sequence"}
+         return OrderedDict(
+             [
+                 ("input_ids", dynamic_axis),
+                 ("attention_mask", dynamic_axis),
+                 ("token_type_ids", dynamic_axis),
+             ]
+         )
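For reference, a tiny sketch (not part of the original file; the values are copied from the `config.json` above) showing how this configuration class maps onto the checkpoint's architecture:

```python
# Hypothetical direct instantiation of JinaBertConfig with the values from config.json above.
config = JinaBertConfig(
    vocab_size=61056,
    hidden_size=384,
    num_hidden_layers=4,
    num_attention_heads=12,
    intermediate_size=1536,
    max_position_embeddings=8192,
    position_embedding_type="alibi",
    feed_forward_type="geglu",
    emb_pooler="mean",
)
print(config.max_position_embeddings)  # 8192
```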
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7e9ea4e0b0879e9624fd0606f02b85384fe209ce5bc7cf5daecaf7e3fecf82f
+ size 66100274
modeling_bert.py ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77bbb3421aa3dca1886e8adcd0731bc1ca529a233266a4183278e43dffcaced8
+ size 66106938
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff