Serega6678 committed
Commit e4dc402 (verified) · Parent(s): 8813840

Update README.md

Files changed (1): README.md (+4 -2)
README.md CHANGED
@@ -19,6 +19,8 @@ inference: false
 
 This model provides the best embedding for the Entity Recognition task in English.
 
+We suggest using the **newer version of this model: [NuNER v2.0](https://huggingface.co/numind/NuNER-v2.0)**
+
 This is the model from our [**Paper**](https://arxiv.org/abs/2402.15343): **NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data**
 
 
@@ -36,7 +38,7 @@ Read more about evaluation protocol & datasets in our [paper](https://arxiv.org/
 
 Here is the aggregated performance of the models over several datasets.
 
-k=X means that as a training data for this evaluation, we took only X examples for each class, trained the model, and evaluated it on the full test set.
+k=X means that as training data for this evaluation, we took only X examples for each class, trained the model, and evaluated it on the full test set.
 
 | Model | k=1 | k=4 | k=16 | k=64 |
 |----------|----------|----------|----------|----------|
@@ -44,7 +46,7 @@ k=X means that as a training data for this evaluation, we took only X examples f
 | RoBERTa-base + NER-BERT pre-training | 32.3 | 50.9 | 61.9 | 67.6 |
 | NuNER v1.0 | **39.4** | **59.6** | **67.8** | **71.5** |
 
-NuNER v1.0 has similar performance to 7B LLMs (70 times bigger that NuNER v1.0) created specifically for NER task.
+NuNER v1.0 has similar performance to 7B LLMs (70 times bigger than NuNER v1.0) created specifically for the NER task.
 
 | Model | k=8~16 | k=64~128 |
 |----------|----------|----------|
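For readers who want to try the embeddings the README describes, here is a minimal sketch using the Hugging Face transformers library. The repo id `numind/NuNER-v1.0` is inferred from the NuNER v2.0 link in the diff and may differ, and the last-two-layer concatenation is an illustrative pooling choice, not necessarily the authors' prescribed recipe.

```python
# Minimal sketch: extract token embeddings from a NuNER-style encoder.
# Assumptions: repo id "numind/NuNER-v1.0" (inferred, may differ) and
# an illustrative pooling choice (concatenating the last two layers).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "numind/NuNER-v1.0"  # assumption: inferred from the v2.0 URL
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "NuMind is based in Paris."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One common choice for entity-level features: concatenate the last two
# hidden layers per token. Any layer selection scheme would work here.
token_embeddings = torch.cat(outputs.hidden_states[-2:], dim=-1)
print(token_embeddings.shape)  # (1, seq_len, 2 * hidden_size)
```

These per-token vectors can then be fed to a lightweight classification head for the downstream Entity Recognition task.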
 
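The k=X protocol described in the diff can be made concrete with a short sketch: take only X (= k) training examples per entity class, fine-tune on that subset, and evaluate on the full test set. The dataset field name and the `finetune`/`evaluate` helpers below are hypothetical placeholders, not part of the NuNER release.

```python
# Sketch of the k-shot evaluation protocol described above.
# Assumptions: each example carries a hypothetical "entity_classes"
# field; finetune() and evaluate() are hypothetical helpers.
import random
from collections import defaultdict

def sample_k_shot(train_examples, k, seed=0):
    """Return up to k training examples for each entity class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for ex in train_examples:
        for label in ex["entity_classes"]:  # hypothetical field name
            by_class[label].append(ex)
    subset = []
    for label, examples in by_class.items():
        subset.extend(rng.sample(examples, min(k, len(examples))))
    return subset

# Usage (hypothetical helpers):
# few_shot_train = sample_k_shot(train_examples, k=16)
# model = finetune(few_shot_train)      # hypothetical
# f1 = evaluate(model, full_test_set)   # hypothetical
```

Under this protocol the scores in the tables above (e.g. k=1 vs. k=64) measure how quickly each encoder's representations adapt from very little labeled data.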