Umean committed on
Commit 323a85e · verified · 1 Parent(s): 9c5a16c

Update README.md

Files changed (1): README.md (+9, -1)
README.md CHANGED
@@ -9,7 +9,7 @@ library_name: peft
 ---
 
 This is the B2NER model's LoRA adapter based on [InternLM2-20B](https://huggingface.co/internlm/internlm2-20b).
-**See [github repo](https://github.com/UmeanNever/B2NER) for quick demo usage and more information about this work.**
+**See our [GitHub Repo](https://github.com/UmeanNever/B2NER) for model usage and more information about this work.**
 
 ## B2NER
 
@@ -21,6 +21,14 @@ Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and
 - 📀 Data: See [B2NERD](https://huggingface.co/datasets/Umean/B2NERD).
 - 💾 Model (LoRA Adapters): Current repo saves the B2NER model LoRA adapter based on InternLM2-20B. See [7B model](https://huggingface.co/Umean/B2NER-Internlm2.5-7B-LoRA) for a 7B adapter.
 
+**Feature Highlights:**
+- Curated dataset (B2NERD) refined from the largest bilingual NER dataset collection to date for training Open NER models.
+- Achieves SoTA OOD NER performance across multiple benchmarks with lightweight LoRA adapters (<=50MB).
+- Uses a simple natural-language prompt format, achieving 4X faster inference than previous SoTA methods that use complex prompts.
+- Easy integration with other IE tasks by adopting UIE-style instructions.
+- Provides a universal entity taxonomy that guides the definition and label naming of new entities.
+- We have open-sourced our data, code, and models, and provided easy-to-follow usage instructions.
+
 ## Sample Usage - Quick Demo
 Here we show how to use our provided LoRA adapter to run a quick demo with customized input. You can also refer to the GitHub repo's `src/demo.ipynb` to see our examples and reuse them for your own demo.
 - Prepare/download our LoRA checkpoint and the corresponding backbone model (see the loading sketch below).
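Below is a minimal sketch of the loading step described in the Sample Usage section above, using `transformers` and `peft`. The backbone ID comes from the README link; the adapter repo ID, the bf16/device-map settings, and the generation parameters are illustrative assumptions, and the actual prompt format should be taken from `src/demo.ipynb` in the GitHub repo.

```python
# Minimal sketch: load the InternLM2-20B backbone and attach the B2NER LoRA adapter.
# The adapter repo ID below is a hypothetical placeholder; substitute the actual repo or local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "internlm/internlm2-20b"          # backbone linked in the README
ADAPTER = "Umean/B2NER-InternLM2-20B-LoRA"     # hypothetical ID for this adapter repo

# from_pretrained downloads the weights on first use (the "prepare/download" step).
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,   # InternLM2 ships custom modeling code on the Hub
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA adapter
model.eval()

# Replace with an instruction in the prompt format shown in src/demo.ipynb.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Note that in bf16 the 20B backbone alone needs on the order of 40 GB of GPU memory, so the linked 7B adapter/backbone pair may be the easier starting point for a quick demo.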