hyunwoongko committed
Commit caa6744 · 1 Parent(s): 42da130

Update README.md

Files changed (1):
  1. README.md +10 -11
README.md CHANGED
@@ -48,25 +48,26 @@ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")
```

- ## Privacy considerations and Limitations

- Polyglot-Ko learns an inner representation of the Korean that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.

### Privacy considerations
- General training algorithms for pretrained language model have many hazards that memorize personal information in training data. We added the following tokens to vocabulary to mitigate privacy problem and replaced much personal information to these tokens in data preprocessing steps.

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number

### Limitations and Biases
- The core functionality of Polyglot-Ko is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Polyglot-Ko it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Polyglot-Ko to produce factually accurate output.Depending upon use case Polyglot-Ko may produce socially unacceptable text.
- As with all language models, it is hard to predict in advance how Polyglot-Ko will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

### Legal Restrictions
Since there are laws in many countries related to data collection, we will collect data with due regard to the laws of those countries.
- Additionally, we plan to use dataset to train our models, but we do not plan to make the dataset publicly available.
-

## Evaluation results
We used the [KOBEST dataset](https://arxiv.org/abs/2204.04541), which consists of five Korean downstream tasks, for evaluation.
@@ -130,7 +131,6 @@ python main.py \
<p><strong>&ast;</strong> Since this model does not provide evaluation results with KOBEST dataset, we evaluated the model using lm-evaluation-harness ourselves. you can reproduce this result using the source code included in the polyglot branch of lm-evaluation-harness.</p>

## Citation and Related Information
-
### BibTeX entry
If you find our work useful, please consider citing:
```bibtex
@@ -144,7 +144,7 @@ If you find our work useful, please consider citing:
```

### Licensing
- All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");
@@ -164,5 +164,4 @@ However, the model has the potential to generate unpredictable text as mentioned

### Acknowledgement

- This project would not have been possible without compute generously provided by [Stability.ai](https://stability.ai), thanks them for providing a large amount of GPU resources. And thanks also go to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
-
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")
```

+ ## Data Risks

+ Polyglot models learn an inner representation of the various languages that can be used to extract features useful for downstream tasks.
+ The model is best at what it was pre-trained for, however: generating text from a prompt.

### Privacy considerations
+ General training algorithms for pre-trained language models carry the hazard of memorizing personal information from the training data. We added the following tokens to the vocabulary to mitigate privacy problems and replaced much of the personal information with these tokens during the data preprocessing steps.

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number
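The replacement step described above might be sketched as follows. This is a minimal illustration using regular expressions; the patterns below are assumptions made for the example, not the project's actual preprocessing rules.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not the
# actual rules used to build the Polyglot training corpus.
PII_PATTERNS = [
    (re.compile(r"\b\d{6}-\d{7}\b"), "<|rrn|>"),            # resident registration number
    (re.compile(r"\b01\d-?\d{3,4}-?\d{4}\b"), "<|tell|>"),  # mobile phone number
    (re.compile(r"\b\d{3}-\d{2,6}-\d{2,8}\b"), "<|acc|>"),  # bank account number
]

def mask_pii(text: str) -> str:
    """Replace personal information in `text` with the special tokens above."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

In preprocessing, a pass like `mask_pii` would run over every document so the raw numbers never reach the training corpus; the special tokens are in the vocabulary, so each masked span becomes a single token.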

### Limitations and Biases
+ The core functionality of Polyglot is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Polyglot, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Polyglot to produce factually accurate output. Depending upon the use case, Polyglot may produce socially unacceptable text.
+
+ As with all language models, it is hard to predict in advance how Polyglot will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
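The human curation recommended above can be complemented by a simple automated pre-filter. A minimal sketch follows; the blocklist is a placeholder assumption standing in for a real moderation policy or classifier, not anything shipped with the model.

```python
# Minimal output-filtering sketch. The blocklist below is a placeholder
# assumption, not an actual moderation policy.
BLOCKLIST = {"badword", "slur-example"}

def is_releasable(text: str) -> bool:
    """Return False if the generated text contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def filter_outputs(outputs: list[str]) -> list[str]:
    """Keep only generations that pass the blocklist check."""
    return [t for t in outputs if is_releasable(t)]
```

A keyword blocklist is a crude first line of defense; in practice a human reviewer or a trained safety classifier would sit behind it, as the paragraph above recommends.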

### Legal Restrictions
Since there are laws in many countries related to data collection, we will collect data with due regard to the laws of those countries.
+ Additionally, we plan to use the dataset to train our models, but we do not plan to make the dataset publicly available.

## Evaluation results
We used the [KOBEST dataset](https://arxiv.org/abs/2204.04541), which consists of five Korean downstream tasks, for evaluation.

<p><strong>&ast;</strong> Since this model does not provide evaluation results with the KOBEST dataset, we evaluated the model using lm-evaluation-harness ourselves. You can reproduce this result using the source code included in the polyglot branch of lm-evaluation-harness.</p>

## Citation and Related Information

### BibTeX entry
If you find our work useful, please consider citing:
```bibtex

```

### Licensing
+ All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");

### Acknowledgement

+ This project would not have been possible without the computing resources generously provided by [Stability.ai](https://stability.ai); we thank them for a large amount of GPU resources. Thanks also go to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.