Update README.md
README.md CHANGED
@@ -174,16 +174,10 @@ MPT-7B (Base) is **not** intended for deployment without finetuning.
 It should not be used for human-facing interactions without further guardrails and user consent.
 
 MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
-MPT-7B was trained on various public datasets
+MPT-7B was trained on various public datasets.
 While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
 
 
-## Acknowledgements
-
-We would like to thank our friends at AI2 for helping us to curate our pretraining dataset, choose a great tokenizer, and for many other helpful conversations along the way ⚔️
-We gratefully acknowledge the work of the researchers who created the [LLaMA series of models](https://arxiv.org/abs/2302.13971), which was the impetus for our efforts.
-and also acknowledge the hard work of the [Together](https://www.together.xyz) team, which put together the RedPajama dataset.
-
 ## Citation
 
 Please cite this model using the following format: