---
license: apache-2.0
---

I know, long name. This model was created as an experiment in using LoRA extraction to replicate OpenChat-3.5-0106 with Mistral-7B-v0.2 as the base model instead of the original Mistral-7B-v0.1.

OpenChat-3.5-0106 is an excellent model, but it was based on Mistral-7B-v0.1, which has a context window of 8192 tokens; Mistral-7B-v0.2 extends that to 32768 tokens. I could have stretched OpenChat-3.5's context myself with RoPE scaling and/or YaRN, but that has already been done: there are many models on Hugging Face that do exactly that. Instead, I decided to try to replicate OpenChat-3.5-0106 using the LoRA extraction tool available in mergekit. These are the steps I followed:

- Extract a LoRA with rank 512 from OpenChat-3.5-0106, using imone's Mistral_7B_with_EOT_token as the base (first sketch below).
- Replicate imone's work by adding the EOT token to Mistral-7B-v0.2, creating Mistral-7B-v0.2_EOT (second sketch below).
- Merge the LoRA's weights into my modified Mistral-7B-v0.2_EOT model (third sketch below).

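
For the extraction step, mergekit ships a `mergekit-extract-lora` command. A minimal sketch of the invocation, using the positional usage documented at the time; exact flags may differ in other mergekit releases:

```sh
pip install mergekit

# Approximate "OpenChat-3.5-0106 minus its base model" as a rank-512 LoRA.
mergekit-extract-lora \
    openchat/openchat-3.5-0106 \
    imone/Mistral_7B_with_EOT_token \
    ./openchat-3.5-0106-lora \
    --rank=512
```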
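
Replicating the EOT change takes a few lines with transformers. A sketch, assuming OpenChat's `<|end_of_turn|>` EOT token and a hypothetical local path to the v0.2 weights in HF format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "path/to/Mistral-7B-v0.2"  # assumption: v0.2 weights in HF format

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Register OpenChat's end-of-turn token as a special token so the
# tokenizer never splits it.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|end_of_turn|>"]})

# Grow the embeddings (and tied LM head) to cover the new token id.
model.resize_token_embeddings(len(tokenizer))

model.save_pretrained("Mistral-7B-v0.2_EOT")
tokenizer.save_pretrained("Mistral-7B-v0.2_EOT")
```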
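
Applying the extracted LoRA is standard peft: attach the adapter, then fold it back into the dense weights. Paths here are hypothetical:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Mistral-7B-v0.2_EOT")

# Attach the extracted adapter, then bake its weights into the base model.
model = PeftModel.from_pretrained(base, "./openchat-3.5-0106-lora")
merged = model.merge_and_unload()

merged.save_pretrained("openchat-3.5-0106-mistral-7b-v0.2")
```
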
This is the result. This model is not meant for use; it was created to test whether this method is viable for replacing the base model of fine-tuned models. I am uploading it here for evaluation.