fix typo (#1)
Commit 0fc8d49ef2b037b16cddd9f456667d1db6f84a37
Co-authored-by: David <[email protected]>
README.md CHANGED
@@ -55,7 +55,7 @@ Note that this instruction training is light and is meant to allow Lucie to prod
 
 Due to its size, Lucie-7B is limited in the information that it can memorize; its ability to produce correct answers could be improved by implementing the model in a retrieval augmented generation pipeline.
 
-While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B has a context size of 32K tokens. Based on Needle-in-a-haystack evaluations, Lucie-7B-Instruct-v1.1 has a context window size of 22K tokens. This window could be
+While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B has a context size of 32K tokens. Based on Needle-in-a-haystack evaluations, Lucie-7B-Instruct-v1.1 has a context window size of 22K tokens. This window could be increased by fine-tuning on longer data samples.
 
 
 ## Training details
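The changed hunk suggests wrapping the model in a retrieval augmented generation pipeline. Below is a minimal sketch of that idea, not the project's own pipeline: the Hub id "OpenLLM-France/Lucie-7B-Instruct-v1.1", the toy document list, and the TF-IDF retriever (standing in for a real vector store) are all assumptions for illustration.

```python
# Minimal RAG sketch. Assumptions: the model is available on the Hugging Face
# Hub under "OpenLLM-France/Lucie-7B-Instruct-v1.1", and a TF-IDF retriever
# over a toy corpus stands in for a production vector store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

documents = [  # placeholder knowledge base
    "Lucie-7B is a 7-billion-parameter causal language model.",
    "The base model Lucie-7B was pretrained with a 32K-token context size.",
    "Lucie-7B-Instruct was fine-tuned on sequences of 4096 tokens.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the question and keep the top k.
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

generator = pipeline("text-generation", model="OpenLLM-France/Lucie-7B-Instruct-v1.1")

question = "What context size was Lucie-7B pretrained with?"
context = "\n".join(retrieve(question))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

Grounding generation in retrieved passages this way compensates for what a 7B model cannot memorize, which is exactly the limitation the README paragraph describes.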
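The hunk also cites Needle-in-a-haystack evaluations as the basis for the 22K-token usable window. As a rough illustration of how one such probe works, the sketch below buries a "needle" fact midway into long filler text and asks the model to recall it; real evaluations sweep many depths and context lengths. The Hub id, the filler sentence, and the haystack size are assumptions, not the evaluation actually used.

```python
# Single needle-in-a-haystack probe (illustrative only). Assumptions: the Hub
# id below; real evaluations repeat this over a grid of depths and lengths.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLLM-France/Lucie-7B-Instruct-v1.1"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

needle = "The secret passphrase is 'heliotrope'."
filler = "Grass grows quietly in the meadow. " * 2000  # roughly 15-20K tokens

# Bury the needle about halfway into the haystack.
half = len(filler) // 2
prompt = f"{filler[:half]}{needle}{filler[half:]}\n\nWhat is the secret passphrase?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
# Decode only the newly generated tokens and check for the passphrase.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```

If the model reliably recovers the needle at a given total length, that length is within its usable window; the 22K figure in the README is where such probes reportedly stop succeeding for Lucie-7B-Instruct-v1.1.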