Update README.md
README.md
@@ -15,4 +15,7 @@ Finetune of [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Ne
> it's typically good at writing, v good for 12b, coherent in RP, follows context and starts conversations well
> I do legit like it, it feels good to use. When it gives me stable output, the output is high quality and on task. It's got small-model stupid where basic logic holds but it invents things or forgets them (feels like a small effective context window maybe?), which, to be clear, is like, perfectly fine. Very good at synthesizing and inferring information provided in context on a higher level
This is mostly a proof-of-concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR!