---
license: apache-2.0
base_model:
- mistralai/Mistral-Nemo-Base-2407
library_name: transformers
---
# silly-v0.2
Finetune of [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) designed to emulate the writing style of character.ai models.
- 2 epochs of SFT on RP data, then about an hour of PPO on 8xH100 with [POLAR-7B RFT](https://github.com/RowitZou/POLAR_RFT) as the reward model (a rough sketch of this stage is at the end of this card)
- Kind of wonky; if you're dealing with longer messages, you may need to decrease your temperature
- ChatML chat format (see the inference sketch after this list)
- Reviews:
> it's typically good at writing, v good for 12b, coherent in RP, follows context and starts conversations well

> I do legit like it, it feels good to use. When it gives me stable output the output is high quality and on task, it's got small model stupid where basic logic holds but it invents things or forgets them (feels like small effective context window maybe?) which, to be clear, is like. Perfectly fine. Very good at synthesizing and inferring information provided in context on a higher level
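
A minimal inference sketch with `transformers`, assuming the tokenizer ships the ChatML chat template; the repo id, character prompt, and sampling settings are placeholders, not official recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; substitute the actual path of this model
model_id = "your-org/silly-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Rin, a sarcastic barista. Stay in character."},
    {"role": "user", "content": "Hey, one iced latte please."},
]

# Renders the turns into ChatML and appends the assistant header
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # drop this further if longer messages get wonky
    top_p=0.9,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```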
This is mostly a proof-of-concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR!
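
If you want a starting point, here is a rough sketch of what the PPO stage could look like. It assumes `trl`'s classic `PPOTrainer` API (pre-0.12 releases; newer versions changed the interface), and `polar_score()` is a hypothetical placeholder, not POLAR's real scoring API (see the linked POLAR_RFT repo for that):

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

def polar_score(prompt: str, response: str) -> float:
    # Hypothetical placeholder: swap in actual scoring with POLAR-7B
    # (see the POLAR_RFT repo for its real interface).
    return 0.0

sft_checkpoint = "path/to/sft-checkpoint"  # the model after the 2-epoch SFT stage

config = PPOConfig(
    model_name=sft_checkpoint, learning_rate=1e-6, batch_size=2, mini_batch_size=1
)
model = AutoModelForCausalLMWithValueHead.from_pretrained(sft_checkpoint)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# ChatML-formatted RP prompts; in practice these come from the RP dataset
prompts = [
    "<|im_start|>user\nHey, what's up?<|im_end|>\n<|im_start|>assistant\n",
    "<|im_start|>user\nTell me about your day.<|im_end|>\n<|im_start|>assistant\n",
]

query_tensors = [tokenizer(p, return_tensors="pt").input_ids[0] for p in prompts]
response_tensors = ppo_trainer.generate(
    query_tensors, return_prompt=False, do_sample=True, max_new_tokens=128
)
responses = [tokenizer.decode(r, skip_special_tokens=True) for r in response_tensors]

# Score each rollout with the reward model and take one PPO step
rewards = [torch.tensor(polar_score(p, r)) for p, r in zip(prompts, responses)]
ppo_trainer.step(query_tensors, response_tensors, rewards)
```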