This LoRA is made out of data extracted from Mistral: the weight difference between Mistral-7B and the original Llama2-7B model.
The goal was to add Mistral's new data to the Llama2 model. I had a hard time finetuning or merging Mistral directly, and since we don't have access to the sliding window attention (SWA) anyway, this is a trade-off I can accept.
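For anyone curious about the idea, here's a rough sketch of one way to do it: take the per-matrix weight delta between the two models and compress it with a truncated SVD. The model IDs, rank, and output format below are illustrative assumptions, not the exact recipe used for this LoRA.

```python
# Rough sketch: build a low-rank "LoRA" from the weight delta between
# Mistral-7B and Llama2-7B. Rank, dtype and output format are illustrative.
import torch
from transformers import AutoModelForCausalLM

RANK = 64  # illustrative rank, not necessarily the one used here

tuned = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)

base_params = dict(base.named_parameters())
lora = {}
for name, w_tuned in tuned.named_parameters():
    w_base = base_params.get(name)
    # Skip non-matrix params and mismatched shapes (e.g. Mistral's GQA
    # k/v projections are smaller than Llama2's, so they can't be diffed).
    if w_base is None or w_tuned.ndim != 2 or w_tuned.shape != w_base.shape:
        continue
    delta = (w_tuned - w_base).float()
    # Truncated SVD: delta ≈ (U * S) @ Vh, keeping the top RANK components.
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    lora[f"{name}.lora_B"] = (U[:, :RANK] * S[:RANK]).half().contiguous()
    lora[f"{name}.lora_A"] = Vh[:RANK, :].half().contiguous()

torch.save(lora, "mistral_minus_llama2_lora.pt")
```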
I tested it, and it seems to work on 7B and (surprisingly) on 13B models (and probably 20B, since those are only made of 13B layers).
I don't have any clue as to HOW it works, but it works.
Here's a screenshot of KunichiMistral, which is simply Kunichi merged with this LoRA at weight 1, with up-to-date data:
The amazing thing is that even at weight 1, the target model seems to keep its RP ability to a certain extent; I need to dig more.
Example below of a simple RP with KunichiMistral:
Feel free to use this LoRA for your own testing; feedback on your results is appreciated.
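If you want to reproduce a merge like KunichiMistral yourself, a minimal sketch with peft could look like the following. The paths are placeholders, and this assumes the LoRA is shipped in PEFT adapter format.

```python
# Minimal sketch: apply the LoRA at weight 1 and bake it into the base
# model. Paths are placeholders for wherever the model/LoRA live locally.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "path/to/Kunichi", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/this-lora")  # weight 1 by default
merged = model.merge_and_unload()  # fold the LoRA into the base weights
merged.save_pretrained("KunichiMistral")
```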
If I can't toy with Mistral at the moment, well, I can at least toy with their data.