LimaRP-MistralOrca-7B (Alpaca, 8-bit LoRA adapter)

This is a version of LimaRP specifically finetuned for Mistral-7B-OpenOrca, using about 1860 conversations of up to 9.5k tokens in length and Sliding Window Attention (SWA), for a total of about 8.5M unique tokens. This LoRA adapter may not work as intended on the base Mistral-7B-v0.1 (feel free to try, though).
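As a rough sketch of how the adapter can be loaded with Transformers and PEFT (the adapter path below is a placeholder, and the base model repository id is assumed to be Open-Orca/Mistral-7B-OpenOrca):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Open-Orca/Mistral-7B-OpenOrca"   # assumed base model repository
adapter_id = "./LimaRP-MistralOrca-7B"      # placeholder path to this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter on top of the OpenOrca finetune
model = PeftModel.from_pretrained(base, adapter_id)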

Unlike the previous version released for the base Mistral-7B, this one has not first been finetuned on stories, only on conversations.

For more details about LimaRP, see the model page for the previously released v2 version for Llama-2. Most details written there apply to this version as well. Generally speaking, LimaRP is a longform-oriented, novel-style roleplaying chat model intended to replicate the experience of 1-on-1 roleplay on Internet forums. Short-form, IRC/Discord-style RP (aka "Markdown format") is not supported yet. The model does not include instruction tuning, only manually picked and slightly edited RP conversations with persona and scenario data.

Prompt format

Same as before. It uses the extended Alpaca format, with ### Input: immediately preceding user inputs and ### Response: immediately preceding model outputs. While Alpaca wasn't originally intended for multi-turn responses, in practice this is not a problem; the format follows a pattern already used by other models.

Note that, although not strictly needed, it is preferable to have an empty line before the ### Instruction: sequence. This should make its tokenization more consistent with that of the other sequences.


### Instruction:
Character's Persona: {bot character description}

User's Persona: {user character description}

Scenario: {what happens in the story}

Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.

### Input:
User: {utterance}

### Response:
Character: {utterance}

### Input:
User: {utterance}

### Response:
Character: {utterance}

(etc.)

You should:

- Replace all the text in curly braces {} with appropriate content for your characters and scenario.
- Replace User and Character with the actual names of the user and bot characters.
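For illustration, a minimal helper that assembles a prompt in this format from persona, scenario and message history could look like the sketch below (the function name and data layout are made up for the example):

def build_prompt(char, user, char_persona, user_persona, scenario, turns):
    # `turns` is a list of (speaker, text) tuples, alternating between user and character.
    parts = [
        "",  # empty line before ### Instruction:, as recommended above
        "### Instruction:",
        f"{char}'s Persona: {char_persona}",
        "",
        f"{user}'s Persona: {user_persona}",
        "",
        f"Scenario: {scenario}",
        "",
        f"Play the role of {char}. You must engage in a roleplaying chat with {user} "
        f"below this line. Do not write dialogues and narration for {user}.",
    ]
    for speaker, text in turns:
        header = "### Input:" if speaker == user else "### Response:"
        parts += ["", header, f"{speaker}: {text}"]
    # End with the response sequence so the model continues as the character
    parts += ["", "### Response:", f"{char}:"]
    return "\n".join(parts)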

Message length control

Inspired by the SillyTavern preset previously named "Roleplay", this version of LimaRP makes it possible to append a length modifier to the response instruction sequence, like this:

### Input:
User: {utterance}

### Response: (length = medium)
Character: {utterance}

This has an immediately noticeable effect on bot responses. The lengths used during training are: micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited. The recommended starting length is medium. Keep in mind that the AI can ramble or impersonate the user with very long messages.

The length control effect is reproducible, but messages will not necessarily follow the requested lengths very precisely; rather, they fall within certain ranges on average, as shown in the table below (data from tests with one reply at the beginning of the conversation):

(table: observed response length ranges for each length modifier)

Response length control also appears to work well deep into the conversation. By omitting the modifier, the model will choose the most appropriate response length (although it might not necessarily be what the user desires).
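As a small, hypothetical helper (the function name is made up), the modifier is simply appended to the response sequence when building the prompt:

def response_header(length=None):
    # Without a modifier, the model picks the response length on its own
    if length is None:
        return "### Response:"
    return f"### Response: (length = {length})"

response_header("medium")   # -> "### Response: (length = medium)"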

Suggested settings

You can use the following instruction format settings in SillyTavern. Replace tiny with your desired response length. It is suggested to also add <|im_end|> and possibly <|im_start|> as custom stopping strings, as they may occasionally appear in the output due to the underlying OpenOrca finetune:

(screenshot: suggested SillyTavern instruction format settings)
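Outside of SillyTavern, the same stopping strings can be enforced when generating with Transformers. A minimal sketch using a StoppingCriteria subclass (the class name and variables are illustrative):

from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    # Stop generation once any of the given strings appears in the newly generated text
    def __init__(self, tokenizer, stops, prompt_length):
        self.tokenizer = tokenizer
        self.stops = stops
        self.prompt_length = prompt_length

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return any(stop in new_text for stop in self.stops)

# Usage sketch:
# stops = StoppingCriteriaList([
#     StopOnStrings(tokenizer, ["<|im_end|>", "<|im_start|>"], inputs["input_ids"].shape[1])
# ])
# output = model.generate(**inputs, stopping_criteria=stops, max_new_tokens=400)

Any stopping string that still slips through should be trimmed from the decoded text before displaying it.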

Text generation settings

As with the base Mistral-7B-v0.1, repetition issues still persist. A low temperature combined with a relatively high repetition penalty and a low repetition penalty range may help. Otherwise, a reasonable starting point could be as follows:
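As an illustration only, in plain Transformers terms these recommendations translate roughly into generation arguments like the following (the numbers are placeholders, not specific values recommended by this card):

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,          # relatively low temperature (placeholder value)
    repetition_penalty=1.15,  # relatively high repetition penalty (placeholder value)
    max_new_tokens=400,
)
# Note: a limited "repetition penalty range" is a backend-specific option
# (e.g. repetition_penalty_range in text-generation-webui); plain Transformers
# applies the penalty over the whole context.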

Training procedure

Axolotl was used for training on a 4x NVIDIA A40 cluster graciously provided by Arc Compute.

The model has been trained as an 8-bit LoRA adapter; the adapter is relatively large because a LoRA rank of 256 was used.

Training hyperparameters

With 4 GPUs, the effective batch size is 4.

Loss curves

Loss values are significantly higher than usual because the training loss is computed over the entire sequences, rather than only over the model responses.
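As a conceptual sketch only (not the actual Axolotl configuration), the difference is whether non-response tokens are masked out of the loss with the usual -100 label convention:

import torch

# Toy example: with full-sequence training, every token contributes to the loss,
# including persona text and user turns.
input_ids = torch.tensor([[1, 42, 43, 44, 45, 46]])   # toy token ids
labels = input_ids.clone()                            # predict everything

# A response-only setup would instead mask the prompt/user tokens with -100,
# which Hugging Face models ignore when computing the loss:
prompt_length = 3
masked_labels = labels.clone()
masked_labels[:, :prompt_length] = -100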

Train loss

(figure: train loss curve)

Eval loss

(figure: eval loss curve)