This was made for someone.
Prompt template: ChatML
Use the ChatML format with this model, and configure <|im_end|> as a custom stopping string.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
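A minimal sketch of assembling that template and stopping on `<|im_end|>`. The `build_chatml_prompt` helper is hypothetical; any backend that accepts a raw prompt string plus custom stop strings works the same way.

```python
# Hypothetical helper: formats a system + user turn in ChatML and
# opens the assistant turn, matching the template above.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# <|im_end|> is configured as a custom stopping string at inference time.
STOP_STRINGS = ["<|im_end|>"]

prompt = build_chatml_prompt("You are a helpful AI assistant.", "Hello!")
```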
Models and LoRAs used
Open-Orca/Mistral-7B-OpenOrca:
Apply the lemonilia/LimaRP-MistralOrca-7B LoRA at weight 0.37
=> MistralMegaOrca-7B-p1

jondurbin/airoboros-m-7b-3.0:
Apply the Undi95/Mistral-pippa-sharegpt-7b-qlora LoRA at weight 0.18
=> MistralMegaOrca-7B-p2
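Numerically, "apply a LoRA at weight 0.37" means scaling the low-rank update `B @ A` by that weight before adding it to the base weight matrix. The sketch below is a toy illustration with made-up shapes; real checkpoints would be merged with peft/mergekit tooling, not by hand.

```python
import numpy as np

def apply_lora(base: np.ndarray, lora_a: np.ndarray, lora_b: np.ndarray,
               weight: float) -> np.ndarray:
    """Merge a LoRA delta into a base weight matrix at a fractional weight."""
    return base + weight * (lora_b @ lora_a)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
lora_a = rng.normal(size=(2, 8))   # rank-2 down-projection
lora_b = rng.normal(size=(8, 2))   # rank-2 up-projection

merged = apply_lora(base, lora_a, lora_b, weight=0.37)
```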
Using a TIES merge, I then combine:
MistralMegaOrca-7B-p1 0.8 +
MistralMegaOrca-7B-p2 0.6 +
teknium/CollectiveCognition-v1.1-Mistral-7B 0.4 +
Norquinal/Mistral-7B-claude-chat 0.4
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: Undi95/MistralMegaOrca-7B-p1
    parameters:
      weight: 0.8
  - model: Undi95/MistralMegaOrca-7B-p2
    parameters:
      weight: 0.6
  - model: teknium/CollectiveCognition-v1.1-Mistral-7B
    parameters:
      weight: 0.4
  - model: Norquinal/Mistral-7B-claude-chat
    parameters:
      weight: 0.4
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
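For intuition, this is roughly what the TIES step in that config does per parameter: each model contributes its delta from the base, scaled by its weight; a majority sign is elected; and only deltas agreeing with that sign are summed back onto the base (divided by the agreeing weights when `normalize: true`). This is a simplified toy sketch of the idea, not mergekit's actual implementation.

```python
import numpy as np

def ties_merge(base, models, weights, normalize=True):
    """Toy TIES: weighted deltas -> sign election -> agreeing-sum -> base."""
    deltas = np.stack([w * (m - base) for m, w in zip(models, weights)])
    sign = np.sign(deltas.sum(axis=0))              # elect per-parameter sign
    agree = np.sign(deltas) == sign                 # deltas matching the vote
    total = np.where(agree, deltas, 0.0).sum(axis=0)
    if normalize:                                   # mirrors `normalize: true`
        w = np.asarray(weights).reshape((-1,) + (1,) * base.ndim)
        denom = np.where(agree, w, 0.0).sum(axis=0)
        total = total / np.where(denom == 0.0, 1.0, denom)
    return base + total

# Two toy "fine-tuned models" as flat parameter vectors over a zero base.
base = np.zeros(4)
m1 = np.array([1.0, -1.0, 1.0, 0.0])
m2 = np.array([1.0, 1.0, -1.0, 0.0])
merged = ties_merge(base, [m1, m2], weights=[0.8, 0.6])
```

Where the two models disagree in sign (the middle parameters), only the higher-weighted contribution survives the election.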
For Dampf! Credit to him for the recipe.