Replaced Zephyr with Airoboros 2.2 in the mix.
## Description
This repo contains the fp16 files of Mistral-11B-AirOmniMix.
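For reference, here is a minimal sketch of loading these fp16 weights with the `transformers` library; the `model_id` below is a placeholder, not a value taken from this repo, so point it at the actual repo id or a local folder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mistral-11B-AirOmniMix"  # placeholder: replace with this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repo ships fp16 weights
    device_map="auto",          # spread layers across available devices
)
```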
## Models used
- Mistral-7B-OpenOrca
- Mistral-7B-v0.1-Open-Platypus
- CollectiveCognition-v1.1-Mistral-7B
- airoboros-mistral2.2-7b
## Prompt template
After further testing, the best one is the following, since Zephyr is out of the merge:
```
USER: <prompt>
ASSISTANT:
```
But this one works too:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
Alternatively, any prompt format from one of the 4 source models should work.
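As an illustration, the USER/ASSISTANT format can be applied like this (a sketch building on the loading example above; the prompt text and sampling settings are arbitrary examples, not tuned recommendations):

```python
# Sketch only: apply the USER/ASSISTANT template and generate a reply.
prompt = "USER: Explain in two sentences what a model merge is.\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,   # arbitrary example settings
    do_sample=True,
    temperature=0.7,
)
# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```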
## The secret sauce
Mistral-11B-OpenOrcaPlatypus:

```yaml
slices:
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-CC-Airo:

```yaml
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-AirOmniMix:

```yaml
slices:
  - sources:
    - model: Mistral-11B-OpenOrcaPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Airo
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
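For intuition, the slerp step interpolates each tensor of the two 11B stacks on the unit sphere: `t = 0` keeps the base model's tensor, `t = 1` takes the other model's, and the lists above define per-filter gradients across the layers. Below is a rough, self-contained sketch of that interpolation, simplified for illustration and not mergekit's actual code:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two tensors of the same shape (illustrative only)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two tensors, clamped for numerical safety.
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# t = 0 returns the first tensor, t = 1 the second; t = 0.25 stays closer to the first.
x = torch.randn(16, 16)
y = torch.randn(16, 16)
merged = slerp(0.25, x, y)
```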
I used mergekit for all the merges described here.
## Some scoring I did myself
```
hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-AirOmniMix), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
```
| Task          | Version | Metric   | Value  |   | Stderr |
|---------------|---------|----------|--------|---|--------|
| arc_challenge | 0       | acc      | 0.5452 | ± | 0.0146 |
|               |         | acc_norm | 0.5836 | ± | 0.0144 |
| arc_easy      | 0       | acc      | 0.8321 | ± | 0.0077 |
|               |         | acc_norm | 0.8119 | ± | 0.0080 |
| hellaswag     | 0       | acc      | 0.6381 | ± | 0.0048 |
|               |         | acc_norm | 0.8250 | ± | 0.0038 |
| piqa          | 0       | acc      | 0.8096 | ± | 0.0092 |
|               |         | acc_norm | 0.8243 | ± | 0.0089 |
| truthfulqa_mc | 1       | mc1      | 0.3941 | ± | 0.0171 |
|               |         | mc2      | 0.5606 | ± | 0.0155 |
| winogrande    | 0       | acc      | 0.7395 | ± | 0.0123 |
## Others
Special thanks to Sushi, to Henky for the machine he gave me for big tasks, and to Charles Goddard for his amazing tool.
If you want to support me, you can here.