Don't mind these for now; they still need fine-tuning for RP, and these are just test merges.
WARNING: This model needs an EOS token that I completely forgot to add to the JSON files, and I still need to check which one is the right one across the mix. Please don't use it in this state if you actually want to review it.
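Until that's fixed, here is a minimal sketch of patching an EOS token id back into a merged model's `config.json`. The default value used below (`eos_token_id=2`, i.e. `</s>`) is the usual Mistral-family convention, but that's an assumption; verify it against the source models' tokenizer configs before relying on it.

```python
import json
from pathlib import Path

def patch_eos(config_path, eos_token_id=2):
    """Write a missing eos_token_id into a merged model's config.json.

    eos_token_id=2 ("</s>") is the Mistral-family default -- an assumption
    here; check the source models' tokenizer configs before using it.
    """
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg["eos_token_id"] = eos_token_id
    path.write_text(json.dumps(cfg, indent=2))
    return cfg["eos_token_id"]
```

The same id should also be reflected in `generation_config.json` and the tokenizer files so generation actually stops.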
slices:
  - sources:
      - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
        layer_range: [0, 24]
  - sources:
      - model: "/content/drive/MyDrive/Zephyr-7B"
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
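The passthrough method simply stacks the selected layer slices, so the merged depth is the sum of the slice lengths. A quick sanity check (assuming mergekit's half-open `[start, end)` interpretation of `layer_range`):

```python
# Passthrough stacking: merged depth = sum of slice lengths.
# Assumption: layer_range is half-open [start, end).
slices = [(0, 24), (8, 32)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 48
```

That 48-layer depth is why the follow-up SLERP config addresses `layer_range: [0, 48]`.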
================================================
slices:
  - sources:
      - model: "/content/drive/MyDrive/Mistral-11B-CC-Zephyr"
        layer_range: [0, 48]
      - model: Undi95/Mistral-11B-OpenOrcaPlatypus
        layer_range: [0, 48]
merge_method: slerp
base_model: "/content/drive/MyDrive/Mistral-11B-CC-Zephyr"
parameters:
  t:
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
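For reference, spherical linear interpolation between two flattened weight tensors looks roughly like this. This is an illustrative sketch, not mergekit's actual per-tensor implementation, which handles things like normalization and degenerate cases differently:

```python
import math

def slerp(t, v0, v1):
    # Spherical linear interpolation between two weight vectors (sketch).
    # Falls back to plain lerp when the vectors are nearly parallel.
    dot = sum(a * b for a, b in zip(v0, v1)) / (
        math.hypot(*v0) * math.hypot(*v1))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < 1e-6:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

With `t: 0.5` as above, each tensor ends up halfway along the arc between the two source models' weights.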
hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-Test), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
| Task          | Version | Metric   | Value  |   | Stderr |
|---------------|---------|----------|--------|---|--------|
| arc_challenge | 0       | acc      | 0.5623 | ± | 0.0145 |
|               |         | acc_norm | 0.5794 | ± | 0.0144 |
| arc_easy      | 0       | acc      | 0.8354 | ± | 0.0076 |
|               |         | acc_norm | 0.8165 | ± | 0.0079 |
| hellaswag     | 0       | acc      | 0.6389 | ± | 0.0048 |
|               |         | acc_norm | 0.8236 | ± | 0.0038 |
| piqa          | 0       | acc      | 0.8139 | ± | 0.0091 |
|               |         | acc_norm | 0.8264 | ± | 0.0088 |
| truthfulqa_mc | 1       | mc1      | 0.3978 | ± | 0.0171 |
|               |         | mc2      | 0.5607 | ± | 0.0155 |
| winogrande    | 0       | acc      | 0.7451 | ± | 0.0122 |