not-for-all-audiences nsfw mistral pretrained

An exl2 quant of the Undi95/Mistral-11B-CC-Air-RP model.

Quantization parameters (exllamav2 `convert.py` flags: `-c` calibration dataset, `-b` target bits per weight, `-hb` bits for the output head):

```
-c /app/pippa.parquet -b 8 -hb 8
```

Original model card below:

CollectiveCognition-v1.1-Mistral-7B and airoboros-mistral2.2-7b merged together and finetuned with a QLoRA on the PIPPA and LimaRPv3 datasets.

<!-- description start -->

## Description

This repo contains fp16 files of Mistral-11B-CC-Air-RP.

<!-- description end --> <!-- description start -->

## Model used

- teknium/CollectiveCognition-v1.1-Mistral-7B
- teknium/airoboros-mistral2.2-7b

<!-- description end --> <!-- prompt-template start -->

## Prompt template: Alpaca or default

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Or the default template:

```
USER: <prompt>
ASSISTANT:
```
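As a sketch, the Alpaca template above can be filled in programmatically before sending a prompt to the model (`build_prompt` is a hypothetical helper for illustration, not part of this repo):

```python
def build_prompt(instruction: str) -> str:
    """Fill the Alpaca prompt template with a user instruction."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Write a haiku about autumn."))
```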

## The secret sauce

```yaml
slices:
  - sources:
    - model: teknium/CollectiveCognition-v1.1-Mistral-7B
      layer_range: [0, 24]
  - sources:
    - model: teknium/airoboros-mistral2.2-7b
      layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
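For illustration (an assumption based on the config above, not code from this repo): the passthrough merge stacks the first 24 layers of one 32-layer Mistral-7B on top of the last 24 layers of the other, treating each `layer_range` as a half-open interval. That yields 48 layers, which is roughly where the ~11B parameter count comes from:

```python
# Layer ranges from the mergekit config above (half-open intervals).
slices = [(0, 24), (8, 32)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 48 layers vs. 32 in a stock Mistral-7B

# Rough parameter estimate: scale 7B by the layer ratio
# (ignores embeddings and the output head, which are not duplicated).
approx_params_b = 7 * total_layers / 32
print(round(approx_params_b, 1))  # 10.5, i.e. ~11B
```

Note that layers 8-23 appear twice in the merged stack; passthrough merging simply concatenates the slices without averaging weights.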

Special thanks to Sushi.

If you want to support me, you can here.