not-for-all-audiences nsfw

exl2 version of Norquinal/PetrolLM-CollectiveCognition
Calibration dataset: wikitext
Quantized by IHaBiS

Quantization command: python convert.py -i models/Norquinal_PetrolLM-CollectiveCognition -o Norquinal_PetrolLM-CollectiveCognition-temp -cf Norquinal_PetrolLM-CollectiveCognition-6bpw-h8-exl2 -c 0000.parquet -l 4096 -b 6 -hb 8 -ss 4096 -m Norquinal_PetrolLM-CollectiveCognition_measurement.json
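
To load the resulting 6bpw-h8 quant, the exllamav2 Python API can be used directly. A minimal sketch, assuming an exllamav2 version that provides ExLlamaV2BaseGenerator; the directory name is taken from the command above and the sampler settings are arbitrary examples:

from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the directory produced by convert.py
config = ExLlamaV2Config()
config.model_dir = "Norquinal_PetrolLM-CollectiveCognition-6bpw-h8-exl2"
config.prepare()

# Load the model and split it across available GPUs
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Arbitrary example sampler settings
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Human: Hello!", settings, num_tokens=128))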

The original model card follows below.

What is PetrolLM-CollectiveCognition?

PetrolLM-CollectiveCognition is the CollectiveCognition-v1.1-Mistral-7B model with the PetrolLoRA applied.

The dataset (for the LoRA) consists of 2800 samples, with the composition as follows:

These samples were then back-filled using gpt-4/gpt-3.5-turbo-16k or otherwise converted to fit the prompt format.

Prompt Format

The model uses the following prompt format:

---
style: roleplay
characters:
  [char]: [description]
summary: [scenario]
---
<chat_history>
Format:
[char]: [message]
Human: [message]
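
A prompt in this format can be assembled programmatically before it is sent to the model. A minimal sketch in Python; build_prompt is a hypothetical helper, and the character, description, and scenario values are made-up placeholders:

# Hypothetical helper that fills in the template above
def build_prompt(char, description, scenario, chat_history):
    header = (
        "---\n"
        "style: roleplay\n"
        "characters:\n"
        f"  {char}: {description}\n"
        f"summary: {scenario}\n"
        "---\n"
    )
    # chat_history is a list of (speaker, message) pairs, where
    # speaker is either the character name or "Human"
    body = "\n".join(f"{speaker}: {message}" for speaker, message in chat_history)
    return header + body

prompt = build_prompt(
    "Alice",
    "a cheerful mechanic",
    "Alice repairs the narrator's car",
    [("Human", "Hi, is my car ready?"), ("Alice", "Almost! Just one more bolt.")],
)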

Use in Text Generation Web UI

Install the bleeding-edge version of transformers from source:

pip install git+https://github.com/huggingface/transformers

Alternatively, change model_type in config.json from mistral to llama.
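
If you take the config.json route instead, it is a one-field change. A minimal sketch; the path is a placeholder for wherever the model files live locally:

import json

# Placeholder path to the local copy of the model
config_path = "models/Norquinal_PetrolLM-CollectiveCognition/config.json"

with open(config_path) as f:
    config = json.load(f)

# Older transformers releases do not recognize "mistral" as a model_type
config["model_type"] = "llama"

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)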

Use in SillyTavern UI

As an addendum, you can include one of the following as the Last Output Sequence:

Human: In your next reply, write at least two paragraphs. Be descriptive and immersive, providing vivid details about {{char}}'s actions, emotions, and the environment.
{{char}}:

{{char}} (2 paragraphs, engaging, natural, authentic, descriptive, creative):

[System note: Write at least two paragraphs. Be descriptive and immersive, providing vivid details about {{char}}'s actions, emotions, and the environment.]
{{char}}:

The third one seems to work best. I would recommend experimenting with writing your own to best suit your needs.