Experimental: Created using an unofficial and unsupported method. I have no metrics on how this performs against 13b and I'm not planning on gathering any at this point. Still has weak spots that need work.

This is https://huggingface.co/nkpz/llama2-22b-blocktriangular-alpaca with further conversational and instruction fine-tuning.

First, I trained it on an epoch of https://huggingface.co/datasets/Adapting/empathetic_dialogues_v2 to give it a decent base knowledge of a casual chat style. I added some automated capitalization fixes for this data. The result was conversational, but not very smart.
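The exact cleanup script isn't included here, but the capitalization pass was along these lines (a minimal sketch; the function and rules below are illustrative, not the actual code used):

```python
import re

def fix_capitalization(text: str) -> str:
    """Capitalize sentence starts and a standalone lowercase "i" (illustrative)."""
    # Uppercase the first letter of the text and of each sentence.
    text = re.sub(r"(^|[.!?]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)
    # Normalize the standalone pronoun "i".
    return re.sub(r"\bi\b", "I", text)

print(fix_capitalization("hey there. i was wondering how you are"))
# -> Hey there. I was wondering how you are
```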

Then I trained it on an epoch of https://huggingface.co/datasets/vicgalle/alpaca-gpt4 and landed here: a model that can chat, but is strongly focused on following instructions.

If you would like to run this in 4-bit, you can use the Hugging Face backend in KoboldAI (or, in your own script, pass the load_in_4bit kwarg to from_pretrained). GPTQ conversion has so far produced broken output for me; YMMV.
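For the script route, a minimal load looks something like this (a sketch assuming transformers with bitsandbytes and accelerate installed; the repo id is a placeholder for wherever these weights live):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder: local path or Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # quantize weights to 4-bit at load time (needs bitsandbytes)
    device_map="auto",   # place layers across available devices (needs accelerate)
)
```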

Future Ideas

Prompting

Standard prompt format examples:

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
List 3 ingredients for the following recipe.

### Input:
Spaghetti Bolognese

### Response:

Or

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
List 3 ingredients for the following recipe: Spaghetti Bolognese

### Response:
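If you're building these prompts in a script, a small helper keeps the format consistent. This just assembles the two templates above in Python (the helper name is mine):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble the standard prompt format shown above."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("List 3 ingredients for the following recipe.", "Spaghetti Bolognese"))
```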

For a chat session, I've had success using this simplified prompt:

### Scenario
You are speaking with Alexander Graham Bell
### Begin Chat (Format: [Person1]: [Message]\n[Person2]: [Message])
You: Hey, can you tell me a little bit about yourself?

In this example, the model's output was:

Alexander Graham Bell: Sure, I am an inventor and scientist. I'm most known for inventing the telephone.

You can customize these ###-prefixed labels to create your own structure.
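As a rough sketch of a full chat turn (reusing the model and tokenizer from the 4-bit example above; the sampling settings here are arbitrary, not tuned recommendations):

```python
prompt = (
    "### Scenario\n"
    "You are speaking with Alexander Graham Bell\n"
    "### Begin Chat (Format: [Person1]: [Message]\\n[Person2]: [Message])\n"
    "You: Hey, can you tell me a little bit about yourself?\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128,
                            do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, then trim anything after the model
# starts writing the next "You:" turn itself.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True)
print(reply.split("You:")[0].strip())
```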