EverythingLM-13b-16k

Introducing EverythingLM, a Llama-2-based, general-purpose 13B model with 16k context thanks to LlongMa. The model is trained on the EverythingLM dataset; more info can be found on the dataset page.

The model is completely uncensored.

This model is an early test of the EverythingLM dataset and some new experimental principles, so don't consider it SOTA.

GGML quants:

https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGML

Make sure to use the correct RoPE scaling settings: -c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25
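
For example, with the llama-cpp-python bindings these flags map onto constructor arguments. This is a minimal sketch; the model filename is a placeholder for whichever quant file you actually download.

```python
from llama_cpp import Llama

# Load a GGML quant with 16k context and the RoPE scaling settings above.
# The model_path is a placeholder; point it at the quant file you downloaded.
llm = Llama(
    model_path="everythinglm-13b-16k.ggmlv3.q4_K_M.bin",
    n_ctx=16384,            # -c 16384
    rope_freq_base=10000,   # --rope-freq-base 10000
    rope_freq_scale=0.25,   # --rope-freq-scale 0.25
)

# Use the prompt format described below.
output = llm("You are a helpful AI assistant.\n\nUSER: Hello!\nASSISTANT:", max_tokens=128)
print(output["choices"][0]["text"])
```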

GPTQ quants:

https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ

Notable features:

Prompt format:

It is a modified Vicuna format, the same one used in many of ehartford's models.

You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
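
In code, that template can be assembled like this. This is a minimal sketch; the helper function name is my own and not part of the card.

```python
def build_prompt(user_message: str,
                 system_message: str = "You are a helpful AI assistant.") -> str:
    # Modified Vicuna format: system line, blank line, then USER/ASSISTANT turns.
    return f"{system_message}\n\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Summarize the plot of Moby-Dick in two sentences."))
```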

Training took about 1 hour using QLoRA on 1xA100, so this model can be recreated for about $3. The QLoRA adapter can be found here: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-peft.
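
If you want to work with the adapter directly, it can be attached to a base checkpoint with the peft library. This is a minimal sketch; the base model ID below is a placeholder, and you should substitute the 16k-context Llama-2 13B base the adapter was trained against.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder: the 16k-context Llama-2 13B base model the adapter was trained on.
base_id = "path-or-hub-id-of-llama-2-13b-16k-base"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the published QLoRA adapter.
model = PeftModel.from_pretrained(base_model, "totally-not-an-llm/EverythingLM-13b-peft")
```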

Model quirks:

Future plans: