<!-- header start --> <div style="width: 100%;"> <img src="https://digitalpipelines.net/assets/images/logos/dp_logo_transparent.png" alt="Digital Pipelines" style="width: 10%; min-width: 200px; display: block; margin: auto;"> </div> <!-- header end -->
# Overview
Fine-tuned Llama-2 13B trained on the uncensored/unfiltered Wizard-Vicuna conversation dataset `digitalpipelines/wizard_vicuna_70k_uncensored`. A QLoRA adapter was trained for the fine-tune and then merged back into the base model. Note that Llama-2 retains inherent bias even though it has been fine-tuned on an uncensored dataset.
## Available versions of this model
- GPTQ model for GPU usage, with multiple quantisation options available.
- GGML models in various quantisation sizes for CPU/GPU/Apple M1 usage.
- Original unquantised model.
## Prompt template: Llama-2-Chat
```
SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
```
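The template above can be filled programmatically before sending text to the model. A minimal sketch in plain Python (the `build_prompt` helper is illustrative, not part of any library; the system message is the default shown above):

```python
# Default system message from the Llama-2-Chat template above.
SYSTEM_MESSAGE = (
    "You are a helpful, respectful and honest assistant. Always answer as "
    "helpfully as possible, while being safe. Your answers should not include "
    "any harmful, unethical, racist, sexist, toxic, dangerous, or illegal "
    "content. Please ensure that your responses are socially unbiased and "
    "positive in nature. If a question does not make any sense, or is not "
    "factually coherent, explain why instead of answering something not "
    "correct. If you don't know the answer to a question, please don't share "
    "false information."
)

def build_prompt(prompt: str, system: str = SYSTEM_MESSAGE) -> str:
    """Substitute the user's text into the SYSTEM/USER/ASSISTANT template."""
    return f"SYSTEM: {system}\nUSER: {prompt}\nASSISTANT:"

full_prompt = build_prompt("What is QLoRA?")
```

The resulting string ends with `ASSISTANT:` so the model's completion continues directly as the assistant's reply.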