pretrained facebook meta pytorch llama llama-2

<!-- description start -->

Description

[THIS IS HIGHLY EXPERIMENTAL]

This repo contains quantized files of Llama-2-7b-Mistral, a sort of bootleg of Mistral with updated 2022/2023 data. It was made by applying a LoRA of extracted Mistral data at weight 1.0, and is meant to be used for merging/finetuning. I found it incredibly hard to work with Mistral directly at the moment, so I came up with this temporary idea.
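Conceptually, applying a LoRA at weight 1.0 means adding the low-rank update directly into the base weights: W' = W + scale · (B · A). A minimal pure-Python sketch of that merge, with illustrative names and toy shapes (not the actual tooling used for this repo):

```python
def lora_merge(W, A, B, scale=1.0):
    """Merge a LoRA update into a base weight matrix: W' = W + scale * (B @ A).

    W: base weights, shape (out_dim, in_dim)
    A: LoRA down-projection, shape (r, in_dim)
    B: LoRA up-projection, shape (out_dim, r)
    scale: merge weight (1.0, as used for this model)
    """
    out_dim, in_dim = len(W), len(W[0])
    r = len(A)
    merged = [row[:] for row in W]  # copy so the base weights stay intact
    for i in range(out_dim):
        for j in range(in_dim):
            delta = sum(B[i][k] * A[k][j] for k in range(r))
            merged[i][j] += scale * delta
    return merged
```

For example, with a rank-1 LoRA, `lora_merge([[1.0, 2.0], [3.0, 4.0]], A=[[1.0, 0.0]], B=[[0.5], [0.0]], scale=1.0)` returns `[[1.5, 2.0], [3.0, 4.0]]`. In practice this is what tools like PEFT's merge step do across every adapted weight matrix.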

Since the model was tinkered with via a LoRA, it can still output bad information, but it also contains Mistral's, which gives me hope that we can make better models from this base.

<!-- description end --> <!-- description start -->

Model and LoRA used

<!-- description end --> <!-- prompt-template start -->

Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
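When querying the model programmatically, the Alpaca template above can be filled in with a small helper. This is a sketch; the function name is my own, and the template text is exactly the one shown above:

```python
# Alpaca-style prompt template, as used by this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)
```

The model's completion is everything it generates after the trailing `### Response:` marker.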

Screenshots

(two animated GIF screenshots, not reproduced here)

If you want to support me, you can here.