## Model Info

Merge of my VicUnlocked-alpaca-half-30b LoRA

Important note: although this model was trained on a cleaned ShareGPT dataset like the one Vicuna used, it was trained with the Alpaca prompt format, so prompts should look like this:

### Instruction:

<prompt> (without the angle brackets)

### Response:
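
For example, here is a minimal sketch of querying the merged model with Hugging Face `transformers` using the prompt format above. The repo id is an assumption; point it at wherever the merged weights are actually hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aeala/VicUnlocked-alpaca-30b"  # assumed/hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the Alpaca-format prompt described above.
instruction = "Explain the difference between a LoRA and a full fine-tune."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```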

## Benchmarks

| Dataset   | Perplexity         |
|-----------|--------------------|
| wikitext2 | 4.372413635253906  |
| ptb-new   | 24.69171714782715  |
| c4-new    | 6.469308853149414  |

Results were generated with the GPTQ evaluation scripts (run on the unquantized model), thanks to Neko-Institute-of-Science.
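
For reference, the sketch below outlines a standard non-overlapping-window perplexity evaluation on wikitext2 of the kind those scripts perform. It is a generic reconstruction under stated assumptions (repo id, 2048-token LLaMA context), not the exact eval harness used here.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aeala/VicUnlocked-alpaca-30b"  # assumed/hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model.eval()

# Concatenate the wikitext2 test split into one long token stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

seq_len = 2048  # assumed LLaMA context window
nlls = []
for begin in range(0, ids.size(1) - seq_len, seq_len):
    chunk = ids[:, begin : begin + seq_len].to(model.device)
    with torch.no_grad():
        # With labels == input_ids, transformers returns mean NLL per token.
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss * seq_len)

ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seq_len))
print(f"wikitext2 perplexity: {ppl.item():.4f}")
```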