# Llama-2-7b QLoRA Polish Instruct

This repo contains a QLoRA adapter for Llama-2-7b, trained on 1B tokens (available here) and subsequently fine-tuned on a private instruction dataset written exclusively in Polish.

The fine-tuning took 1 hour on a single RTX 4090 with the following hyperparameters:

This adapter allows the model to generate Polish more fluently and accurately than vanilla Llama-2-7b.

<p align="center"> <img src="https://huggingface.co/Azurro/llama-2-7b-qlora-polish-instruct/raw/main/llama-2-7b-qlora-polish-instruct.jpg"> </p>