# Llama-2-7b QLoRA adapter (Polish)

This repo contains a QLoRA adapter for Llama-2-7b, trained on 1B tokens of Polish-only text.
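For intuition, a QLoRA adapter trains small low-rank matrices on top of a (quantized) frozen base model; at inference the effective weight is the base weight plus a scaled low-rank update, W + (alpha/r)·BA. A minimal NumPy sketch of that update (dimensions, rank, and scaling below are illustrative, not the actual training configuration):

```python
# Conceptual sketch of how a LoRA-style adapter modifies a frozen weight.
# All sizes and hyperparameters here are illustrative examples.
import numpy as np

d, r, alpha = 8, 2, 16              # hidden size, adapter rank, scaling (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # frozen base weight
A = rng.standard_normal((r, d))     # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized, so the adapter starts as a no-op

W_adapted = W + (alpha / r) * (B @ A)
assert np.allclose(W_adapted, W)    # before training, output matches the base model
```

Only A and B are updated during training, which is why the adapter is tiny compared to the 7B base model.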

Training took 20 days on a single RTX 4090.

This adapter lets the model generate Polish more accurately and fluently than vanilla Llama-2-7b.

<p align="center"> <img src="https://huggingface.co/Azurro/llama-2-7b-qlora-polish/raw/main/llama-2-7b-qlora-pl.jpg"> </p>