About

The model was trained with LoRA for 3 epochs on the alpaca-gpt4 dataset, with the base model's weights quantized to 4-bit during training.
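The card does not record the LoRA hyperparameters that were used. As a minimal sketch only, a typical adapter configuration with the `peft` library looks like the following; the rank, alpha, dropout, and target module names here are illustrative assumptions, not the values used for this model:

```python
from peft import LoraConfig

# Illustrative LoRA adapter config (hypothetical values, not this model's
# actual hyperparameters): low-rank update matrices of rank r are injected
# into the named attention projection layers of the frozen base model.
lora_config = LoraConfig(
    r=16,                                # rank of the low-rank update
    lora_alpha=32,                       # scaling factor for the update
    lora_dropout=0.05,                   # dropout applied to LoRA layers
    target_modules=["q_proj", "v_proj"], # which layers receive adapters
    task_type="CAUSAL_LM",               # causal language modeling task
)
```

Only the adapter parameters are trained; the 4-bit base weights stay frozen, which is what keeps memory use low.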

Training procedure

The following bitsandbytes quantization config was used during training:
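The concrete config values are not preserved in this card. A common 4-bit `bitsandbytes` setup for LoRA fine-tuning, shown here purely as an assumed example (the quant type, compute dtype, and double-quantization flag may have differed), is:

```python
import torch
from transformers import BitsAndBytesConfig

# Example 4-bit quantization config (assumed values, not necessarily
# those used for this model): weights are stored in 4-bit NF4 format
# and dequantized to float16 for the forward/backward compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
```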

Framework versions