This repository contains the PEFT adapter weights only; the base model is LLaMA 2 Chat. Instruction finetuning was done with 4-bit QLoRA on a single A100 GPU, using the PEFT config given below. The dataset used for this instruction finetuning is the cleaned Alpaca dataset.
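
Because only the adapter weights are published here, they must be loaded on top of the base model at inference time. The sketch below shows one way to do this with transformers and peft; the repo IDs are placeholders (the card does not name the exact base checkpoint or this adapter's repo), so substitute the actual identifiers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Placeholder IDs: the card only says "LLaMA 2 chat"; pick the matching checkpoint.
base_model_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "<this-adapter-repo>"

# Load the base model in 4-bit, mirroring how the adapter was trained.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT (LoRA) adapter weights to the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```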

Note that this model may underperform on some specific tasks compared to full finetuning, or compared to a different base model trained on more task-specific data.

Training procedure

The following bitsandbytes quantization config was used during training:
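
The exact values are not reproduced here. The snippet below is an illustrative sketch of a typical 4-bit QLoRA quantization setup (NF4 with double quantization); every parameter value shown is an assumption rather than the config actually used for this run.

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit QLoRA quantization config; all values are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the QLoRA default
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls during training
)
```

The PEFT config referenced above is likewise not reproduced in this section. A minimal sketch of a LoRA config for a LLaMA-style model follows; the rank, alpha, dropout, and target modules are common defaults, not the values actually used.

```python
from peft import LoraConfig

# Illustrative LoRA config for LLaMA-style attention; values are assumptions.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA attention
)
```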

Framework versions