Training procedure

This is a Falcon 1B model fine-tuned on the MichaelAI23/hotel_requests dataset. Training used LoRA (low-rank adaptation) in combination with 8-bit quantization.
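As a sketch of how such a setup is typically wired together with the `transformers` and `peft` libraries (the base checkpoint name, LoRA rank, alpha, dropout, and target modules below are illustrative assumptions, not the settings actually used for this model):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed base checkpoint; the card does not name the exact base repo
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-rw-1b",
    load_in_8bit=True,   # 8-bit loading via bitsandbytes
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter settings (rank, alpha, dropout, and targets are illustrative)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With this setup, only the small LoRA adapter matrices receive gradients while the quantized base weights stay frozen, which is what makes fine-tuning a 1B-parameter model feasible on modest hardware.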

The following bitsandbytes quantization config was used during training:
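The config block itself appears to be missing from the card. A typical 8-bit bitsandbytes configuration for this kind of setup looks like the following (the values shown are common defaults given as an assumption, not the values actually used in training):

```python
from transformers import BitsAndBytesConfig

# Illustrative 8-bit quantization config; the card's actual values were not preserved
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,          # default outlier threshold for LLM.int8()
    llm_int8_has_fp16_weight=False,  # keep weights in int8, not mixed fp16
)
```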

Framework versions