Training config

Rank-8 LoRA, trained for 2 epochs on the Stable-Diffusion-Prompts dataset with a batch size of 128.
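
A minimal sketch of what such a setup could look like with the `peft` and `transformers` APIs. Only the rank (8), the 2 epochs, and the effective batch size of 128 come from the description above; the alpha, dropout, task type, output directory, and per-device/accumulation split are assumptions for illustration.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Rank-8 LoRA adapter. Only r=8 is stated above; alpha, dropout,
# and task type are assumptions.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# 2 epochs with an effective batch size of 128. The split between
# per-device batch size and gradient accumulation is an assumption.
training_args = TrainingArguments(
    output_dir="lora-stable-diffusion-prompts",  # hypothetical name
    num_train_epochs=2,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 effective batch size
)
```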

Training procedure

A bitsandbytes quantization config was used during training; its exact values are not listed here.
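
As a rough sketch only, a common 4-bit bitsandbytes setup built with the `transformers` API looks like the block below. Every parameter value here is an assumption, not the config actually used for this model.

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit quantization config; all values are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # quantize weights to 4-bit
    bnb_4bit_quant_type="nf4",                # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,           # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,    # dtype used for compute
)
```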

Framework versions