Dataset

Training procedure

A bitsandbytes quantization config was used during training.
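Since the config values themselves are missing from the card, here is a minimal sketch of what such a config typically looks like; the 8-bit setting is an assumption, not the recorded value.

from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,  # assumption: 8-bit loading; load_in_4bit is the other common choice
)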

The following LoraConfig was used during training:

from peft import LoraConfig

config = LoraConfig(
    r=16,  # rank of the low-rank update matrices
    lora_alpha=32,  # alpha scaling factor
    # target_modules=["q_proj", "v_proj"],  # uncomment to restrict LoRA to specific modules
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",  # use "SEQ_2_SEQ_LM" for sequence-to-sequence models
)
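
Below is a minimal sketch of how this config is typically attached to a base model with peft's get_peft_model; the base model name is a placeholder, since the card does not state which model was used.

from transformers import AutoModelForCausalLM
from peft import get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model name
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()  # reports trainable vs. total parameter counts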

Framework versions