Dataset

Training procedure

LoraConfig

The adapter was trained with the following PEFT LoraConfig:

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,                  # rank of the LoRA update matrices
    lora_alpha=16,        # alpha scaling factor for the LoRA updates
    lora_dropout=0.1,     # dropout applied to the LoRA layers
    bias="none",          # keep bias parameters frozen
    task_type="SEQ_CLS",  # use "CAUSAL_LM" or "SEQ_2_SEQ_LM" for those tasks instead
)
```

Framework versions