### LoraConfig arguments

```python
from peft import LoraConfig

config = LoraConfig(
    r=32,
    lora_alpha=64,
    # alternative kept from the original script: target the decoder attention
    # projections via a regex instead of a module list, e.g.
    # target_modules=".decoder.(self_attn|encoder_attn).*(q_proj|v_proj)$",
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
```
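As a minimal sketch of how this config is applied, the model is wrapped with `peft.get_peft_model`. The `openai/whisper-small` checkpoint below is a placeholder assumption; this card does not name the base model:

```python
from peft import get_peft_model
from transformers import AutoModelForSpeechSeq2Seq

# Placeholder base model; substitute the checkpoint this adapter was trained on.
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-small")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the injected LoRA matrices remain trainable
```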

### Training arguments

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="temp",  # change to a repo name of your choice
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-3,
    warmup_steps=10,
    max_steps=400,
    # evaluation_strategy="steps",
    fp16=True,
    per_device_eval_batch_size=8,
    # generation_max_length=128,
    eval_steps=100,
    logging_steps=25,
    remove_unused_columns=False,  # required as the PeftModel forward doesn't have the signature of the wrapped model's forward
    label_names=["label"],  # same reason as above
)
```
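These arguments are then handed to a `transformers.Trainer`. A sketch only, assuming `train_dataset`, `eval_dataset`, and `data_collator` are built elsewhere in the training script:

```python
from transformers import Trainer

# Sketch: train_dataset, eval_dataset, and data_collator are placeholders
# for objects constructed elsewhere in the training script.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)
trainer.train()
```

Note that `eval_steps=100` only takes effect when step-based evaluation is enabled; `evaluation_strategy="steps"` is left commented out above, as it was in the original script.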

## Training procedure

The following `bitsandbytes` quantization config was used during training:
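Purely as an illustration of how such a config is passed at load time (the `load_in_8bit=True` flag and the checkpoint name below are assumptions, not the recorded configuration), an 8-bit load looks like:

```python
from peft import prepare_model_for_kbit_training
from transformers import AutoModelForSpeechSeq2Seq, BitsAndBytesConfig

# Illustrative values only; not the recorded quantization config for this model.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-small",  # placeholder checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts norms and enables input grads for stable k-bit training
```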

### Framework versions