LoRA fine-tune of facebook/xglm-7.5B on the Thaweewat/alpaca-cleaned-52k-th dataset.

Prompt template

### Question: {instruction}
{input}
### Answer: 
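A minimal sketch of how one record from the dataset could be rendered into this template. The field names instruction, input, and output follow the usual Alpaca schema; the helper name format_prompt is hypothetical and not taken from the original training script.

def format_prompt(example: dict) -> str:
    """Render one Alpaca-style record into the prompt template above."""
    prompt = f"### Question: {example['instruction']}\n"
    # The input field is optional in Alpaca-style data; include it only when present.
    if example.get("input"):
        prompt += f"{example['input']}\n"
    prompt += "### Answer: "
    # For training, the target text is appended after the answer tag.
    return prompt + example.get("output", "")
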
from peft import LoraConfig

# LoRA configuration applied to the attention and feed-forward
# projections of facebook/xglm-7.5B.
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "out_proj",
        "fc1",
        "fc2",
    ],
)
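
A hedged sketch of how this config would typically be applied: load the base model, then wrap it with get_peft_model from peft. The variable names and the device_map choice are assumptions; the original run may also have loaded the model with the bitsandbytes quantization mentioned under Training procedure.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model

# Assumed loading step; the actual training script may differ.
base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/xglm-7.5B",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-7.5B")

# Wrap the base model with the LoRA adapters defined above; only the
# adapter weights are trainable.
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()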

Training procedure

A bitsandbytes quantization config was used during training.

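The config values themselves are not listed in this card. Purely as an illustration of what a 4-bit QLoRA-style bitsandbytes setup for a model of this size commonly looks like, a sketch is given below; every value in it is an assumption, not the recorded training config.

import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit config; NOT the values actually used for this run.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# It would be passed when loading the base model, e.g.:
# AutoModelForCausalLM.from_pretrained("facebook/xglm-7.5B", quantization_config=bnb_config)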

Framework versions