QLoRA PEFT prompts

Training procedure

The following bitsandbytes quantization config was used during training:
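The exact values are not reproduced here; the sketch below shows a representative QLoRA 4-bit setup, where every value is an assumption based on common defaults rather than the recorded config:

from transformers import BitsAndBytesConfig
import torch

# Representative QLoRA 4-bit quantization settings (assumed, not the recorded config)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base model weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the QLoRA data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for computation
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

A config like this would be passed as quantization_config= when loading the base model with AutoModelForCausalLM.from_pretrained.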

Framework versions

from peft import PeftModel

# Adding back the trained LoRA adapters to the base Llama-2 model
# (model and tokenizer are the base Llama-2 model/tokenizer loaded earlier).
# PeftModel.from_pretrained restores the saved adapter weights; get_peft_model
# with a LoraConfig alone would attach fresh, untrained adapters instead.
model = PeftModel.from_pretrained(model, "Andyrasika/qlora-dialogue-summary")

text = "Summarize the following dialogue: ..."  # example input prompt
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=100, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
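Here max_new_tokens=100 caps the length of the generated summary, and repetition_penalty=1.2 mildly discourages the model from repeating tokens it has already produced, which helps keep dialogue summaries from looping.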