## Quickstart

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "ybelkada/mpt-7b-guanaco-qlora", load_in_4bit=True
)
tok = AutoTokenizer.from_pretrained("ybelkada/mpt-7b-guanaco-qlora")
```
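Once loaded, the model can be prompted with the Guanaco-style chat format. A minimal generation sketch follows; the `build_prompt` helper and the `### Human: ... ### Assistant:` template are assumptions based on the original Guanaco dataset, not documented behavior of this checkpoint, and `generate` will download the full base model on first call:

```python
def build_prompt(user_message: str) -> str:
    """Format a message in the Guanaco chat template.

    Assumption: this checkpoint follows the standard Guanaco
    "### Human: ... ### Assistant:" format from the original dataset.
    """
    return f"### Human: {user_message}\n### Assistant:"


def generate(user_message: str, max_new_tokens: int = 128) -> str:
    """Load the adapter and generate a reply (downloads ~7B of weights)."""
    # Imports deferred so build_prompt() works without peft/transformers.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        "ybelkada/mpt-7b-guanaco-qlora", load_in_4bit=True
    )
    tok = AutoTokenizer.from_pretrained("ybelkada/mpt-7b-guanaco-qlora")

    inputs = tok(build_prompt(user_message), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```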

## Training procedure

The following bitsandbytes quantization config was used during training:
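The config values themselves are not reproduced in this card. For reference, a typical QLoRA 4-bit bitsandbytes setup looks like the following; these values are illustrative, not necessarily the ones used for this checkpoint:

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative QLoRA-style 4-bit config -- NOT the recorded values for
# this checkpoint, which are not reproduced here.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4, standard for QLoRA
    bnb_4bit_use_double_quant=True,      # also quantize quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```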

## Framework versions