This is a Chinese instruction-tuned LoRA checkpoint based on llama-7B, produced as part of this repo's work.

We use 50k Chinese instruction examples, combining the alpaca_chinese_instruction_dataset with the Chinese conversation data from the sharegpt-90k dataset. We finetune the model for 3 epochs on a single 4090 with ctxlen=2048.
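For reference, the snippet below is a minimal sketch of how such a LoRA fine-tune can be set up with `peft`. The hyperparameters (rank, alpha, target modules, dropout) are illustrative assumptions, not the exact settings used for this checkpoint; see the train-args linked below for the real values.

```python
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model to attach LoRA adapters to.
base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical LoRA hyperparameters, for illustration only.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```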

You can use it like this:

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base llama-7B model in 8-bit to reduce memory usage.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the Chinese-Vicuna LoRA weights on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```
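Continuing from the snippet above, here is a minimal inference sketch. The generation settings are assumed defaults, and the raw prompt is only illustrative; the prompt template actually used in training may format instructions differently (see the repo for the exact format).

```python
import torch
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model.eval()

# Illustrative prompt; the training prompt template may differ.
prompt = "请介绍一下中国的首都。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```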

We provide the train-args and train-log here.