LoRA weights for LLaMA-7B, fine-tuned in Chinese on the Stanford Alpaca training set, trained with PEFT's `PeftModel` LoRA adapters.


Usage

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

model = PeftModel.from_pretrained(
    model,
    "yinfupai/alpaca_7b_zh",
    torch_dtype=torch.float16,
)
```
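After loading, you can build an instruction prompt and generate a response. A minimal sketch follows; the Alpaca-style prompt template here is an assumption (the card does not state the exact template used during training), and `build_prompt` is a hypothetical helper, not part of this repository.

```python
# Hypothetical Alpaca-style prompt template (an assumption; the exact
# template used in training is not specified by this card).
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("请用python给出斐波纳契数列的代码")

# Generation (requires the tokenizer/model loaded as shown above):
# import torch
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# with torch.no_grad():
#     out = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(out[0], skip_special_tokens=True))
```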


Example

Prompt: 请用python给出斐波纳契数列的代码 (Please write Python code for the Fibonacci sequence.)

```python
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)
```

Known issues


License: other