GenZ 13B

GenZ 13B is an instruction-finetuned model with a 4K input length, finetuned on top of the pretrained LLaMA 2 base model.

Inference

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model weights in bfloat16.
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b", torch_dtype=torch.bfloat16)

# Tokenize a prompt and generate a completion.
inputs = tokenizer("The world is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
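
The snippet above runs on CPU by default. As a minimal sketch for placing the model on a GPU instead, device_map="auto" can be passed when loading (this is an assumption about your setup; it requires the accelerate package and available CUDA memory):

model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/genz-13b",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # needs the accelerate package installed
)
# Move the tokenized inputs to the same device as the model.
inputs = tokenizer("The world is", return_tensors="pt").to(model.device)
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))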

Use the following prompt template:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi, how are you? ASSISTANT: 
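
As a minimal sketch of using the template, the user message is inserted after "USER:" and the model completes after "ASSISTANT: " (this reuses the tokenizer and model from the inference example; the generation settings here are assumptions):

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message):
    # Fill the template; the model continues after "ASSISTANT: ".
    return f"{SYSTEM} USER: {user_message} ASSISTANT: "

prompt = build_prompt("Hi, how are you?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
sample = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(sample[0], skip_special_tokens=True))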

Finetuning

python finetune.py \
   --model_name meta-llama/Llama-2-13b \
   --data_path dataset.json \
   --output_dir output \
   --trust_remote_code \
   --prompt_column instruction \
   --response_column output
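
The --prompt_column and --response_column flags above imply a JSON file of records with "instruction" and "output" fields. A minimal sketch of writing such a dataset.json (the example rows are hypothetical):

import json

# Each record pairs a prompt ("instruction") with its target response ("output").
examples = [
    {
        "instruction": "Summarize the benefits of unit testing in one sentence.",
        "output": "Unit tests catch regressions early and document expected behavior.",
    },
]

with open("dataset.json", "w") as f:
    json.dump(examples, f, indent=2)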

Check the GitHub repository for the code -> GenZ