# Llama-2-13b-guanaco
📝 Article | 💻 Colab | 📄 Script
<center><img src="https://i.imgur.com/C2x7n2a.png" width="300"></center>
This is a `llama-2-13b-chat-hf` model fine-tuned using QLoRA (4-bit precision) on the `mlabonne/guanaco-llama2` dataset.
## 🔧 Training
It was trained in a Google Colab notebook on a single T4 GPU with the high-RAM runtime.
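The full training code is in the Colab notebook linked above. As a rough orientation only, a comparable QLoRA run with `trl`'s `SFTTrainer` could be set up as in the sketch below; the hyperparameters, output path, and the `text` column name are illustrative assumptions rather than the exact values used, and the argument names follow the trl releases current when this card was written:

```python
# pip install transformers peft bitsandbytes trl datasets accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-13b-chat-hf"

# Load the instruction dataset named in this card
dataset = load_dataset("mlabonne/guanaco-llama2", split="train")

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 has no pad token by default

# Trainable low-rank adapters; r/alpha/dropout here are illustrative guesses
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes prompts live in a "text" column
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4),
)
trainer.train()
```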
## 💻 Usage
```python
# pip install transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "mlabonne/llama-2-13b-miniguanaco"
prompt = "What is a large language model?"

# The tokenizer is loaded separately to pass its EOS token id to generation
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the prompt in Llama 2's [INST] ... [/INST] chat template
sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```