LLaMA 7B model finetuned on the Platypus 25k dataset.

LLaMA 7B is a large language model trained on one trillion tokens. It takes a sequence of words as input and predicts the next word, repeating this step to recursively generate text. LLaMA-7B is the base model with 7 billion parameters; the LLaMA family is available in parameter sizes ranging from 7B to 65B.
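
Below is a minimal sketch of this autoregressive generation loop using the Hugging Face `transformers` library. The model id and the instruction-style prompt format are placeholders and assumptions, not part of this repository; substitute the actual checkpoint path and prompt template for the finetuned model.

```python
# Minimal sketch: autoregressive text generation with a causal LM.
# The model id below is a placeholder; replace it with the actual checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/llama-7b-platypus"  # hypothetical path, not an official repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumed instruction-style prompt; adjust to the template used during finetuning.
prompt = "### Instruction:\nExplain what a large language model is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# generate() predicts one token at a time, appending each prediction to the
# input and repeating until max_new_tokens or an end-of-sequence token.
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```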