
smol_llama-81M-tied

A small decoder-only model with 81M total parameters, kept compact by tying the input and output embeddings. This is the first version of the model.
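
Weight tying means the output projection (LM head) reuses the input embedding matrix instead of learning a separate one, so that block of parameters is counted once. A minimal PyTorch sketch of the idea, with illustrative sizes that are not this model's actual config:

```python
import torch.nn as nn

# Illustrative sizes only -- not this model's actual config values.
vocab_size, hidden_size = 32000, 768

embed = nn.Embedding(vocab_size, hidden_size)             # input embeddings
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)  # output projection

# Tie the weights: both layers now share one vocab_size x hidden_size matrix,
# so those ~24.6M parameters are counted once instead of twice.
lm_head.weight = embed.weight

unique = {id(p): p.numel() for p in (*embed.parameters(), *lm_head.parameters())}
print(sum(unique.values()))  # 24576000, not 49152000
```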

Notes

This checkpoint is the 'raw' pre-trained model and has not been tuned for a more specific task. In most cases it should be fine-tuned before use.
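
A minimal sketch of loading the checkpoint with transformers, assuming a Hugging Face repo id of `BEE-spoke-data/smol_llama-81M-tied` (substitute the actual path for this checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id -- replace with the actual Hugging Face path if it differs.
model_id = "BEE-spoke-data/smol_llama-81M-tied"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The input and output embeddings should be the same tensor in a tied model.
assert model.get_input_embeddings().weight is model.get_output_embeddings().weight

# Raw base-model generation: expect generic continuations, not instruction following.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same `model` object can be passed directly to a fine-tuning loop (e.g. the transformers `Trainer`) for task-specific adaptation.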