smol_llama-81M-tied
A small decoder-only model with 81M total parameters, made possible by tying the input and output embeddings. This is the first version of the model; a rough configuration sketch follows the spec list below.
- 768 hidden size, 6 layers
- standard multi-head attention (24 heads), context length 1024
- input/output embeddings are tied
- trained from scratch
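As a rough illustration of the dimensions above, the snippet below builds an equivalent `LlamaConfig` with the Hugging Face `transformers` library. The `intermediate_size` and `vocab_size` values are assumptions for illustration only, not values taken from this card.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Sketch of the architecture described above; intermediate_size and
# vocab_size are illustrative assumptions, not confirmed values.
config = LlamaConfig(
    hidden_size=768,               # hidden size from the spec list
    num_hidden_layers=6,           # 6 transformer layers
    num_attention_heads=24,        # standard multi-head attention, 24 heads
    num_key_value_heads=24,        # equal to num_attention_heads -> plain MHA, not GQA
    max_position_embeddings=1024,  # context length 1024
    intermediate_size=3072,        # assumed MLP width (4x hidden size)
    vocab_size=32128,              # assumed tokenizer vocabulary size
    tie_word_embeddings=True,      # input/output embeddings are tied
)

model = LlamaForCausalLM(config)
print(f"total params: {model.num_parameters():,}")
```

With these assumed values, the tied embedding matrix is counted once (~24.7M parameters) and the 6 transformer layers contribute roughly 57M, which lands near the 81M total.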
Notes
This checkpoint is the 'raw' pretrained model and has not been fine-tuned for any specific task. It should be fine-tuned before use in most cases; a minimal loading sketch appears after the links below.
- A slightly larger 101M-parameter pretrained version that uses grouped-query attention (GQA): here
- For the chat version of this model, please see here
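A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub; the `repo_id` below is a placeholder, not a confirmed path. Since this is the raw pretrained model, expect generic text continuations rather than chat-style answers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual hub path for this checkpoint.
repo_id = "your-org/smol_llama-81M-tied"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Plain language-model continuation of a prompt.
inputs = tokenizer("My favorite movie is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```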