smol_llama-101M-GQA
A small 101M-parameter (total) decoder-only model. This is the first version of the model.
- 768 hidden size, 6 layers
- GQA (grouped-query attention): 24 query heads, 8 key/value heads; context length 1024
- trained from scratch (a configuration sketch follows this list)
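
A minimal sketch of a `transformers` `LlamaConfig` mirroring the sizes listed above. Fields the card does not state (vocabulary size, FFN width, etc.) are left at library defaults here, so the parameter count will only be a rough ballpark of the released checkpoint.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Sizes taken from the bullet list above; everything else is a library default.
config = LlamaConfig(
    hidden_size=768,               # 768 hidden size
    num_hidden_layers=6,           # 6 layers
    num_attention_heads=24,        # 24 query heads (GQA)
    num_key_value_heads=8,         # 8 key/value heads (GQA)
    max_position_embeddings=1024,  # context length 1024
)

model = LlamaForCausalLM(config)
# Rough count only; the real checkpoint's vocab/FFN sizes differ from the defaults used here.
print(f"{model.num_parameters() / 1e6:.0f}M parameters")
```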
Notes
This checkpoint is the 'raw' pre-trained model and has not been tuned for any specific task. In most cases it should be fine-tuned before use; a minimal loading sketch is shown below.
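
A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under a repo id like `BEE-spoke-data/smol_llama-101M-GQA` (adjust to the actual repo path if it differs).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; replace with the actual Hub path for this checkpoint.
repo_id = "BEE-spoke-data/smol_llama-101M-GQA"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The raw pre-trained model only continues text; it is not instruction-tuned.
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```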
- A smaller 81M-parameter checkpoint with tied input/output embeddings is available here.
- For the chat-tuned version of this model, see here.