Model Summary
This is a base causal model extended from bigscience/bloomz-3b.
- Model size: 3.02B parameters (~20M more than the base model)
- The tokenizer is extended to support Swedish. An additional 8,068 tokens, trained from Swedish Wikipedia and OSCAR, have been added, and the embedding layer is extended accordingly.
- The embedding layer and self-attention query_key_value layers are re-trained on mixed English and Swedish corpora (see the sketch after this list).
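The extension and selective re-training described above can be sketched roughly as follows with the Hugging Face transformers API. This is an illustrative outline only, not the actual training script: the new-token list is a placeholder, and the module-name matching assumes the standard BLOOM implementation in transformers.

```python
# Illustrative sketch only -- NOT the exact script used to produce this model.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-3b")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-3b")

# Extend the vocabulary with Swedish tokens (placeholder list; the real model
# adds 8,068 tokens learned from Swedish Wikipedia and OSCAR).
new_swedish_tokens = ["exempel_token_1", "exempel_token_2"]  # hypothetical
tokenizer.add_tokens(new_swedish_tokens)

# Grow the embedding layer to match the new vocabulary size.
model.resize_token_embeddings(len(tokenizer))

# Freeze everything except the embedding layer and the self-attention
# query_key_value projections, i.e. the parts re-trained on the mixed
# Swedish/English corpora.
for name, param in model.named_parameters():
    param.requires_grad = (
        "word_embeddings" in name or "self_attention.query_key_value" in name
    )
```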
Intended Use
This model was created to enable the use of Swedish and English with LLMs for public research and business use cases. It is intended for language generation or as a pretrained base model, and needs to be further fine-tuned for specific tasks.
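A minimal generation sketch is shown below, assuming the standard transformers text-generation API; the repo id is a placeholder and should be replaced with this model's actual Hugging Face identifier.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "your-org/your-swedish-bloom-model"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Plain causal generation; the model is a base LM, so expect continuations,
# not instruction following.
prompt = "Stockholm är huvudstaden i"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```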
The model inherits the bigscience-bloom-rail-1.0 license from the base model and must NOT be used for harmful purposes. For use restrictions, please see the RAIL License, Use Restrictions, Appendix A.
Training Corpora:
The model is re-trained on ~800M Swedish tokens and ~260M English tokens.
Notes:
- Since the model is re-trained only on Swedish and English, it appears that only Swedish and English capabilities are retained. If you want to re-enable capabilities in other languages, you will need to re-train it on data in those languages.
- Note that the base model, bloomz-3b, is an instruction fine-tuned version of bloom. After this re-training, the model appears to have lost that instruction-following capability as well, so it is now simply a base causal model that can generate Swedish.