# hmByT5 - Preliminary Language Models
Preliminary Historic Multilingual and Monolingual ByT5 Models. The following languages are currently covered:
- English (British Library Corpus - Books)
- German (Europeana Newspaper)
- French (Europeana Newspaper)
- Finnish (Europeana Newspaper)
- Swedish (Europeana Newspaper)
- Dutch (Delpher Corpus)
More details can be found in our GitHub repository.
## Pretraining
We use the official JAX/FLAX example in Hugging Face Transformers to pretrain a ByT5 model on a single v3-8 TPU. Details about the training can be found here.
The model was trained for 0.5 epochs.
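As a minimal sketch of how the resulting checkpoint can be used (assuming the model is published on the Hugging Face Hub under the identifier shown in the evaluation table below), the Flax weights load directly with the Flax model classes:

```python
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

model_name = "hmbyt5-preliminary/byt5-small-historic-multilingual-flax"

# ByT5 operates directly on UTF-8 bytes, so no vocabulary file is involved
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = FlaxT5ForConditionalGeneration.from_pretrained(model_name)

# Encode a sample sentence into byte-level input ids
inputs = tokenizer("Historic text example.", return_tensors="np")
print(inputs["input_ids"].shape)
```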
## Evaluation on Downstream Tasks (NER)
We evaluated the hmByT5 model on the following downstream NER datasets:
| Model | English AjMC | German AjMC | French AjMC | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | Avg. |
|---|---|---|---|---|---|---|---|---|
| hmbyt5-preliminary/byt5-small-historic-multilingual-flax | 83.28 ± 1.67 | 86.98 ± 0.71 | 83.49 ± 1.06 | 76.96 ± 1.58 | 78.80 ± 1.89 | 86.47 ± 0.79 | 77.43 ± 0.51 | 81.92 |
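The exact fine-tuning setup is documented in the GitHub repository; as one illustrative sketch (not necessarily the setup used for the numbers above), a token-classification head can be placed on top of the ByT5 encoder to label each byte position. The class below is hypothetical; the `from_flax=True` flag converts the published Flax checkpoint to PyTorch on the fly:

```python
import torch.nn as nn
from transformers import T5EncoderModel

class ByT5ForTokenClassification(nn.Module):
    """Hypothetical per-byte token-classification head on the ByT5 encoder."""

    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        # from_flax=True converts the Flax weights to PyTorch when loading
        self.encoder = T5EncoderModel.from_pretrained(model_name, from_flax=True)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        return self.classifier(hidden)  # logits per byte position
```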
## Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️