NB-BERT fine-tuned on IMDB
Description
This model is based on the pre-trained NB-BERT-large model and is fine-tuned for sentiment analysis. The idea behind this model was to check whether a language model pre-trained mostly on Norwegian (with approximately 4% English) could learn a downstream Norwegian task when seeing only English examples during fine-tuning.
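A minimal usage sketch with the `transformers` pipeline API. The checkpoint id below is a placeholder assumption, not a confirmed identifier from this card; substitute the actual repository id of this model.

```python
from transformers import pipeline

# Hypothetical checkpoint id -- replace with this model's actual repo id.
classifier = pipeline(
    "sentiment-analysis",
    model="NbAiLab/nb-bert-large-sentiment",
)

# The model was fine-tuned on English reviews but is intended to
# transfer to Norwegian input as well.
result = classifier("Denne filmen var helt fantastisk!")
print(result)
```

The pipeline returns a list of dicts, each with a `label` and a `score` field.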
Data for fine-tuning
This model was fine-tuned on 1000 examples from the IMDB train split that belonged to the screen category. Training lasted 3 epochs with a learning rate of 5e-5. The code used to create this model (and some additional models) can be found on GitHub.