This is a multilingual BERT model fine-tuned on 4,000 examples from the NoReC dataset, where reviews scored 1 or 2 were labeled negative and reviews scored 5 or 6 were labeled positive. The model was fine-tuned for 2 epochs with the following parameters:
- learning_rate = 3e-05
- warmup_ratio = 0.1
- optim = 'adamw_hf'
- weight_decay = 0.1
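The score-to-label mapping described above can be sketched as a small helper. Note the handling of scores 3 and 4 is an assumption: the card does not say how mid-range reviews were treated, so this sketch simply excludes them.

```python
def label_from_score(score: int):
    """Map a NoReC review score (1-6) to a binary sentiment label.

    Scores 1-2 -> "negative", scores 5-6 -> "positive".
    Scores 3-4 are assumed to be dropped from the fine-tuning set
    (not stated in the model card), so they map to None here.
    """
    if score in (1, 2):
        return "negative"
    if score in (5, 6):
        return "positive"
    return None  # mid-range score: excluded under this assumption

# Example: filter a list of (text, score) pairs into labeled examples
reviews = [("Elendig film", 1), ("Helt grei", 4), ("Fantastisk!", 6)]
labeled = [(text, label_from_score(s)) for text, s in reviews
           if label_from_score(s) is not None]
```

With the sample data above, `labeled` keeps only the score-1 and score-6 reviews, mirroring the binary setup used for fine-tuning.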