This is a distilled multilingual BERT model fine-tuned on 4,000 examples from the NoReC dataset, where reviews rated 1 or 2 were labeled negative and those rated 5 or 6 were labeled positive. The model was fine-tuned for 3 epochs with the following parameters:
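The labeling rule described above can be sketched as follows. This is an illustrative helper, not the actual preprocessing script; the function name and the assumption that mid-range ratings (3–4) are dropped are mine, inferred from the fact that only ratings 1–2 and 5–6 are mapped to labels:

```python
def label_from_rating(rating: int):
    """Map a NoReC review rating (1-6) to a binary sentiment label.

    Ratings 1-2 -> "negative", ratings 5-6 -> "positive".
    Ratings 3-4 are assumed to be excluded from fine-tuning.
    """
    if rating in (1, 2):
        return "negative"
    if rating in (5, 6):
        return "positive"
    return None  # neutral middle: not used

# Example: filter and label a small batch of (rating, text) pairs
examples = [(1, "Svak film."), (4, "Helt grei."), (6, "Fantastisk!")]
labeled = [
    (text, label_from_rating(rating))
    for rating, text in examples
    if label_from_rating(rating) is not None
]
```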