RobCaamano/toxicity_weighted

This model is a fine-tuned version of DistilBERT Base Uncased (distilbert-base-uncased). The metrics logged during training are listed under Training results below.

Model description

Fine-tuned model that uses DistilBERT Base Uncased to detect types of toxic text. It predicts six labels: "toxic", "severe_toxic", "obscene", "threat", "insult" and "identity_hate".
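
The card does not include a usage snippet, so here is a minimal inference sketch using the Hugging Face transformers library. It assumes TensorFlow weights are hosted for this checkpoint (plausible given the Keras training setup) and that the six labels above are stored in the model config; both are assumptions, not confirmed by the card. A sigmoid is applied per label because this is a multi-label task.

```python
# Minimal multi-label inference sketch (assumptions: TF weights are hosted for
# this checkpoint and config.id2label holds the six toxicity labels).
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "RobCaamano/toxicity_weighted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I hope you have a great day!", return_tensors="tf", truncation=True)
logits = model(**inputs).logits

# Multi-label classification: an independent sigmoid per label, not a softmax.
probs = tf.sigmoid(logits)[0].numpy()
for i, p in enumerate(probs):
    label = model.config.id2label.get(i, f"label_{i}")  # generic name if id2label is unset
    print(f"{label}: {p:.3f}")
```

If only PyTorch weights turn out to be hosted, `TFAutoModelForSequenceClassification.from_pretrained(model_id, from_pt=True)` can convert them on the fly.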

Intended uses & limitations

Intended to classify text into the toxicity categories listed above when toxicity is detected. The model was trained on a small dataset in which some categories are underrepresented, so predictions for those categories may be less reliable.
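
The card does not say how this imbalance was handled, but the "weighted" suffix in the model name suggests a class-weighted loss. Purely as a hedged sketch under that assumption, per-label positive weights for a weighted binary cross-entropy could be derived from label frequencies as follows; the label matrix below is a placeholder, not the actual training data.

```python
# Hedged sketch: per-label positive weights for a weighted binary cross-entropy.
# `y_train` is a placeholder (num_examples, 6) 0/1 label matrix; the real
# training labels are not published with this card.
import numpy as np
import tensorflow as tf

y_train = np.random.randint(0, 2, size=(1000, 6)).astype("float32")

pos = y_train.sum(axis=0)                # positive count per label
neg = y_train.shape[0] - pos             # negative count per label
pos_weight = neg / np.maximum(pos, 1.0)  # upweight rare positive labels

def weighted_bce(y_true, logits):
    # Scales the positive term of the cross-entropy per label, so rare
    # categories contribute more to the loss.
    per_label = tf.nn.weighted_cross_entropy_with_logits(
        labels=y_true,
        logits=logits,
        pos_weight=tf.constant(pos_weight, dtype=tf.float32),
    )
    return tf.reduce_mean(per_label)
```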

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

Training results

| Train Loss | Train Precision | Train Recall | Epoch |
|:----------:|:---------------:|:------------:|:-----:|
| 0.0440     | 0.9059          | 0.8294       | 7     |
| 0.0380     | 0.9223          | 0.8632       | 8     |
| 0.0314     | 0.9335          | 0.8838       | 9     |
| 0.0282     | 0.9437          | 0.9075       | 10    |
| 0.0240     | 0.9522          | 0.9190       | 11    |
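
The precision and recall columns above are presumably tf.keras metrics logged each epoch by the training callback; that is an inference from the card's Keras provenance, not something it states. A minimal sketch of attaching such metrics, assuming sigmoid probability outputs and a 0.5 decision threshold (both assumptions), with a toy classifier head standing in for the fine-tuned model:

```python
# Hedged sketch: epoch-level precision/recall like the table above, assuming
# sigmoid probability outputs and a 0.5 threshold (neither is stated by the card).
import tensorflow as tf

# Toy stand-in for the classifier head: six independent sigmoid outputs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(768,)),
    tf.keras.layers.Dense(6, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),  # illustrative value only
    loss="binary_crossentropy",
    metrics=[
        tf.keras.metrics.Precision(thresholds=0.5, name="precision"),
        tf.keras.metrics.Recall(thresholds=0.5, name="recall"),
    ],
)
```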

Framework versions