
KIZervus

This model is a fine-tuned version of distilbert-base-german-cased. It is trained to classify German text into the classes "vulgar" speech and "non-vulgar" speech. The training data is a collection of other labeled German sources. For an overview, see the GitHub repository: https://github.com/NKDataConv/KIZervus. Both the data and the training procedure are documented there. You are welcome to contribute.
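The snippet below is a minimal inference sketch with the Hugging Face transformers library. The Hub model id "NKDataConv/KIZervus" is an assumption (the actual id may differ), and it assumes the weights are available in TensorFlow format, since the model was trained with Keras.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "NKDataConv/KIZervus"  # assumed Hub id; replace with the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a single German sentence
inputs = tokenizer("Das ist ein ganz normaler Satz.", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]

pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred], probs[pred])  # label names depend on the model config
```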

It achieves the results listed under "Training results" below on the evaluation set.

Training procedure

For details, see the repo and documentation here: https://github.com/NKDataConv/KIZervus
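The exact training script is part of the repository linked above. The following is only a hedged sketch of a comparable Keras fine-tuning setup for distilbert-base-german-cased on a binary vulgar/non-vulgar task; the dataset loading, learning rate, and batch size are assumptions, not the documented values.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-german-cased", num_labels=2  # non-vulgar vs. vulgar
)

# Placeholder data; the real labeled corpora are documented in the repository.
texts = ["Ein harmloser Beispielsatz.", "Ein vulgärer Beispielsatz."]
labels = [0, 1]

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(16)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),  # assumed value
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=2)  # the results table below covers two epochs (0 and 1)
```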

Training hyperparameters

The hyperparameters used during training are documented in the GitHub repository.

Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4830     | 0.7617         | 0.5061          | 0.7406              | 0     |
| 0.4640     | 0.7744         | 0.4852          | 0.7937              | 1     |

Framework versions

Supporter

This project is supported by the German Federal Ministry of Education and Research (BMBF).