
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# ayshi/undersampling_distil

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. Its per-epoch results on the evaluation set are reported under Training results below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.7867     | 1.7551          | 0.4176         | 0     |
| 1.7228     | 1.6637          | 0.4835         | 1     |
| 1.5961     | 1.4869          | 0.5934         | 2     |
| 1.4148     | 1.3503          | 0.5934         | 3     |
| 1.2203     | 1.2274          | 0.6264         | 4     |
| 1.0720     | 1.1445          | 0.5934         | 5     |
| 0.9397     | 1.0827          | 0.5824         | 6     |
| 0.8296     | 1.0548          | 0.6044         | 7     |
| 0.7701     | 1.0288          | 0.5824         | 8     |
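For checkpoint selection, the per-epoch numbers above can be inspected programmatically. A minimal sketch that hard-codes the table's values (the `results` list is constructed here purely for illustration and is not part of the training code):

```python
# Per-epoch metrics copied from the Training results table in this card:
# (epoch, train_loss, val_loss, train_acc)
results = [
    (0, 1.7867, 1.7551, 0.4176),
    (1, 1.7228, 1.6637, 0.4835),
    (2, 1.5961, 1.4869, 0.5934),
    (3, 1.4148, 1.3503, 0.5934),
    (4, 1.2203, 1.2274, 0.6264),
    (5, 1.0720, 1.1445, 0.5934),
    (6, 0.9397, 1.0827, 0.5824),
    (7, 0.8296, 1.0548, 0.6044),
    (8, 0.7701, 1.0288, 0.5824),
]

# Pick the epoch with the lowest validation loss
best_epoch, *_ = min(results, key=lambda r: r[2])
print(best_epoch)  # → 8
```

Here the lowest validation loss occurs at the final epoch (8), i.e. validation loss was still decreasing when training stopped.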

## Framework versions