---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# spam_ham_classifier_distilbert

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. Per-epoch results on the evaluation set are reported in the training results table below.
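
A minimal sketch of how a checkpoint like this is typically loaded for inference with the `transformers` pipeline API. The repository id below is a placeholder, and the label names returned depend on the `id2label` mapping saved with the model (e.g. `LABEL_0`/`LABEL_1` unless `ham`/`spam` were configured explicitly).

```python
from transformers import pipeline

# Placeholder repository id -- substitute the path where this checkpoint is actually hosted.
classifier = pipeline(
    "text-classification",
    model="your-username/spam_ham_classifier_distilbert",
)

result = classifier("Congratulations! You have won a free prize. Reply now to claim it.")
print(result)
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names come from the model's id2label mapping.
```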

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameters used during training were not recorded in this card (more information needed).
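
For orientation only, a sketch of how such a fine-tuning run is commonly configured with `TrainingArguments` and `Trainer`; every value below is an assumption, not the configuration actually used for this model (only the 4 epochs are taken from the results table).

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# All hyperparameter values are illustrative assumptions; the ones used to
# train spam_ham_classifier_distilbert are not documented in this card.
training_args = TrainingArguments(
    output_dir="spam_ham_classifier_distilbert",
    learning_rate=2e-5,              # assumed
    per_device_train_batch_size=16,  # assumed
    per_device_eval_batch_size=16,   # assumed
    num_train_epochs=4,              # matches the 4 epochs in the results table
    weight_decay=0.01,               # assumed
    evaluation_strategy="epoch",     # named eval_strategy in newer transformers releases
)

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,   # hypothetical tokenized datasets
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
#     compute_metrics=compute_metrics,
# )
# trainer.train()
```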

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 222  | 0.3327          | 0.9144   | 0.9143 | 0.9166    | 0.9144 |
| No log        | 2.0   | 444  | 0.5384          | 0.8941   | 0.8936 | 0.9031    | 0.8941 |
| 0.143         | 3.0   | 666  | 0.4686          | 0.9167   | 0.9164 | 0.9220    | 0.9167 |
| 0.143         | 4.0   | 888  | 0.4815          | 0.9167   | 0.9164 | 0.9220    | 0.9167 |
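
In the table above, recall equals accuracy at every epoch, which is exactly what weighted averaging produces; the metrics therefore look like the output of a `compute_metrics` callback along these lines (the function itself is an assumption, not taken from the actual training code).

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Assumed metrics callback; weighted averaging is inferred from the table,
    where weighted recall coincides with accuracy."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```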

### Framework versions