---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-tomi

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 20   | 1.5815          |
| No log        | 2.0   | 40   | 0.7518          |
| No log        | 3.0   | 60   | 0.7153          |
| No log        | 4.0   | 80   | 0.6354          |
| No log        | 5.0   | 100  | 0.5895          |
| No log        | 6.0   | 120  | 0.4882          |
| No log        | 7.0   | 140  | 0.4590          |
| No log        | 8.0   | 160  | 0.4303          |
| No log        | 9.0   | 180  | 0.4644          |
| No log        | 10.0  | 200  | 0.4416          |
| No log        | 11.0  | 220  | 0.4348          |
| No log        | 12.0  | 240  | 0.5306          |
| No log        | 13.0  | 260  | 0.4412          |
| No log        | 14.0  | 280  | 0.4053          |
| No log        | 15.0  | 300  | 0.4185          |
| No log        | 16.0  | 320  | 0.3982          |
| No log        | 17.0  | 340  | 0.4291          |
| No log        | 18.0  | 360  | 0.4316          |
| No log        | 19.0  | 380  | 0.4328          |
| No log        | 20.0  | 400  | 0.4198          |
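Note that the validation loss bottoms out around epoch 16 (0.3982) rather than at the final epoch. A minimal sketch (not part of the generated card, and assuming you only have the loss trajectory above) of picking the best checkpoint from these results:

```python
# Validation losses copied from the training-results table above,
# keyed by epoch. This is an illustrative helper, not Trainer output.
val_losses = {
    1: 1.5815, 2: 0.7518, 3: 0.7153, 4: 0.6354, 5: 0.5895,
    6: 0.4882, 7: 0.4590, 8: 0.4303, 9: 0.4644, 10: 0.4416,
    11: 0.4348, 12: 0.5306, 13: 0.4412, 14: 0.4053, 15: 0.4185,
    16: 0.3982, 17: 0.4291, 18: 0.4316, 19: 0.4328, 20: 0.4198,
}

# Epoch whose checkpoint has the lowest validation loss.
best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch, val_losses[best_epoch])  # → 16 0.3982
```

With the Hugging Face `Trainer`, the equivalent behavior can be had automatically by setting `load_best_model_at_end=True` together with `metric_for_best_model="eval_loss"` in `TrainingArguments`, so the run restores the epoch-16 weights instead of keeping the final ones.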

### Framework versions