---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-finetuned-lr1e-05-epochs25

This model is a fine-tuned version of distilbert-base-cased-distilled-squad on an unspecified dataset. It achieves a final validation loss of 3.5052 on the evaluation set (epoch 25).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (only those recoverable from the model name and the results table are listed; the rest were not recorded):

- learning_rate: 1e-05
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 10   | 4.0203          |
| No log        | 2.0   | 20   | 3.3541          |
| No log        | 3.0   | 30   | 3.2099          |
| No log        | 4.0   | 40   | 3.0069          |
| No log        | 5.0   | 50   | 2.9681          |
| No log        | 6.0   | 60   | 2.9433          |
| No log        | 7.0   | 70   | 2.9839          |
| No log        | 8.0   | 80   | 2.9769          |
| No log        | 9.0   | 90   | 2.8885          |
| No log        | 10.0  | 100  | 2.9944          |
| No log        | 11.0  | 110  | 3.1082          |
| No log        | 12.0  | 120  | 3.1373          |
| No log        | 13.0  | 130  | 3.1689          |
| No log        | 14.0  | 140  | 3.2052          |
| No log        | 15.0  | 150  | 3.2829          |
| No log        | 16.0  | 160  | 3.3722          |
| No log        | 17.0  | 170  | 3.3986          |
| No log        | 18.0  | 180  | 3.3982          |
| No log        | 19.0  | 190  | 3.3895          |
| No log        | 20.0  | 200  | 3.4327          |
| No log        | 21.0  | 210  | 3.4649          |
| No log        | 22.0  | 220  | 3.4791          |
| No log        | 23.0  | 230  | 3.4945          |
| No log        | 24.0  | 240  | 3.5045          |
| No log        | 25.0  | 250  | 3.5052          |
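The table above shows a classic overfitting curve: validation loss falls until epoch 9 (2.8885) and then climbs steadily, ending at 3.5052 by epoch 25. A quick sanity check over the logged values, copied from the table (plain Python, no external dependencies):

```python
# Per-epoch validation losses, copied from the training results table above.
val_losses = [4.0203, 3.3541, 3.2099, 3.0069, 2.9681, 2.9433, 2.9839,
              2.9769, 2.8885, 2.9944, 3.1082, 3.1373, 3.1689, 3.2052,
              3.2829, 3.3722, 3.3986, 3.3982, 3.3895, 3.4327, 3.4649,
              3.4791, 3.4945, 3.5045, 3.5052]

# Epochs are 1-indexed; find the one with the lowest validation loss.
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__) + 1
print(best_epoch, val_losses[best_epoch - 1])  # → 9 2.8885
```

Given this curve, reloading the epoch-9 checkpoint (for example via `load_best_model_at_end=True` in `TrainingArguments`, assuming checkpoints were saved per epoch) would likely yield a better model than the final weights.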

### Framework versions