---
tags:
- generated_from_trainer
---


# distilbert-finetuned-lr1e-06-epochs25

This model is a fine-tuned version of distilbert-base-cased-distilled-squad on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 3.7636
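Since the base checkpoint is a SQuAD-distilled question-answering model, the fine-tuned model can be served with the standard `transformers` question-answering pipeline. A minimal sketch, shown against the public base checkpoint because this card does not state a repo id for the fine-tuned weights:

```python
from transformers import pipeline

# Uses the base checkpoint as a stand-in; substitute the path or repo id
# of the fine-tuned model once it is published (not given in this card).
qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

result = qa(
    question="What architecture is the model based on?",
    context="This model is a fine-tuned version of DistilBERT, "
            "a distilled variant of BERT for question answering.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys; the extracted answer span is taken verbatim from the supplied context.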

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (only the values recoverable from the model name and results table are listed; the remaining settings were not recorded in this card):

- learning_rate: 1e-06
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 10   | 5.6433          |
| No log        | 2.0   | 20   | 5.2269          |
| No log        | 3.0   | 30   | 4.9921          |
| No log        | 4.0   | 40   | 4.8316          |
| No log        | 5.0   | 50   | 4.6954          |
| No log        | 6.0   | 60   | 4.5766          |
| No log        | 7.0   | 70   | 4.4717          |
| No log        | 8.0   | 80   | 4.3697          |
| No log        | 9.0   | 90   | 4.2855          |
| No log        | 10.0  | 100  | 4.2092          |
| No log        | 11.0  | 110  | 4.1450          |
| No log        | 12.0  | 120  | 4.0885          |
| No log        | 13.0  | 130  | 4.0374          |
| No log        | 14.0  | 140  | 3.9893          |
| No log        | 15.0  | 150  | 3.9444          |
| No log        | 16.0  | 160  | 3.9073          |
| No log        | 17.0  | 170  | 3.8759          |
| No log        | 18.0  | 180  | 3.8485          |
| No log        | 19.0  | 190  | 3.8255          |
| No log        | 20.0  | 200  | 3.8075          |
| No log        | 21.0  | 210  | 3.7924          |
| No log        | 22.0  | 220  | 3.7800          |
| No log        | 23.0  | 230  | 3.7706          |
| No log        | 24.0  | 240  | 3.7654          |
| No log        | 25.0  | 250  | 3.7636          |
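The loss trajectory can be sanity-checked with a short script (the losses below are copied from the table; the reading that the low 1e-06 learning rate makes convergence slow is an inference, not something the Trainer recorded):

```python
# Validation losses per epoch, copied from the training-results table.
val_losses = [5.6433, 5.2269, 4.9921, 4.8316, 4.6954, 4.5766, 4.4717,
              4.3697, 4.2855, 4.2092, 4.1450, 4.0885, 4.0374, 3.9893,
              3.9444, 3.9073, 3.8759, 3.8485, 3.8255, 3.8075, 3.7924,
              3.7800, 3.7706, 3.7654, 3.7636]

# Loss falls every epoch, but the per-epoch improvement shrinks from
# ~0.42 after epoch 1 to ~0.002 after epoch 25: training had not
# diverged, just slowed to a crawl at this learning rate.
deltas = [round(a - b, 4) for a, b in zip(val_losses, val_losses[1:])]
print(deltas[0], deltas[-1])
```

If further gains are wanted, the near-flat tail suggests either more epochs or a higher learning rate would be the next experiment.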

### Framework versions