---
tags:
- generated_from_trainer
---


# distilbert-base-cased-distilled-squad-finetuned-lr1e-06-epochs50

This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.4198
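Since the base checkpoint is an extractive question-answering model, the fine-tuned version can presumably be used the same way. A minimal sketch, assuming the model is hosted on the Hub (the repo id below is a placeholder, not a confirmed upload path):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual path where this checkpoint lives.
qa = pipeline(
    "question-answering",
    model="your-username/distilbert-base-cased-distilled-squad-finetuned-lr1e-06-epochs50",
)

result = qa(
    question="What task was the model fine-tuned for?",
    context="The model is a DistilBERT checkpoint fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.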

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the learning rate and epoch count are taken from the model name; the remaining values were not recorded in this card):
- learning_rate: 1e-06
- num_epochs: 50
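A sketch of how these values might map onto `transformers.TrainingArguments`. Only the learning rate and epoch count are grounded in the model name (`lr1e-06-epochs50`); the batch size, optimizer, and scheduler were not recorded, so everything else here is an assumed default:

```python
from transformers import TrainingArguments

# Sketch only: learning_rate and num_train_epochs come from the model name;
# all other settings are assumptions, not values recorded in this card.
training_args = TrainingArguments(
    output_dir="distilbert-base-cased-distilled-squad-finetuned-lr1e-06-epochs50",
    learning_rate=1e-06,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch eval losses below
)
```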

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 9    | 4.5183          |
| No log        | 2.0   | 18   | 4.2862          |
| No log        | 3.0   | 27   | 4.1030          |
| No log        | 4.0   | 36   | 3.9555          |
| No log        | 5.0   | 45   | 3.8184          |
| No log        | 6.0   | 54   | 3.6927          |
| No log        | 7.0   | 63   | 3.5723          |
| No log        | 8.0   | 72   | 3.4703          |
| No log        | 9.0   | 81   | 3.3797          |
| No log        | 10.0  | 90   | 3.3053          |
| No log        | 11.0  | 99   | 3.2365          |
| No log        | 12.0  | 108  | 3.1652          |
| No log        | 13.0  | 117  | 3.0925          |
| No log        | 14.0  | 126  | 3.0304          |
| No log        | 15.0  | 135  | 2.9779          |
| No log        | 16.0  | 144  | 2.9331          |
| No log        | 17.0  | 153  | 2.8963          |
| No log        | 18.0  | 162  | 2.8552          |
| No log        | 19.0  | 171  | 2.8178          |
| No log        | 20.0  | 180  | 2.7749          |
| No log        | 21.0  | 189  | 2.7410          |
| No log        | 22.0  | 198  | 2.7161          |
| No log        | 23.0  | 207  | 2.6865          |
| No log        | 24.0  | 216  | 2.6560          |
| No log        | 25.0  | 225  | 2.6324          |
| No log        | 26.0  | 234  | 2.6122          |
| No log        | 27.0  | 243  | 2.5934          |
| No log        | 28.0  | 252  | 2.5708          |
| No log        | 29.0  | 261  | 2.5563          |
| No log        | 30.0  | 270  | 2.5439          |
| No log        | 31.0  | 279  | 2.5285          |
| No log        | 32.0  | 288  | 2.5139          |
| No log        | 33.0  | 297  | 2.5030          |
| No log        | 34.0  | 306  | 2.4906          |
| No log        | 35.0  | 315  | 2.4797          |
| No log        | 36.0  | 324  | 2.4707          |
| No log        | 37.0  | 333  | 2.4638          |
| No log        | 38.0  | 342  | 2.4580          |
| No log        | 39.0  | 351  | 2.4512          |
| No log        | 40.0  | 360  | 2.4475          |
| No log        | 41.0  | 369  | 2.4427          |
| No log        | 42.0  | 378  | 2.4396          |
| No log        | 43.0  | 387  | 2.4355          |
| No log        | 44.0  | 396  | 2.4324          |
| No log        | 45.0  | 405  | 2.4286          |
| No log        | 46.0  | 414  | 2.4251          |
| No log        | 47.0  | 423  | 2.4223          |
| No log        | 48.0  | 432  | 2.4209          |
| No log        | 49.0  | 441  | 2.4201          |
| No log        | 50.0  | 450  | 2.4198          |

### Framework versions

More information needed