# distilbert-base-uncased-finetuned-sprint-meds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set (a short usage sketch follows the list):
- Loss: 0.8427
- Accuracy: 0.8790
- F1: 0.8630
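
For quick experimentation, the checkpoint can be loaded with the `transformers` text-classification pipeline. This is a minimal sketch: the Hub repository id below is a placeholder, and the predicted label names depend on the (undocumented) fine-tuning dataset.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-sprint-meds",
)

# Returns a list of {"label": ..., "score": ...} dicts; the label set
# comes from the fine-tuning dataset, which is not documented in this card.
print(classifier("Example sentence to classify"))
```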
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
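
As a rough sketch, these hyperparameters correspond to a `transformers.TrainingArguments` configuration like the one below, written against the Transformers 4.28 API listed under framework versions. The Adam betas and epsilon above are the Trainer's optimizer defaults; the output directory is a placeholder, and per-epoch evaluation is assumed from the results table that follows.

```python
from transformers import TrainingArguments

# Hyperparameters from this card; per-epoch evaluation is an assumption
# based on the per-epoch validation metrics reported below.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sprint-meds",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
```

These arguments would then be passed to a `Trainer` together with the model, the tokenizer, and the (undocumented) train and evaluation datasets.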
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
---|---|---|---|---|---|
1.8256 | 1.0 | 21 | 1.9309 | 0.6868 | 0.5992 |
1.7067 | 2.0 | 42 | 1.8220 | 0.6993 | 0.6190 |
1.5327 | 3.0 | 63 | 1.7250 | 0.7189 | 0.6489 |
1.4475 | 4.0 | 84 | 1.6374 | 0.7509 | 0.6903 |
1.3108 | 5.0 | 105 | 1.5627 | 0.7438 | 0.6843 |
1.1881 | 6.0 | 126 | 1.4905 | 0.7669 | 0.7135 |
1.1726 | 7.0 | 147 | 1.4287 | 0.7847 | 0.7379 |
1.0681 | 8.0 | 168 | 1.3705 | 0.7829 | 0.7368 |
0.9392 | 9.0 | 189 | 1.3214 | 0.7954 | 0.7513 |
0.9603 | 10.0 | 210 | 1.2741 | 0.8043 | 0.7613 |
0.8349 | 11.0 | 231 | 1.2415 | 0.8185 | 0.7793 |
0.8094 | 12.0 | 252 | 1.2028 | 0.8256 | 0.7883 |
0.787 | 13.0 | 273 | 1.1673 | 0.8310 | 0.7951 |
0.7128 | 14.0 | 294 | 1.1412 | 0.8381 | 0.8056 |
0.6821 | 15.0 | 315 | 1.1091 | 0.8399 | 0.8074 |
0.6177 | 16.0 | 336 | 1.0906 | 0.8399 | 0.8098 |
0.633 | 17.0 | 357 | 1.0645 | 0.8434 | 0.8170 |
0.5734 | 18.0 | 378 | 1.0415 | 0.8470 | 0.8199 |
0.5181 | 19.0 | 399 | 1.0233 | 0.8416 | 0.8153 |
0.4926 | 20.0 | 420 | 1.0076 | 0.8470 | 0.8209 |
0.4773 | 21.0 | 441 | 0.9896 | 0.8434 | 0.8184 |
0.4361 | 22.0 | 462 | 0.9768 | 0.8470 | 0.8216 |
0.4385 | 23.0 | 483 | 0.9624 | 0.8505 | 0.8261 |
0.3962 | 24.0 | 504 | 0.9520 | 0.8559 | 0.8309 |
0.392 | 25.0 | 525 | 0.9392 | 0.8577 | 0.8339 |
0.4095 | 26.0 | 546 | 0.9331 | 0.8577 | 0.8359 |
0.3389 | 27.0 | 567 | 0.9242 | 0.8577 | 0.8348 |
0.3296 | 28.0 | 588 | 0.9117 | 0.8577 | 0.8344 |
0.3527 | 29.0 | 609 | 0.9026 | 0.8665 | 0.8465 |
0.315 | 30.0 | 630 | 0.9008 | 0.8648 | 0.8431 |
0.2891 | 31.0 | 651 | 0.8923 | 0.8648 | 0.8433 |
0.3283 | 32.0 | 672 | 0.8818 | 0.8701 | 0.8507 |
0.2967 | 33.0 | 693 | 0.8799 | 0.8683 | 0.8479 |
0.2657 | 34.0 | 714 | 0.8750 | 0.8683 | 0.8479 |
0.3015 | 35.0 | 735 | 0.8727 | 0.8719 | 0.8526 |
0.2847 | 36.0 | 756 | 0.8656 | 0.8754 | 0.8575 |
0.2614 | 37.0 | 777 | 0.8630 | 0.8772 | 0.8589 |
0.26 | 38.0 | 798 | 0.8604 | 0.8754 | 0.8598 |
0.2557 | 39.0 | 819 | 0.8588 | 0.8772 | 0.8612 |
0.2389 | 40.0 | 840 | 0.8562 | 0.8790 | 0.8619 |
0.2464 | 41.0 | 861 | 0.8529 | 0.8790 | 0.8615 |
0.2304 | 42.0 | 882 | 0.8529 | 0.8772 | 0.8613 |
0.2356 | 43.0 | 903 | 0.8514 | 0.8790 | 0.8636 |
0.2291 | 44.0 | 924 | 0.8479 | 0.8790 | 0.8631 |
0.2323 | 45.0 | 945 | 0.8457 | 0.8790 | 0.8631 |
0.2281 | 46.0 | 966 | 0.8454 | 0.8790 | 0.8638 |
0.2163 | 47.0 | 987 | 0.8432 | 0.8790 | 0.8633 |
0.226 | 48.0 | 1008 | 0.8433 | 0.8790 | 0.8631 |
0.229 | 49.0 | 1029 | 0.8431 | 0.8790 | 0.8631 |
0.2388 | 50.0 | 1050 | 0.8427 | 0.8790 | 0.8630 |
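
The per-epoch Accuracy and F1 values above are typically produced by a `compute_metrics` callback passed to the `Trainer`. The sketch below shows one way to compute them; the F1 averaging mode (`weighted` here) is an assumption, since the card does not state how F1 was aggregated across classes.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Map logits to predicted class ids and report accuracy and F1."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # "weighted" averaging is an assumption; the card does not specify it.
        "f1": f1_score(labels, preds, average="weighted"),
    }
```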
### Framework versions
- Transformers 4.28.1
- PyTorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3