# distilbert-base-cased-finetuned-paper
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1417
- Precision: 0.5690
- Recall: 0.6226
- F1: 0.5946
- Accuracy: 0.9790
## Model description

More information needed
## Intended uses & limitations

More information needed
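
The per-entity precision, recall, and F1 together with token-level accuracy reported above are characteristic of a token-classification (NER-style) fine-tune. Assuming that task, the sketch below shows how the checkpoint could be loaded with the Transformers `pipeline` API; the model id and the example sentence are placeholders, not taken from the original card.

```python
from transformers import pipeline

# Assumption: the model was fine-tuned for token classification (NER-style tagging).
# Replace the model id with the actual Hub path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="distilbert-base-cased-finetuned-paper",
    aggregation_strategy="simple",  # merge sub-word pieces into whole-entity spans
)

text = "Example sentence to tag."  # placeholder input
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```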
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
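
A minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments` is shown below. This is a reconstruction, not the original training script: `output_dir` is a placeholder, single-device training is assumed (so `train_batch_size` becomes `per_device_train_batch_size`), and the Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-cased-finetuned-paper",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation rows below
)
```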
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 62   | 0.1183          | 0.3810    | 0.3019 | 0.3368 | 0.9702   |
| No log        | 2.0   | 124  | 0.0923          | 0.3986    | 0.5189 | 0.4508 | 0.9730   |
| No log        | 3.0   | 186  | 0.0808          | 0.5       | 0.5943 | 0.5431 | 0.9765   |
| No log        | 4.0   | 248  | 0.0951          | 0.6042    | 0.5472 | 0.5743 | 0.9790   |
| No log        | 5.0   | 310  | 0.0939          | 0.5794    | 0.5849 | 0.5822 | 0.9795   |
| No log        | 6.0   | 372  | 0.0949          | 0.5508    | 0.6132 | 0.5804 | 0.9792   |
| No log        | 7.0   | 434  | 0.1004          | 0.5075    | 0.6415 | 0.5667 | 0.9782   |
| No log        | 8.0   | 496  | 0.1110          | 0.5403    | 0.6321 | 0.5826 | 0.9784   |
| 0.0979        | 9.0   | 558  | 0.1139          | 0.5517    | 0.6038 | 0.5766 | 0.9789   |
| 0.0979        | 10.0  | 620  | 0.1153          | 0.5727    | 0.5943 | 0.5833 | 0.9795   |
| 0.0979        | 11.0  | 682  | 0.1238          | 0.5238    | 0.6226 | 0.5690 | 0.9776   |
| 0.0979        | 12.0  | 744  | 0.1249          | 0.5478    | 0.5943 | 0.5701 | 0.9786   |
| 0.0979        | 13.0  | 806  | 0.1263          | 0.5323    | 0.6226 | 0.5739 | 0.9786   |
| 0.0979        | 14.0  | 868  | 0.1303          | 0.5810    | 0.5755 | 0.5782 | 0.9792   |
| 0.0979        | 15.0  | 930  | 0.1358          | 0.4929    | 0.6509 | 0.5610 | 0.9773   |
| 0.0979        | 16.0  | 992  | 0.1305          | 0.5766    | 0.6038 | 0.5899 | 0.9793   |
| 0.0033        | 17.0  | 1054 | 0.1321          | 0.5323    | 0.6226 | 0.5739 | 0.9779   |
| 0.0033        | 18.0  | 1116 | 0.1353          | 0.5726    | 0.6321 | 0.6009 | 0.9789   |
| 0.0033        | 19.0  | 1178 | 0.1355          | 0.5462    | 0.6132 | 0.5778 | 0.9793   |
| 0.0033        | 20.0  | 1240 | 0.1346          | 0.5556    | 0.6132 | 0.5830 | 0.9796   |
| 0.0033        | 21.0  | 1302 | 0.1386          | 0.5403    | 0.6321 | 0.5826 | 0.9784   |
| 0.0033        | 22.0  | 1364 | 0.1389          | 0.5508    | 0.6132 | 0.5804 | 0.9790   |
| 0.0033        | 23.0  | 1426 | 0.1376          | 0.55      | 0.6226 | 0.5841 | 0.9790   |
| 0.0033        | 24.0  | 1488 | 0.1394          | 0.5641    | 0.6226 | 0.5919 | 0.9796   |
| 0.0012        | 25.0  | 1550 | 0.1408          | 0.55      | 0.6226 | 0.5841 | 0.9789   |
| 0.0012        | 26.0  | 1612 | 0.1413          | 0.5739    | 0.6226 | 0.5973 | 0.9792   |
| 0.0012        | 27.0  | 1674 | 0.1417          | 0.5455    | 0.6226 | 0.5815 | 0.9787   |
| 0.0012        | 28.0  | 1736 | 0.1424          | 0.5455    | 0.6226 | 0.5815 | 0.9787   |
| 0.0012        | 29.0  | 1798 | 0.1419          | 0.5546    | 0.6226 | 0.5867 | 0.9790   |
| 0.0012        | 30.0  | 1860 | 0.1417          | 0.5690    | 0.6226 | 0.5946 | 0.9790   |
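
The overall precision, recall, F1, and accuracy tracked per epoch above follow the pattern produced by a seqeval-based `compute_metrics` function for token classification. The sketch below shows one such function under that assumption; the `evaluate` dependency and the label list are illustrative placeholders, not taken from the original training script.

```python
import numpy as np
import evaluate  # assumption: metrics computed via the `evaluate` library's seqeval wrapper

seqeval = evaluate.load("seqeval")

# Hypothetical label list; the actual tag set of the fine-tuning dataset is not documented.
label_list = ["O", "B-ENT", "I-ENT"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special and padding tokens, which are labelled -100 during tokenization.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```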
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2