# og-deberta-extra-o
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5184
- Precision: 0.5981
- Recall: 0.6667
- F1: 0.6305
- Accuracy: 0.9226
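The card does not state the downstream task, but per-entity precision/recall/F1 reported alongside accuracy is the metric set the Trainer typically produces for token classification. Below is a minimal usage sketch under that assumption; `og-deberta-extra-o` is only a placeholder for wherever this checkpoint is actually hosted.

```python
# Minimal inference sketch, ASSUMING a token-classification fine-tune.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

checkpoint = "og-deberta-extra-o"  # placeholder: point at the actual checkpoint location
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

# aggregation_strategy="simple" merges sub-word pieces into whole predicted spans
tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(tagger("Replace this with a sentence from the target domain."))
```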
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
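For reference, the list above maps onto `TrainingArguments` roughly as follows. This is a hedged reconstruction, not the original training script: the dataset, tokenizer, and label set are not documented on this card, and `evaluation_strategy="epoch"` is only inferred from the per-epoch rows in the results table below.

```python
# Hedged reconstruction from the hyperparameters listed above; the Adam betas
# and epsilon quoted there are the Trainer defaults, so they are not repeated.
from transformers import AutoModelForTokenClassification, TrainingArguments

args = TrainingArguments(
    output_dir="og-deberta-extra-o",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    evaluation_strategy="epoch",  # assumption: the results table logs one eval per epoch
)

# Base checkpoint; num_labels must be set to match the (undocumented) label set.
model = AutoModelForTokenClassification.from_pretrained("microsoft/deberta-base")

# The training/evaluation datasets are not part of this card, so the Trainer
# call is only sketched:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```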
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 55   | 0.4813          | 0.2863    | 0.3467 | 0.3136 | 0.8720   |
| No log        | 2.0   | 110  | 0.3469          | 0.4456    | 0.4587 | 0.4520 | 0.9010   |
| No log        | 3.0   | 165  | 0.3166          | 0.5206    | 0.5387 | 0.5295 | 0.9147   |
| No log        | 4.0   | 220  | 0.3338          | 0.4899    | 0.5840 | 0.5328 | 0.9087   |
| No log        | 5.0   | 275  | 0.3166          | 0.5625    | 0.6480 | 0.6022 | 0.9198   |
| No log        | 6.0   | 330  | 0.3464          | 0.5707    | 0.6027 | 0.5863 | 0.9207   |
| No log        | 7.0   | 385  | 0.3548          | 0.5489    | 0.6133 | 0.5793 | 0.9207   |
| No log        | 8.0   | 440  | 0.4005          | 0.6125    | 0.6027 | 0.6075 | 0.9210   |
| No log        | 9.0   | 495  | 0.4185          | 0.5763    | 0.6347 | 0.6041 | 0.9171   |
| 0.2019        | 10.0  | 550  | 0.4174          | 0.5596    | 0.6507 | 0.6017 | 0.9179   |
| 0.2019        | 11.0  | 605  | 0.4558          | 0.5603    | 0.6320 | 0.5940 | 0.9179   |
| 0.2019        | 12.0  | 660  | 0.4615          | 0.5632    | 0.6533 | 0.6049 | 0.9166   |
| 0.2019        | 13.0  | 715  | 0.4899          | 0.5815    | 0.6187 | 0.5995 | 0.9208   |
| 0.2019        | 14.0  | 770  | 0.4800          | 0.5581    | 0.6400 | 0.5963 | 0.9186   |
| 0.2019        | 15.0  | 825  | 0.4752          | 0.5905    | 0.6613 | 0.6239 | 0.9212   |
| 0.2019        | 16.0  | 880  | 0.5014          | 0.5773    | 0.6373 | 0.6058 | 0.9174   |
| 0.2019        | 17.0  | 935  | 0.5095          | 0.5917    | 0.6453 | 0.6173 | 0.9195   |
| 0.2019        | 18.0  | 990  | 0.5249          | 0.5807    | 0.6427 | 0.6101 | 0.9203   |
| 0.0077        | 19.0  | 1045 | 0.5086          | 0.5761    | 0.6560 | 0.6135 | 0.9222   |
| 0.0077        | 20.0  | 1100 | 0.5108          | 0.5962    | 0.6693 | 0.6307 | 0.9219   |
| 0.0077        | 21.0  | 1155 | 0.5144          | 0.5977    | 0.6853 | 0.6385 | 0.9231   |
| 0.0077        | 22.0  | 1210 | 0.5176          | 0.5990    | 0.6613 | 0.6286 | 0.9229   |
| 0.0077        | 23.0  | 1265 | 0.5171          | 0.6039    | 0.6667 | 0.6337 | 0.9226   |
| 0.0077        | 24.0  | 1320 | 0.5184          | 0.6043    | 0.6720 | 0.6364 | 0.9226   |
| 0.0077        | 25.0  | 1375 | 0.5184          | 0.5981    | 0.6667 | 0.6305 | 0.9226   |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1