# Metricas_teste_wan
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.1473
- Accuracy: 0.9818
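The card does not include usage instructions, so here is a minimal inference sketch. The repository id below is a placeholder, not a confirmed Hub path, and the label names depend on the fine-tuning configuration, which is not documented here.

```python
# Minimal inference sketch for this checkpoint.
# NOTE: "your-username/Metricas_teste_wan" is a placeholder repository id;
# substitute the path where this checkpoint is actually published.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/Metricas_teste_wan",
)

print(classifier("Example input sentence to classify."))
# e.g. [{'label': 'LABEL_0', 'score': 0.99}] -- label names come from the
# fine-tuning config, which this card does not document.
```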
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
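For reference, this configuration can be expressed as a `TrainingArguments` sketch. The actual training script is not included in the card, so the output directory and evaluation strategy below are assumptions; the listed Adam betas and epsilon match the `Trainer` defaults (`adam_beta1=0.9`, `adam_beta2=0.999`, `adam_epsilon=1e-8`), so they need no explicit configuration.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Assumptions: output_dir is arbitrary, and evaluation_strategy="epoch"
# is inferred from the per-epoch rows in the results table below.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Metricas_teste_wan",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
# The Adam betas/epsilon listed in the card are the library defaults,
# so no optimizer arguments are set here.
```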
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log        | 1.0   | 248   | 0.3574          | 0.9114   |
| No log        | 2.0   | 496   | 0.1911          | 0.9386   |
| 0.7732        | 3.0   | 744   | 0.1919          | 0.9386   |
| 0.7732        | 4.0   | 992   | 0.1044          | 0.9727   |
| 0.0987        | 5.0   | 1240  | 0.0928          | 0.9682   |
| 0.0987        | 6.0   | 1488  | 0.0545          | 0.9841   |
| 0.0406        | 7.0   | 1736  | 0.1183          | 0.9727   |
| 0.0406        | 8.0   | 1984  | 0.1114          | 0.9773   |
| 0.0204        | 9.0   | 2232  | 0.0838          | 0.9773   |
| 0.0204        | 10.0  | 2480  | 0.0726          | 0.9818   |
| 0.0084        | 11.0  | 2728  | 0.1100          | 0.9750   |
| 0.0084        | 12.0  | 2976  | 0.1133          | 0.9773   |
| 0.0032        | 13.0  | 3224  | 0.1283          | 0.9773   |
| 0.0032        | 14.0  | 3472  | 0.0935          | 0.9795   |
| 0.0084        | 15.0  | 3720  | 0.1318          | 0.9705   |
| 0.0084        | 16.0  | 3968  | 0.1446          | 0.9773   |
| 0.0031        | 17.0  | 4216  | 0.1123          | 0.9773   |
| 0.0031        | 18.0  | 4464  | 0.0971          | 0.9750   |
| 0.0049        | 19.0  | 4712  | 0.1369          | 0.9773   |
| 0.0049        | 20.0  | 4960  | 0.1855          | 0.9773   |
| 0.0018        | 21.0  | 5208  | 0.2224          | 0.9659   |
| 0.0018        | 22.0  | 5456  | 0.1444          | 0.9795   |
| 0.0045        | 23.0  | 5704  | 0.1544          | 0.9795   |
| 0.0045        | 24.0  | 5952  | 0.1495          | 0.9705   |
| 0.0037        | 25.0  | 6200  | 0.1741          | 0.9750   |
| 0.0037        | 26.0  | 6448  | 0.1658          | 0.9705   |
| 0.0001        | 27.0  | 6696  | 0.2132          | 0.9727   |
| 0.0001        | 28.0  | 6944  | 0.2222          | 0.9682   |
| 0.0079        | 29.0  | 7192  | 0.1348          | 0.9795   |
| 0.0079        | 30.0  | 7440  | 0.1656          | 0.9773   |
| 0.0016        | 31.0  | 7688  | 0.1584          | 0.9750   |
| 0.0016        | 32.0  | 7936  | 0.1674          | 0.9795   |
| 0.0005        | 33.0  | 8184  | 0.1837          | 0.9795   |
| 0.0005        | 34.0  | 8432  | 0.1595          | 0.9773   |
| 0.0016        | 35.0  | 8680  | 0.1949          | 0.9795   |
| 0.0016        | 36.0  | 8928  | 0.0991          | 0.9818   |
| 0.0004        | 37.0  | 9176  | 0.1864          | 0.9795   |
| 0.0004        | 38.0  | 9424  | 0.1444          | 0.9795   |
| 0.0024        | 39.0  | 9672  | 0.1486          | 0.9773   |
| 0.0024        | 40.0  | 9920  | 0.1457          | 0.9773   |
| 0.0004        | 41.0  | 10168 | 0.1486          | 0.9773   |
| 0.0004        | 42.0  | 10416 | 0.1518          | 0.9773   |
| 0.0           | 43.0  | 10664 | 0.1517          | 0.9773   |
| 0.0           | 44.0  | 10912 | 0.1480          | 0.9773   |
| 0.0           | 45.0  | 11160 | 0.1458          | 0.9818   |
| 0.0           | 46.0  | 11408 | 0.1462          | 0.9818   |
| 0.0           | 47.0  | 11656 | 0.1466          | 0.9818   |
| 0.0           | 48.0  | 11904 | 0.1470          | 0.9818   |
| 0.0           | 49.0  | 12152 | 0.1472          | 0.9818   |
| 0.0           | 50.0  | 12400 | 0.1473          | 0.9818   |
### Framework versions
- Transformers 4.31.0
- PyTorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
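To reproduce the reported numbers, pin the versions above. A quick environment check, assuming the four packages are installed:

```python
# Print installed versions to compare against those listed in this card.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # expected 4.31.0
print("PyTorch:", torch.__version__)              # expected 2.0.1+cu118
print("Datasets:", datasets.__version__)          # expected 2.13.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.13.3
```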