# modelBeto5

This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- Precision: 0.5990
- Recall: 0.6541
- F1: 0.6253
- Accuracy: 0.9727
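
The entity-level precision/recall/F1 reported alongside token-level accuracy suggest this is a token-classification (e.g. NER) fine-tune. A minimal inference sketch under that assumption; the local path `./modelBeto5` is a placeholder, not a published hub id:

```python
# Minimal inference sketch. Assumes this checkpoint is a token-classification
# (e.g. NER) fine-tune of BETO; "./modelBeto5" is a placeholder for wherever
# the trained weights were saved.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="./modelBeto5",            # placeholder path
    aggregation_strategy="simple",   # merge word pieces into whole entities
)

print(ner("El presidente visitó la Universidad de Chile en Santiago."))
```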
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
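
For reference, these values map onto a `TrainingArguments` configuration roughly as follows. This is a sketch, not the original training script; the output directory and evaluation strategy are assumptions:

```python
# Rough mapping of the hyperparameters above onto transformers' TrainingArguments.
# Only the numeric values come from this card; output_dir and evaluation_strategy
# are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="modelBeto5",          # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=32,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",      # assumed; the log below reports metrics per epoch
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default
# AdamW settings, so no explicit optimizer configuration is needed for it.
```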
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 29   | 0.2706          | 0.0       | 0.0    | 0.0    | 0.9451   |
| No log        | 2.0   | 58   | 0.3328          | 0.0       | 0.0    | 0.0    | 0.9451   |
| No log        | 3.0   | 87   | 0.1872          | 0.0476    | 0.0108 | 0.0176 | 0.9320   |
| No log        | 4.0   | 116  | 0.1428          | 0.3971    | 0.1459 | 0.2134 | 0.9551   |
| No log        | 5.0   | 145  | 0.1169          | 0.4690    | 0.2865 | 0.3557 | 0.9614   |
| No log        | 6.0   | 174  | 0.1259          | 0.5414    | 0.5297 | 0.5355 | 0.9629   |
| No log        | 7.0   | 203  | 0.1166          | 0.4575    | 0.6108 | 0.5231 | 0.9604   |
| No log        | 8.0   | 232  | 0.1240          | 0.6149    | 0.4919 | 0.5465 | 0.9693   |
| No log        | 9.0   | 261  | 0.1145          | 0.5276    | 0.5676 | 0.5469 | 0.9681   |
| No log        | 10.0  | 290  | 0.1377          | 0.5612    | 0.5946 | 0.5774 | 0.9688   |
| No log        | 11.0  | 319  | 0.1321          | 0.5833    | 0.6432 | 0.6118 | 0.9700   |
| No log        | 12.0  | 348  | 0.1549          | 0.6581    | 0.5514 | 0.6000 | 0.9717   |
| No log        | 13.0  | 377  | 0.1482          | 0.6080    | 0.6541 | 0.6302 | 0.9713   |
| No log        | 14.0  | 406  | 0.1589          | 0.5348    | 0.6649 | 0.5928 | 0.9675   |
| No log        | 15.0  | 435  | 0.1507          | 0.6178    | 0.6378 | 0.6277 | 0.9720   |
| No log        | 16.0  | 464  | 0.1554          | 0.6082    | 0.6378 | 0.6227 | 0.9720   |
| No log        | 17.0  | 493  | 0.1658          | 0.5918    | 0.6270 | 0.6089 | 0.9708   |
| 0.0785        | 18.0  | 522  | 0.1616          | 0.5792    | 0.6919 | 0.6305 | 0.9715   |
| 0.0785        | 19.0  | 551  | 0.1632          | 0.6059    | 0.6649 | 0.6340 | 0.9717   |
| 0.0785        | 20.0  | 580  | 0.1638          | 0.6103    | 0.6432 | 0.6263 | 0.9726   |
| 0.0785        | 21.0  | 609  | 0.1603          | 0.6010    | 0.6432 | 0.6214 | 0.9724   |
| 0.0785        | 22.0  | 638  | 0.1652          | 0.6078    | 0.6703 | 0.6375 | 0.9722   |
| 0.0785        | 23.0  | 667  | 0.1577          | 0.6440    | 0.6649 | 0.6543 | 0.9738   |
| 0.0785        | 24.0  | 696  | 0.1600          | 0.6492    | 0.6703 | 0.6596 | 0.9743   |
| 0.0785        | 25.0  | 725  | 0.1663          | 0.6256    | 0.6595 | 0.6421 | 0.9733   |
| 0.0785        | 26.0  | 754  | 0.1686          | 0.6106    | 0.6865 | 0.6463 | 0.9713   |
| 0.0785        | 27.0  | 783  | 0.1691          | 0.5951    | 0.6595 | 0.6256 | 0.9720   |
| 0.0785        | 28.0  | 812  | 0.1668          | 0.6100    | 0.6595 | 0.6338 | 0.9731   |
| 0.0785        | 29.0  | 841  | 0.1679          | 0.5931    | 0.6541 | 0.6221 | 0.9724   |
| 0.0785        | 30.0  | 870  | 0.1678          | 0.6162    | 0.6595 | 0.6371 | 0.9734   |
| 0.0785        | 31.0  | 899  | 0.1683          | 0.6040    | 0.6595 | 0.6305 | 0.9729   |
| 0.0785        | 32.0  | 928  | 0.1686          | 0.5990    | 0.6541 | 0.6253 | 0.9727   |
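
The per-epoch precision/recall/F1/accuracy columns above are what a seqeval-based `compute_metrics` function typically produces for token classification. A sketch of such a function, under that assumption (the original evaluation code is not included in this card, and the label set is a placeholder):

```python
# Sketch of a seqeval-style compute_metrics, the usual source of the
# Precision/Recall/F1/Accuracy columns above (an assumption; not the original script).
import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder label set

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop positions labeled -100 (special tokens / word-piece continuations).
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```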
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3