# NeRUBioS_RoBERTa_Training_Testing
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on NeRUBioS, an adaptation of the NUBes dataset. The training and testing splits contain 13,832 and 2,765 samples, respectively. This model is a result of the PhD dissertation of Antonio Tamayo. It achieves the following results on the evaluation set (a sketch of how these span-level metrics are computed follows the list):
- Loss: 0.3540
- Negref Precision: 0.5522
- Negref Recall: 0.6138
- Negref F1: 0.5814
- Neg Precision: 0.9530
- Neg Recall: 0.9684
- Neg F1: 0.9606
- Nsco Precision: 0.8812
- Nsco Recall: 0.9092
- Nsco F1: 0.8950
- Unc Precision: 0.8208
- Unc Recall: 0.8923
- Unc F1: 0.8550
- Usco Precision: 0.6786
- Usco Recall: 0.7815
- Usco F1: 0.7264
- Precision: 0.8223
- Recall: 0.8680
- F1: 0.8446
- Accuracy: 0.9497
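
The per-label scores above are span-level metrics of the kind computed with `seqeval` for token classification; the prefixes appear to correspond to negation cues (Neg), negation scopes (Nsco), uncertainty cues (Unc), uncertainty scopes (Usco), and negation references (Negref). A minimal sketch of the evaluation, assuming BIO-style tags named after those labels (the exact tag names are an assumption, not taken from the source):

```python
# A minimal sketch of span-level evaluation with seqeval; the BIO tag names
# (B-NEG, B-NSCO, ...) are assumptions inferred from the metric names above.
from seqeval.metrics import classification_report, f1_score

# Hypothetical gold and predicted tag sequences for two sentences.
y_true = [["O", "B-NEG", "B-NSCO", "I-NSCO"],
          ["B-UNC", "B-USCO", "I-USCO", "O"]]
y_pred = [["O", "B-NEG", "B-NSCO", "I-NSCO"],
          ["B-UNC", "B-USCO", "O", "O"]]

print(classification_report(y_true, y_pred))    # per-label precision/recall/F1
print("overall F1:", f1_score(y_true, y_pred))  # micro-averaged, as reported above
```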
## Model description
More information needed
## Intended uses & limitations
More information needed
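
Pending a fuller description, the sketch below illustrates the expected use: detecting negation/uncertainty cues, scopes, and references in Spanish clinical text via the `transformers` pipeline API. The model identifier is a placeholder assumption; substitute this model's actual Hub repository id or a local checkpoint path.

```python
# A minimal inference sketch; MODEL_ID is a placeholder assumption and must
# be replaced with this model's actual Hub repository id or a local path.
from transformers import pipeline

MODEL_ID = "path/or/hub-id/NeRUBioS_RoBERTa_Training_Testing"  # placeholder

ner = pipeline(
    "token-classification",
    model=MODEL_ID,
    aggregation_strategy="simple",  # merge sub-word pieces into full spans
)

# Spanish clinical sentence with a negation cue ("no") and its scope.
for span in ner("El paciente no presenta fiebre ni signos de infección."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```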
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
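
As a hedged reference, the list above maps onto the Transformers 4.28 `TrainingArguments` API roughly as follows; the output directory is a placeholder, and the model, tokenizer, and dataset setup are omitted.

```python
# A hedged sketch mapping the hyperparameters above onto the Transformers
# 4.28 TrainingArguments API; output_dir is a placeholder, and the model,
# tokenizer, and datasets are not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NeRUBioS_RoBERTa_Training_Testing",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# Trainer(model=..., args=training_args, ...) would then run the fine-tuning.
```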
### Training results
| Training Loss | Epoch | Step | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Unc Precision | Unc Recall | Unc F1 | Usco Precision | Usco Recall | Usco F1 | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.1945 | 1.0 | 1729 | 0.2026 | 0.4356 | 0.4919 | 0.4621 | 0.9246 | 0.9558 | 0.9399 | 0.8264 | 0.9039 | 0.8634 | 0.7192 | 0.8077 | 0.7609 | 0.6125 | 0.7069 | 0.6563 | 0.7610 | 0.8276 | 0.7929 | 0.9383 |
| 0.1173 | 2.0 | 3458 | 0.2070 | 0.4961 | 0.5595 | 0.5259 | 0.9314 | 0.9635 | 0.9472 | 0.8552 | 0.9070 | 0.8803 | 0.8221 | 0.8769 | 0.8486 | 0.6472 | 0.7404 | 0.6906 | 0.7953 | 0.8516 | 0.8225 | 0.9423 |
| 0.0938 | 3.0 | 5187 | 0.2145 | 0.5006 | 0.6182 | 0.5532 | 0.9348 | 0.9670 | 0.9506 | 0.8679 | 0.9047 | 0.8859 | 0.7991 | 0.8667 | 0.8315 | 0.6316 | 0.7404 | 0.6817 | 0.7919 | 0.8607 | 0.8249 | 0.9435 |
| 0.0543 | 4.0 | 6916 | 0.2151 | 0.5267 | 0.6226 | 0.5707 | 0.9425 | 0.9677 | 0.9550 | 0.8588 | 0.9017 | 0.8797 | 0.8076 | 0.8718 | 0.8385 | 0.6733 | 0.7789 | 0.7223 | 0.8036 | 0.8647 | 0.8330 | 0.9460 |
| 0.0403 | 5.0 | 8645 | 0.2669 | 0.5627 | 0.5536 | 0.5581 | 0.9470 | 0.9670 | 0.9569 | 0.8697 | 0.9085 | 0.8886 | 0.8153 | 0.8487 | 0.8317 | 0.7009 | 0.7712 | 0.7344 | 0.8265 | 0.8526 | 0.8393 | 0.9472 |
| 0.0325 | 6.0 | 10374 | 0.2608 | 0.5207 | 0.6094 | 0.5616 | 0.9503 | 0.9677 | 0.9589 | 0.8751 | 0.9062 | 0.8904 | 0.7977 | 0.8795 | 0.8366 | 0.6771 | 0.7815 | 0.7255 | 0.8093 | 0.8650 | 0.8362 | 0.9476 |
| 0.0205 | 7.0 | 12103 | 0.3006 | 0.5646 | 0.6285 | 0.5949 | 0.9419 | 0.9684 | 0.9550 | 0.8671 | 0.9130 | 0.8895 | 0.7958 | 0.8795 | 0.8356 | 0.6594 | 0.7815 | 0.7153 | 0.8125 | 0.8704 | 0.8404 | 0.9484 |
| 0.0146 | 8.0 | 13832 | 0.3124 | 0.5653 | 0.5844 | 0.5747 | 0.9569 | 0.9677 | 0.9623 | 0.8896 | 0.9085 | 0.8990 | 0.8085 | 0.8769 | 0.8413 | 0.6966 | 0.7969 | 0.7434 | 0.8320 | 0.8628 | 0.8471 | 0.9496 |
| 0.0095 | 9.0 | 15561 | 0.3160 | 0.5459 | 0.6285 | 0.5843 | 0.9544 | 0.9705 | 0.9624 | 0.8770 | 0.9115 | 0.8939 | 0.8255 | 0.8974 | 0.8600 | 0.6868 | 0.7892 | 0.7344 | 0.8202 | 0.8730 | 0.8458 | 0.9511 |
| 0.0069 | 10.0 | 17290 | 0.3415 | 0.5461 | 0.6094 | 0.5760 | 0.9536 | 0.9677 | 0.9606 | 0.8810 | 0.9070 | 0.8938 | 0.8182 | 0.9000 | 0.8571 | 0.6785 | 0.7866 | 0.7286 | 0.8207 | 0.8676 | 0.8435 | 0.9505 |
| 0.0047 | 11.0 | 19019 | 0.3483 | 0.5481 | 0.6197 | 0.5817 | 0.9517 | 0.9684 | 0.9600 | 0.8776 | 0.9107 | 0.8938 | 0.8160 | 0.8872 | 0.8501 | 0.6909 | 0.7815 | 0.7334 | 0.8204 | 0.8690 | 0.8440 | 0.9490 |
| 0.0018 | 12.0 | 20748 | 0.3540 | 0.5522 | 0.6138 | 0.5814 | 0.9530 | 0.9684 | 0.9606 | 0.8812 | 0.9092 | 0.8950 | 0.8208 | 0.8923 | 0.8550 | 0.6786 | 0.7815 | 0.7264 | 0.8223 | 0.8680 | 0.8446 | 0.9497 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3