# RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on NeRUBioS, an adaptation of the NUBES dataset (uncertainty labels were not considered for this model). The training and test datasets contain 13,832 and 2,765 samples, respectively. This model is part of Antonio Tamayo's PhD dissertation work. It achieves the following results on the evaluation set:
- Loss: 0.3617
- Negref Precision: 0.5916
- Negref Recall: 0.6021
- Negref F1: 0.5968
- Neg Precision: 0.9531
- Neg Recall: 0.9698
- Neg F1: 0.9614
- Nsco Precision: 0.8976
- Nsco Recall: 0.9145
- Nsco F1: 0.9060
- Precision: 0.8598
- Recall: 0.8754
- F1: 0.8676
## Model description
More information needed
## Intended uses & limitations
The model is intended for token-level detection of negation cues (Neg), negation scopes (Nsco), and negation references (Negref) in Spanish biomedical and clinical text; its limitations have not been documented. A usage sketch follows.
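As a rough usage sketch, the fine-tuned model can be loaded through the `token-classification` pipeline. The repository id below is a placeholder, not the confirmed Hub id of this model:

```python
from transformers import pipeline

# Placeholder Hub id: replace with the actual repository id of this model.
ner = pipeline(
    "token-classification",
    model="RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3",
    aggregation_strategy="simple",
)

# Spanish clinical sentence: "The patient does not present fever or cough."
print(ner("El paciente no presenta fiebre ni tos."))
```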
## Training and evaluation data
The model was fine-tuned and evaluated on NeRUBioS, an adaptation of the NUBES Spanish clinical corpus, with uncertainty labels excluded. The training and test splits contain 13,832 and 2,765 samples, respectively.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
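The snippet below is a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The output path, label set, and dataset variables are assumptions, not taken from the original training script:

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"
# Assumed BIO label set for the three negation-related entity types.
labels = ["O", "B-NEG", "I-NEG", "B-NSCO", "I-NSCO", "B-NEGREF", "I-NEGREF"]

tokenizer = AutoTokenizer.from_pretrained(base, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(labels))

args = TrainingArguments(
    output_dir="nerubios_roberta_v3",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",         # Adam defaults match the betas/epsilon above
    num_train_epochs=12,
    evaluation_strategy="epoch",
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # datasets not shown
# trainer.train()
```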
### Training results
| Training Loss | Epoch | Step | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.0026 | 1.0 | 1729 | 0.3442 | 0.5689 | 0.5639 | 0.5664 | 0.9602 | 0.9663 | 0.9632 | 0.8765 | 0.9017 | 0.8889 | 0.8512 | 0.8614 | 0.8563 |
| 0.0098 | 2.0 | 3458 | 0.2580 | 0.5198 | 0.5771 | 0.5470 | 0.9254 | 0.9761 | 0.9501 | 0.8796 | 0.9123 | 0.8957 | 0.8236 | 0.8722 | 0.8472 |
| 0.0172 | 3.0 | 5187 | 0.2335 | 0.5618 | 0.6344 | 0.5959 | 0.9524 | 0.9698 | 0.9610 | 0.8908 | 0.9070 | 0.8988 | 0.8449 | 0.8789 | 0.8616 |
| 0.0082 | 4.0 | 6916 | 0.2568 | 0.5819 | 0.6520 | 0.6150 | 0.9563 | 0.9670 | 0.9616 | 0.8896 | 0.9085 | 0.8990 | 0.8505 | 0.8818 | 0.8659 |
| 0.0054 | 5.0 | 8645 | 0.3267 | 0.5882 | 0.6123 | 0.6000 | 0.9601 | 0.9628 | 0.9614 | 0.9048 | 0.9062 | 0.9055 | 0.8628 | 0.8713 | 0.8670 |
| 0.0069 | 6.0 | 10374 | 0.3017 | 0.5559 | 0.6138 | 0.5834 | 0.9556 | 0.9677 | 0.9616 | 0.8945 | 0.9107 | 0.9025 | 0.8475 | 0.8754 | 0.8612 |
| 0.0035 | 7.0 | 12103 | 0.3325 | 0.5541 | 0.6241 | 0.5870 | 0.9448 | 0.9740 | 0.9592 | 0.8859 | 0.9107 | 0.8982 | 0.8392 | 0.8801 | 0.8591 |
| 0.0016 | 8.0 | 13832 | 0.3345 | 0.5851 | 0.6109 | 0.5977 | 0.9537 | 0.9691 | 0.9613 | 0.8981 | 0.9138 | 0.9059 | 0.8576 | 0.8766 | 0.8670 |
| 0.0031 | 9.0 | 15561 | 0.3414 | 0.5974 | 0.6035 | 0.6004 | 0.9575 | 0.9642 | 0.9608 | 0.9094 | 0.9107 | 0.9101 | 0.8671 | 0.8719 | 0.8695 |
| 0.0014 | 10.0 | 17290 | 0.3479 | 0.5977 | 0.6153 | 0.6064 | 0.9518 | 0.9698 | 0.9607 | 0.8901 | 0.9130 | 0.9014 | 0.8572 | 0.8774 | 0.8672 |
| 0.0005 | 11.0 | 19019 | 0.3542 | 0.5892 | 0.6065 | 0.5977 | 0.9524 | 0.9698 | 0.9610 | 0.8970 | 0.9153 | 0.9060 | 0.8583 | 0.8766 | 0.8673 |
| 0.0002 | 12.0 | 20748 | 0.3617 | 0.5916 | 0.6021 | 0.5968 | 0.9531 | 0.9698 | 0.9614 | 0.8976 | 0.9145 | 0.9060 | 0.8598 | 0.8754 | 0.8676 |
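The per-label columns above (Negref, Neg, Nsco) match the output shape of the `seqeval` metric for token classification. Below is a sketch of a `compute_metrics` function that produces such values, assuming the `evaluate` library and the BIO label set used earlier (both assumptions, not confirmed by this card):

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
# Assumed BIO label set; the actual label inventory is not stated in the card.
label_list = ["O", "B-NEG", "I-NEG", "B-NSCO", "I-NSCO", "B-NEGREF", "I-NEGREF"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Strip special-token positions (label id -100) before scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    # seqeval also returns one entry per entity type (e.g. results["NEG"]),
    # which is where the per-label precision/recall/F1 columns come from.
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }
```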
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3