# COMPner2-bert-base-spanish-wwm-cased
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased for Spanish named-entity recognition (body part, disease, family member, medication, and procedure entities), trained on the simonestradasch/NERcomp2 dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.2843
- Body Part Precision: 0.6644
- Body Part Recall: 0.7143
- Body Part F1: 0.6884
- Body Part Number: 413
- Disease Precision: 0.7251
- Disease Recall: 0.7303
- Disease F1: 0.7276
- Disease Number: 975
- Family Member Precision: 0.8065
- Family Member Recall: 0.8333
- Family Member F1: 0.8197
- Family Member Number: 30
- Medication Precision: 0.7778
- Medication Recall: 0.6774
- Medication F1: 0.7241
- Medication Number: 93
- Procedure Precision: 0.5763
- Procedure Recall: 0.5949
- Procedure F1: 0.5854
- Procedure Number: 311
- Overall Precision: 0.6885
- Overall Recall: 0.7025
- Overall F1: 0.6955
- Overall Accuracy: 0.9146
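
A minimal inference sketch, not from the original card: it assumes the fine-tuned weights are published on the Hugging Face Hub; the repository path below is a placeholder and should be replaced with the actual `namespace/repo`.

```python
from transformers import pipeline

# Hypothetical repository path; replace with the actual namespace/repo of this model.
model_id = "your-username/COMPner2-bert-base-spanish-wwm-cased"

# Token-classification pipeline; aggregation_strategy="simple" merges word pieces
# into whole entity spans (body part, disease, family member, medication, procedure).
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("El paciente presenta dolor abdominal y fue tratado con ibuprofeno."))
```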
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
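
The card does not document the dataset splits. As a starting point, here is a hedged sketch for inspecting the dataset named above with the `datasets` library, assuming it is publicly accessible on the Hub; the split name used below is an assumption.

```python
from datasets import load_dataset

# Assumes simonestradasch/NERcomp2 is accessible on the Hugging Face Hub;
# split names may differ in the actual dataset.
dataset = load_dataset("simonestradasch/NERcomp2")

print(dataset)              # show available splits and column names
print(dataset["train"][0])  # inspect one example (tokens and NER tags)
```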
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
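
A hedged reconstruction of the corresponding `TrainingArguments`: the exact arguments were not published with the card, so the values below simply mirror the hyperparameters listed above, and the output directory and evaluation schedule are assumptions.

```python
from transformers import TrainingArguments

# Sketch matching the listed hyperparameters; the Adam betas/epsilon above are the
# Trainer defaults, so they need no explicit arguments here.
training_args = TrainingArguments(
    output_dir="COMPner2-bert-base-spanish-wwm-cased",  # assumption, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=13,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
)
```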
### Training results
Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.4243 | 1.0 | 1004 | 0.2935 | 0.5910 | 0.6998 | 0.6408 | 413 | 0.6784 | 0.6944 | 0.6863 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.6882 | 0.6882 | 0.6882 | 93 | 0.6050 | 0.5466 | 0.5743 | 311 | 0.6473 | 0.6718 | 0.6593 | 0.9052 |
0.2348 | 2.0 | 2008 | 0.2843 | 0.6644 | 0.7143 | 0.6884 | 413 | 0.7251 | 0.7303 | 0.7276 | 975 | 0.8065 | 0.8333 | 0.8197 | 30 | 0.7778 | 0.6774 | 0.7241 | 93 | 0.5763 | 0.5949 | 0.5854 | 311 | 0.6885 | 0.7025 | 0.6955 | 0.9146 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3