# PubMedELECTRA-LitCovid-v1.3.1

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedELECTRA-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedELECTRA-base-uncased-abstract) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6642
- Hamming loss: 0.0230
- F1 micro: 0.8128
- F1 macro: 0.3182
- F1 weighted: 0.8716
- F1 samples: 0.8709
- Precision micro: 0.7205
- Precision macro: 0.2593
- Precision weighted: 0.8282
- Precision samples: 0.8563
- Recall micro: 0.9323
- Recall macro: 0.7360
- Recall weighted: 0.9323
- Recall samples: 0.9428
- ROC AUC: 0.9559
- Accuracy: 0.6786
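
The metric set above (Hamming loss, micro/macro/samples-averaged F1) indicates a multi-label classification head. A minimal inference sketch, assuming a hypothetical repo id `your-org/PubMedELECTRA-LitCovid-v1.3.1`, independent sigmoid outputs per label, and a 0.5 decision threshold:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id -- substitute the actual checkpoint path.
model_id = "your-org/PubMedELECTRA-LitCovid-v1.3.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Clinical characteristics and outcomes of hospitalized COVID-19 patients."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: independent sigmoid per label, thresholded at 0.5 (assumed).
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```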
## Model description

More information needed. Judging by the model name and the multi-label metrics above (Hamming loss, samples-averaged F1), this appears to be a multi-label topic classifier for LitCovid-style biomedical abstracts.
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
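
For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (the output directory and per-epoch evaluation are assumptions, not recorded in the card):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="pubmedelectra-litcovid",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed: metrics are reported once per epoch
    logging_strategy="epoch",     # assumed
)
```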
### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | ROC AUC | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 1.2083        | 1.0   | 2272  | 0.5767          | 0.0441       | 0.6908   | 0.2460   | 0.8045      | 0.8120     | 0.5536          | 0.1953          | 0.7313             | 0.7793            | 0.9184       | 0.7253       | 0.9184          | 0.9322         | 0.9382  | 0.5340   |
| 1.1155        | 2.0   | 4544  | 0.5897          | 0.0282       | 0.7778   | 0.2895   | 0.8483      | 0.8513     | 0.6729          | 0.2337          | 0.7989             | 0.8313            | 0.9214       | 0.6944       | 0.9214          | 0.9355         | 0.9480  | 0.6247   |
| 0.9687        | 3.0   | 6816  | 0.5859          | 0.0261       | 0.7937   | 0.3031   | 0.8573      | 0.8605     | 0.6880          | 0.2446          | 0.8017             | 0.8359            | 0.9378       | 0.7219       | 0.9378          | 0.9481         | 0.9569  | 0.6418   |
| 0.7527        | 4.0   | 9088  | 0.6337          | 0.0232       | 0.8115   | 0.3152   | 0.8694      | 0.8707     | 0.7184          | 0.2571          | 0.8240             | 0.8553            | 0.9323       | 0.7123       | 0.9323          | 0.9429         | 0.9558  | 0.6752   |
| 0.4783        | 5.0   | 11360 | 0.6642          | 0.0230       | 0.8128   | 0.3182   | 0.8716      | 0.8709     | 0.7205          | 0.2593          | 0.8282             | 0.8563            | 0.9323       | 0.7360       | 0.9323          | 0.9428         | 0.9559  | 0.6786   |
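
These per-epoch metrics are what a scikit-learn based `compute_metrics` callback would produce; a sketch under the assumption that evaluation thresholds sigmoid probabilities at 0.5 (`eval_pred` is the `(logits, labels)` pair the `Trainer` passes in):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, hamming_loss,
    precision_score, recall_score, roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid over raw logits
    preds = (probs > 0.5).astype(int)      # assumed 0.5 decision threshold
    metrics = {
        "hamming_loss": hamming_loss(labels, preds),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
    for avg in ("micro", "macro", "weighted", "samples"):
        metrics[f"f1_{avg}"] = f1_score(labels, preds, average=avg, zero_division=0)
        metrics[f"precision_{avg}"] = precision_score(labels, preds, average=avg, zero_division=0)
        metrics[f"recall_{avg}"] = recall_score(labels, preds, average=avg, zero_division=0)
    return metrics
```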
### Framework versions

- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3