# BioLinkBERT-LitCovid-v1.2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base). The fine-tuning dataset is not documented here, though the model name suggests the LitCovid collection of COVID-19 literature. It achieves the following results on the evaluation set (a sketch of how such multi-label metrics are computed follows the list):
- Loss: 0.0950
- F1 micro: 0.9201
- F1 macro: 0.8831
- F1 weighted: 0.9202
- F1 samples: 0.9200
- Precision micro: 0.9141
- Precision macro: 0.8790
- Precision weighted: 0.9144
- Precision samples: 0.9283
- Recall micro: 0.9263
- Recall macro: 0.8877
- Recall weighted: 0.9263
- Recall samples: 0.9372
- ROC AUC: 0.9529
- Accuracy: 0.7848
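
For reference, here is a minimal sketch (not the authors' code) of how multi-label metrics like these are typically computed with scikit-learn; the 0.5 decision threshold, the micro-averaged ROC AUC, and the use of subset accuracy are assumptions not confirmed by this card.

```python
# Hedged sketch: multi-label metrics as reported above, via scikit-learn.
# `y_true` is an (n_samples, n_labels) binary matrix, `y_score` holds
# per-label probabilities; the 0.5 threshold is an assumption.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])                     # toy labels
y_score = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4], [0.6, 0.7, 0.3]])  # toy scores
y_pred = (y_score >= 0.5).astype(int)

for avg in ("micro", "macro", "weighted", "samples"):
    f1 = f1_score(y_true, y_pred, average=avg, zero_division=0)
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}: F1={f1:.4f} P={p:.4f} R={r:.4f}")

print("ROC AUC (micro):", roc_auc_score(y_true, y_score, average="micro"))
print("Accuracy (subset):", accuracy_score(y_true, y_pred))  # exact-match ratio
```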
## Model description

More information needed. The base model, BioLinkBERT-base, is a BERT-style biomedical language model pretrained on PubMed with citation-link supervision; the model name and the samples-averaged metrics above suggest this checkpoint performs multi-label topic classification of COVID-19 literature.
## Intended uses & limitations
More information needed
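
In the absence of documented usage, a hedged inference sketch is given below. It assumes a multi-label sequence-classification head (suggested by the samples-averaged metrics and the subset-style accuracy above); the model id and the 0.5 sigmoid threshold are placeholders, not confirmed by this card.

```python
# Hedged sketch: multi-label inference with this checkpoint.
# The model id and the 0.5 threshold are assumptions, not from the card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "BioLinkBERT-LitCovid-v1.2"  # replace with the actual hub id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Effect of remdesivir on hospitalized patients with severe COVID-19."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # one probability per label

predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```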
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
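
A sketch of how the listed hyperparameters map onto `transformers.TrainingArguments`; the output directory and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch results table below).

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# Anything not in the list above (output_dir, evaluation_strategy) is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="biolinkbert-litcovid",   # assumed; not stated in this card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",         # assumed from the per-epoch results
)
```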
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | ROC AUC | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1013 | 1.0 | 2211 | 0.0899 | 0.9159 | 0.8789 | 0.9164 | 0.9149 | 0.9074 | 0.8824 | 0.9092 | 0.9213 | 0.9245 | 0.8808 | 0.9245 | 0.9355 | 0.9511 | 0.7729 |
| 0.0749 | 2.0 | 4422 | 0.0847 | 0.9205 | 0.8854 | 0.9205 | 0.9203 | 0.9138 | 0.8843 | 0.9144 | 0.9264 | 0.9274 | 0.8882 | 0.9274 | 0.9390 | 0.9534 | 0.7857 |
| 0.0583 | 3.0 | 6633 | 0.0871 | 0.9212 | 0.8851 | 0.9212 | 0.9206 | 0.9145 | 0.8913 | 0.9151 | 0.9269 | 0.9280 | 0.8808 | 0.9280 | 0.9390 | 0.9537 | 0.7883 |
| 0.0433 | 4.0 | 8844 | 0.0924 | 0.9201 | 0.8849 | 0.9203 | 0.9202 | 0.9094 | 0.8766 | 0.9099 | 0.9246 | 0.9312 | 0.8947 | 0.9312 | 0.9416 | 0.9546 | 0.7834 |
| 0.0315 | 5.0 | 11055 | 0.0950 | 0.9201 | 0.8831 | 0.9202 | 0.9200 | 0.9141 | 0.8790 | 0.9144 | 0.9283 | 0.9263 | 0.8877 | 0.9263 | 0.9372 | 0.9529 | 0.7848 |
### Framework versions
- Transformers 4.28.0
- PyTorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3