# MultipleQG-Full_Ctxt_Only-filtered_0_15_PubMedBert
This model is a fine-tuned version of [dmis-lab/TinyPubMedBERT-v1.0](https://huggingface.co/dmis-lab/TinyPubMedBERT-v1.0) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5624
- Rouge1: 0.8933
- Rouge2: 0.6988
- Rougel: 0.6905
- Rougelsum: 0.6905
- Exact Match: 0.0
- Precision (BERTScore): [0.8885194659233093, 0.9855583906173706]
- Recall (BERTScore): [0.8840953707695007, 0.9852440357208252]
- F1 (BERTScore): [0.8863018751144409, 0.9854011535644531]
- Hashcode (BERTScore): roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0)
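The bracketed Precision, Recall, and F1 values are BERTScore outputs: the hashcode encodes the scorer configuration (roberta-large embeddings, layer 17, no IDF weighting, bert-score 0.3.12). A minimal sketch of how such scores are computed; the candidate and reference strings below are made up for illustration:

```python
# Sketch of the BERTScore configuration implied by the hashcode
# "roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0)".
# The example strings are hypothetical, not from the evaluation set.
from bert_score import score

candidates = ["What gene is associated with cystic fibrosis?"]  # model outputs (hypothetical)
references = ["Which gene is linked to cystic fibrosis?"]       # gold questions (hypothetical)

(P, R, F1), hashcode = score(
    candidates,
    references,
    model_type="roberta-large",
    num_layers=17,   # matches the "L17" in the hashcode
    idf=False,       # matches "no-idf"
    return_hash=True,
)
print(P.mean().item(), R.mean().item(), F1.mean().item())
print(hashcode)
```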
## Model description
More information needed
## Intended uses & limitations
More information needed
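No usage example is documented yet. Below is a minimal loading sketch: the repo id is guessed from the model name, and `AutoModelForSeq2SeqLM` assumes a standard encoder-decoder generation checkpoint (the ROUGE and exact-match metrics imply the model generates text), so adjust both to the actual checkpoint.

```python
# Minimal usage sketch. The repo id is a guess based on the model name,
# and AutoModelForSeq2SeqLM assumes a standard seq2seq checkpoint;
# change both if the actual architecture or Hub path differs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "MultipleQG-Full_Ctxt_Only-filtered_0_15_PubMedBert"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

context = "Metformin is a first-line therapy for type 2 diabetes."  # example input
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```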
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
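For reference, a sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments`. Dataset preparation and the `Seq2SeqTrainer` call are omitted because the card does not document them, and the two flags marked as assumptions are inferred from the per-epoch generation metrics reported below.

```python
# Reconstruction of the training configuration from the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="MultipleQG-Full_Ctxt_Only-filtered_0_15_PubMedBert",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",   # Adam(betas=(0.9, 0.999), eps=1e-8) is the Trainer default
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: metrics below are logged once per epoch
    predict_with_generate=True,   # assumption: required to compute ROUGE on generated text
)
```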
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Exact Match | Precision | Recall | F1 | Hashcode |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.4416 | 1.0 | 365 | 0.8932 | 0.5338 | 0.5313 | 0.5338 | 0.5338 | 0.0 | [0.876929759979248, 0.9371142387390137] | [0.79274982213974, 0.9412162899971008] | [0.8327177166938782, 0.9391608238220215] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.8412 | 2.0 | 730 | 0.7106 | 0.7626 | 0.6348 | 0.6615 | 0.6615 | 0.0 | [0.876401960849762, 0.9752652645111084] | [0.8331349492073059, 0.976763129234314] | [0.854220986366272, 0.9760136604309082] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.697 | 3.0 | 1095 | 0.6683 | 0.7570 | 0.6533 | 0.6732 | 0.6732 | 0.0 | [0.8774559497833252, 0.9713358879089355] | [0.8353069424629211, 0.9722082018852234] | [0.8558628559112549, 0.9717718362808228] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.6262 | 4.0 | 1460 | 0.6224 | 0.8165 | 0.6622 | 0.6751 | 0.6751 | 0.0 | [0.8810427188873291, 0.9765440225601196] | [0.8570842742919922, 0.9781153202056885] | [0.868898332118988, 0.9773290157318115] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.5824 | 5.0 | 1825 | 0.6191 | 0.8236 | 0.6693 | 0.6738 | 0.6738 | 0.0 | [0.8814970850944519, 0.98689204454422] | [0.8591786623001099, 0.9870170950889587] | [0.8701948523521423, 0.9869545698165894] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.5504 | 6.0 | 2190 | 0.5932 | 0.8641 | 0.6914 | 0.6812 | 0.6812 | 0.0 | [0.8851853609085083, 0.9714876413345337] | [0.8708034753799438, 0.9725130796432495] | [0.8779354691505432, 0.9720001220703125] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.5178 | 7.0 | 2555 | 0.5754 | 0.8863 | 0.7037 | 0.6832 | 0.6832 | 0.0 | [0.8866510391235352, 0.9713358879089355] | [0.8813090324401855, 0.9722082018852234] | [0.8839719295501709, 0.9717718362808228] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.5072 | 8.0 | 2920 | 0.5680 | 0.8746 | 0.6950 | 0.6850 | 0.6850 | 0.0 | [0.8861116170883179, 0.9775792360305786] | [0.8811479806900024, 0.9789127111434937] | [0.8836228251457214, 0.9782454967498779] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.4869 | 9.0 | 3285 | 0.5592 | 0.8902 | 0.6942 | 0.6860 | 0.6860 | 0.0 | [0.8886069655418396, 0.9714876413345337] | [0.8855726718902588, 0.9725130796432495] | [0.8870871663093567, 0.9720001220703125] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.4858 | 10.0 | 3650 | 0.5624 | 0.8933 | 0.6988 | 0.6905 | 0.6905 | 0.0 | [0.8885194659233093, 0.9855583906173706] | [0.8840953707695007, 0.9852440357208252] | [0.8863018751144409, 0.9854011535644531] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
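Validation loss improves steadily from 0.8932 after the first epoch to 0.5592 at epoch 9 and is essentially flat at epoch 10 (0.5624), suggesting the ten-epoch schedule was roughly sufficient for convergence under this configuration.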
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2