---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-large-finetuned-augument-visquad2-5-4-2023-1

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset (the dataset name was not available to the Trainer; the model name suggests an augmented ViQuAD 2.0-style Vietnamese question-answering dataset, but this is not confirmed by the card). Per-epoch results on the evaluation set are reported in the Training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed
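
No usage details are documented, so the following is only a minimal sketch of loading the checkpoint for extractive question answering with the Hugging Face Transformers `pipeline`. The repository id, the Vietnamese example, and the assumption that the model handles SQuAD 2.0-style (possibly unanswerable) questions are inferred from the model name and the evaluation columns below, not confirmed by the card.

```python
from transformers import pipeline

# Assumed repository id; replace with the full Hub id if the checkpoint lives under a namespace.
qa = pipeline(
    "question-answering",
    model="xlm-roberta-large-finetuned-augument-visquad2-5-4-2023-1",
)

prediction = qa(
    question="Thủ đô của Việt Nam là gì?",      # "What is the capital of Vietnam?"
    context="Hà Nội là thủ đô của Việt Nam.",   # "Hanoi is the capital of Vietnam."
    handle_impossible_answer=True,  # the evaluation set contains unanswerable questions (Noans columns)
)
print(prediction)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```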

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameter values used during training were not preserved in this card.
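
For illustration only, the sketch below shows how hyperparameters of this kind are typically declared with the Hugging Face `Trainer`. Every value is a placeholder; only the 10 epochs and per-epoch evaluation are consistent with the results table, and none of the values should be read as the actual configuration of this run.

```python
from transformers import TrainingArguments

# Placeholder values only -- the real hyperparameters for this checkpoint are unknown.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-finetuned-augument-visquad2-5-4-2023-1",
    num_train_epochs=10,            # consistent with the 10 epochs in the results table
    evaluation_strategy="epoch",    # consistent with the per-epoch rows in the results table
    save_strategy="epoch",
    learning_rate=2e-5,             # placeholder
    per_device_train_batch_size=8,  # placeholder
    weight_decay=0.01,              # placeholder
    seed=42,                        # placeholder
)
# The Trainer would then be constructed with this `training_args` object,
# the model, and the (undocumented) train/eval datasets.
```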

### Training results

| Training Loss | Epoch | Step  | Best F1 | Validation Loss | Exact   | F1      | Total | Hasans Exact | Hasans F1 | Hasans Total | Noans Exact | Noans F1 | Noans Total | Best Exact | Best Exact Thresh | Best F1 Thresh |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:-----------:|:--------:|:-----------:|:----------:|:-----------------:|:--------------:|
| 1.1009        | 1.0   | 4221  | 69.8405 | 1.2010          | 36.1685 | 54.2642 | 3821  | 52.0920      | 78.1543   | 2653         | 0.0         | 0.0      | 1168        | 55.4305    | 0.8416            | 0.9065         |
| 0.4716        | 2.0   | 8443  | 74.1358 | 1.0553          | 38.2884 | 56.0896 | 3821  | 54.9943      | 80.6326   | 2653         | 0.3425      | 0.3425   | 1168        | 58.8328    | 0.8002            | 0.9118         |
| 0.3487        | 3.0   | 12664 | 76.3875 | 1.1176          | 39.3876 | 56.5884 | 3821  | 56.6905      | 81.4641   | 2653         | 0.0856      | 0.0856   | 1168        | 61.3190    | 0.7923            | 0.9324         |
| 0.2747        | 4.0   | 16886 | 76.3938 | 1.2634          | 38.6548 | 56.3082 | 3821  | 55.6728      | 81.0982   | 2653         | 0.0         | 0.0      | 1168        | 60.4030    | 0.7414            | 0.9059         |
| 0.217         | 5.0   | 21107 | 76.5504 | 1.3581          | 39.3353 | 56.9569 | 3821  | 56.4644      | 81.8441   | 2653         | 0.4281      | 0.4281   | 1168        | 61.0050    | 0.8307            | 0.8701         |
| 0.1758        | 6.0   | 25329 | 77.2312 | 1.5473          | 39.4399 | 56.6673 | 3821  | 56.7282      | 81.5401   | 2653         | 0.1712      | 0.1712   | 1168        | 61.4761    | 0.8283            | 0.8996         |
| 0.1429        | 7.0   | 29550 | 77.2045 | 1.7840          | 38.8642 | 56.8934 | 3821  | 55.7105      | 81.6773   | 2653         | 0.5993      | 0.5993   | 1168        | 61.3190    | 0.7413            | 0.9449         |
| 0.1159        | 8.0   | 33772 | 76.6868 | 2.1179          | 38.7071 | 56.7210 | 3821  | 55.2959      | 81.2405   | 2653         | 1.0274      | 1.0274   | 1168        | 60.4292    | 0.5939            | 0.9917         |
| 0.0997        | 9.0   | 37993 | 77.1237 | 2.3697          | 38.6548 | 56.6046 | 3821  | 55.4467      | 81.2990   | 2653         | 0.5137      | 0.5137   | 1168        | 60.8479    | 0.9509            | 0.9636         |
| 0.0873        | 10.0  | 42210 | 76.9467 | 2.4868          | 38.7333 | 56.7311 | 3821  | 55.1451      | 81.0666   | 2653         | 1.4555      | 1.4555   | 1168        | 60.7694    | 0.8788            | 0.9686         |
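
The Exact/F1 scores, the Hasans/Noans breakdown, and the best-threshold columns above correspond to the keys returned by the SQuAD v2 metric in the `evaluate` library. A minimal sketch of computing them (with made-up predictions and references) is shown below; it is included only to clarify what the columns measure.

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

# Toy predictions/references purely to illustrate the expected format.
predictions = [
    {"id": "q1", "prediction_text": "Hà Nội", "no_answer_probability": 0.1},
    {"id": "q2", "prediction_text": "", "no_answer_probability": 0.9},  # predicted unanswerable
]
references = [
    {"id": "q1", "answers": {"text": ["Hà Nội"], "answer_start": [0]}},
    {"id": "q2", "answers": {"text": [], "answer_start": []}},          # truly unanswerable
]

results = squad_v2.compute(predictions=predictions, references=references)
# `results` contains the fields reported above, e.g. "exact", "f1", "total",
# "HasAns_exact", "HasAns_f1", "HasAns_total", "NoAns_exact", "NoAns_f1",
# "NoAns_total", "best_exact", "best_exact_thresh", "best_f1", "best_f1_thresh".
print(results["best_f1"], results["best_f1_thresh"])
```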

### Framework versions