# final-squad-bn-qgen-mt5-small-all-metric-v2

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6559
- Rouge1 Precision: 31.143
- Rouge1 Recall: 24.8687
- Rouge1 Fmeasure: 26.7861
- Rouge2 Precision: 12.1721
- Rouge2 Recall: 9.3907
- Rouge2 Fmeasure: 10.1945
- Rougel Precision: 29.2741
- Rougel Recall: 23.4105
- Rougel Fmeasure: 25.196
- Rougelsum Precision: 29.2488
- Rougelsum Recall: 23.3873
- Rougelsum Fmeasure: 25.1783
- Bleu-1: 20.2844
- Bleu-2: 11.7083
- Bleu-3: 7.2251
- Bleu-4: 4.6646
- Meteor: 0.1144
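
The model name suggests a Bengali question-generation model fine-tuned on a SQuAD-style dataset. Below is a minimal inference sketch using the `transformers` library; the repository ID, the `answer: ... context: ...` input format, and the generation settings are assumptions not documented in this card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo ID taken from the model name above; adjust to the actual path.
model_id = "final-squad-bn-qgen-mt5-small-all-metric-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format (answer-conditioned context), common for SQuAD-style
# question generation fine-tunes; the card does not document the prompt format.
text = "answer: ... context: ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```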
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
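
As a rough guide, a `Seq2SeqTrainingArguments` setup matching these hyperparameters might look like the sketch below. The output directory and dataset objects are placeholders; the original training script is not included in this card, and the Adam betas/epsilon and linear scheduler are the Trainer defaults.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Mirrors the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-bn-qgen",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized datasets not shown here
    eval_dataset=eval_dataset,    # placeholder
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```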
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Meteor |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.9251 | 1.0 | 6769 | 0.7237 | 26.4973 | 20.6282 | 22.3983 | 9.3138 | 6.9928 | 7.6534 | 24.9538 | 19.4635 | 21.1113 | 24.9713 | 19.4608 | 21.119 | 17.5414 | 9.5172 | 5.6104 | 3.4646 | 0.097 |
| 0.8214 | 2.0 | 13538 | 0.6804 | 29.524 | 23.4125 | 25.2574 | 11.2954 | 8.6345 | 9.3841 | 27.8173 | 22.1005 | 23.8164 | 27.7939 | 22.0878 | 23.801 | 19.2368 | 10.9056 | 6.6821 | 4.2702 | 0.1074 |
| 0.7914 | 3.0 | 20307 | 0.6600 | 30.7136 | 24.5527 | 26.4259 | 11.8743 | 9.1634 | 9.9452 | 28.8725 | 23.1161 | 24.859 | 28.8566 | 23.1018 | 24.8457 | 19.9315 | 11.4473 | 7.0613 | 4.5701 | 0.1119 |
| 0.7895 | 4.0 | 27076 | 0.6559 | 31.1568 | 24.8787 | 26.8004 | 12.1685 | 9.3879 | 10.1929 | 29.2804 | 23.3999 | 25.1925 | 29.2554 | 23.3891 | 25.1818 | 20.2844 | 11.7083 | 7.2251 | 4.6646 | 0.1144 |
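
The card does not state which libraries computed these metrics. One plausible way to reproduce per-variant ROUGE precision/recall/F-measure alongside BLEU-1..4 and METEOR is sketched below using `rouge_score` and `nltk`; the whitespace tokenization, smoothing choice, and aggregation over the evaluation set are assumptions.

```python
import nltk
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

nltk.download("wordnet")  # required by METEOR

prediction = "generated question"  # placeholder example pair
reference = "reference question"

# ROUGE with separate precision/recall/F-measure, matching the columns above.
scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rougeL", "rougeLsum"], use_stemmer=False
)
rouge = scorer.score(reference, prediction)
r1 = rouge["rouge1"]
print("Rouge1 P/R/F:", r1.precision, r1.recall, r1.fmeasure)

# BLEU-1..4 as cumulative n-gram scores; whitespace tokenization and
# smoothing method are assumptions.
hyp, ref = prediction.split(), reference.split()
smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))
    score = sentence_bleu([ref], hyp, weights=weights, smoothing_function=smooth)
    print(f"Bleu-{n}:", score)

# METEOR on pre-tokenized text (required by nltk >= 3.6).
print("Meteor:", meteor_score([ref], hyp))
```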
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1