# flan-t5-base-qg-SQuAD-LMQG
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the qg_squad dataset. It achieves the following results on the evaluation set:
- Loss: 0.5624
- Rouge1: 52.9376
- Rouge2: 30.3418
- Rougel: 48.9442
- Rougelsum: 48.9337
- Meteor: 48.0417
- Bleu-n: 21.5099
- Bleu-1: 53.2950
- Bleu-2: 27.3888
- Bleu-3: 17.6196
- Bleu-4: 11.8132
- Gen Len: 14.2609
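
The snippet below sketches how a checkpoint like this could be loaded for question generation with the `transformers` pipeline. The repo id is a placeholder, and the `generate question: ... <hl> ... <hl>` input format is an assumption based on the LMQG convention for qg_squad; check the actual preprocessing before relying on it.

```python
# Minimal sketch, not the documented API of this card: the repo id below is a
# placeholder, and the "<hl>" answer-highlight format is an assumed LMQG convention.
from transformers import pipeline

qg = pipeline(
    "text2text-generation",
    model="your-username/flan-t5-base-qg-SQuAD-LMQG",  # hypothetical repo id
)

# The answer span is wrapped in "<hl>" tokens inside the context (assumed format).
text = (
    "generate question: Beyonce starred in the film Dreamgirls, released in "
    "<hl> 2006 <hl>, which earned her two Golden Globe nominations."
)
print(qg(text, max_length=64)[0]["generated_text"])
```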
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
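
As a point of reference, these settings map onto Transformers training arguments roughly as below. This is a reconstruction, not the original training script; the output path and the evaluation/generation flags are assumptions. The Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit override.

```python
# Reconstruction of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir and the evaluation/generation flags are assumptions, not from the run.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-qg-SQuAD-LMQG",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",     # assumption: the table reports metrics per epoch
    predict_with_generate=True,      # assumption: needed to compute ROUGE/BLEU at eval time
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Trainer defaults.
)
```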
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Meteor  | Bleu-n  | Bleu-1  | Bleu-2  | Bleu-3  | Bleu-4  | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6054        | 1.0   | 9466  | 0.5603          | 51.5022 | 28.8098 | 47.624  | 47.6181   | 46.3462 | 20.5165 | 52.2978 | 26.2204 | 16.7816 | 11.2230 | 14.1466 |
| 0.5436        | 2.0   | 18932 | 0.5555          | 52.6557 | 29.8173 | 48.7211 | 48.7146   | 47.4380 | 21.0460 | 53.6907 | 27.3374 | 17.4950 | 11.6865 | 14.0910 |
| 0.5087        | 3.0   | 28398 | 0.5572          | 52.5567 | 30.0117 | 48.5798 | 48.5669   | 47.5632 | 21.3178 | 53.1790 | 27.2268 | 17.5366 | 11.8127 | 14.1871 |
| 0.4874        | 4.0   | 37864 | 0.5601          | 52.9404 | 30.3445 | 48.9746 | 48.9583   | 47.9995 | 21.5205 | 53.3457 | 27.4082 | 17.6273 | 11.8327 | 14.2623 |
| 0.473         | 5.0   | 47330 | 0.5624          | 52.9376 | 30.3418 | 48.9442 | 48.9337   | 48.0417 | 21.5099 | 53.2950 | 27.3888 | 17.6196 | 11.8132 | 14.2609 |
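
The metric columns above can be reproduced with the `evaluate` library, sketched below under the assumption that the standard ROUGE, BLEU, and METEOR implementations were used; the exact scoring script is not documented in this card, and the predictions/references shown are illustrative only.

```python
# Sketch of computing table-style metrics with the `evaluate` library.
# The actual scoring setup behind this card is an assumption, not documented here.
import evaluate

predictions = ["When was Dreamgirls released?"]          # illustrative model outputs
references = ["In what year was Dreamgirls released?"]   # illustrative gold questions

rouge = evaluate.load("rouge")    # -> rouge1, rouge2, rougeL, rougeLsum
bleu = evaluate.load("bleu")      # per-order precisions map onto Bleu-1..Bleu-4
meteor = evaluate.load("meteor")

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions,
                   references=[[r] for r in references], max_order=4))
print(meteor.compute(predictions=predictions, references=references))
```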
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3