# flan-t5-base-qg-SQuAD-10
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the squad dataset, trained for question generation. It achieves the following results on the evaluation set (a sketch of how metrics of this kind can be computed follows the list):
- Loss: 0.5608
- Rouge1: 52.7379
- Rouge2: 30.2006
- Rougel: 48.7775
- Rougelsum: 48.7801
- Meteor: 47.8283
- Bleu-n: 21.4544
- Bleu-1: 53.0604
- Bleu-2: 27.1936
- Bleu-3: 17.5151
- Bleu-4: 11.7991
- Gen Len: 14.2843
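
The scores above were produced by the training script itself. As a hedged illustration only, metrics of this kind can be computed with the `evaluate` library; the toy prediction/reference strings below are assumptions for demonstration, not the card's actual evaluation code:

```python
import evaluate

# Load standard text-generation metrics from the Hugging Face evaluate hub.
rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")

# Toy prediction/reference pair, for illustration only.
preds = ["What river runs through Paris?"]
refs = ["Which river flows through Paris?"]

print(rouge.compute(predictions=preds, references=refs))               # rouge1/rouge2/rougeL/rougeLsum
print(bleu.compute(predictions=preds, references=[[r] for r in refs]))  # corpus BLEU and n-gram precisions
print(meteor.compute(predictions=preds, references=refs))               # meteor
```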
## Model description
This is google/flan-t5-base fine-tuned for question generation (the "qg" in the model name): given a context passage from SQuAD, the model generates a question answerable from that passage.
## Intended uses & limitations
Intended use: generating reading-comprehension questions from English passages (a usage sketch follows). Limitations are not documented beyond the metrics above; expect the model to work best on Wikipedia-style prose like its SQuAD training data.
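
A minimal inference sketch, assuming the checkpoint is loaded from a local directory or hub id named after this card. The prompt template, checkpoint path, and generation settings are assumptions, not documented here:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint path; replace with the actual directory or hub id.
model_id = "flan-t5-base-qg-SQuAD-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = (
    "The Normans were the people who in the 10th and 11th centuries gave "
    "their name to Normandy, a region in France."
)
# Assumed input format: the exact template used during fine-tuning is not
# documented in this card; some question-generation setups prepend a task
# prefix such as "generate question: " or highlight the target answer span.
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```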
## Training and evaluation data
Training and evaluation used the squad dataset. The per-epoch step count in the results table (10,950 steps at batch size 8, i.e. about 87,600 examples) matches the size of the SQuAD v1.1 train split (87,599 examples), which suggests the full train split was used; the exact evaluation split and any preprocessing are not documented.
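
For reference, a sketch of loading the dataset with the `datasets` library (how the splits were actually consumed during training is an assumption):

```python
from datasets import load_dataset

# SQuAD v1.1: 87,599 train examples and 10,570 validation examples.
squad = load_dataset("squad")
print(squad)

example = squad["train"][0]
print(example["context"][:100], "->", example["question"])
```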
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
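
As a hedged reconstruction only, these settings map onto `transformers` training arguments roughly as follows. The output directory, evaluation strategy, and `predict_with_generate` flag are assumptions; Adam with the listed betas and epsilon is the `Trainer` default optimizer configuration:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-qg-SQuAD-10",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumed: results below are reported once per epoch
    predict_with_generate=True,    # assumed: needed to compute ROUGE/BLEU during eval
)
```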
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Meteor  | Bleu-n  | Bleu-1  | Bleu-2  | Bleu-3  | Bleu-4  | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6022        | 1.0   | 10950 | 0.5586          | 51.7436 | 28.9612 | 47.8006 | 47.8226   | 46.5085 | 20.4533 | 52.5556 | 26.3407 | 16.7820 | 11.1797 | 14.1169 |
| 0.5414        | 2.0   | 21900 | 0.5549          | 52.4226 | 29.7931 | 48.4799 | 48.4956   | 47.3728 | 21.0853 | 53.0004 | 26.9469 | 17.2415 | 11.5862 | 14.2268 |
| 0.5161        | 3.0   | 32850 | 0.5565          | 52.6896 | 30.0797 | 48.69   | 48.7029   | 47.6188 | 21.2473 | 53.3591 | 27.2354 | 17.5149 | 11.7720 | 14.1970 |
| 0.485         | 4.0   | 43800 | 0.5590          | 52.7436 | 30.1349 | 48.8039 | 48.8081   | 47.7780 | 21.4458 | 53.0793 | 27.2141 | 17.5082 | 11.8084 | 14.2746 |
| 0.4759        | 5.0   | 54750 | 0.5608          | 52.7379 | 30.2006 | 48.7775 | 48.7801   | 47.8283 | 21.4544 | 53.0604 | 27.1936 | 17.5151 | 11.7991 | 14.2843 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3