# flan-t5-qg-LQ-tarek-test
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5322
- Rouge1: 23.9922
- Rouge2: 6.7099
- Rougel: 21.5985
- Rougelsum: 21.5946
- Gen Len: 16.1304
- Meteor: 0.1786
- Bleu: 0.0326 (precisions: [0.3184, 0.0748, 0.0294, 0.0118]; brevity penalty: 0.6078; length ratio: 0.6676; translation length: 210109; reference length: 314727)
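As a sanity check, the reported BLEU score can be reconstructed from its listed components using the standard corpus-BLEU formula (brevity penalty times the geometric mean of the 1–4 gram precisions); the numbers below are the evaluation values reported above:

```python
import math

# N-gram precisions and corpus lengths reported for the final evaluation.
precisions = [0.31838712287431764, 0.07483151167147811,
              0.02936351414577522, 0.011773119505596449]
translation_length = 210109
reference_length = 314727

# Brevity penalty: exp(1 - ref/hyp), applied because the generated
# text is shorter than the references (length ratio < 1).
brevity_penalty = math.exp(1 - reference_length / translation_length)

# BLEU = brevity penalty * geometric mean of the n-gram precisions.
log_avg = sum(math.log(p) for p in precisions) / len(precisions)
bleu = brevity_penalty * math.exp(log_avg)

print(round(brevity_penalty, 4))  # ≈ 0.6078
print(round(bleu, 4))             # ≈ 0.0326
```

The short generations (Gen Len ≈ 16 versus a length ratio of 0.67) mean the brevity penalty cuts the score by roughly 40%, which explains the low overall BLEU despite a reasonable unigram precision.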
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
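With `lr_scheduler_type: linear` and no warmup steps listed (assumed to be 0), the learning rate decays linearly from 3e-4 to 0 over the full run of 2 epochs × 23583 steps = 47166 optimizer steps, matching the step counts in the results table below. A minimal sketch of that schedule (a hypothetical helper mirroring the behaviour of the `linear` scheduler):

```python
# Linear learning-rate decay with no warmup. The warmup count is an
# assumption: the card does not list num_warmup_steps.
BASE_LR = 3e-4          # learning_rate from the card
TOTAL_STEPS = 47166     # 2 epochs x 23583 optimizer steps

def linear_lr(step: int, base_lr: float = BASE_LR,
              total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at `step`, decaying linearly to 0 at total_steps."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0))       # 0.0003 at the start of training
print(linear_lr(23583))   # 0.00015 halfway (end of epoch 1)
print(linear_lr(47166))   # 0.0 at the end of epoch 2
```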
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Meteor | Bleu |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.565 | 1.0 | 23583 | 1.5526 | 23.7548 | 6.5396 | 21.3855 | 21.3784 | 16.1645 | 0.1770 | 0.0324 |
| 1.4823 | 2.0 | 47166 | 1.5322 | 23.9922 | 6.7099 | 21.5985 | 21.5946 | 16.1304 | 0.1786 | 0.0326 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3