# t5_finetuned_paraphrase-1024

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7026
- Rouge1: 69.0908
- Rouge2: 51.3373
- Rougel: 65.9952
- Rougelsum: 65.9963
- Gen Len: 17.9946
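
The ROUGE values above are the scores reported by the Trainer's metric hook, scaled to 0-100. As a rough illustration only, scores in this format can be reproduced with the `evaluate` library; the prediction and reference strings below are hypothetical, since the evaluation data is not documented here.

```python
import evaluate

# Requires: pip install evaluate rouge_score
rouge = evaluate.load("rouge")

# Hypothetical outputs and references; the real evaluation set is unknown.
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# Returns a dict with rouge1, rouge2, rougeL, rougeLsum as floats in [0, 1];
# multiply by 100 to match the scale used in this card.
results = {k: v * 100 for k, v in rouge.compute(
    predictions=predictions, references=references).items()}
print(results)
```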
## Model description
More information needed
## Intended uses & limitations
More information needed
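
Pending a fuller description, here is a minimal inference sketch. Both the checkpoint id and the input prefix are assumptions: T5 paraphrase fine-tunes commonly prepend a `paraphrase: ` prefix, but this card does not document the prompt format actually used in training.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder id; replace with the actual path or hub id of this checkpoint.
model_id = "t5_finetuned_paraphrase-1024"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "paraphrase: " prefix is an assumption about the training prompt format.
inputs = tokenizer(
    "paraphrase: The quick brown fox jumps over the lazy dog.",
    return_tensors="pt",
    truncation=True,
)

# Gen Len above averages ~18 tokens, so a small max_length is reasonable.
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```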
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
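
These values map directly onto `Seq2SeqTrainingArguments`. The sketch below shows the corresponding setup, assuming Transformers 4.27 (listed under framework versions); the output path and the per-epoch evaluation strategy are assumptions, since neither is documented in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-paraphrase-out",  # placeholder path, not from this card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",     # assumption: the table reports per-epoch eval
    predict_with_generate=True,      # ROUGE is computed on generated sequences
)
```

The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer's defaults, so they need no explicit arguments.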
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9264        | 1.0   | 12500  | 0.8018          | 67.7403 | 48.7329 | 64.2684 | 64.273    | 18.0296 |
| 0.8654        | 2.0   | 25000  | 0.7572          | 68.256  | 49.8753 | 64.9795 | 64.9743   | 18.0071 |
| 0.8391        | 3.0   | 37500  | 0.7366          | 68.5695 | 50.4061 | 65.3434 | 65.3424   | 18.0047 |
| 0.8151        | 4.0   | 50000  | 0.7225          | 68.7762 | 50.6912 | 65.5711 | 65.561    | 17.9929 |
| 0.8068        | 5.0   | 62500  | 0.7134          | 68.8998 | 50.9785 | 65.7391 | 65.7475   | 17.9953 |
| 0.801         | 6.0   | 75000  | 0.7076          | 69.0179 | 51.1629 | 65.8718 | 65.8738   | 17.9952 |
| 0.7982        | 7.0   | 87500  | 0.7035          | 69.0643 | 51.2707 | 65.9447 | 65.9508   | 17.9959 |
| 0.784         | 8.0   | 100000 | 0.7026          | 69.0908 | 51.3373 | 65.9952 | 65.9963   | 17.9946 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.7.1+cu101
- Datasets 2.13.2
- Tokenizers 0.13.2