# flan-t5-large-clang8-e1-b16

This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on what the model name suggests is the cLang-8 grammatical error correction dataset. It achieves the following results on the evaluation set:
- Loss: 0.2994
- Rouge1: 80.9044
- Rouge2: 74.7041
- Rougel: 80.3109
- Rougelsum: 80.3664
- Gen Len: 16.0625
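
A minimal inference sketch is below. The hub ID is a placeholder (the card does not state where the checkpoint is published), and the example input assumes a grammatical-error-correction task, consistent with the cLang-8 naming; both are assumptions, not documented behavior.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder hub ID -- substitute the actual repository name for this checkpoint.
model_id = "your-username/flan-t5-large-clang8-e1-b16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed task: map an ungrammatical sentence to its corrected form.
text = "She no went to the market yesterday ."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```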
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1
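
As a sketch, these hyperparameters map onto the standard `Seq2SeqTrainingArguments` fields as shown below (model and dataset wiring omitted; all other arguments are left at their defaults, which may differ from the original training run):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-clang8-e1-b16",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adafactor",           # selects the Adafactor optimizer
    lr_scheduler_type="linear",
    num_train_epochs=1,
    predict_with_generate=True,  # required to compute ROUGE at eval time
)
```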
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2432        | 0.25  | 36000  | 0.4018          | 78.4447 | 71.3656 | 77.7552 | 77.8451   | 15.9010 |
| 0.1837        | 0.49  | 72000  | 0.3781          | 76.8828 | 69.9993 | 76.0584 | 76.1479   | 15.4026 |
| 0.1511        | 0.74  | 108000 | 0.3282          | 79.7898 | 73.3290 | 79.1608 | 79.2416   | 15.9021 |
| 0.1267        | 0.98  | 144000 | 0.2994          | 80.9044 | 74.7041 | 80.3109 | 80.3664   | 16.0625 |
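
The ROUGE columns follow the conventions of the `evaluate` library's `rouge` metric (scores scaled to 0-100). A sketch of reproducing them, with placeholder prediction/reference lists:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder data: model outputs and gold corrections for the eval set.
predictions = ["She did not go to the market yesterday."]
references = ["She did not go to the market yesterday."]

scores = rouge.compute(predictions=predictions, references=references)
# Keys correspond to the table columns: rouge1, rouge2, rougeL, rougeLsum.
print({k: round(v * 100, 4) for k, v in scores.items()})
```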
### Framework versions
- Transformers 4.27.4
- Pytorch 1.11.0a0+b6df043
- Datasets 2.11.0
- Tokenizers 0.13.2