# flan-t5-base-extraction-cnndm_2000-all-loss-ep50

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8215
- Hint Hit Num: 2.3026
- Hint Precision: 0.4235
- Num: 5.4314
- Gen Len: 18.9975
## Model description
More information needed
## Intended uses & limitations
More information needed
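
The card does not yet document usage, so the following is a minimal inference sketch using the standard Transformers seq2seq API. The checkpoint identifier, input text, and generation settings are assumptions, not part of the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id / local path; replace with the actual checkpoint location.
checkpoint = "flan-t5-base-extraction-cnndm_2000-all-loss-ep50"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

article = "..."  # article text to run extraction on (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Gen Len in the results above is ~19 tokens, so a short max_length is used here.
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```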
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 60
- eval_batch_size: 400
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
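
As a sketch, these settings map onto `Seq2SeqTrainingArguments` roughly as follows. The output directory, per-device batch split, evaluation cadence (inferred from the 100-step intervals in the results table), and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-extraction-cnndm_2000-all-loss-ep50",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=60,   # reported train_batch_size; may be split across devices
    per_device_eval_batch_size=400,   # reported eval_batch_size
    seed=1799,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="steps",      # assumption: validation ran every 100 steps (see table below)
    eval_steps=100,
    predict_with_generate=True,       # assumption: needed for generation-based metrics like Gen Len
)
```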
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hint Hit Num | Hint Precision | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------------:|:------:|:-------:|
| 2.424  | 2.94  | 100  | 1.9342 | 2.1002 | 0.4025 | 5.1762 | 18.9598 |
| 2.0802 | 5.88  | 200  | 1.8616 | 2.2495 | 0.4197 | 5.3613 | 18.9832 |
| 1.9829 | 8.82  | 300  | 1.8354 | 2.2921 | 0.4224 | 5.4219 | 18.9951 |
| 1.9248 | 11.76 | 400  | 1.8307 | 2.2661 | 0.4202 | 5.3879 | 18.996  |
| 1.8718 | 14.71 | 500  | 1.8215 | 2.3026 | 0.4235 | 5.4314 | 18.9975 |
| 1.8358 | 17.65 | 600  | 1.8238 | 2.3191 | 0.4251 | 5.446  | 18.9979 |
| 1.7922 | 20.59 | 700  | 1.8296 | 2.3174 | 0.425  | 5.4404 | 18.9981 |
| 1.7677 | 23.53 | 800  | 1.8277 | 2.3338 | 0.4259 | 5.4634 | 18.9975 |
| 1.7372 | 26.47 | 900  | 1.8335 | 2.3206 | 0.425  | 5.4472 | 18.9981 |
| 1.7112 | 29.41 | 1000 | 1.8360 | 2.3131 | 0.4243 | 5.4378 | 18.9982 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1