# flan-t5-definition-en-large-taboo-for-llms-deft
This model is a fine-tuned version of [ltg/flan-t5-definition-en-large](https://huggingface.co/ltg/flan-t5-definition-en-large). It achieves the following results on the evaluation set (a sketch of how such ROUGE scores are typically computed follows the list):
- Loss: 2.0332
- Rouge1: 33.5241
- Rouge2: 16.8064
- Rougel: 30.2969
- Rougelsum: 30.2909
- Gen Len: 16.5819
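These scores follow the usual Hugging Face convention of reporting ROUGE F-measures scaled to 0–100. A minimal sketch of computing them with the `evaluate` library; the `predictions` and `references` values below are hypothetical placeholders, not data from this card:

```python
# Sketch: computing ROUGE the way Trainer-generated cards typically report it.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["a tall African mammal with a very long neck"]   # placeholder
references = ["a large African animal with a very long neck"]  # placeholder

scores = rouge.compute(
    predictions=predictions,
    references=references,
    use_stemmer=True,
)
# `scores` holds rouge1, rouge2, rougeL and rougeLsum in [0, 1];
# the card reports these values multiplied by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```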
## Model description
More information needed
## Intended uses & limitations
More information needed
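Since the base checkpoint is a FLAN-T5 sequence-to-sequence model, it can be loaded and queried with the standard `AutoModelForSeq2SeqLM` API. A hedged sketch; the repository id is assumed from this card's title (it may need a namespace prefix) and the prompt is purely illustrative:

```python
# Usage sketch, assuming the model is published under the card's title.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "flan-t5-definition-en-large-taboo-for-llms-deft"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "What is the definition of giraffe?"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```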
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
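A minimal sketch of how these settings map onto `transformers.Seq2SeqTrainingArguments`; the `output_dir` value and `predict_with_generate` are assumptions, not taken from this card. The Adam betas and epsilon listed above are the library defaults, so they need no explicit arguments:

```python
# Sketch reconstructing the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-definition-en-large-taboo-for-llms-deft",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size of 32
    lr_scheduler_type="linear",
    max_steps=500,                  # the card trains for 500 steps
    predict_with_generate=True,     # assumption: needed for ROUGE/Gen Len eval
)
# adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are left at their
# defaults, matching the optimizer settings reported above.
```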
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.6185        | 0.62  | 100  | 2.1816          | 33.3077 | 15.1203 | 28.9167 | 28.8557   | 17.7666 |
| 2.3457        | 1.24  | 200  | 2.0990          | 33.2477 | 16.1885 | 29.5227 | 29.4474   | 16.7143 |
| 2.1751        | 1.85  | 300  | 2.0604          | 33.5161 | 16.4732 | 30.0261 | 30.0036   | 16.3031 |
| 2.0749        | 2.47  | 400  | 2.0392          | 33.1594 | 16.8128 | 30.0222 | 30.0057   | 16.5401 |
| 2.035         | 3.09  | 500  | 2.0332          | 33.5241 | 16.8064 | 30.2969 | 30.2909   | 16.5819 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3