# bart-large-cnn-YT-transcript-sum
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4849
- Rouge1: 48.0422
- Rouge2: 22.8938
- RougeL: 34.0775
- RougeLsum: 44.7056
- Gen Len: 108.8009
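A minimal inference sketch for this checkpoint. The repository id below is an assumption (the card does not state the full Hub id); replace it with the actual Hub id or a local path, and substitute a real transcript for the placeholder text.

```python
# Minimal inference sketch. The model id is a hypothetical placeholder;
# swap in the real Hub id or a local checkpoint path.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="<user>/bart-large-cnn-YT-transcript-sum",  # assumption: actual Hub id not given in the card
)

transcript = "..."  # placeholder: a YouTube transcript (plain text)

summary = summarizer(
    transcript,
    max_length=150,  # generation bounds are illustrative, not taken from the card
    min_length=40,
    do_sample=False,
)[0]["summary_text"]

print(summary)
```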
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
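A hedged sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` (Transformers 4.33). The output directory and evaluation strategy are assumptions, and Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit flag; the arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized train/eval datasets, which are not described in this card.

```python
# Sketch only: mirrors the hyperparameter list above; output_dir and
# evaluation_strategy are assumptions, not stated in the card.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-YT-transcript-sum",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",  # assumption: the results table reports one evaluation per epoch
    predict_with_generate=True,   # needed so ROUGE is computed on generated summaries
)

# These args would be given to Seq2SeqTrainer along with the model
# ("facebook/bart-large-cnn"), tokenizer, data collator, and datasets.
```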
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log        | 1.0   | 432  | 1.5362          | 49.0506 | 22.9422 | 35.5667 | 45.7219   | 88.0602  |
| 1.5312        | 2.0   | 864  | 1.4849          | 48.0422 | 22.8938 | 34.0775 | 44.7056   | 108.8009 |
| 0.9026        | 3.0   | 1296 | 1.5761          | 50.0558 | 23.9657 | 36.247  | 46.4508   | 96.0231  |
| 0.5642        | 4.0   | 1728 | 1.8304          | 50.6862 | 24.4638 | 36.3568 | 47.2607   | 93.1667  |
| 0.3629        | 5.0   | 2160 | 1.9355          | 51.2362 | 25.1077 | 37.772  | 47.4362   | 88.9583  |
| 0.2335        | 6.0   | 2592 | 2.1215          | 49.5831 | 23.4294 | 35.9861 | 45.9306   | 94.2917  |
| 0.1603        | 7.0   | 3024 | 2.2890          | 49.8716 | 23.4756 | 36.2617 | 46.2866   | 88.7639  |
| 0.1603        | 8.0   | 3456 | 2.3604          | 49.5627 | 23.6399 | 35.9596 | 45.7914   | 88.8333  |
| 0.1049        | 9.0   | 3888 | 2.5252          | 50.358  | 24.1986 | 36.5297 | 46.5519   | 90.5463  |
| 0.0744        | 10.0  | 4320 | 2.6694          | 50.46   | 24.1493 | 37.0205 | 46.8988   | 91.0139  |
| 0.049         | 11.0  | 4752 | 2.7840          | 50.8805 | 24.5482 | 36.5901 | 46.9176   | 90.8380  |
| 0.0312        | 12.0  | 5184 | 2.8330          | 50.4793 | 24.6444 | 37.2087 | 46.7151   | 86.9444  |
| 0.0156        | 13.0  | 5616 | 2.9540          | 50.3911 | 24.4843 | 36.8037 | 46.8691   | 94.9352  |
| 0.0083        | 14.0  | 6048 | 3.0214          | 51.0557 | 25.127  | 37.1368 | 47.3072   | 92.5787  |
| 0.0083        | 15.0  | 6480 | 3.0340          | 51.3998 | 25.5847 | 37.5635 | 47.7132   | 90.5602  |
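For reference, the ROUGE columns and Gen Len correspond to the kind of scores produced by the `evaluate` library's `rouge` metric and an average token length of the generated summaries. A minimal sketch follows, assuming lists of generated and reference summaries; the placeholder strings and resulting values are illustrative, not the numbers in the table above.

```python
# Minimal metric sketch: ROUGE-style scores plus an average generated
# length ("Gen Len"), computed with the `evaluate` library.
import evaluate
from transformers import AutoTokenizer

rouge = evaluate.load("rouge")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

predictions = ["the model summarizes the video transcript"]      # placeholder generated summaries
references = ["the model produces a summary of the transcript"]  # placeholder reference summaries

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# rouge1 / rouge2 / rougeL / rougeLsum come back as fractions; the table reports them * 100
scores = {k: round(v * 100, 4) for k, v in scores.items()}

# "Gen Len": average number of tokens in the generated summaries
gen_len = sum(len(tokenizer(p)["input_ids"]) for p in predictions) / len(predictions)

print(scores, gen_len)
```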
### Framework versions
- Transformers 4.33.2
- PyTorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3