# meeting-sensai
This model is a fine-tuned version of [raquelclemente/tmp_trainer](https://huggingface.co/raquelclemente/tmp_trainer) on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics):
- Loss: 6.7917
- Rouge1: 0.4205
- Rouge2: 0.1795
- Rougel: 0.3536
- Rougelsum: 0.3536
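The ROUGE metrics indicate a summarization model, and the name suggests meeting summarization. Below is a minimal inference sketch, assuming the model is published on the Hub as `raquelclemente/meeting-sensai` (a hypothetical repo id inferred from this card's title; substitute the actual id):

```python
# Minimal inference sketch. The repo id "raquelclemente/meeting-sensai" is an
# assumption inferred from the card title, not confirmed by the card itself.
from transformers import pipeline

summarizer = pipeline("summarization", model="raquelclemente/meeting-sensai")

transcript = (
    "Alice: Let's finalize the Q3 roadmap today. "
    "Bob: Agreed, the two open items are the API migration and the new dashboard."
)
print(summarizer(transcript, max_length=64, min_length=8)[0]["summary_text"])
```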
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction as `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 50
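As a rough guide, the values above map onto `transformers` training arguments as sketched below. This is a reconstruction, not the original training script; `output_dir` is illustrative. Adam's betas and epsilon match the library defaults, so they need no explicit arguments:

```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="meeting-sensai",     # assumed path, not from the original run
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=50,
    predict_with_generate=True,      # required so evaluation can compute ROUGE
)
```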
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 2.46 | 5 | 3.6391 | 0.4237 | 0.1996 | 0.3660 | 0.3660 |
| No log | 4.91 | 10 | 3.4021 | 0.4118 | 0.2142 | 0.3778 | 0.3778 |
| No log | 7.46 | 15 | 3.5879 | 0.4314 | 0.2218 | 0.3965 | 0.3965 |
| No log | 9.91 | 20 | 3.8672 | 0.4730 | 0.2043 | 0.4003 | 0.4003 |
| No log | 12.46 | 25 | 3.8721 | 0.4083 | 0.1885 | 0.3675 | 0.3675 |
| No log | 14.91 | 30 | 4.0905 | 0.4609 | 0.2105 | 0.4070 | 0.4070 |
| No log | 17.46 | 35 | 4.4786 | 0.4580 | 0.1856 | 0.3739 | 0.3739 |
| No log | 19.91 | 40 | 4.9112 | 0.4689 | 0.2573 | 0.4224 | 0.4224 |
| No log | 22.46 | 45 | 5.2165 | 0.4698 | 0.2151 | 0.4096 | 0.4096 |
| No log | 24.91 | 50 | 5.5533 | 0.4014 | 0.1837 | 0.3062 | 0.3062 |
| No log | 27.46 | 55 | 5.8611 | 0.4109 | 0.1854 | 0.3686 | 0.3686 |
| No log | 29.91 | 60 | 5.8393 | 0.4715 | 0.2370 | 0.4074 | 0.4074 |
| No log | 32.46 | 65 | 6.1874 | 0.4297 | 0.1898 | 0.3479 | 0.3479 |
| No log | 34.91 | 70 | 6.3194 | 0.4350 | 0.2207 | 0.3802 | 0.3802 |
| No log | 37.46 | 75 | 6.2460 | 0.4009 | 0.2056 | 0.3518 | 0.3518 |
| No log | 39.91 | 80 | 6.5997 | 0.3992 | 0.1869 | 0.3713 | 0.3713 |
| No log | 42.46 | 85 | 6.5516 | 0.4345 | 0.1928 | 0.3756 | 0.3756 |
| No log | 44.91 | 90 | 6.5791 | 0.4078 | 0.2036 | 0.3489 | 0.3489 |
| No log | 47.46 | 95 | 6.7799 | 0.4205 | 0.1795 | 0.3536 | 0.3536 |
| No log | 49.91 | 100 | 6.7917 | 0.4205 | 0.1795 | 0.3536 | 0.3536 |
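The ROUGE columns above could be computed with the `evaluate` library, as sketched below; since the evaluation data is not specified in this card, the texts are placeholders:

```python
# Sketch of the metric computation; prediction/reference texts are placeholders
# because the actual evaluation data is not specified in this card.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the team agreed to ship the api migration first"],
    references=["The team decided to prioritize the API migration."],
)
print(scores)  # dict with keys rouge1, rouge2, rougeL, rougeLsum
```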
### Framework versions
- Transformers 4.26.1
- PyTorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2