# kobart_32_1e-4_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.7215
- Rouge1: 36.3912
- Rouge2: 13.2376
- Rougel: 23.7632
- Bleu1: 30.6123
- Bleu2: 18.0414
- Bleu3: 10.5291
- Bleu4: 6.0123
- Gen Len: 49.5035
## Model description
More information needed
## Intended uses & limitations
More information needed
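While intended uses are not documented, the base model and the name suggest Korean abstractive summarization. Below is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub; the repository id is a placeholder, and the decoding settings are inferred from the model name (`min30`, `lp5.0`, `temperature1.0`) rather than confirmed. Since temperature 1.0 is the sampling default, decoding here reduces to plain beam search.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder repository id; replace with the actual checkpoint location.
model_id = "your-namespace/kobart_32_1e-4_datav2_min30_lp5.0_temperature1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

text = "요약할 한국어 문서를 여기에 입력하세요."  # Korean document to summarize
inputs = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True)

summary_ids = model.generate(
    inputs["input_ids"],
    max_length=128,
    min_length=30,       # "min30" in the model name
    length_penalty=5.0,  # "lp5.0" in the model name; applies to beam search
    num_beams=4,         # assumed; the beam count is not stated on this card
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```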
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
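As a rough guide, the list above maps onto `transformers` training arguments as sketched below; `output_dir` is a placeholder, and nothing beyond the listed values is taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./kobart-finetuned",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    # Adam settings matching the optimizer line above (also the library defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```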
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Bleu1   | Bleu2   | Bleu3   | Bleu4  | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:-------:|
| 1.3994        | 3.78  | 5000 | 2.7215          | 36.3912 | 13.2376 | 23.7632 | 30.6123 | 18.0414 | 10.5291 | 6.0123 | 49.5035 |
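The card does not state how the ROUGE and BLEU scores were computed. One plausible way to reproduce comparable ROUGE numbers is the `evaluate` library, shown below; the whitespace tokenizer is an assumption, and Korean text often warrants a morpheme-level tokenizer instead. The BLEU-1 through BLEU-4 columns could be computed similarly with `evaluate.load("bleu")`.

```python
import evaluate

# Placeholder predictions/references; a real run would use the evaluation split.
predictions = ["모델이 생성한 요약문"]
references = ["사람이 작성한 정답 요약문"]

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=predictions,
    references=references,
    tokenizer=lambda text: text.split(),  # assumption: whitespace tokens
)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```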
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2