# DialoGPT-large-faqs-block-size-16-bs-16-lr-1e-05
This model is a fine-tuned version of [microsoft/DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) on an unspecified dataset (the model name suggests FAQ data). It achieves the following results on the evaluation set:
- Loss: 3.7894
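
A minimal usage sketch, assuming the checkpoint is published on the Hub under this repo id (substitute the actual path) and that it follows the usual DialoGPT convention of terminating each turn with the EOS token:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual Hub path of this checkpoint.
checkpoint = "DialoGPT-large-faqs-block-size-16-bs-16-lr-1e-05"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# DialoGPT-style input: the user turn is terminated with the EOS token.
prompt = "How do I reset my password?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers have no pad token
)

# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```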
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
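
As a hedged sketch, the settings above map onto `transformers.TrainingArguments` roughly as follows. The `output_dir`, evaluation cadence, and logging interval are assumptions inferred from the results table below (one validation row per epoch; "No log" for the epoch-1 training loss is consistent with the default 500-step logging interval). Adam with betas=(0.9,0.999) and epsilon=1e-08 is already the `Trainer` default.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DialoGPT-large-faqs-block-size-16-bs-16-lr-1e-05",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=8,  # per device; the "bs-16" in the model name may
    per_device_eval_batch_size=8,   # reflect an effective batch of 16 (assumption)
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",    # assumption: matches the per-epoch validation rows
    logging_steps=500,              # default; explains "No log" at step 321
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults, so no
# explicit optimizer arguments are needed here.
```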
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 321  | 3.4739          |
| 4.182         | 2.0   | 642  | 3.0932          |
| 4.182         | 3.0   | 963  | 2.9670          |
| 2.6449        | 4.0   | 1284 | 2.9128          |
| 2.0623        | 5.0   | 1605 | 2.9541          |
| 2.0623        | 6.0   | 1926 | 3.0378          |
| 1.6514        | 7.0   | 2247 | 3.1422          |
| 1.3414        | 8.0   | 2568 | 3.2869          |
| 1.3414        | 9.0   | 2889 | 3.3904          |
| 1.1036        | 10.0  | 3210 | 3.4720          |
| 0.9535        | 11.0  | 3531 | 3.5315          |
| 0.9535        | 12.0  | 3852 | 3.5810          |
| 0.8249        | 13.0  | 4173 | 3.6205          |
| 0.8249        | 14.0  | 4494 | 3.6689          |
| 0.7545        | 15.0  | 4815 | 3.7067          |
| 0.686         | 16.0  | 5136 | 3.7433          |
| 0.686         | 17.0  | 5457 | 3.7534          |
| 0.649         | 18.0  | 5778 | 3.7751          |
| 0.6241        | 19.0  | 6099 | 3.7854          |
| 0.6241        | 20.0  | 6420 | 3.7894          |
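
Note that validation loss reaches its minimum at epoch 4 (2.9128) and rises steadily thereafter while training loss keeps falling, a typical overfitting pattern; if generalization matters, the epoch-4 checkpoint is likely the strongest one.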
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4.dev0
- Tokenizers 0.13.3