# gpt2-large-NaturalQuestions_2000-ep20
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the NaturalQuestions_2000 dataset (judging by the model name, a 2,000-example subset of Natural Questions).
It achieves the following results on the evaluation set:
- Loss: 1.5184
## Model description
More information needed
## Intended uses & limitations
More information needed
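Although the card is otherwise incomplete, the checkpoint loads like any other causal language model. The sketch below is illustrative only: the repo id is taken from the model name and may differ from the published path, and the question-answering prompt format is an assumption, not documented behavior.

```python
# Minimal usage sketch. Assumptions: the repo id matches the model name,
# and the model was fine-tuned on "Question: ... Answer:" style prompts.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2-large-NaturalQuestions_2000-ep20"  # adjust to the actual repo path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: who wrote the Declaration of Independence?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```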
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
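These values map directly onto `transformers.TrainingArguments`; the listed Adam settings are the Trainer's default AdamW configuration. Below is a sketch under the assumption that every argument not listed in the card was left at its default:

```python
# Sketch reconstructing the hyperparameters above; anything not listed in the
# card (e.g. warmup, weight decay) is assumed to be at its default value.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-large-NaturalQuestions_2000-ep20",
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=24,
    seed=1799,
    adam_beta1=0.9,               # the listed betas/epsilon are the AdamW defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",  # the results table evaluates every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```

Passing these `args` to a `Trainer` along with the tokenized train and eval splits would reproduce the schedule, assuming a single GPU (only then do the per-device sizes equal the effective batch sizes).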
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2377 | 0.3 | 50 | 1.0267 |
| 1.0948 | 0.6 | 100 | 0.9831 |
| 1.0673 | 0.9 | 150 | 0.9405 |
| 0.7058 | 1.2 | 200 | 1.0023 |
| 0.5501 | 1.5 | 250 | 1.0274 |
| 0.5649 | 1.8 | 300 | 0.9887 |
| 0.4972 | 2.1 | 350 | 1.0855 |
| 0.2934 | 2.4 | 400 | 1.1109 |
| 0.2999 | 2.69 | 450 | 1.0877 |
| 0.2907 | 2.99 | 500 | 1.0879 |
| 0.1533 | 3.29 | 550 | 1.2041 |
| 0.1553 | 3.59 | 600 | 1.1832 |
| 0.1681 | 3.89 | 650 | 1.1806 |
| 0.107 | 4.19 | 700 | 1.2764 |
| 0.0914 | 4.49 | 750 | 1.2541 |
| 0.1001 | 4.79 | 800 | 1.2589 |
| 0.0816 | 5.09 | 850 | 1.3118 |
| 0.0548 | 5.39 | 900 | 1.3473 |
| 0.068 | 5.69 | 950 | 1.2907 |
| 0.0615 | 5.99 | 1000 | 1.3007 |
| 0.0358 | 6.29 | 1050 | 1.4036 |
| 0.0409 | 6.59 | 1100 | 1.3794 |
| 0.0473 | 6.89 | 1150 | 1.3592 |
| 0.0413 | 7.19 | 1200 | 1.4133 |
| 0.0438 | 7.49 | 1250 | 1.3690 |
| 0.034 | 7.78 | 1300 | 1.3665 |
| 0.0307 | 8.08 | 1350 | 1.4268 |
| 0.0211 | 8.38 | 1400 | 1.4637 |
| 0.0302 | 8.68 | 1450 | 1.4456 |
| 0.0288 | 8.98 | 1500 | 1.4584 |
| 0.0232 | 9.28 | 1550 | 1.4431 |
| 0.017 | 9.58 | 1600 | 1.4756 |
| 0.02 | 9.88 | 1650 | 1.4988 |
| 0.0207 | 10.18 | 1700 | 1.5071 |
| 0.0182 | 10.48 | 1750 | 1.4956 |
| 0.0155 | 10.78 | 1800 | 1.5102 |
| 0.0163 | 11.08 | 1850 | 1.5207 |
| 0.0122 | 11.38 | 1900 | 1.5392 |
| 0.0156 | 11.68 | 1950 | 1.5124 |
| 0.0149 | 11.98 | 2000 | 1.5184 |
| 0.0128 | 12.28 | 2050 | 1.5435 |
| 0.0107 | 12.57 | 2100 | 1.5686 |
| 0.0118 | 12.87 | 2150 | 1.5301 |
| 0.0094 | 13.17 | 2200 | 1.5828 |
| 0.0173 | 13.47 | 2250 | 1.5810 |
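The validation loss bottoms out at 0.9405 around step 150 (epoch 0.9) and climbs steadily afterward while the training loss approaches zero, the usual signature of overfitting on a small dataset. Assuming the reported values are mean per-token cross-entropy (the Trainer's default for causal language modeling), they convert to perplexity via `exp(loss)`:

```python
import math

# Perplexity from cross-entropy loss, assuming a mean per-token loss.
def perplexity(loss: float) -> float:
    return math.exp(loss)

print(f"best eval (step 150): {perplexity(0.9405):.2f}")   # ~2.56
print(f"reported eval loss:   {perplexity(1.5184):.2f}")   # ~4.56
```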
### Framework versions
- Transformers 4.29.2
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.13.3