# gpt2-large-NaturalQuestions_4000-ep20
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unspecified dataset (the model name suggests a 4,000-example subset of Natural Questions). It achieves the following result on the evaluation set; a minimal usage sketch follows below:
- Loss: 1.2404
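
The card does not include usage instructions, so the snippet below is a minimal, hedged sketch of loading the checkpoint with the `transformers` library listed under Framework versions. The repository id is inferred from the model name and may differ from the published one, and the question/answer prompt format is an assumption.

```python
# Minimal usage sketch. Assumptions: the checkpoint is available locally or on the
# Hugging Face Hub under the id below, and the prompt format is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2-large-NaturalQuestions_4000-ep20"  # hypothetical repo id taken from the model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: who wrote the novel Dracula?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```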
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
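
The configuration above maps onto `transformers.TrainingArguments` roughly as sketched below. Only the values reported in this card are carried over; the `output_dir`, the evaluation/logging cadence, and the assumption that the batch sizes are per-device are illustrative, not taken from the original training script.

```python
# Sketch of the reported hyperparameters as a TrainingArguments object (transformers 4.29.x).
# Only the values listed in the card come from it; output_dir and the eval/logging cadence
# are assumptions, and the batch sizes are assumed to be per-device.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-large-NaturalQuestions_4000-ep20",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=24,
    seed=1799,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # the results table reports validation loss every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```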
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2669 | 0.15 | 50 | 1.0266 |
| 1.091 | 0.3 | 100 | 0.9900 |
| 1.0277 | 0.45 | 150 | 0.9562 |
| 1.0095 | 0.6 | 200 | 0.9515 |
| 1.0311 | 0.75 | 250 | 0.9247 |
| 0.9533 | 0.9 | 300 | 0.9114 |
| 0.8352 | 1.05 | 350 | 0.9219 |
| 0.543 | 1.2 | 400 | 0.9551 |
| 0.5364 | 1.35 | 450 | 0.9691 |
| 0.5246 | 1.5 | 500 | 0.9491 |
| 0.5454 | 1.65 | 550 | 0.9552 |
| 0.5586 | 1.8 | 600 | 0.9510 |
| 0.5593 | 1.95 | 650 | 0.9694 |
| 0.3849 | 2.1 | 700 | 1.0554 |
| 0.2775 | 2.25 | 750 | 1.0372 |
| 0.2847 | 2.4 | 800 | 1.0488 |
| 0.2964 | 2.54 | 850 | 1.0312 |
| 0.2938 | 2.69 | 900 | 1.0604 |
| 0.288 | 2.84 | 950 | 1.0605 |
| 0.308 | 2.99 | 1000 | 1.0322 |
| 0.1557 | 3.14 | 1050 | 1.1383 |
| 0.1586 | 3.29 | 1100 | 1.1386 |
| 0.1696 | 3.44 | 1150 | 1.1489 |
| 0.1665 | 3.59 | 1200 | 1.1383 |
| 0.1653 | 3.74 | 1250 | 1.1548 |
| 0.1791 | 3.89 | 1300 | 1.1275 |
| 0.1519 | 4.04 | 1350 | 1.1838 |
| 0.0953 | 4.19 | 1400 | 1.2258 |
| 0.0964 | 4.34 | 1450 | 1.2286 |
| 0.0967 | 4.49 | 1500 | 1.2159 |
| 0.1023 | 4.64 | 1550 | 1.1961 |
| 0.1107 | 4.79 | 1600 | 1.1961 |
| 0.1015 | 4.94 | 1650 | 1.2439 |
| 0.0792 | 5.09 | 1700 | 1.2571 |
| 0.0611 | 5.24 | 1750 | 1.2737 |
| 0.0654 | 5.39 | 1800 | 1.2909 |
| 0.0621 | 5.54 | 1850 | 1.2612 |
| 0.0707 | 5.69 | 1900 | 1.2836 |
| 0.0627 | 5.84 | 1950 | 1.2889 |
| 0.0842 | 5.99 | 2000 | 1.2404 |
| 0.0485 | 6.14 | 2050 | 1.3126 |
| 0.0399 | 6.29 | 2100 | 1.3524 |
| 0.0501 | 6.44 | 2150 | 1.2900 |
| 0.0474 | 6.59 | 2200 | 1.2991 |
| 0.048 | 6.74 | 2250 | 1.3290 |
### Framework versions
- Transformers 4.29.2
- PyTorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.13.3