# gpt2-3_left_out_aochildes

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 3.9778
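
Assuming this is the usual mean token-level cross-entropy (in nats) reported by the Trainer for causal language modeling, it corresponds to an evaluation perplexity of exp(3.9778) ≈ 53.4.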
## Model description
More information needed
## Intended uses & limitations
More information needed
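
No intended uses are documented yet. As a placeholder illustration only, the sketch below runs the checkpoint for text generation; the model id is an assumption based on the card title, not a confirmed Hub path.

```python
# Minimal usage sketch: the model id below is an assumption based on the
# card title, not a confirmed Hugging Face Hub path.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-3_left_out_aochildes")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```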
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
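
A minimal sketch of these settings via `transformers.TrainingArguments`; the "Adam" line above corresponds to the Trainer's default AdamW with the listed betas and epsilon. The output path is hypothetical, and the evaluation/logging cadence is inferred from the 500-step intervals in the results table below.

```python
# Sketch of the reported hyperparameters; output_dir is hypothetical and
# the eval/logging cadence is inferred, not taken from the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-3_left_out_aochildes",  # hypothetical output path
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,                          # "Adam with betas=(0.9, 0.999)"
    adam_beta2=0.999,
    adam_epsilon=1e-8,                       # "epsilon=1e-08"
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,                               # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                          # matches the table's cadence
    logging_steps=500,
)
```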
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.0166        | 0.25  | 500   | 5.0977          |
| 4.7864        | 0.5   | 1000  | 4.7281          |
| 4.4839        | 0.75  | 1500  | 4.4904          |
| 4.298         | 0.99  | 2000  | 4.3375          |
| 4.0749        | 1.24  | 2500  | 4.2553          |
| 4.0113        | 1.49  | 3000  | 4.1753          |
| 3.9322        | 1.74  | 3500  | 4.0968          |
| 3.875         | 1.99  | 4000  | 4.0287          |
| 3.668         | 2.24  | 4500  | 4.0130          |
| 3.6649        | 2.49  | 5000  | 3.9650          |
| 3.6562        | 2.73  | 5500  | 3.9277          |
| 3.6205        | 2.98  | 6000  | 3.8888          |
| 3.4184        | 3.23  | 6500  | 3.9017          |
| 3.4199        | 3.48  | 7000  | 3.8781          |
| 3.4292        | 3.73  | 7500  | 3.8526          |
| 3.4239        | 3.98  | 8000  | 3.8261          |
| 3.1971        | 4.23  | 8500  | 3.8637          |
| 3.2102        | 4.48  | 9000  | 3.8484          |
| 3.2265        | 4.72  | 9500  | 3.8268          |
| 3.2287        | 4.97  | 10000 | 3.8132          |
| 2.9824        | 5.22  | 10500 | 3.8637          |
| 3.0033        | 5.47  | 11000 | 3.8563          |
| 3.0131        | 5.72  | 11500 | 3.8417          |
| 3.0202        | 5.97  | 12000 | 3.8287          |
| 2.7855        | 6.22  | 12500 | 3.8845          |
| 2.7736        | 6.46  | 13000 | 3.8860          |
| 2.796         | 6.71  | 13500 | 3.8793          |
| 2.8046        | 6.96  | 14000 | 3.8722          |
| 2.5939        | 7.21  | 14500 | 3.9214          |
| 2.5733        | 7.46  | 15000 | 3.9283          |
| 2.5914        | 7.71  | 15500 | 3.9278          |
| 2.5941        | 7.96  | 16000 | 3.9252          |
| 2.4481        | 8.2   | 16500 | 3.9566          |
| 2.4262        | 8.45  | 17000 | 3.9639          |
| 2.4348        | 8.7   | 17500 | 3.9653          |
| 2.4351        | 8.95  | 18000 | 3.9656          |
| 2.3644        | 9.2   | 18500 | 3.9749          |
| 2.3548        | 9.45  | 19000 | 3.9776          |
| 2.3558        | 9.7   | 19500 | 3.9777          |
| 2.3536        | 9.95  | 20000 | 3.9778          |
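
Validation loss reaches its minimum of 3.8132 at step 10000 (around epoch 5) and drifts upward thereafter while training loss keeps falling, a typical overfitting pattern; the final-epoch loss of 3.9778 reported above is therefore not the best checkpoint by validation loss.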
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
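
The Python packages above can be pinned with something like `pip install transformers==4.26.1 datasets==2.13.0 tokenizers==0.13.3`; the matching PyTorch build (1.11.0 with CUDA 11.3) is available from PyTorch's cu113 wheel index.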