# guten-rarity-all-2p5k-plus-wiki-syn
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 4.3202
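
Since the card provides no usage snippet, here is a minimal sketch of loading the checkpoint for text generation. The repository id is a placeholder; substitute the actual Hub path where this model is hosted.

```python
from transformers import pipeline

# Hypothetical repository id -- replace with the model's actual Hub path.
generator = pipeline(
    "text-generation",
    model="your-username/guten-rarity-all-2p5k-plus-wiki-syn",
)

print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```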
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
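
For reference, a minimal sketch of an equivalent `TrainingArguments` configuration, assuming the standard `transformers` `Trainer`. The `output_dir` is a placeholder; Adam's betas `(0.9, 0.999)` and epsilon `1e-08` match the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="guten-rarity-all-2p5k-plus-wiki-syn",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```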
### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7083        | 0.29  | 500   | 5.6325          |
| 5.3261        | 0.57  | 1000  | 5.2023          |
| 4.9678        | 0.86  | 1500  | 4.9423          |
| 4.7004        | 1.14  | 2000  | 4.7943          |
| 4.5349        | 1.43  | 2500  | 4.6790          |
| 4.4252        | 1.71  | 3000  | 4.5693          |
| 4.3292        | 2.0   | 3500  | 4.4825          |
| 4.0911        | 2.28  | 4000  | 4.4479          |
| 4.0738        | 2.57  | 4500  | 4.3969          |
| 4.0407        | 2.85  | 5000  | 4.3448          |
| 3.8855        | 3.14  | 5500  | 4.3353          |
| 3.7711        | 3.42  | 6000  | 4.3114          |
| 3.7597        | 3.71  | 6500  | 4.2779          |
| 3.7367        | 3.99  | 7000  | 4.2487          |
| 3.4898        | 4.28  | 7500  | 4.2683          |
| 3.4841        | 4.56  | 8000  | 4.2557          |
| 3.4702        | 4.85  | 8500  | 4.2424          |
| 3.3817        | 5.13  | 9000  | 4.2490          |
| 3.2909        | 5.42  | 9500  | 4.2499          |
| 3.2886        | 5.7   | 10000 | 4.2486          |
| 3.2898        | 5.99  | 10500 | 4.2486          |
### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3