# all-base-miss-simple_wikipedia-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 4.3071
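Assuming this is the standard causal language modeling cross-entropy loss measured in nats, it corresponds to an evaluation perplexity of roughly exp(4.3071) ≈ 74.2.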
## Model description

More information needed
## Intended uses & limitations

More information needed
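As a minimal usage sketch (the repo id below is an assumption based on the model name; adjust it to wherever the checkpoint is actually hosted), the model loads like any GPT-2 causal LM:

```python
# Minimal inference sketch; the repo id is an assumption, not a
# confirmed Hub location for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "all-base-miss-simple_wikipedia-seed"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Simple English Wikipedia is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```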
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
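The list above maps directly onto `transformers.TrainingArguments`. A minimal sketch follows; the `output_dir` and the surrounding training script are assumptions, and the Adam betas/epsilon shown are the library defaults, which match the values listed:

```python
# Sketch of TrainingArguments mirroring the hyperparameters above.
# output_dir is an assumed path, not taken from the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="all-base-miss-simple_wikipedia-seed",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,             # Adam betas=(0.9, 0.999), the defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,                  # "Native AMP" mixed precision
)
```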
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.1584        | 0.34  | 500  | 5.3866          |
| 4.8808        | 0.69  | 1000 | 5.0175          |
| 4.5857        | 1.03  | 1500 | 4.7969          |
| 4.3352        | 1.37  | 2000 | 4.6646          |
| 4.2292        | 1.72  | 2500 | 4.5512          |
| 4.1096        | 2.06  | 3000 | 4.4719          |
| 3.9263        | 2.4   | 3500 | 4.4212          |
| 3.8869        | 2.75  | 4000 | 4.3514          |
| 3.7965        | 3.09  | 4500 | 4.3099          |
| 3.6202        | 3.43  | 5000 | 4.2878          |
| 3.6126        | 3.77  | 5500 | 4.2471          |
| 3.5078        | 4.12  | 6000 | 4.2366          |
| 3.3541        | 4.46  | 6500 | 4.2332          |
| 3.3516        | 4.8   | 7000 | 4.2141          |
| 3.2659        | 5.15  | 7500 | 4.2191          |
| 3.1699        | 5.49  | 8000 | 4.2194          |
| 3.1664        | 5.83  | 8500 | 4.2184          |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3