# borges-gpt-collab-finetuned

This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 6.2150
## Model description

More information needed

## Intended uses & limitations

More information needed
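Although the intended uses are not documented, the checkpoint is an ordinary GPT-2 causal language model, so it can be loaded for text generation with the standard `transformers` API. The sketch below is illustrative only: `model_id` is a hypothetical placeholder for wherever this checkpoint actually lives (hub id or local directory), and the prompt is an arbitrary Spanish phrase.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint location -- substitute the real hub id or local path.
model_id = "borges-gpt-collab-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The base model (DeepESP/gpt2-spanish) is a Spanish GPT-2, so prompt in Spanish.
prompt = "El jardín de senderos que se bifurcan"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```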
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42069
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
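As a rough illustration, the listed settings map onto `transformers.TrainingArguments` as sketched below. Only the numeric values come from the list above; `output_dir` and everything else about the training script (model, tokenizer, dataset wiring) are hypothetical placeholders.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="borges-gpt-collab-finetuned",  # hypothetical placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42069,
    gradient_accumulation_steps=16,  # 32 * 16 = 512 effective train batch size
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
)
```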
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6177        | 4.96  | 35   | 4.3309          |
| 3.9729        | 9.96  | 70   | 4.2350          |
| 3.2225        | 14.96 | 105  | 4.3344          |
| 2.3158        | 19.96 | 140  | 4.5764          |
| 1.3761        | 24.96 | 175  | 4.9125          |
| 0.6779        | 29.96 | 210  | 5.3096          |
| 0.3399        | 34.96 | 245  | 5.6735          |
| 0.2147        | 39.96 | 280  | 5.9322          |
| 0.1675        | 44.96 | 315  | 6.1347          |
| 0.1418        | 49.96 | 350  | 6.2150          |

Note that validation loss reaches its minimum (4.2350) around epoch 10 and rises steadily afterwards while training loss keeps falling, a classic sign of overfitting; the epoch-10 checkpoint is likely the most useful one.
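For context, mean cross-entropy losses convert to perplexities via exp(loss); a quick check of the best and final validation rows:

```python
import math

# Validation losses from the table above (best epoch vs. final epoch).
best_val_loss, final_val_loss = 4.2350, 6.2150
print(f"best perplexity  ≈ {math.exp(best_val_loss):.0f}")   # ≈ 69
print(f"final perplexity ≈ {math.exp(final_val_loss):.0f}")  # ≈ 500
```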
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2