# tiny-mlm-snli-custom-tokenizer
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unspecified dataset (judging by the model name, masked language modeling on SNLI text with a custom tokenizer). It achieves the following results on the evaluation set:

- Loss: 4.8264
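
A minimal usage sketch with the standard `fill-mask` pipeline. The repository id below is a placeholder (substitute the actual Hub path for this checkpoint), and since the tokenizer is custom, the mask token may differ from BERT's default `[MASK]`:

```python
from transformers import pipeline

# Placeholder Hub path; replace with the actual repository id for this checkpoint.
fill_mask = pipeline("fill-mask", model="tiny-mlm-snli-custom-tokenizer")

# With a custom tokenizer the mask token may not be "[MASK]";
# check fill_mask.tokenizer.mask_token if unsure.
for prediction in fill_mask("A man is [MASK] a horse."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.4f}")
```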
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
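
A minimal sketch of the corresponding `TrainingArguments`, assuming the Hugging Face `Trainer` was used (as the auto-generated card implies) and that evaluation ran every 500 steps to match the results table below; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-mlm-snli-custom-tokenizer",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
    evaluation_strategy="steps",  # validation loss is logged every 500 steps below
    eval_steps=500,
    logging_steps=500,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
# so it needs no explicit arguments here.
```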
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.5934        | 0.4   | 500   | 6.1299          |
| 5.5915        | 0.8   | 1000  | 6.0987          |
| 5.4106        | 1.2   | 1500  | 6.0209          |
| 5.3188        | 1.6   | 2000  | 5.9610          |
| 5.1738        | 2.0   | 2500  | 5.8186          |
| 5.0521        | 2.4   | 3000  | 5.7991          |
| 4.9494        | 2.8   | 3500  | 5.7584          |
| 4.9176        | 3.2   | 4000  | 5.6663          |
| 4.8419        | 3.6   | 4500  | 5.6272          |
| 4.6759        | 4.0   | 5000  | 5.5572          |
| 4.6021        | 4.4   | 5500  | 5.4644          |
| 4.6077        | 4.8   | 6000  | 5.4168          |
| 4.4571        | 5.2   | 6500  | 5.3577          |
| 4.4012        | 5.6   | 7000  | 5.3301          |
| 4.3231        | 6.0   | 7500  | 5.2220          |
| 4.2708        | 6.4   | 8000  | 5.2296          |
| 4.2149        | 6.8   | 8500  | 5.1176          |
| 4.1028        | 7.2   | 9000  | 5.1298          |
| 4.1042        | 7.6   | 9500  | 5.0949          |
| 4.0501        | 8.0   | 10000 | 5.0850          |
| 4.012         | 8.4   | 10500 | 5.0018          |
| 3.875         | 8.8   | 11000 | 5.0539          |
| 3.8863        | 9.2   | 11500 | 4.8985          |
| 3.8032        | 9.6   | 12000 | 4.9226          |
| 3.8501        | 10.0  | 12500 | 4.9202          |
| 3.6744        | 10.4  | 13000 | 4.8908          |
| 3.6515        | 10.8  | 13500 | 4.9305          |
| 3.6525        | 11.2  | 14000 | 4.8675          |
| 3.6416        | 11.6  | 14500 | 4.8610          |
| 3.5686        | 12.0  | 15000 | 4.6993          |
| 3.5437        | 12.4  | 15500 | 4.8085          |
| 3.4837        | 12.8  | 16000 | 4.7654          |
| 3.4553        | 13.2  | 16500 | 4.8264          |
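
The log stops at epoch 13.2 of the configured 200 epochs, so the run appears to have been halted early; the loss reported at the top of this card is the last logged validation loss, not the best (4.6993 at epoch 12.0). For a masked language model, validation loss converts to perplexity by exponentiation:

```python
import math

eval_loss = 4.8264  # final validation loss from the table above
best_loss = 4.6993  # best validation loss (epoch 12.0)

# Perplexity of a language model is exp(cross-entropy loss).
print(f"final perplexity: {math.exp(eval_loss):.1f}")  # ≈ 124.8
print(f"best perplexity:  {math.exp(best_loss):.1f}")  # ≈ 109.9
```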
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2