# tiny-mlm-glue-cola-custom-tokenizer-target-glue-qqp
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-custom-tokenizer) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4522
- Accuracy: 0.7694
- F1: 0.7176
## Model description
A tiny masked language model with a custom tokenizer (the muhtasham/tiny-mlm-glue-cola-custom-tokenizer base), fine-tuned here for sequence-pair classification on GLUE QQP.
## Intended uses & limitations
Intended for paraphrase detection on question pairs (the GLUE QQP task): given two questions, predict whether they ask the same thing. As a tiny model, it trades accuracy for a small footprint; see the training results below for the metrics it reaches.
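A quick usage sketch (the model ID below is inferred from this card's title; adjust it if the checkpoint is published elsewhere):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Model ID inferred from this card's title.
model_id = "muhtasham/tiny-mlm-glue-cola-custom-tokenizer-target-glue-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a sentence-pair task: do the two questions ask the same thing?
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = not duplicate, 1 = duplicate (QQP convention)
```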
## Training and evaluation data
Fine-tuned and evaluated on GLUE QQP (Quora Question Pairs), a binary sentence-pair classification dataset of potentially duplicate questions.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
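These settings map onto `transformers.TrainingArguments` roughly as follows (an illustrative reconstruction; the original training script is not part of this card, and the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="tiny-mlm-glue-cola-custom-tokenizer-target-glue-qqp",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)
```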
### Training results
Training ran for 13,500 steps (about 1.19 epochs of the configured 200); the headline metrics above correspond to the final logged step.
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6113 | 0.04 | 500 | 0.5700 | 0.6848 | 0.5805 |
| 0.5608 | 0.09 | 1000 | 0.5332 | 0.7140 | 0.6093 |
| 0.5442 | 0.13 | 1500 | 0.5218 | 0.7215 | 0.6596 |
| 0.5278 | 0.18 | 2000 | 0.5059 | 0.7337 | 0.6510 |
| 0.5193 | 0.22 | 2500 | 0.4961 | 0.7402 | 0.6579 |
| 0.5085 | 0.26 | 3000 | 0.4918 | 0.7413 | 0.6695 |
| 0.5049 | 0.31 | 3500 | 0.4851 | 0.7464 | 0.6760 |
| 0.5024 | 0.35 | 4000 | 0.4900 | 0.7433 | 0.6854 |
| 0.505 | 0.4 | 4500 | 0.4799 | 0.7500 | 0.6846 |
| 0.4942 | 0.44 | 5000 | 0.4715 | 0.7568 | 0.6800 |
| 0.4826 | 0.48 | 5500 | 0.4733 | 0.7528 | 0.6936 |
| 0.4898 | 0.53 | 6000 | 0.4634 | 0.7638 | 0.6684 |
| 0.4789 | 0.57 | 6500 | 0.4643 | 0.7617 | 0.6904 |
| 0.4721 | 0.62 | 7000 | 0.4594 | 0.7652 | 0.6810 |
| 0.4742 | 0.66 | 7500 | 0.4654 | 0.7616 | 0.6937 |
| 0.4828 | 0.7 | 8000 | 0.4608 | 0.7648 | 0.6997 |
| 0.4758 | 0.75 | 8500 | 0.4538 | 0.7680 | 0.6891 |
| 0.4697 | 0.79 | 9000 | 0.4614 | 0.7626 | 0.7067 |
| 0.466 | 0.84 | 9500 | 0.4497 | 0.7718 | 0.6838 |
| 0.4685 | 0.88 | 10000 | 0.4491 | 0.7714 | 0.6765 |
| 0.4629 | 0.92 | 10500 | 0.4502 | 0.7708 | 0.6595 |
| 0.4617 | 0.97 | 11000 | 0.4473 | 0.7723 | 0.6809 |
| 0.4606 | 1.01 | 11500 | 0.4569 | 0.7668 | 0.7114 |
| 0.4467 | 1.06 | 12000 | 0.4482 | 0.7752 | 0.6727 |
| 0.4537 | 1.1 | 12500 | 0.4468 | 0.7722 | 0.7130 |
| 0.454 | 1.14 | 13000 | 0.4545 | 0.7711 | 0.7131 |
| 0.4395 | 1.19 | 13500 | 0.4522 | 0.7694 | 0.7176 |
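For completeness, a sketch of how accuracy and F1 on the QQP validation split could be recomputed with `datasets` and `evaluate` (the model ID is inferred from the card title; this is not the original evaluation script):

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Model ID inferred from this card's title.
model_id = "muhtasham/tiny-mlm-glue-cola-custom-tokenizer-target-glue-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

qqp_val = load_dataset("glue", "qqp", split="validation")
metric = evaluate.load("glue", "qqp")  # reports accuracy and F1, as in the table above

preds, refs = [], []
for i in range(0, len(qqp_val), 32):
    batch = qqp_val[i : i + 32]  # slicing a Dataset yields a dict of lists
    enc = tokenizer(
        batch["question1"], batch["question2"],
        truncation=True, padding=True, return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**enc).logits
    preds.extend(logits.argmax(dim=-1).tolist())
    refs.extend(batch["label"])

print(metric.compute(predictions=preds, references=refs))
```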
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
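To check that a local environment roughly matches these versions (note that the Transformers and Datasets versions are development builds, so exact matches would require source installs):

```python
# Print local library versions for comparison with the card.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card: 4.26.0.dev0
print("PyTorch:", torch.__version__)              # card: 1.13.0+cu116
print("Datasets:", datasets.__version__)          # card: 2.8.1.dev0
print("Tokenizers:", tokenizers.__version__)      # card: 0.13.2
```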