# tiny-mlm-glue-rte-target-glue-cola
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the GLUE CoLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7986
- Matthews Correlation: 0.1168
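
A minimal inference sketch, assuming the checkpoint is published as `muhtasham/tiny-mlm-glue-rte-target-glue-cola` and keeps the default `LABEL_0`/`LABEL_1` label names (neither of which this card confirms):

```python
# Minimal inference sketch; the repository ID and label mapping below are
# assumptions, as this card does not document them.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/tiny-mlm-glue-rte-target-glue-cola",  # assumed repo ID
)

# CoLA is a binary acceptability task; with default labels,
# LABEL_0 = unacceptable, LABEL_1 = acceptable (assumed mapping).
print(classifier("The book was read by the student."))
```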
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (sketched as `TrainingArguments` after the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
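
A sketch of these settings expressed as `transformers.TrainingArguments`; illustrative only, since the original training script is not included in this card and `output_dir` is a placeholder:

```python
# Hyperparameters above rendered as TrainingArguments; output_dir is a
# placeholder, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-mlm-glue-rte-target-glue-cola",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
    adam_beta1=0.9,     # Adam betas from the list above
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # epsilon from the list above
)
```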
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6097        | 1.87  | 500  | 0.6209          | 0.0                  |
| 0.6011        | 3.73  | 1000 | 0.6173          | 0.0                  |
| 0.5827        | 5.6   | 1500 | 0.6197          | 0.0622               |
| 0.5534        | 7.46  | 2000 | 0.6410          | 0.0939               |
| 0.5244        | 9.33  | 2500 | 0.6664          | 0.1184               |
| 0.5087        | 11.19 | 3000 | 0.6684          | 0.1327               |
| 0.4867        | 13.06 | 3500 | 0.6789          | 0.0999               |
| 0.4693        | 14.93 | 4000 | 0.7124          | 0.1109               |
| 0.4483        | 16.79 | 4500 | 0.7333          | 0.1388               |
| 0.4303        | 18.66 | 5000 | 0.7486          | 0.1287               |
| 0.4105        | 20.52 | 5500 | 0.7961          | 0.1321               |
| 0.4046        | 22.39 | 6000 | 0.7986          | 0.1168               |
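
For reference, a hedged sketch of computing the Matthews correlation reported above with the `evaluate` library (the `compute_metrics` function actually used during training is not shown in this card):

```python
# Sketch of the CoLA metric; predictions/references below are dummy values.
import evaluate

metric = evaluate.load("glue", "cola")  # GLUE CoLA reports matthews_correlation
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # e.g. {'matthews_correlation': ...}
```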
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2