# olm-bert-tiny-december-2022-target-glue-qnli

This model is a fine-tuned version of muhtasham/olm-bert-tiny-december-2022 on the GLUE QNLI dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):
- Loss: 0.6358
- Accuracy: 0.6306
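
A minimal inference sketch follows, assuming the fine-tuned checkpoint is published on the Hugging Face Hub under `muhtasham/olm-bert-tiny-december-2022-target-glue-qnli` (inferred from the card title) and that the GLUE QNLI label order (0 = entailment, 1 = not_entailment) applies:

```python
# Minimal inference sketch. The repo id and label order are assumptions taken
# from the card title and the GLUE QNLI convention, not from this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "muhtasham/olm-bert-tiny-december-2022-target-glue-qnli"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# QNLI asks whether the sentence contains the answer to the question.
question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."

inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(["entailment", "not_entailment"][pred])  # assumed GLUE QNLI label order
```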
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
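
A rough reproduction sketch with the Hugging Face `Trainer`, mapping the hyperparameters above onto `TrainingArguments`. The GLUE QNLI dataset, its column names, and the accuracy metric are assumptions (the original training script is not included in this card); the base checkpoint is `muhtasham/olm-bert-tiny-december-2022` as stated above.

```python
# Hedged reproduction sketch: maps the listed hyperparameters onto the Hugging Face
# Trainer. Dataset (GLUE QNLI), column names, and metric are assumptions.
import numpy as np
from datasets import load_dataset
import evaluate
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "muhtasham/olm-bert-tiny-december-2022"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# GLUE QNLI pairs a question with a sentence; label 0 = entailment, 1 = not_entailment.
raw = load_dataset("glue", "qnli")

def tokenize(batch):
    return tokenizer(batch["question"], batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="olm-bert-tiny-december-2022-target-glue-qnli",
    learning_rate=3e-5,                 # matches the card
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=5000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",        # evaluate every 500 steps, as in the results table
    eval_steps=500,
    logging_steps=500,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```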
 
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.692 | 0.15 | 500 | 0.6882 | 0.5574 | 
| 0.6777 | 0.31 | 1000 | 0.6637 | 0.6059 | 
| 0.667 | 0.46 | 1500 | 0.6568 | 0.6064 | 
| 0.6609 | 0.61 | 2000 | 0.6517 | 0.6193 | 
| 0.6596 | 0.76 | 2500 | 0.6514 | 0.6127 | 
| 0.6584 | 0.92 | 3000 | 0.6496 | 0.6202 | 
| 0.6514 | 1.07 | 3500 | 0.6487 | 0.6191 | 
| 0.652 | 1.22 | 4000 | 0.6420 | 0.6253 | 
| 0.6449 | 1.37 | 4500 | 0.6415 | 0.6268 | 
| 0.6477 | 1.53 | 5000 | 0.6358 | 0.6306 |

### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2