# bert-base-uncased-qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9105
 
## Model description

BERT base (uncased) fine-tuned for sequence classification on QNLI (Question-answering Natural Language Inference). QNLI is a binary classification task from the GLUE benchmark, derived from SQuAD: given a question and a context sentence, the model predicts whether the sentence contains the answer to the question (entailment vs. not_entailment).
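A minimal inference sketch, assuming the checkpoint is available under the repo id below (substitute a local path otherwise; the printed label names depend on how `id2label` was configured for this checkpoint):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased-qnli"  # hypothetical repo id / local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

question = "When was BERT released?"
sentence = "BERT was released by Google in 2018."

# QNLI pairs use the standard BERT sentence-pair template:
# [CLS] question [SEP] sentence [SEP]
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```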
## Intended uses & limitations

The model classifies English (question, sentence) pairs as entailment or not_entailment. It inherits the limitations and biases of bert-base-uncased, and the only evaluation reported here is on the GLUE QNLI validation split; performance on out-of-domain data is untested.
## Training and evaluation data

The model was fine-tuned on the train split of the GLUE QNLI dataset (about 105k question-sentence pairs, consistent with the 819 optimizer steps per epoch at batch size 128 shown below) and evaluated on the validation split.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
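As a sketch, these settings map onto the standard `transformers` `Trainer` API roughly as follows. The field names are standard `TrainingArguments` options; `load_best_model_at_end` is an assumption, inferred from the reported metrics matching the epoch-2 checkpoint rather than the final one:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-qnli",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    # Assumption: the best checkpoint (epoch 2 by validation loss) was
    # reloaded at the end, matching the reported evaluation results.
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
)
```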
 
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3436        | 1.0   | 819  | 0.2489          | 0.9035   |
| 0.1962        | 2.0   | 1638 | 0.2297          | 0.9105   |
| 0.1049        | 3.0   | 2457 | 0.2620          | 0.9121   |
| 0.0662        | 4.0   | 3276 | 0.3534          | 0.9088   |
| 0.0487        | 5.0   | 4095 | 0.3688          | 0.9046   |
| 0.0368        | 6.0   | 4914 | 0.3943          | 0.9074   |
| 0.0329        | 7.0   | 5733 | 0.4250          | 0.9092   |
| 0.0272        | 8.0   | 6552 | 0.4012          | 0.9054   |
| 0.0243        | 9.0   | 7371 | 0.4497          | 0.9041   |

Although training was configured for 50 epochs, the log above covers 9. The validation loss bottoms out at epoch 2 (0.2297, accuracy 0.9105), which matches the evaluation results reported at the top of this card, so the epoch-2 checkpoint appears to be the one that was kept. A sketch for reproducing the validation accuracy follows the table.
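A minimal sketch for reproducing the validation accuracy with the `datasets` and `evaluate` libraries (not the original training script; the repo id is the same hypothetical one used above):

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased-qnli"  # hypothetical repo id / local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

qnli = load_dataset("glue", "qnli", split="validation")
metric = evaluate.load("glue", "qnli")

for i in range(0, len(qnli), 128):
    batch = qnli[i : i + 128]  # dict of lists: question, sentence, label, idx
    inputs = tokenizer(batch["question"], batch["sentence"],
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])

print(metric.compute())  # expected to be close to {'accuracy': 0.9105}
```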
### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2