This model is a fine-tuned version of bert-base-uncased on an NLI dataset. It achieves the following results on the evaluation set:

- Precision: 0.8384560400285919
- Recall: 0.9536585365853658
- F1: 0.892354507417269
- Accuracy: 0.8345996493278784
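As a quick sanity check, the reported F1 should be the harmonic mean of the reported precision and recall. A minimal sketch in plain Python (values copied from the table above):

```python
# Verify that the reported F1 is the harmonic mean of the reported
# precision and recall (values taken from the evaluation results above).
precision = 0.8384560400285919
recall = 0.9536585365853658

f1 = 2 * precision * recall / (precision + recall)

# Agrees with the reported F1 of 0.892354507417269 to well within
# floating-point noise.
print(f1)
```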

Training hyperparameters:

- learning_rate: 2e-05
- batch_size: 32
- num_epochs: 4
- warmup_steps: 10% of the total training steps
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
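Because the warmup is expressed as a percentage rather than a fixed step count, the actual number of warmup steps depends on the dataset size, batch size, and epoch count. A minimal sketch of that arithmetic, using a hypothetical dataset size (the card does not state the actual NLI training set size):

```python
import math

# Hypothetical training set size, for illustration only; the actual
# size of the NLI dataset is not stated in this card.
num_train_examples = 10_000

# Hyperparameters from the card above.
batch_size = 32
num_epochs = 4
warmup_fraction = 0.10  # warmup covers 10% of total training steps

# One optimizer step per batch; the last partial batch still counts.
steps_per_epoch = math.ceil(num_train_examples / batch_size)
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(warmup_fraction * total_steps)

print(steps_per_epoch, total_steps, warmup_steps)
```

With a linear scheduler, the learning rate ramps from 0 to 2e-05 over those warmup steps, then decays linearly to 0 over the remaining steps.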