# T5-base-finetuned-wnli

<!-- Provide a quick summary of what the model is/does. -->

This model is T5-base fine-tuned on the GLUE WNLI dataset. It achieves the validation-set results reported below under Training results.

## Model Details

T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, each of which is converted into a text-to-text format.

## Training procedure

### Tokenization

Since T5 is a text-to-text model, the dataset's labels are converted as follows. For each example, a sentence is formed as `"wnli sentence1: " + wnli_sent1 + "sentence 2: " + wnli_sent2` and fed to the tokenizer to obtain the `input_ids` and `attention_mask`. For each label, the target text is chosen as `"entailment"` if the label is 1 and `"not_entailment"` otherwise, and is tokenized to obtain its own `input_ids` and `attention_mask`. During training, positions in the label `input_ids` that hold the pad token are replaced with -100 so that no loss is computed for them. These `input_ids` are then passed as `labels`, and the label `attention_mask` above is passed as the `decoder_attention_mask`.
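A minimal sketch of this preprocessing, assuming the Hugging Face `transformers` tokenizer and the GLUE WNLI field names `sentence1`, `sentence2`, and `label`; the helper name and maximum lengths are illustrative, not taken from the original training code:

```python
from transformers import T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")

def preprocess(example):
    # Cast the WNLI pair into the text-to-text format described above
    # (string literals copied verbatim from the card, including spacing).
    text = "wnli sentence1: " + example["sentence1"] + "sentence 2: " + example["sentence2"]
    inputs = tokenizer(text, truncation=True, padding="max_length", max_length=128)

    # WNLI label 1 -> "entailment", otherwise "not_entailment".
    target = "entailment" if example["label"] == 1 else "not_entailment"
    targets = tokenizer(target, truncation=True, padding="max_length", max_length=8)

    # Replace pad tokens in the label ids with -100 so the loss ignores them.
    labels = [
        tok if tok != tokenizer.pad_token_id else -100
        for tok in targets["input_ids"]
    ]

    return {
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
        "labels": labels,
        "decoder_attention_mask": targets["attention_mask"],
    }
```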

### Training hyperparameters

The following hyperparameters were used during training:
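The list itself is missing from this card. Purely as an illustrative placeholder, a `Seq2SeqTrainingArguments` setup for a three-epoch run (the only value implied by the results table below) might look like the following; every other value is a hypothetical guess, not the configuration actually used:

```python
from transformers import Seq2SeqTrainingArguments

# All values except num_train_epochs=3 are hypothetical placeholders;
# the actual hyperparameters were not recorded in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-wnli",
    num_train_epochs=3,              # matches the results table below
    per_device_train_batch_size=16,  # placeholder
    learning_rate=2e-5,              # placeholder
    evaluation_strategy="epoch",     # evaluate once per epoch
)
```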

### Training results

| Epoch | Training Loss | Validation Accuracy |
|------:|--------------:|--------------------:|
| 1     | 0.1502        | 0.4930              |
| 2     | 0.1331        | 0.5634              |
| 3     | 0.1355        | 0.4225              |
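A hedged inference sketch, assuming the checkpoint loads from the placeholder id `t5-base-finetuned-wnli` (substitute the actual Hub repository id) and using the input format described under Tokenization; the example sentences are illustrative:

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Placeholder repository id; replace with this model's actual Hub id.
model_name = "t5-base-finetuned-wnli"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Input built exactly as in the Tokenization section (adjacent string
# literals concatenate, preserving the card's spacing).
text = (
    "wnli sentence1: The trophy doesn't fit in the suitcase because it is too big."
    "sentence 2: The trophy is too big."
)
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)

# Decodes to "entailment" or "not_entailment".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```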