language: en

tags:

Dataset: ToTTo

Base Model - T5-Base

T5 was built by the Google team as a general-purpose model for understanding text. The core idea behind T5 is to cast every text-processing task as a "text-to-text" problem: the model takes text as input and produces new text as output.
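As a quick illustration of this interface, here is a minimal sketch using the Hugging Face transformers library with the public t5-base checkpoint. The translation prompt is just one of T5's pretraining task prefixes, shown for illustration; it is not part of this project.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Every task is phrased as text in, text out; here, translation.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```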

Baseline Preprocessing

This code repository serves as a supplement to the main repository and can be used to do the basic preprocessing of the ToTTo dataset.
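The gist of that preprocessing is to linearize each ToTTo example into a single input string. Below is a rough sketch of such a linearization: the field names ("table", "table_page_title", "table_section_title", "highlighted_cells") follow the public ToTTo JSON release, but the tagging scheme shown is an assumption and may differ from the official baseline scripts.

```python
# Illustrative linearization of one ToTTo JSON example into an input string.
# Field names follow the public ToTTo release; the <...> tags are an
# assumption and may not match the official baseline preprocessing exactly.
def linearize_example(example):
    parts = [
        "<page_title> " + example["table_page_title"] + " </page_title>",
        "<section_title> " + example["table_section_title"] + " </section_title>",
    ]
    # Keep only the highlighted cells, in the order they are listed.
    for row_idx, col_idx in example["highlighted_cells"]:
        cell = example["table"][row_idx][col_idx]
        parts.append("<cell> " + cell["value"] + " </cell>")
    return " ".join(parts)
```

The resulting string becomes the model input, and the example's annotated reference sentence becomes the generation target.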

Fine-tuning

On the ToTTo dataset, we used the T5ForConditionalGeneration model and fine-tuned it for 10,000 steps with BLEU as the evaluation metric, followed by a further 20,000 steps with BERTScore as the metric.
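A minimal sketch of such a fine-tuning setup is shown below, using the Hugging Face Seq2SeqTrainer. The tiny inline dataset is a toy stand-in for the preprocessed ToTTo split, and every hyperparameter except the step count is an illustrative assumption rather than the configuration actually used here.

```python
from transformers import (T5Tokenizer, T5ForConditionalGeneration,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          DataCollatorForSeq2Seq)
from datasets import Dataset

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Toy stand-in for the preprocessed ToTTo split:
# linearized table string -> reference sentence.
raw = Dataset.from_dict({
    "source": ["<page_title> ... </page_title> <cell> ... </cell>"],
    "target": ["Example target sentence."],
})

def tokenize(batch):
    enc = tokenizer(batch["source"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(batch["target"], truncation=True,
                              max_length=128)["input_ids"]
    return enc

train_dataset = raw.map(tokenize, batched=True,
                        remove_columns=["source", "target"])

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-totto",
    max_steps=10_000,            # first stage; a further 20,000 in stage two
    per_device_train_batch_size=8,
    learning_rate=3e-4,
    predict_with_generate=True,  # decode to text so BLEU/BERTScore apply
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```

In practice, BLEU and BERTScore would be plugged in through a compute_metrics callback (for example via the evaluate library) and computed on the development split at evaluation time.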