Mukayese: Turkish NLP Strikes Back

Summarization: mukayese/mbart-large-turkish-sum

This model is a fine-tuned version of google/mt5-base on the mlsum/tu dataset.

It achieves the following results on the evaluation set:

See the Mukayese paper (arXiv:2203.01215, cited below) for more details on the model and the dataset.
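A minimal usage sketch with the Hugging Face Transformers `pipeline` API. The checkpoint name comes from this card; the truncation helper, input text, and generation settings are illustrative assumptions, not part of the released model.

```python
# Sketch: summarizing Turkish text with the card's checkpoint.
# Only MODEL_NAME is taken from the card; everything else is illustrative.
MODEL_NAME = "mukayese/mbart-large-turkish-sum"


def truncate_words(text: str, max_words: int = 512) -> str:
    """Crude word-level truncation so overly long articles fit the encoder.

    A tokenizer-aware truncation would be more precise; this is a sketch.
    """
    return " ".join(text.split()[:max_words])


if __name__ == "__main__":
    # Deferred import so the helper above works without transformers installed.
    from transformers import pipeline

    summarizer = pipeline("summarization", model=MODEL_NAME)
    article = "..."  # a Turkish news article goes here
    result = summarizer(truncate_words(article), max_length=64)
    print(result[0]["summary_text"])
```

The `if __name__ == "__main__"` guard keeps the model download out of import time; swap in `mukayese/mt5-base-turkish-sum` or another checkpoint by changing `MODEL_NAME`.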

Training hyperparameters

The following hyperparameters were used during training:

Framework versions

Citation

@misc{safaya-etal-2022-mukayese,
    title={Mukayese: Turkish NLP Strikes Back},
    author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
    year={2022},
    eprint={2203.01215},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}