<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Flan_t5_Large_Chat_Summary

This model is a fine-tuned version of google/flan-t5-large on the shared_TaskA dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

## Example Uses

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the fine-tuned seq2seq model from the Hub
tokenizer_pre = AutoTokenizer.from_pretrained("Amalq/flan_t5_large_chat_summary")
model_pre = AutoModelForSeq2SeqLM.from_pretrained("Amalq/flan_t5_large_chat_summary")
```
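Once the model and tokenizer are loaded, they can be used to summarize a dialogue. The sketch below shows one way to do this; the sample dialogue, the `max_length`, and the generation settings are illustrative assumptions, not values from the training setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Amalq/flan_t5_large_chat_summary")
model = AutoModelForSeq2SeqLM.from_pretrained("Amalq/flan_t5_large_chat_summary")

# An illustrative chat snippet (not from the shared_TaskA dataset)
dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a headache for three days.\n"
    "Doctor: Any nausea or sensitivity to light?\n"
    "Patient: Some sensitivity to light, no nausea.\n"
)

# Tokenize the dialogue; truncation guards against over-long inputs
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=512)

# Generate a summary (beam search settings are an example choice)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

Note that `generate` returns token IDs, so the output must be decoded back to text with `skip_special_tokens=True` to drop padding and end-of-sequence markers.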