

# roberta_gpt2_summarization_cnn_dailymail

This model is a RoBERTa-to-GPT-2 encoder-decoder model fine-tuned on the cnn_dailymail dataset.

## Model description

This model uses a RoBERTa encoder and a GPT-2 decoder, fine-tuned on the summarization task (see the assembly sketch after the scores below). It achieves the following ROUGE scores:

- Rouge1: 35.886
- Rouge2: 16.292
- RougeL: 23.499
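
A model of this kind is typically warm-started from the two pretrained checkpoints and then fine-tuned on the summarization data. The snippet below is a minimal sketch of that assembly step using `roberta-base` and `gpt2`; the special-token choices are common practice and an assumption here, not the exact recipe used to train this checkpoint.

```python
from transformers import EncoderDecoderModel, GPT2Tokenizer

# Warm-start a seq2seq model from a pretrained RoBERTa encoder and GPT-2 decoder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "gpt2")

# The combined model needs explicit generation settings before fine-tuning.
# GPT-2 has no pad token, so reusing its EOS token is a common (assumed) choice.
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id
model.config.eos_token_id = gpt2_tokenizer.eos_token_id
model.config.pad_token_id = gpt2_tokenizer.eos_token_id
```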

## Intended uses & limitations

To run summarization with this checkpoint through the Transformers API:

```python
from transformers import RobertaTokenizerFast, GPT2Tokenizer, EncoderDecoderModel

# Load the fine-tuned encoder-decoder checkpoint.
model = EncoderDecoderModel.from_pretrained("Ayham/roberta_gpt2_summarization_cnn_dailymail")

# The encoder expects RoBERTa tokenization; the decoder emits GPT-2 tokens.
input_tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
output_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

article = """Your Input Text"""

# Tokenize the article, generate a summary, and decode it with the GPT-2 tokenizer.
input_ids = input_tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(output_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
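
The `generate` call above uses the library's default decoding settings. Generation can be tuned through standard `generate` arguments; the values below are purely illustrative and are not the decoding configuration used to produce the ROUGE scores reported above.

```python
# Illustrative beam-search settings (assumed values; tune for your own inputs).
output_ids = model.generate(
    input_ids,
    max_length=128,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
```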

More information needed

## Training and evaluation data

More information needed
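
The exact preprocessing and splits are not documented in this card. For reference, the cnn_dailymail dataset named above can be loaded with the `datasets` library; the `"3.0.0"` configuration is assumed here, not taken from this card.

```python
from datasets import load_dataset

# CNN/DailyMail articles ("article") paired with reference summaries ("highlights").
dataset = load_dataset("cnn_dailymail", "3.0.0")  # config version is an assumption
print(dataset["train"][0]["highlights"])
```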

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

### Framework versions