BART-large conversational summarization (seq2seq)

Usage

from transformers import pipeline

summarizer_pipe = pipeline("summarization", model="yashugupta786/bart_large_xsum_samsum_conv_summarizer")
conversation_data = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summary = summarizer_pipe(conversation_data)
print(summary[0]["summary_text"])  # the pipeline returns a list of dicts
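
The same model can also be used without the pipeline helper, which gives direct control over the generation arguments. A minimal sketch, reusing conversation_data from above; the max_length and num_beams values are illustrative, not tuned settings for this model:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yashugupta786/bart_large_xsum_samsum_conv_summarizer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the dialogue and generate a summary (settings below are illustrative)
inputs = tokenizer(conversation_data, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))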

Results

Metric           Value
eval_rouge1      54.3921
eval_rouge2      29.8078
eval_rougeL      45.1543
eval_rougeLsum   49.942
test_rouge1      53.3059
test_rouge2      28.355
test_rougeL      44.0953
test_rougeLsum   48.9246
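
The eval_* and test_* rows appear to be validation- and test-split scores (F-measures) from fine-tuning on the SAMSum dialogue data, as the model name suggests. For reference, per-example ROUGE scores of the same kind can be computed with the rouge_score package; in the sketch below both strings are placeholders rather than actual model output:

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL", "rougeLsum"], use_stemmer=True)
reference = "Amanda can't find Betty's number, so Hannah will text Larry for it."  # placeholder
prediction = "Hannah will ask Larry for Betty's number."                           # placeholder
scores = scorer.score(reference, prediction)  # arguments are (target, prediction)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.4f} recall={s.recall:.4f} f1={s.fmeasure:.4f}")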

All of the ROUGE metrics (ROUGE-1, ROUGE-2, ROUGE-L) are computed from precision and recall, and the reported scores are the F-measure of the two:

ROUGE recall = number of overlapping words / total number of words in the human-annotated reference summary
ROUGE precision = number of overlapping words / total number of words in the machine-generated candidate summary
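
As a toy illustration of these formulas, a ROUGE-1 (unigram) score can be computed by hand as in the sketch below. The reference and candidate sentences are invented for illustration; real ROUGE implementations additionally apply stemming and sentence-level aggregation.

from collections import Counter

# Invented example sentences (not model output)
reference = "amanda cannot find betty's number so hannah will ask larry".split()
candidate = "hannah will ask larry for betty's number".split()

overlap = sum((Counter(reference) & Counter(candidate)).values())  # overlapping words
recall = overlap / len(reference)      # overlap / words in the human reference
precision = overlap / len(candidate)   # overlap / words in the machine candidate
f_measure = 2 * precision * recall / (precision + recall)
print(f"recall={recall:.3f}  precision={precision:.3f}  f-measure={f_measure:.3f}")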