This model is based on BERT and is used in a regression task to predict the Rouge-2 score of a sentence with respect to the highlights of a paper. Starting from the checkpoint proposed with the paper, morenolq/thext-ai-scibert, we performed an additional fine-tuning step that contextualizes each sentence with our custom context, namely PCE-best. The additional training epoch was performed on CSPubSumm (Ed Collins et al., "A supervised approach to extractive summarisation of scientific papers").
You can find more details in the GitHub repo.
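For reference, the snippet below is a minimal sketch of the additional regression fine-tuning step, assuming the Transformers `Trainer` API; the dataset column names, hyperparameters, and output directory are illustrative placeholders, not the exact training configuration.

```python
import torch
from transformers import (AutoTokenizer, BertForSequenceClassification,
                          Trainer, TrainingArguments)

base = "morenolq/thext-ai-scibert"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 gives a single-output regression head (MSE loss on the Rouge-2 target).
model = BertForSequenceClassification.from_pretrained(base, num_labels=1)

def encode(example):
    # Each example pairs a sentence with its PCE-best context; the label is the
    # Rouge-2 of the sentence with respect to the paper highlights.
    # Column names ("sentence", "context", "rouge2") are placeholders.
    enc = tokenizer(example["sentence"], example["context"], truncation=True)
    enc["labels"] = float(example["rouge2"])
    return enc

# train_dataset = <CSPubSumm examples mapped through `encode`>  # placeholder
args = TrainingArguments(output_dir="thext-pce-best",  # hypothetical name
                         num_train_epochs=1,
                         per_device_train_batch_size=16)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```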
## Usage
This checkpoint should be loaded with `BertForSequenceClassification.from_pretrained`. See the BERT docs for more information.
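A minimal inference sketch, assuming the Hugging Face Transformers library; the repository id, example sentence, and context string below are placeholders rather than values from this card.

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

model_name = "<this-repo-id>"  # placeholder: substitute the id of this model card

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)  # regression head

# The sentence to score and its context are passed as a sentence pair.
sentence = "We propose a novel extractive summarisation method."
context = "PCE-best context of the paper goes here."
inputs = tokenizer(sentence, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    predicted_rouge2 = model(**inputs).logits.squeeze().item()
print(predicted_rouge2)
```

Higher predicted scores indicate sentences that are better candidates for the paper's highlights.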
## Metrics
We tested the model on CSPubSumm with the following results:
| Metric | CSPubSumm |
|---|---|
| Rouge-1 F1 | 0.3738 |
| Rouge-2 F1 | 0.1613 |
| Rouge-L F1 | 0.3443 |