This model is based on BERT and is used in a regression task to predict the Rouge-2 score of a sentence with respect to the highlights of a paper. Starting from morenolq/thext-ai-scibert, the model released with the original paper, we performed an additional fine-tuning step in which each sentence is contextualized with our custom context, PCE-best. The additional training epoch was performed on AIPubSumm (L. Cagliero, M. La Quatra, "Extracting highlights of scientific articles: A supervised summarization approach.").
You can find more details in the GitHub repo.
## Usage
This checkpoint should be loaded with `BertForSequenceClassification.from_pretrained`. See the BERT documentation for more information.
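A minimal loading sketch is shown below. The repository id is a placeholder, and feeding the sentence and its PCE-best context as the two text segments of a single input is an assumption based on the fine-tuning description above, not a documented API of this checkpoint.

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

# Placeholder repository id; replace with this checkpoint's actual id.
model_name = "your-namespace/this-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)  # single-logit regression head

sentence = "We propose a transformer-based approach for highlight extraction."
context = "Custom PCE-best context built for this sentence."  # assumed pairing format

# Encode sentence and context as a segment pair and read the regression output.
inputs = tokenizer(sentence, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    predicted_rouge2 = model(**inputs).logits.squeeze().item()
print(predicted_rouge2)
```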
## Metrics
We tested the model on AIPubSumm with the following results:
| Metric | AIPubSumm |
|---|---|
| Rouge-1 F1 | 0.3415 |
| Rouge-2 F1 | 0.1250 |
| Rouge-L F1 | 0.3111 |
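For reference, a sketch of how such Rouge F1 scores can be computed from extracted highlights follows. The top-k selection, the example texts, and the use of the `rouge-score` package are illustrative assumptions, not the exact evaluation pipeline used to produce the table above.

```python
from rouge_score import rouge_scorer

# Hypothetical sentences, already ranked by the model's predicted Rouge-2 scores.
ranked_sentences = [
    "We introduce a supervised approach for extracting highlights.",
    "Experiments on AIPubSumm show consistent improvements.",
    "The sentence encoder is based on SciBERT.",
]
reference_highlights = "A supervised summarization approach extracts highlights of scientific articles."

# Concatenate the top-k ranked sentences and score them against the reference highlights.
top_k = 2
prediction = " ".join(ranked_sentences[:top_k])

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference_highlights, prediction)
print({name: round(score.fmeasure, 4) for name, score in scores.items()})
```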