BART fine-tuned for keyphrase generation

This is the <a href="https://huggingface.co/facebook/bart-base">bart-base</a> (<a href="https://arxiv.org/abs/1910.13461">Lewis et al., 2019</a>) model <a href="https://arxiv.org/abs/2209.03791">fine-tuned for the keyphrase generation task</a> on fragments of keyphrase corpora covering scientific texts from the computer science and biomedical domains as well as news texts.

The model can be used as follows:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("beogradjanka/bart_finetuned_keyphrase_extraction")
model = AutoModelForSeq2SeqLM.from_pretrained("beogradjanka/bart_finetuned_keyphrase_extraction")

text = ("In this paper, we investigate cross-domain limitations of keyphrase generation using the models "
        "for abstractive text summarization. We present an evaluation of BART fine-tuned for keyphrase "
        "generation across three types of texts, namely scientific texts from computer science and "
        "biomedical domains and news texts. We explore the role of transfer learning between different "
        "domains to improve the model performance on small text corpora.")

# Tokenize the input text and generate a sequence of keyphrases.
tokenized_text = tokenizer([text], return_tensors="pt")
output_ids = model.generate(**tokenized_text)
generated_keyphrases = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(generated_keyphrases)
```
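
The decoded output is a single string containing the generated keyphrases. Continuing the example above, a minimal post-processing sketch, assuming a comma-separated output format (the actual delimiter depends on how the training targets were formatted):

```python
# Split the decoded string into individual keyphrases.
# The comma delimiter is an assumption; adjust it to the model's output format.
keyphrases = [kp.strip() for kp in generated_keyphrases.split(",") if kp.strip()]
print(keyphrases)
```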

Training Hyperparameters

The following hyperparameters were used during training:
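
As an illustration only, the sketch below shows how this kind of fine-tuning is typically set up with `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`; the toy dataset and every hyperparameter value are placeholders, not the data or settings actually used for this checkpoint.

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Toy dataset for illustration: documents paired with concatenated keyphrases.
raw = Dataset.from_dict({
    "text": ["We study keyphrase generation with abstractive summarization models."],
    "keyphrases": ["keyphrase generation, abstractive summarization"],
})

def preprocess(batch):
    # Tokenize the source documents and the target keyphrase sequences.
    model_inputs = tokenizer(batch["text"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["keyphrases"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Placeholder hyperparameters -- NOT the values used to train this model.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_keyphrase_generation",
    learning_rate=5e-5,              # placeholder
    per_device_train_batch_size=8,   # placeholder
    num_train_epochs=3,              # placeholder
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```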

BibTeX:

```bibtex
@article{glazkova2023applying,
  title={Applying Transformer-Based Text Summarization for Keyphrase Generation},
  author={Glazkova, Anna and Morozov, Dmitry},
  journal={Lobachevskii Journal of Mathematics},
  volume={44},
  number={1},
  pages={123--136},
  year={2023},
  doi={10.1134/S1995080223010134}
}
```