

Requires Transformers >= 4.23.1.
This model relies on a custom modeling file; you need to add trust_remote_code=True when loading it.
See #13467

LSG ArXiv paper.
GitHub / conversion script is available at this link.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-arxiv", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-arxiv", trust_remote_code=True)

text = "Replace with what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
    text,
    truncation=True,
    max_length=64,
    no_repeat_ngram_size=7,
    num_beams=2,
    early_stopping=True,
)
```
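
If you prefer to call the model directly rather than going through the pipeline, a minimal sketch along the lines below should work with the tokenizer and model loaded above; the 16384-token truncation limit is stated explicitly here as an assumption about the encoder's maximum input length.

```python
import torch

# Tokenize a (potentially very long) document; 16384 is assumed to be the
# maximum encoder input length of this converted checkpoint.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384).to(model.device)

# Beam-search generation mirroring the pipeline settings above.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=64,
        num_beams=2,
        no_repeat_ngram_size=7,
        early_stopping=True,
    )

summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```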

ccdv/lsg-bart-base-16384-arxiv

This model is a fine-tuned version of ccdv/lsg-bart-base-4096-arxiv on the scientific_papers arxiv dataset.
The model has been converted to handle sequences of up to 16384 tokens and fine-tuned accordingly for 1 epoch.
It achieves the following results on the test set:

| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1    | R2    | RL    | RLsum |
|--------|---------------|-------------|------------|----------|------------|-------|-------|-------|-------|
| 16384  | 64            | Full        | 256        | 0        | 768        | 48.74 | 20.88 | 28.50 | 44.23 |
| 16384  | 1             | Full        | 256        | 0        | 768        | 48.66 | 20.92 | 28.50 | 44.18 |
| 16384  | 64            | Global only | 256        | 0        | 768        | 48.08 | 20.42 | 28.00 | 43.65 |
| 16384  | 1             | None        | 256        | 0        | 768        | 47.03 | 20.19 | 28.26 | 42.69 |

Reference model:

| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1    | R2    | RL    | RLsum |
|--------|---------------|-------------|------------|----------|------------|-------|-------|-------|-------|
| 4096   | 1             | -           | 256        | 0        | 768        | 46.65 | 18.91 | 26.90 | 42.18 |
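
The attention settings in these tables (global tokens, block size, sparsity) are properties of the LSG configuration. As a rough sketch, they can presumably be overridden at load time as below; the keyword names (num_global_tokens, block_size, sparsity_type) are assumptions about the custom LSG config shipped with the checkpoint, so check the conversion script for the exact names.

```python
from transformers import AutoModelForSeq2SeqLM

# Sketch only: these keyword arguments are assumed to map onto the custom LSG
# configuration (see the conversion script); verify the exact names there.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-16384-arxiv",
    trust_remote_code=True,
    num_global_tokens=64,  # assumed name: number of global tokens (64 vs 1 in the tables above)
    block_size=256,        # assumed name: local attention block size
    sparsity_type="none",  # assumed name: no sparse connections (Sparsity = 0 above)
)
```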

Model description

The model relies on Local-Sparse-Global (LSG) attention to handle long sequences.

The model has about 145 million parameters (6 encoder layers, 6 decoder layers).
The model is warm-started from ccdv/lsg-bart-base-4096-arxiv, converted to handle long sequences (encoder only) and fine-tuned.
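
As a quick sanity check on the conversion, you can inspect the loaded configuration. The sketch below assumes the extended encoder length is reported through the standard max_position_embeddings field, which may not hold exactly for the custom LSG config.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ccdv/lsg-bart-base-16384-arxiv", trust_remote_code=True)

# Assumed: the conversion extends the (encoder) position embeddings to 16384.
print(config.max_position_embeddings)
# BART-style configs report the layer counts; expected: 6 encoder / 6 decoder layers.
print(config.encoder_layers, config.decoder_layers)
```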

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

Generation hyperparameters

The following hyperparameters were used during generation:

Framework versions