

Transformers >= 4.23.1 is required.
This model relies on a custom modeling file, so you need to pass trust_remote_code=True when loading it.
See #13467

See the LSG ArXiv paper.
The GitHub/conversion script is available at this link.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# The custom LSG modeling code requires trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)

text = "Replace by what you want."
# device=0 runs on the first GPU; use device=-1 to run on CPU
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
    text,
    truncation=True,
    max_length=64,
    no_repeat_ngram_size=7,
    num_beams=2,
    early_stopping=True,
)
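
Note that truncation=True trims inputs that exceed the model's 4096-token window, while max_length caps the length of the generated summary, not the input.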

ccdv/lsg-bart-base-4096-arxiv

This model is a fine-tuned version of ccdv/lsg-bart-base-4096 on the scientific_papers arxiv dataset.
It achieves the following results (ROUGE scores) on the test set:

| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:-------|:------------|:-----------|:---------|:------------|:------|:------|:------|:------|
| 4096 | Local | 256 | 0 | 768 | 46.65 | 18.91 | 26.90 | 42.18 |
| 4096 | Local | 128 | 0 | 384 | 46.18 | 18.57 | 26.71 | 41.69 |
| 4096 | Pooling | 128 | 4 | 644 | 46.27 | 18.68 | 26.87 | 41.82 |
| 4096 | Stride | 128 | 4 | 644 | 46.34 | 18.64 | 26.69 | 41.87 |
| 4096 | Block Stride | 128 | 4 | 644 | 46.23 | 18.62 | 26.62 | 41.80 |
| 4096 | Norm | 128 | 4 | 644 | 45.96 | 18.46 | 26.52 | 41.51 |
| 4096 | LSH | 128 | 4 | 644 | 46.19 | 18.72 | 26.89 | 41.76 |

With a smaller block size (lower resources):

| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:-------|:------------|:-----------|:---------|:------------|:------|:------|:------|:------|
| 4096 | Local | 64 | 0 | 192 | 44.71 | 17.53 | 26.03 | 40.23 |
| 4096 | Local | 32 | 0 | 96 | 39.67 | 14.34 | 23.81 | 35.00 |
| 4096 | Pooling | 32 | 4 | 160 | 42.75 | 16.34 | 25.20 | 38.23 |
| 4096 | Stride | 32 | 4 | 160 | 44.23 | 17.21 | 25.71 | 39.72 |
| 4096 | Block Stride | 32 | 4 | 160 | 44.15 | 17.10 | 25.68 | 39.60 |
| 4096 | Norm | 32 | 4 | 160 | 42.02 | 15.65 | 24.56 | 37.45 |
| 4096 | LSH | 32 | 4 | 160 | 42.58 | 16.21 | 25.10 | 38.04 |
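
The sparse patterns and block sizes in the tables above correspond to attention settings that can be chosen when loading the model. The keyword arguments in the sketch below (block_size, sparse_block_size, sparsity_factor, sparsity_type) follow the LSG conversion repository and should be treated as assumptions rather than a guaranteed API:

from transformers import AutoModelForSeq2SeqLM

# Assumed LSG-specific kwargs (check the conversion repository for the exact names).
# This requests the "pooling" pattern with block size 128 and sparsity factor 4,
# matching one row of the table above.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-4096-arxiv",
    trust_remote_code=True,
    block_size=128,           # local attention block size (assumed kwarg)
    sparse_block_size=128,    # sparse attention block size (assumed kwarg)
    sparsity_factor=4,        # sparsity factor (assumed kwarg)
    sparsity_type="pooling",  # one of the sparse types listed above (assumed)
)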

Model description

The model relies on Local-Sparse-Global (LSG) attention to handle long sequences.

The model has about 145 million parameters (6 encoder layers, 6 decoder layers).
It is warm-started from BART-base, converted to handle long sequences (encoder only), and fine-tuned.
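
As a quick sanity check (not part of the original card), the parameter count can be verified after loading the checkpoint:

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)

# Should print roughly 145M for this checkpoint.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")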

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

Generate hyperparameters

The following hyperparameters were used during generation:

Framework versions