Tags: simplification, generated_from_trainer


flan-t5-base-finetuned-length_control_token

This model is a fine-tuned version of google/flan-t5-base on the PWKP-GPT3-LENGTH-CONTROL-40BUCKETS dataset (see Model description below). It achieves the following results on the evaluation set at the final epoch: Validation Loss 1.0276, Sacrebleu 16.2445 (see Training results).

Model description

This model was trained on a dataset called PWKP-GPT3-LENGTH-CONTROL-40BUCKETS. The dataset contains 30k instances taken from PWKP and then processed through GPT-3 to obtain simplifications. The 30k instances break down as follows: 10k intended to yield very long simplifications, 10k intended to yield very short simplifications, and 10k with no simplicity level specified. The model does not successfully control output length across these buckets. A related dataset, PWKP-GPT3-LENGTH-CONTROL-4BUCKETS, also exists, but no model has been trained on it, and its buckets are rather unbalanced.

The idea comes from "Controllable Sentence Simplification" by Louis Martin et al.: https://arxiv.org/pdf/1910.02677.pdf
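
The sketch below shows one way a length-control bucket token could be derived from the target/source length ratio, in the spirit of Martin et al. It is an assumption for illustration only: the actual bucketing range, granularity, and token spelling used to build the 40-bucket dataset are not documented in this card.

```python
def length_bucket_token(source: str, simplification: str, n_buckets: int = 40) -> str:
    """Map the simplification/source character-length ratio to one of n_buckets tokens.

    Assumption: ratios are clipped to [0, 2] and split into equal-width buckets.
    The real preprocessing may use a different range, granularity, or token format.
    """
    ratio = len(simplification) / max(len(source), 1)
    bucket = min(int(ratio / 2.0 * n_buckets), n_buckets - 1)
    return f"<len_{bucket}>"

# Example: a short simplification falls into a low bucket.
print(length_bucket_token(
    "The committee postponed the decision indefinitely.",
    "They delayed it.",
))  # -> "<len_6>"
```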

It was fine-tuned from the FLAN-T5-base model.
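
A minimal inference sketch with the transformers library is shown below. The checkpoint path and the literal control-token/prompt format are placeholders, since neither the Hub location nor the exact prompt used during fine-tuning is documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint name: replace with the actual Hub path of this model.
checkpoint = "flan-t5-base-finetuned-length_control_token"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

source = (
    "The municipality announced that the renovation of the central library "
    "would be postponed until further notice."
)

# Prepend a length-control token to request a short simplification.
# The token string "<len_6>" and the "simplify:" prefix are assumptions.
inputs = tokenizer(f"<len_6> simplify: {source}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```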

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The hyperparameters used during training are not recorded in this card. More information needed.

Training results

| Training Loss | Epoch | Step  | Validation Loss | Sacrebleu |
|---------------|-------|-------|-----------------|-----------|
| 1.3257        | 1.0   | 1782  | 1.0906          | 15.4208   |
| 1.1718        | 2.0   | 3564  | 1.0648          | 15.5358   |
| 1.0972        | 3.0   | 5346  | 1.0484          | 15.8113   |
| 1.0472        | 4.0   | 7128  | 1.0394          | 16.0159   |
| 1.0092        | 5.0   | 8910  | 1.0305          | 16.1341   |
| 0.9858        | 6.0   | 10692 | 1.0276          | 16.2445   |

Framework versions