# bart-finetuned-idl
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):
- Loss: 0.0031
- Bleu: 0.0
- Gen Len: 4.9917
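
Since the card does not yet document usage, here is a minimal inference sketch. The repo id `bart-finetuned-idl` and the plain text-to-text input format are assumptions, not settings recorded in this card.

```python
# Minimal inference sketch; repo id and input format are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "bart-finetuned-idl"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("your input text here", return_tensors="pt")
# Gen Len hovers around 5 in the results, so a small max length suffices.
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```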
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
- mixed_precision_training: Native AMP
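
For reference, here is how these settings map onto `Seq2SeqTrainingArguments`. The `output_dir`, the per-epoch evaluation strategy, and `predict_with_generate` are assumptions inferred from the results table below, not recorded settings.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments;
# output_dir and the evaluation settings are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-idl",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=35,
    fp16=True,                        # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults,
    # so they need no explicit arguments.
    evaluation_strategy="epoch",      # matches the per-epoch rows below
    predict_with_generate=True,       # required to report Bleu / Gen Len
)
```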
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:----:|:-------:|
| 0.2005 | 1.0 | 13874 | 0.1589 | 0.0 | 5.0002 |
| 0.1182 | 2.0 | 27748 | 0.0949 | 0.0 | 4.9924 |
| 0.0983 | 3.0 | 41622 | 0.0778 | 0.0 | 4.9924 |
| 0.0724 | 4.0 | 55496 | 0.0724 | 0.0 | 4.9903 |
| 0.0532 | 5.0 | 69370 | 0.0549 | 0.0 | 4.9928 |
| 0.0458 | 6.0 | 83244 | 0.0463 | 0.0 | 4.9861 |
| 0.0435 | 7.0 | 97118 | 0.0548 | 0.0 | 4.9923 |
| 0.0464 | 8.0 | 110992 | 0.0847 | 0.0 | 4.9899 |
| 0.0317 | 9.0 | 124866 | 0.0303 | 0.0 | 4.9922 |
| 0.0302 | 10.0 | 138740 | 0.0284 | 0.0 | 4.9919 |
| 0.0306 | 11.0 | 152614 | 0.0120 | 0.0 | 4.9919 |
| 0.0224 | 12.0 | 166488 | 0.0462 | 0.0 | 4.9917 |
| 0.0184 | 13.0 | 180362 | 0.0138 | 0.0 | 4.9924 |
| 0.0208 | 14.0 | 194236 | 0.0730 | 0.0 | 4.9919 |
| 0.0149 | 15.0 | 208110 | 0.0126 | 0.0 | 4.992 |
| 0.0161 | 16.0 | 221984 | 0.0100 | 0.0 | 4.9915 |
| 0.0178 | 17.0 | 235858 | 0.0106 | 0.0 | 4.992 |
| 0.0116 | 18.0 | 249732 | 0.0149 | 0.0 | 4.9921 |
| 0.0096 | 19.0 | 263606 | 0.0085 | 0.0 | 4.9918 |
| 0.0094 | 20.0 | 277480 | 0.0101 | 0.0 | 4.9916 |
| 0.0084 | 21.0 | 291354 | 0.0093 | 0.0 | 4.9918 |
| 0.0077 | 22.0 | 305228 | 0.0138 | 0.0 | 4.992 |
| 0.0094 | 23.0 | 319102 | 0.0084 | 0.0 | 4.9918 |
| 0.0079 | 24.0 | 332976 | 0.0058 | 0.0 | 4.9917 |
| 0.006 | 25.0 | 346850 | 0.0067 | 0.0 | 4.9918 |
| 0.0046 | 26.0 | 360724 | 0.0041 | 0.0 | 4.9918 |
| 0.0049 | 27.0 | 374598 | 0.0061 | 0.0 | 4.9919 |
| 0.002 | 28.0 | 388472 | 0.0035 | 0.0 | 4.9918 |
| 0.003 | 29.0 | 402346 | 0.0038 | 0.0 | 4.9917 |
| 0.0027 | 30.0 | 416220 | 0.0050 | 0.0 | 4.9917 |
| 0.001 | 31.0 | 430094 | 0.0063 | 0.0 | 4.9918 |
| 0.0017 | 32.0 | 443968 | 0.0042 | 0.0 | 4.992 |
| 0.0013 | 33.0 | 457842 | 0.0032 | 0.0 | 4.9917 |
| 0.0005 | 34.0 | 471716 | 0.0031 | 0.0 | 4.9917 |
| 0.0003 | 35.0 | 485590 | 0.0031 | 0.0 | 4.9917 |
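
The Bleu and Gen Len columns come from a `compute_metrics` callback passed to the trainer. The exact callback used here is not recorded in this card; below is a plausible sketch in the style of the Transformers seq2seq examples, assuming sacreBLEU via the `evaluate` library and an unchanged base tokenizer.

```python
# Plausible compute_metrics sketch; the actual function is not recorded.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # assumed unchanged
metric = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    # Labels are padded with -100 by the data collator; restore pad ids before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = metric.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Gen Len: mean number of non-pad tokens in the generated sequences.
    gen_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    return {"bleu": result["score"], "gen_len": float(np.mean(gen_lens))}
```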
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2