---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# text_shortening_model_v65

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset. On the evaluation set it achieves the results reported in the Training results table below.
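Since the card otherwise gives little usage guidance, here is a minimal inference sketch. It assumes the checkpoint is available under the repository id `text_shortening_model_v65` (replace with the actual hub path or a local directory) and that no task-specific prompt prefix is needed; both assumptions are guesses, not documented in this card.

```python
# Minimal inference sketch. Assumptions: the model id / local path and the absence
# of a prompt prefix are guesses, not taken from the training code.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "text_shortening_model_v65"  # hypothetical repo id or local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The quick brown fox jumped over the extremely lazy dog that was sleeping in the sun."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# max_new_tokens is a guess based on the short outputs in the results table (~6-7 words on average).
output_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```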

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameter list generated for this run was not preserved in this card. From the results table below one can only infer that training ran for 30 epochs at 146 optimizer steps per epoch, with evaluation after every epoch.
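Purely as an illustration of where such values would normally appear, a hedged `Seq2SeqTrainingArguments` sketch follows. Apart from the epoch count and per-epoch evaluation, which can be read off the results table, every value (learning rate, batch sizes, output directory, etc.) is a placeholder, not the value used to train this model.

```python
# Illustrative only: a typical Trainer setup for a run like this.
# All numeric values except num_train_epochs=30 are placeholders, NOT the actual hyperparameters.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_id = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v65",  # placeholder
    num_train_epochs=30,              # matches the 30 epochs in the results table
    learning_rate=5e-5,               # placeholder
    per_device_train_batch_size=16,   # placeholder
    per_device_eval_batch_size=16,    # placeholder
    evaluation_strategy="epoch",      # consistent with one evaluation row per epoch in the table
    predict_with_generate=True,
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,    # dataset not described in this card
#     eval_dataset=eval_dataset,      # dataset not described in this card
#     tokenizer=tokenizer,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```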

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.7747        | 1.0   | 146  | 1.3200          | 0.8806         | 0.8825      | 0.881         | 6.7818             | 18             | 2              | 10.6827             | 2.1021                             |
| 1.3684        | 2.0   | 292  | 1.2106          | 0.8857         | 0.8858      | 0.8852        | 6.5335             | 18             | 2              | 10.4835             | 1.7017                             |
| 1.2448        | 3.0   | 438  | 1.1635          | 0.8862         | 0.8883      | 0.8868        | 6.6246             | 18             | 1              | 10.6817             | 2.1021                             |
| 1.1406        | 4.0   | 584  | 1.1386          | 0.8897         | 0.8923      | 0.8905        | 6.6697             | 18             | 2              | 10.6767             | 2.2022                             |
| 1.0623        | 5.0   | 730  | 1.1373          | 0.889          | 0.893       | 0.8905        | 6.6897             | 18             | 2              | 10.7568             | 1.5015                             |
| 1.0034        | 6.0   | 876  | 1.1111          | 0.8923         | 0.8953      | 0.8933        | 6.5876             | 18             | 2              | 10.6927             | 1.7017                             |
| 0.9391        | 7.0   | 1022 | 1.1037          | 0.8927         | 0.8947      | 0.8932        | 6.5455             | 18             | 2              | 10.6196             | 1.3013                             |
| 0.8868        | 8.0   | 1168 | 1.0997          | 0.8949         | 0.8959      | 0.895         | 6.4805             | 18             | 2              | 10.5836             | 1.4014                             |
| 0.8443        | 9.0   | 1314 | 1.1011          | 0.8939         | 0.8965      | 0.8947        | 6.5626             | 18             | 2              | 10.6386             | 1.5015                             |
| 0.8117        | 10.0  | 1460 | 1.0997          | 0.8957         | 0.8981      | 0.8965        | 6.4865             | 16             | 2              | 10.6066             | 1.001                              |
| 0.7844        | 11.0  | 1606 | 1.1153          | 0.8976         | 0.8979      | 0.8973        | 6.4404             | 18             | 2              | 10.5345             | 1.5015                             |
| 0.7593        | 12.0  | 1752 | 1.1126          | 0.8946         | 0.8988      | 0.8962        | 6.6356             | 18             | 2              | 10.7698             | 1.9019                             |
| 0.7249        | 13.0  | 1898 | 1.1047          | 0.8968         | 0.8991      | 0.8975        | 6.5335             | 16             | 2              | 10.6396             | 1.4014                             |
| 0.7048        | 14.0  | 2044 | 1.1127          | 0.8961         | 0.8984      | 0.8968        | 6.5275             | 16             | 2              | 10.6336             | 1.4014                             |
| 0.6828        | 15.0  | 2190 | 1.1237          | 0.8965         | 0.8982      | 0.8969        | 6.4675             | 16             | 2              | 10.5906             | 1.7017                             |
| 0.6558        | 16.0  | 2336 | 1.1221          | 0.8975         | 0.8972      | 0.8969        | 6.3634             | 16             | 1              | 10.4985             | 1.2012                             |
| 0.6296        | 17.0  | 2482 | 1.1296          | 0.8962         | 0.8982      | 0.8968        | 6.4775             | 16             | 1              | 10.6496             | 1.9019                             |
| 0.6304        | 18.0  | 2628 | 1.1334          | 0.8981         | 0.898       | 0.8976        | 6.3724             | 16             | 1              | 10.4755             | 1.6016                             |
| 0.6124        | 19.0  | 2774 | 1.1463          | 0.898          | 0.9006      | 0.8989        | 6.5075             | 15             | 2              | 10.6246             | 1.5015                             |
| 0.6001        | 20.0  | 2920 | 1.1547          | 0.8982         | 0.8997      | 0.8984        | 6.4925             | 16             | 2              | 10.5766             | 1.9019                             |
| 0.5834        | 21.0  | 3066 | 1.1551          | 0.8972         | 0.8973      | 0.8967        | 6.3323             | 16             | 2              | 10.4705             | 1.7017                             |
| 0.5707        | 22.0  | 3212 | 1.1687          | 0.897          | 0.899       | 0.8976        | 6.4665             | 16             | 2              | 10.6026             | 1.7017                             |
| 0.5667        | 23.0  | 3358 | 1.1656          | 0.8965         | 0.8981      | 0.8968        | 6.4585             | 16             | 2              | 10.5726             | 2.002                              |
| 0.5519        | 24.0  | 3504 | 1.1747          | 0.8968         | 0.8984      | 0.8971        | 6.4885             | 16             | 2              | 10.5616             | 2.1021                             |
| 0.5538        | 25.0  | 3650 | 1.1754          | 0.8967         | 0.8983      | 0.897         | 6.4735             | 16             | 2              | 10.5676             | 2.002                              |
| 0.5403        | 26.0  | 3796 | 1.1734          | 0.8968         | 0.8983      | 0.8971        | 6.4835             | 16             | 2              | 10.6036             | 1.9019                             |
| 0.5371        | 27.0  | 3942 | 1.1735          | 0.8964         | 0.8982      | 0.8968        | 6.4865             | 16             | 2              | 10.5696             | 2.1021                             |
| 0.5381        | 28.0  | 4088 | 1.1767          | 0.8968         | 0.8982      | 0.897         | 6.4735             | 16             | 2              | 10.5926             | 1.9019                             |
| 0.5278        | 29.0  | 4234 | 1.1771          | 0.8966         | 0.8975      | 0.8966        | 6.4454             | 16             | 2              | 10.5556             | 2.002                              |
| 0.5249        | 30.0  | 4380 | 1.1783          | 0.8964         | 0.8977      | 0.8966        | 6.4565             | 16             | 2              | 10.5686             | 2.002                              |
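For readers unfamiliar with the metric columns, the sketch below shows how values like these can be computed with the `evaluate` library: BERTScore precision/recall/F1 between model outputs and reference shortenings, plus word-count statistics, average token count, and the share of shortened texts longer than 12 words. This is a plausible reconstruction of the metric definitions, not the author's actual evaluation code; the 12-word threshold interpretation and the choice of a T5 tokenizer are assumptions.

```python
# Sketch of how the table's metric columns could be computed (not the author's exact code).
import evaluate  # requires: pip install evaluate bert_score
from transformers import AutoTokenizer

predictions = ["short headline one", "another shortened text"]          # model outputs (toy data)
references  = ["a short headline, version one", "another short text"]   # gold shortenings (toy data)

# BERTScore precision / recall / F1, averaged over the evaluation set.
bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
bert_precision = sum(scores["precision"]) / len(scores["precision"])
bert_recall    = sum(scores["recall"]) / len(scores["recall"])
bert_f1        = sum(scores["f1"]) / len(scores["f1"])

# Length statistics over the generated shortenings.
word_counts = [len(p.split()) for p in predictions]
avg_words   = sum(word_counts) / len(word_counts)
max_words, min_words = max(word_counts), min(word_counts)
pct_over_12 = 100.0 * sum(wc > 12 for wc in word_counts) / len(word_counts)

# "Average token count" presumably uses the model tokenizer; a T5 tokenizer is assumed here.
tok = AutoTokenizer.from_pretrained("t5-small")
avg_tokens = sum(len(tok.encode(p)) for p in predictions) / len(predictions)

print(bert_precision, bert_recall, bert_f1, avg_words, max_words, min_words, avg_tokens, pct_over_12)
```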

### Framework versions