
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# text_shortening_model_v71

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset (`None` in the Trainer configuration). Per-epoch evaluation results are reported in the training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed
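
While the intended use is not documented here, the model is a seq2seq text-shortening model, so a minimal inference sketch is shown below. The repository id `your-username/text_shortening_model_v71` is a placeholder, and the generation settings are illustrative rather than values taken from this card.

```python
# Minimal inference sketch; the repo id and generation settings are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/text_shortening_model_v71"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = (
    "The quick brown fox jumped over the extremely lazy dog "
    "sleeping in the warm afternoon sun."
)
# If training used a T5-style task prefix, prepend it here (not specified in this card).
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Shortened outputs in the results table average roughly 6.5 words,
# so a small generation budget is usually enough.
output_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```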

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
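
The concrete hyperparameter values are not reproduced in this card. As an illustration only, a run like this one is typically configured through `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`, as in the sketch below; `num_train_epochs=40` follows the results table, while every other value (learning rate, batch size, the stand-in dataset) is a placeholder, not the setting actually used for this model.

```python
# Illustrative training setup only. Apart from num_train_epochs (40, per the
# results table), every value below is a placeholder, not this run's setting.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Tiny stand-in dataset so the sketch runs end to end; the real training data
# is not documented in this card.
raw = Dataset.from_dict(
    {
        "text": ["an example long input sentence that should be shortened"],
        "short_text": ["short example"],
    }
)

def preprocess(batch):
    model_inputs = tokenizer(batch["text"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["short_text"], truncation=True, max_length=32)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v71",
    num_train_epochs=40,             # matches the 40 epochs in the results table
    learning_rate=3e-4,              # placeholder
    per_device_train_batch_size=16,  # placeholder
    evaluation_strategy="epoch",     # results are reported once per epoch
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```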

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.8083        | 1.0   | 37   | 1.2558          | 0.8866         | 0.8836      | 0.8845        | 6.5606             | 17             | 1              | 10.4024             | 1.4014                             |
| 1.3282        | 2.0   | 74   | 1.1453          | 0.8901         | 0.8914      | 0.8902        | 6.6797             | 18             | 1              | 10.5205             | 1.9019                             |
| 1.1679        | 3.0   | 111  | 1.1190          | 0.8937         | 0.8928      | 0.8928        | 6.4825             | 18             | 1              | 10.4174             | 1.8018                             |
| 1.0406        | 4.0   | 148  | 1.0827          | 0.8927         | 0.8956      | 0.8936        | 6.7377             | 18             | 2              | 10.6837             | 1.6016                             |
| 0.9626        | 5.0   | 185  | 1.0821          | 0.8969         | 0.8978      | 0.8969        | 6.6176             | 18             | 2              | 10.6476             | 2.002                              |
| 0.8814        | 6.0   | 222  | 1.0887          | 0.8974         | 0.9004      | 0.8984        | 6.7538             | 18             | 2              | 10.7908             | 2.3023                             |
| 0.8163        | 7.0   | 259  | 1.0816          | 0.8972         | 0.8979      | 0.8971        | 6.6056             | 18             | 2              | 10.6096             | 1.8018                             |
| 0.7636        | 8.0   | 296  | 1.0855          | 0.8987         | 0.8999      | 0.8988        | 6.5846             | 18             | 2              | 10.6967             | 2.002                              |
| 0.7237        | 9.0   | 333  | 1.0949          | 0.8988         | 0.9004      | 0.8992        | 6.6346             | 18             | 2              | 10.6797             | 1.7017                             |
| 0.6776        | 10.0  | 370  | 1.1174          | 0.9002         | 0.9017      | 0.9005        | 6.6186             | 18             | 2              | 10.6947             | 1.7017                             |
| 0.6399        | 11.0  | 407  | 1.1237          | 0.8988         | 0.9002      | 0.8991        | 6.6316             | 18             | 2              | 10.6567             | 2.1021                             |
| 0.5949        | 12.0  | 444  | 1.1426          | 0.8999         | 0.8988      | 0.8989        | 6.4755             | 18             | 2              | 10.5485             | 1.4014                             |
| 0.5685        | 13.0  | 481  | 1.1564          | 0.9003         | 0.9015      | 0.9004        | 6.6216             | 18             | 2              | 10.6136             | 1.7017                             |
| 0.5374        | 14.0  | 518  | 1.1690          | 0.9003         | 0.8997      | 0.8995        | 6.5506             | 18             | 2              | 10.5726             | 1.8018                             |
| 0.5183        | 15.0  | 555  | 1.1736          | 0.9008         | 0.8997      | 0.8998        | 6.5415             | 18             | 2              | 10.5526             | 1.6016                             |
| 0.4862        | 16.0  | 592  | 1.1882          | 0.8995         | 0.9001      | 0.8994        | 6.5936             | 18             | 2              | 10.6056             | 1.3013                             |
| 0.4769        | 17.0  | 629  | 1.1910          | 0.9005         | 0.9003      | 0.8999        | 6.5716             | 18             | 2              | 10.6026             | 1.6016                             |
| 0.4565        | 18.0  | 666  | 1.1957          | 0.9009         | 0.9         | 0.9           | 6.4615             | 18             | 2              | 10.5275             | 1.1011                             |
| 0.4264        | 19.0  | 703  | 1.2276          | 0.9008         | 0.9004      | 0.9001        | 6.5125             | 18             | 2              | 10.5556             | 1.4014                             |
| 0.4245        | 20.0  | 740  | 1.2415          | 0.9023         | 0.9005      | 0.9009        | 6.4605             | 18             | 2              | 10.4945             | 1.4014                             |
| 0.4015        | 21.0  | 777  | 1.2658          | 0.9011         | 0.9004      | 0.9003        | 6.5135             | 18             | 2              | 10.5636             | 1.2012                             |
| 0.3903        | 22.0  | 814  | 1.2779          | 0.9021         | 0.9018      | 0.9015        | 6.5495             | 18             | 2              | 10.5475             | 1.1011                             |
| 0.3821        | 23.0  | 851  | 1.2899          | 0.9016         | 0.902       | 0.9014        | 6.5716             | 18             | 2              | 10.6336             | 1.4014                             |
| 0.3595        | 24.0  | 888  | 1.3062          | 0.9007         | 0.9013      | 0.9005        | 6.5936             | 18             | 2              | 10.6947             | 1.3013                             |
| 0.3551        | 25.0  | 925  | 1.3088          | 0.9015         | 0.9005      | 0.9006        | 6.4975             | 17             | 2              | 10.5355             | 1.2012                             |
| 0.343         | 26.0  | 962  | 1.3169          | 0.9018         | 0.9009      | 0.9009        | 6.5005             | 17             | 2              | 10.5716             | 1.2012                             |
| 0.3426        | 27.0  | 999  | 1.3264          | 0.8997         | 0.9018      | 0.9003        | 6.6486             | 17             | 2              | 10.7658             | 1.4014                             |
| 0.3314        | 28.0  | 1036 | 1.3234          | 0.9018         | 0.9008      | 0.9008        | 6.4865             | 18             | 2              | 10.5165             | 1.2012                             |
| 0.3187        | 29.0  | 1073 | 1.3378          | 0.9013         | 0.9003      | 0.9003        | 6.5055             | 18             | 2              | 10.5305             | 1.2012                             |
| 0.3169        | 30.0  | 1110 | 1.3497          | 0.9015         | 0.9003      | 0.9004        | 6.4835             | 18             | 2              | 10.5546             | 1.2012                             |
| 0.312         | 31.0  | 1147 | 1.3589          | 0.9018         | 0.8997      | 0.9003        | 6.4585             | 18             | 2              | 10.4615             | 1.2012                             |
| 0.2995        | 32.0  | 1184 | 1.3572          | 0.901          | 0.9006      | 0.9004        | 6.5215             | 18             | 2              | 10.5866             | 1.3013                             |
| 0.2987        | 33.0  | 1221 | 1.3647          | 0.9014         | 0.9009      | 0.9007        | 6.5305             | 18             | 2              | 10.5956             | 1.5015                             |
| 0.2907        | 34.0  | 1258 | 1.3693          | 0.902          | 0.9007      | 0.9009        | 6.4585             | 18             | 2              | 10.5205             | 1.2012                             |
| 0.2853        | 35.0  | 1295 | 1.3774          | 0.9026         | 0.9016      | 0.9016        | 6.4935             | 18             | 2              | 10.5385             | 1.2012                             |
| 0.2746        | 36.0  | 1332 | 1.3815          | 0.9027         | 0.9023      | 0.9021        | 6.5285             | 18             | 2              | 10.5706             | 1.4014                             |
| 0.2798        | 37.0  | 1369 | 1.3818          | 0.9026         | 0.9016      | 0.9016        | 6.4935             | 18             | 2              | 10.5285             | 1.4014                             |
| 0.2801        | 38.0  | 1406 | 1.3858          | 0.9031         | 0.9018      | 0.902         | 6.4665             | 18             | 2              | 10.5175             | 1.3013                             |
| 0.2773        | 39.0  | 1443 | 1.3868          | 0.9031         | 0.9018      | 0.902         | 6.4625             | 18             | 2              | 10.5185             | 1.3013                             |
| 0.2756        | 40.0  | 1480 | 1.3875          | 0.9031         | 0.9018      | 0.902         | 6.4725             | 18             | 2              | 10.5235             | 1.4014                             |
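
The metric code behind the table's columns is not included in this card. The sketch below shows one way such columns could be produced with the `evaluate` library's `bertscore` metric plus simple word- and token-count statistics; `build_compute_metrics` is a hypothetical helper name, and this is an assumption about the implementation, not the script actually used for this run. The returned function could be passed as `compute_metrics=build_compute_metrics(tokenizer)` to a `Seq2SeqTrainer` run with `predict_with_generate=True`.

```python
# Assumed sketch of a compute_metrics function producing the columns above
# (BERTScore precision/recall/F1, word/token counts, % of outputs > 12 words).
import numpy as np
import evaluate

bertscore = evaluate.load("bertscore")

def build_compute_metrics(tokenizer):  # hypothetical helper, not from this card
    def compute_metrics(eval_pred):
        preds, labels = eval_pred
        if isinstance(preds, tuple):
            preds = preds[0]
        # Replace label padding (-100) before decoding.
        labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

        bs = bertscore.compute(
            predictions=decoded_preds, references=decoded_labels, lang="en"
        )
        word_counts = [len(p.split()) for p in decoded_preds]
        token_counts = [len(tokenizer(p)["input_ids"]) for p in decoded_preds]

        return {
            "Bert precision": float(np.mean(bs["precision"])),
            "Bert recall": float(np.mean(bs["recall"])),
            "Bert f1-score": float(np.mean(bs["f1"])),
            "Average word count": float(np.mean(word_counts)),
            "Max word count": int(np.max(word_counts)),
            "Min word count": int(np.min(word_counts)),
            "% shortened texts with length > 12": 100.0
            * sum(wc > 12 for wc in word_counts) / len(word_counts),
            "Average token count": float(np.mean(token_counts)),
        }

    return compute_metrics
```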

### Framework versions