

# polish_transliterator_BART

This model is a fine-tuned version of [sshleifer/bart-tiny-random](https://huggingface.co/sshleifer/bart-tiny-random) on an unspecified dataset. It achieves the following results on the evaluation set (final-epoch validation results, taken from the training results table below):

- Loss: 9.5795
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 2.0
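Since the card provides no usage example, below is a minimal loading and inference sketch. The model id `your-username/polish_transliterator_BART` is a hypothetical placeholder for wherever this checkpoint is stored, and the input string is only illustrative, since the exact transliteration format expected by the model is not documented.

```python
# Minimal sketch: load the fine-tuned seq2seq checkpoint and generate an output.
# "your-username/polish_transliterator_BART" is a hypothetical model id; replace it
# with the actual local path or Hub id of this checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/polish_transliterator_BART"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "zolty"  # illustrative input only; the expected input format is undocumented
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```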

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameter values used during training were not recorded in this card; only the number of training epochs (20) can be inferred from the results table below.
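For orientation only, here is a hypothetical sketch of the kind of `Seq2SeqTrainer` setup such an auto-generated card comes from. Every hyperparameter value below is a placeholder rather than the setting actually used, the toy data and column names are invented, and only `num_train_epochs=20` matches the results table.

```python
# Hypothetical fine-tuning sketch. Hyperparameter values are placeholders, NOT the
# values used to train this model; only num_train_epochs=20 matches the results table.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base = "sshleifer/bart-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Toy stand-in data with hypothetical column names; the real training data and the
# direction of the transliteration are not documented in this card.
raw = Dataset.from_dict({"source": ["zolty", "lodz"], "target": ["żółty", "łódź"]})

def preprocess(batch):
    model_inputs = tokenizer(batch["source"], truncation=True)
    labels = tokenizer(text_target=batch["target"], truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="polish_transliterator_BART",
    num_train_epochs=20,             # matches the 20 epochs in the table below
    learning_rate=2e-5,              # placeholder
    per_device_train_batch_size=8,   # placeholder
    per_device_eval_batch_size=8,    # placeholder
    eval_strategy="epoch",           # "evaluation_strategy" in older transformers releases
    predict_with_generate=True,      # generate text at eval time so ROUGE can be computed
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,          # toy example reuses the same split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

A `compute_metrics` function such as the ROUGE sketch after the results table would normally be passed to the trainer to produce the metric columns shown below.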

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 10.3014       | 1.0   | 572   | 10.2707         | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 10.2465       | 2.0   | 1144  | 10.2013         | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 10.1717       | 3.0   | 1716  | 10.1342         | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 10.1086       | 4.0   | 2288  | 10.0704         | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 10.0524       | 5.0   | 2860  | 10.0102         | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.9976        | 6.0   | 3432  | 9.9539          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.8907        | 7.0   | 4004  | 9.9018          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.8424        | 8.0   | 4576  | 9.8536          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.8046        | 9.0   | 5148  | 9.8095          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.7581        | 10.0  | 5720  | 9.7693          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.7253        | 11.0  | 6292  | 9.7331          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.698         | 12.0  | 6864  | 9.7008          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.6611        | 13.0  | 7436  | 9.6723          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.6125        | 14.0  | 8008  | 9.6477          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.5928        | 15.0  | 8580  | 9.6269          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.5747        | 16.0  | 9152  | 9.6099          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.5613        | 17.0  | 9724  | 9.5966          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.5418        | 18.0  | 10296 | 9.5871          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.539         | 19.0  | 10868 | 9.5814          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
| 9.5366        | 20.0  | 11440 | 9.5795          | 0.0    | 0.0    | 0.0    | 0.0       | 2.0     |
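The ROUGE columns stay at 0.0 and the generation length at 2 tokens throughout, which suggests the tiny randomly initialized base model is emitting essentially empty outputs. For reference, here is a minimal sketch of how such ROUGE and "Gen Len" columns are typically computed for a `Seq2SeqTrainer` run using the `evaluate` library; this is an assumption about the setup, not the actual metric code behind this card.

```python
# Sketch of a typical ROUGE + generation-length metric function for Seq2SeqTrainer.
# This is an assumption about how the table's columns were produced, not the exact
# code used for this model.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/bart-tiny-random")
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Labels are padded with -100 for the loss; map that back to the pad token before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = rouge.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    # "Gen Len" is the average number of non-padding tokens in the generated sequences.
    gen_lens = [int(np.count_nonzero(pred != tokenizer.pad_token_id)) for pred in predictions]
    result["gen_len"] = float(np.mean(gen_lens))
    return {k: round(float(v), 4) for k, v in result.items()}
```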

### Framework versions