---
tags:
- generated_from_trainer
---


# flan-t5-large-da-multiwoz2.1_fs0.2

This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large). The training dataset is not recorded in this card (the model name suggests a MultiWOZ 2.1 setup). Per-checkpoint results on the evaluation set are reported in the training results table below.
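Since usage instructions are not filled in yet, the snippet below is a minimal inference sketch with the Transformers library. The repository id and the example prompt are assumptions, not values taken from this card.

```python
# Minimal inference sketch. The repository id and the example prompt are placeholders,
# not values documented in this card; substitute the actual Hub id or a local checkpoint path.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-namespace/flan-t5-large-da-multiwoz2.1_fs0.2"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt format used during fine-tuning is not documented here; this input is an example only.
inputs = tokenizer("user: i need a cheap hotel in the centre of town", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```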

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameter values used during training are not listed in this card.
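Purely as an illustration of how a flan-t5-large fine-tuning run is typically configured with `Seq2SeqTrainingArguments`, the sketch below uses placeholder values throughout; none of them are the settings actually used for this model. Only the 400-step evaluation interval is taken from the results table below.

```python
# Illustrative placeholders only; these are NOT the hyperparameters used for this model.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-da-multiwoz2.1_fs0.2",
    learning_rate=5e-5,               # placeholder
    per_device_train_batch_size=8,    # placeholder
    num_train_epochs=6,               # placeholder; the results table spans roughly 5 epochs
    evaluation_strategy="steps",      # `eval_strategy` in newer Transformers releases
    eval_steps=400,                   # matches the 400-step evaluation interval in the table below
    predict_with_generate=True,       # required for generation-based metrics such as Gen Len
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized training and evaluation splits.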

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num  | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 0.9653        | 0.28  | 400  | 0.4635          | 31.3166  | 3689 | 15.196  |
| 0.5071        | 0.57  | 800  | 0.4031          | 35.8289  | 3689 | 15.6546 |
| 0.4603        | 0.85  | 1200 | 0.3718          | 37.6313  | 3689 | 15.6511 |
| 0.4219        | 1.13  | 1600 | 0.3577          | 37.9333  | 3689 | 16.5319 |
| 0.3991        | 1.42  | 2000 | 0.3491          | 40.5462  | 3689 | 15.453  |
| 0.394         | 1.7   | 2400 | 0.3409          | 40.9333  | 3689 | 15.5137 |
| 0.3822        | 1.98  | 2800 | 0.3370          | 41.2932  | 3689 | 15.225  |
| 0.3625        | 2.26  | 3200 | 0.3327          | 42.1132  | 3689 | 16.0718 |
| 0.3577        | 2.55  | 3600 | 0.3329          | 42.1372  | 3689 | 15.9973 |
| 0.3644        | 2.83  | 4000 | 0.3303          | 42.2529  | 3689 | 15.6525 |
| 0.349         | 3.11  | 4400 | 0.3256          | 43.2025  | 3689 | 15.6601 |
| 0.3355        | 3.4   | 4800 | 0.3243          | 43.791   | 3689 | 15.5451 |
| 0.338         | 3.68  | 5200 | 0.3231          | 43.5073  | 3689 | 15.7411 |
| 0.3424        | 3.96  | 5600 | 0.3196          | 44.5281  | 3689 | 15.1307 |
| 0.3299        | 4.25  | 6000 | 0.3159          | 45.1554  | 3689 | 15.5213 |
| 0.328         | 4.53  | 6400 | 0.3188          | 43.4699  | 3689 | 15.3849 |
| 0.3204        | 4.81  | 6800 | 0.3159          | 44.7764  | 3689 | 15.8219 |
| 0.3166        | 5.1   | 7200 | 0.3165          | 45.0608  | 3689 | 15.8791 |
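For context on the metric columns: with `predict_with_generate` enabled, a `Seq2SeqTrainer` run produces per-checkpoint metrics from a `compute_metrics` callback roughly like the hypothetical sketch below. Exact-match accuracy over decoded strings is an assumption, not a definition documented in this card.

```python
import numpy as np
from transformers import AutoTokenizer

# Hypothetical sketch of the metric computation. Exact-match accuracy over decoded strings
# is an assumption; the metric actually used to produce the table above is not documented.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")  # base tokenizer

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Positions labelled -100 are ignored by the loss; restore the pad id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    exact_match = [p.strip() == l.strip() for p, l in zip(decoded_preds, decoded_labels)]
    pred_lens = [int(np.count_nonzero(p != tokenizer.pad_token_id)) for p in preds]

    return {
        "accuracy": 100.0 * float(np.mean(exact_match)),  # the Accuracy column
        "num": len(decoded_preds),                        # the Num column (presumably the evaluation-set size)
        "gen_len": float(np.mean(pred_lens)),             # the Gen Len column
    }
```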

### Framework versions