# KB13-t5-small-finetuned-en-to-regex

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- Semantic accuracy: 0.439
- Syntactic accuracy: 0.3659
- Gen Len: 15.3659
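
Below is a minimal usage sketch for loading this checkpoint with the Hugging Face Transformers seq2seq API. The repository id and the example input are assumptions (the card does not document the expected prompt format); adjust them to the actual hosting path and data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id; replace with the actual path where this checkpoint is hosted.
model_id = "KB13-t5-small-finetuned-en-to-regex"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative natural-language description; the exact input format used
# during training is not documented in this card.
inputs = tokenizer("lines containing the word dog followed by a number",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```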
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
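
For reference, the sketch below shows how the hyperparameters listed above map onto `Seq2SeqTrainingArguments` in Transformers 4.25.1. The `output_dir`, `evaluation_strategy`, and `predict_with_generate` values are assumptions, not taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch matching the listed hyperparameters; Adam betas and epsilon
# keep the library defaults (0.9, 0.999, 1e-08).
training_args = Seq2SeqTrainingArguments(
    output_dir="KB13-t5-small-finetuned-en-to-regex",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="epoch",  # assumed: metrics above are reported per epoch
    predict_with_generate=True,   # assumed: needed for generation-based metrics
)
```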
### Training results

| Training Loss | Epoch | Step | Validation Loss | Semantic accuracy | Syntactic accuracy | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:------------------:|:-------:|
| No log | 1.0 | 47 | 0.9241 | 0.0488 | 0.0488 | 15.1951 |
| No log | 2.0 | 94 | 0.6326 | 0.3171 | 0.2683 | 14.6341 |
| No log | 3.0 | 141 | 0.5936 | 0.2927 | 0.2683 | 15.1463 |
| No log | 4.0 | 188 | 0.5097 | 0.3415 | 0.3171 | 15.5854 |
| No log | 5.0 | 235 | 0.4467 | 0.3659 | 0.3171 | 15.7073 |
| No log | 6.0 | 282 | 0.3875 | 0.3659 | 0.3415 | 15.4146 |
| No log | 7.0 | 329 | 0.4208 | 0.3659 | 0.3171 | 15.5122 |
| No log | 8.0 | 376 | 0.3551 | 0.3659 | 0.3171 | 15.3659 |
| No log | 9.0 | 423 | 0.2996 | 0.3659 | 0.3171 | 15.3659 |
| No log | 10.0 | 470 | 0.3571 | 0.3902 | 0.3171 | 15.2195 |
| 0.7453 | 11.0 | 517 | 0.3316 | 0.4146 | 0.3415 | 15.3659 |
| 0.7453 | 12.0 | 564 | 0.3371 | 0.4146 | 0.3415 | 15.439 |
| 0.7453 | 13.0 | 611 | 0.3488 | 0.4146 | 0.3415 | 15.439 |
| 0.7453 | 14.0 | 658 | 0.3069 | 0.439 | 0.3659 | 15.4146 |
| 0.7453 | 15.0 | 705 | 0.3289 | 0.439 | 0.3659 | 15.1951 |
| 0.7453 | 16.0 | 752 | 0.3420 | 0.3902 | 0.3171 | 15.0976 |
| 0.7453 | 17.0 | 799 | 0.3190 | 0.4146 | 0.3415 | 15.1463 |
| 0.7453 | 18.0 | 846 | 0.3495 | 0.439 | 0.3659 | 15.1463 |
| 0.7453 | 19.0 | 893 | 0.3588 | 0.439 | 0.3659 | 15.3659 |
| 0.7453 | 20.0 | 940 | 0.3457 | 0.439 | 0.3659 | 15.3659 |
| 0.7453 | 21.0 | 987 | 0.3662 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 22.0 | 1034 | 0.3533 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 23.0 | 1081 | 0.3872 | 0.4146 | 0.3415 | 15.4146 |
| 0.1294 | 24.0 | 1128 | 0.3902 | 0.4146 | 0.3415 | 15.3659 |
| 0.1294 | 25.0 | 1175 | 0.3802 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 26.0 | 1222 | 0.3893 | 0.439 | 0.3659 | 15.4146 |
| 0.1294 | 27.0 | 1269 | 0.4035 | 0.4146 | 0.3415 | 15.1951 |
| 0.1294 | 28.0 | 1316 | 0.4020 | 0.4146 | 0.3415 | 15.3659 |
| 0.1294 | 29.0 | 1363 | 0.3983 | 0.439 | 0.3659 | 15.3659 |
| 0.1294 | 30.0 | 1410 | 0.4028 | 0.439 | 0.3659 | 15.3659 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2