# retrain2_oneTimeTraining_MTL-1epoch

This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9312
- Acc (accuracy): 0.265
- Wer (word error rate): 1.0
- Correct: 53
- Total: 200
- Strlen: 200
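
The card does not include usage code, so the snippet below is a minimal inference sketch. It assumes the checkpoint follows the wav2vec2 CTC layout of its UASpeech base model; the repo id is a guess, and a custom multi-task (MTL) head, if present, may not load through the stock classes shown here.

```python
# Minimal inference sketch -- the repo id, the wav2vec2/CTC architecture,
# and the 16 kHz input rate are assumptions, not confirmed by this card.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "alexziweiwang/retrain2_oneTimeTraining_MTL-1epoch"  # hypothetical repo id

processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

# Load a waveform, downmix to mono, and resample to 16 kHz (path is a placeholder).
waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```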

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
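
As a rough guide, these settings correspond to a `transformers.TrainingArguments` configuration along the following lines. This is a sketch only: `output_dir` is a placeholder, and the per-device batch sizes assume a single device, which is consistent with the reported total train batch size of 8.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as TrainingArguments
# (output_dir is a placeholder; per_device_* assumes one device,
# so the effective train batch size is 2 * 4 = 8, as reported).
training_args = TrainingArguments(
    output_dir="retrain2_oneTimeTraining_MTL-1epoch",
    learning_rate=9e-06,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
)
```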

### Training results

| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 0.02 | 5 | 13.6638 | 0.005 | 1.6126 | 1 | 200 | 200 |
| 12.2282 | 0.04 | 10 | 13.4030 | 0.005 | 1.4743 | 1 | 200 | 200 |
| 12.2282 | 0.06 | 15 | 13.1289 | 0.005 | 1.3953 | 1 | 200 | 200 |
| 12.3565 | 0.08 | 20 | 12.8538 | 0.005 | 1.3043 | 1 | 200 | 200 |
| 12.3565 | 0.11 | 25 | 12.5711 | 0.005 | 1.2095 | 1 | 200 | 200 |
| 10.7997 | 0.13 | 30 | 12.2891 | 0.005 | 1.1462 | 1 | 200 | 200 |
| 10.7997 | 0.15 | 35 | 12.0060 | 0.005 | 1.0909 | 1 | 200 | 200 |
| 10.1556 | 0.17 | 40 | 11.7183 | 0.005 | 1.0632 | 1 | 200 | 200 |
| 10.1556 | 0.19 | 45 | 11.4347 | 0.01 | 1.0395 | 2 | 200 | 200 |
| 10.3187 | 0.21 | 50 | 11.1549 | 0.01 | 1.0178 | 2 | 200 | 200 |
| 10.3187 | 0.23 | 55 | 10.8828 | 0.01 | 1.0099 | 2 | 200 | 200 |
| 9.8042 | 0.25 | 60 | 10.6161 | 0.01 | 1.0040 | 2 | 200 | 200 |
| 9.8042 | 0.27 | 65 | 10.3539 | 0.01 | 0.9980 | 2 | 200 | 200 |
| 9.6489 | 0.3 | 70 | 10.0954 | 0.015 | 1.0 | 3 | 200 | 200 |
| 9.6489 | 0.32 | 75 | 9.8456 | 0.025 | 1.0 | 5 | 200 | 200 |
| 9.6112 | 0.34 | 80 | 9.5980 | 0.045 | 1.0 | 9 | 200 | 200 |
| 9.6112 | 0.36 | 85 | 9.3535 | 0.055 | 1.0 | 11 | 200 | 200 |
| 8.4257 | 0.38 | 90 | 9.1168 | 0.085 | 1.0 | 17 | 200 | 200 |
| 8.4257 | 0.4 | 95 | 8.8920 | 0.105 | 1.0 | 21 | 200 | 200 |
| 8.7311 | 0.42 | 100 | 8.6739 | 0.11 | 1.0 | 22 | 200 | 200 |
| 8.7311 | 0.44 | 105 | 8.4607 | 0.135 | 1.0 | 27 | 200 | 200 |
| 8.3653 | 0.46 | 110 | 8.2551 | 0.165 | 1.0 | 33 | 200 | 200 |
| 8.3653 | 0.48 | 115 | 8.0573 | 0.17 | 1.0 | 34 | 200 | 200 |
| 7.1342 | 0.51 | 120 | 7.8700 | 0.175 | 1.0 | 35 | 200 | 200 |
| 7.1342 | 0.53 | 125 | 7.6908 | 0.185 | 1.0 | 37 | 200 | 200 |
| 7.5411 | 0.55 | 130 | 7.5221 | 0.205 | 1.0 | 41 | 200 | 200 |
| 7.5411 | 0.57 | 135 | 7.3628 | 0.22 | 1.0 | 44 | 200 | 200 |
| 7.2449 | 0.59 | 140 | 7.2131 | 0.23 | 1.0 | 46 | 200 | 200 |
| 7.2449 | 0.61 | 145 | 7.0735 | 0.23 | 1.0 | 46 | 200 | 200 |
| 7.5166 | 0.63 | 150 | 6.9396 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.5166 | 0.65 | 155 | 6.8186 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.0016 | 0.67 | 160 | 6.7015 | 0.25 | 1.0 | 50 | 200 | 200 |
| 7.0016 | 0.7 | 165 | 6.5904 | 0.25 | 1.0 | 50 | 200 | 200 |
| 6.0715 | 0.72 | 170 | 6.4879 | 0.255 | 1.0 | 51 | 200 | 200 |
| 6.0715 | 0.74 | 175 | 6.3980 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.312 | 0.76 | 180 | 6.3198 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.312 | 0.78 | 185 | 6.2532 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.3694 | 0.8 | 190 | 6.1952 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.3694 | 0.82 | 195 | 6.1453 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.2196 | 0.84 | 200 | 6.0993 | 0.26 | 1.0 | 52 | 200 | 200 |
| 6.2196 | 0.86 | 205 | 6.0556 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.7131 | 0.89 | 210 | 6.0181 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.7131 | 0.91 | 215 | 5.9873 | 0.265 | 1.0 | 53 | 200 | 200 |
| 6.1827 | 0.93 | 220 | 5.9619 | 0.265 | 1.0 | 53 | 200 | 200 |
| 6.1827 | 0.95 | 225 | 5.9460 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.3823 | 0.97 | 230 | 5.9359 | 0.265 | 1.0 | 53 | 200 | 200 |
| 5.3823 | 0.99 | 235 | 5.9312 | 0.265 | 1.0 | 53 | 200 | 200 |

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2