# retrain_first1epoch
This model is a fine-tuned version of alexziweiwang/exp21-uaspeech-foundation on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 8.2238
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
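
For context, the Correct, Total, and Acc figures above are consistent with a simple ratio (48 / 200 = 0.24), and Wer is a standard word error rate. The sketch below only illustrates how such summary numbers are typically derived: the actual evaluation script is not part of this card, `jiwer` is used purely as an illustrative WER implementation, and the transcript pairs are placeholders.

```python
# Illustrative only: how the summary metrics above relate to one another.
# The real evaluation data and script are not included in this card.
import jiwer

correct, total = 48, 200
acc = correct / total  # 0.24, matching the reported Acc

# Placeholder transcripts; the real evaluation covered 200 reference/hypothesis
# pairs, over which the reported Wer of 1.0 was computed.
references = ["yes", "no"]
hypotheses = ["uh", "um"]
wer = jiwer.wer(references, hypotheses)  # 1.0 for these placeholders

print(f"acc={acc:.2f}  wer={wer:.2f}")
```
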
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
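
The values above map directly onto fields of `transformers.TrainingArguments`. A minimal sketch follows, assuming a single-device run (so the total train batch size of 8 comes from 2 × 4 gradient-accumulation steps); the output directory is a placeholder rather than the path used in the original run.

```python
# Hypothetical reconstruction of the configuration listed above; only the
# values shown in the list are taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="retrain_first1epoch",  # placeholder path
    learning_rate=9e-06,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,     # 2 * 4 = total train batch size of 8
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```
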
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|---|---|---|---|---|---|---|---|---|
No log | 0.02 | 5 | 13.8048 | 0.22 | 1.0237 | 44 | 200 | 200 |
12.2209 | 0.04 | 10 | 13.6869 | 0.22 | 1.0257 | 44 | 200 | 200 |
12.2209 | 0.06 | 15 | 13.5691 | 0.225 | 1.0296 | 45 | 200 | 200 |
12.4299 | 0.08 | 20 | 13.4590 | 0.24 | 1.0375 | 48 | 200 | 200 |
12.4299 | 0.11 | 25 | 13.3508 | 0.235 | 1.0395 | 47 | 200 | 200 |
11.0298 | 0.13 | 30 | 13.2241 | 0.25 | 1.0375 | 50 | 200 | 200 |
11.0298 | 0.15 | 35 | 13.0757 | 0.245 | 1.0336 | 49 | 200 | 200 |
10.5248 | 0.17 | 40 | 12.9277 | 0.245 | 1.0316 | 49 | 200 | 200 |
10.5248 | 0.19 | 45 | 12.7784 | 0.25 | 1.0316 | 50 | 200 | 200 |
10.8585 | 0.21 | 50 | 12.6346 | 0.25 | 1.0277 | 50 | 200 | 200 |
10.8585 | 0.23 | 55 | 12.4939 | 0.25 | 1.0277 | 50 | 200 | 200 |
10.7046 | 0.25 | 60 | 12.3472 | 0.25 | 1.0257 | 50 | 200 | 200 |
10.7046 | 0.27 | 65 | 12.1962 | 0.25 | 1.0237 | 50 | 200 | 200 |
10.8031 | 0.3 | 70 | 12.0537 | 0.25 | 1.0257 | 50 | 200 | 200 |
10.8031 | 0.32 | 75 | 11.9088 | 0.25 | 1.0237 | 50 | 200 | 200 |
10.859 | 0.34 | 80 | 11.7693 | 0.25 | 1.0257 | 50 | 200 | 200 |
10.859 | 0.36 | 85 | 11.6214 | 0.25 | 1.0198 | 50 | 200 | 200 |
9.7886 | 0.38 | 90 | 11.4699 | 0.25 | 1.0178 | 50 | 200 | 200 |
9.7886 | 0.4 | 95 | 11.3182 | 0.25 | 1.0138 | 50 | 200 | 200 |
10.4627 | 0.42 | 100 | 11.1609 | 0.25 | 1.0119 | 50 | 200 | 200 |
10.4627 | 0.44 | 105 | 11.0017 | 0.25 | 1.0138 | 50 | 200 | 200 |
10.0619 | 0.46 | 110 | 10.8520 | 0.25 | 1.0138 | 50 | 200 | 200 |
10.0619 | 0.48 | 115 | 10.7096 | 0.25 | 1.0138 | 50 | 200 | 200 |
8.7443 | 0.51 | 120 | 10.5629 | 0.25 | 1.0138 | 50 | 200 | 200 |
8.7443 | 0.53 | 125 | 10.4111 | 0.25 | 1.0119 | 50 | 200 | 200 |
9.675 | 0.55 | 130 | 10.2606 | 0.25 | 1.0119 | 50 | 200 | 200 |
9.675 | 0.57 | 135 | 10.1125 | 0.245 | 1.0119 | 49 | 200 | 200 |
9.1918 | 0.59 | 140 | 9.9708 | 0.24 | 1.0040 | 48 | 200 | 200 |
9.1918 | 0.61 | 145 | 9.8248 | 0.24 | 1.0040 | 48 | 200 | 200 |
9.6798 | 0.63 | 150 | 9.6785 | 0.24 | 1.0040 | 48 | 200 | 200 |
9.6798 | 0.65 | 155 | 9.5309 | 0.24 | 1.0040 | 48 | 200 | 200 |
9.0181 | 0.67 | 160 | 9.3867 | 0.24 | 1.0040 | 48 | 200 | 200 |
9.0181 | 0.7 | 165 | 9.2432 | 0.24 | 1.0040 | 48 | 200 | 200 |
7.7446 | 0.72 | 170 | 9.1053 | 0.24 | 1.0040 | 48 | 200 | 200 |
7.7446 | 0.74 | 175 | 8.9743 | 0.24 | 1.0040 | 48 | 200 | 200 |
8.0251 | 0.76 | 180 | 8.8538 | 0.24 | 1.0040 | 48 | 200 | 200 |
8.0251 | 0.78 | 185 | 8.7473 | 0.24 | 1.0020 | 48 | 200 | 200 |
7.9652 | 0.8 | 190 | 8.6516 | 0.24 | 1.0020 | 48 | 200 | 200 |
7.9652 | 0.82 | 195 | 8.5661 | 0.24 | 1.0020 | 48 | 200 | 200 |
7.9537 | 0.84 | 200 | 8.4887 | 0.24 | 1.0020 | 48 | 200 | 200 |
7.9537 | 0.86 | 205 | 8.4206 | 0.24 | 1.0 | 48 | 200 | 200 |
7.2889 | 0.89 | 210 | 8.3644 | 0.24 | 1.0 | 48 | 200 | 200 |
7.2889 | 0.91 | 215 | 8.3169 | 0.24 | 1.0 | 48 | 200 | 200 |
7.8974 | 0.93 | 220 | 8.2789 | 0.24 | 1.0 | 48 | 200 | 200 |
7.8974 | 0.95 | 225 | 8.2514 | 0.24 | 1.0 | 48 | 200 | 200 |
6.9118 | 0.97 | 230 | 8.2330 | 0.24 | 1.0 | 48 | 200 | 200 |
6.9118 | 0.99 | 235 | 8.2238 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
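
To reproduce the environment, the following sketch (not part of the original card) checks that the installed packages match the versions listed above; it assumes Python 3.8+ for `importlib.metadata`.

```python
# Optional sanity check (not from the original card): verify that the local
# environment matches the framework versions listed above.
from importlib.metadata import version

import torch

expected = {
    "transformers": "4.23.1",
    "datasets": "1.18.3",
    "tokenizers": "0.13.2",
}
for package, wanted in expected.items():
    installed = version(package)
    assert installed == wanted, f"{package}: expected {wanted}, found {installed}"

assert torch.__version__.startswith("1.12.1"), torch.__version__
```
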