# wav2vec2-demo-F04
This model is a fine-tuned version of [yip-i/uaspeech-pretrained](https://huggingface.co/yip-i/uaspeech-pretrained) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 4.4557
- Wer: 1.0985
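Note that the reported WER is above 1.0. WER is the word-level edit distance divided by the number of reference words, so insertions and substitutions can push it past 100%. A minimal sketch of the metric (a plain dynamic-programming implementation, not the exact library the Trainer used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

# Insertions alone can drive WER above 1.0:
print(wer("cat", "the cat sat"))  # 2 insertions / 1 reference word = 2.0
```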
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
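The `linear` scheduler above warms the learning rate up over the first 1000 steps, then decays it linearly toward zero. A small sketch of that schedule (the total step count of ~16,800 is inferred from the results table below, not stated in the config):

```python
def linear_schedule_lr(step: int,
                       base_lr: float = 1e-5,
                       warmup_steps: int = 1000,
                       total_steps: int = 16800) -> float:
    """Linear warmup to base_lr, then linear decay to 0.

    Mirrors the 'linear' lr_scheduler_type with 1000 warmup steps;
    total_steps is an estimate from the table (30 epochs, ~560 steps each).
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500))    # mid-warmup: 5e-06
print(linear_schedule_lr(1000))   # peak: 1e-05
```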
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 16.8788       | 0.89  | 500   | 3.6172          | 1.0    |
| 3.0484        | 1.79  | 1000  | 3.3653          | 1.0    |
| 3.0178        | 2.68  | 1500  | 3.3402          | 1.0    |
| 3.182         | 3.57  | 2000  | 3.1676          | 1.0103 |
| 3.0374        | 4.46  | 2500  | 3.5767          | 1.2914 |
| 2.8118        | 5.36  | 3000  | 3.1389          | 1.0444 |
| 2.8424        | 6.25  | 3500  | 3.1171          | 1.1454 |
| 2.8194        | 7.14  | 4000  | 3.1267          | 1.2464 |
| 2.8052        | 8.04  | 4500  | 3.2637          | 1.0918 |
| 2.7835        | 8.93  | 5000  | 3.3412          | 1.1052 |
| 2.7794        | 9.82  | 5500  | 3.4910          | 1.2220 |
| 2.7405        | 10.71 | 6000  | 3.1507          | 1.2451 |
| 2.7518        | 11.61 | 6500  | 3.5342          | 1.1618 |
| 2.7461        | 12.5  | 7000  | 3.7598          | 1.2768 |
| 2.7315        | 13.39 | 7500  | 3.7623          | 1.2220 |
| 2.7203        | 14.29 | 8000  | 4.1022          | 1.0730 |
| 2.6901        | 15.18 | 8500  | 3.6616          | 1.2914 |
| 2.7152        | 16.07 | 9000  | 3.7305          | 1.2488 |
| 2.7036        | 16.96 | 9500  | 3.6997          | 1.1454 |
| 2.6938        | 17.86 | 10000 | 4.9800          | 1.0365 |
| 2.6962        | 18.75 | 10500 | 4.3985          | 1.1813 |
| 2.6801        | 19.64 | 11000 | 5.2335          | 1.1910 |
| 2.6695        | 20.54 | 11500 | 4.4297          | 1.0432 |
| 2.6762        | 21.43 | 12000 | 4.7141          | 1.1612 |
| 2.6833        | 22.32 | 12500 | 4.6789          | 1.0578 |
| 2.6688        | 23.21 | 13000 | 4.2029          | 1.1971 |
| 2.6717        | 24.11 | 13500 | 4.3582          | 1.1606 |
| 2.6414        | 25.0  | 14000 | 4.3469          | 1.2859 |
| 2.6585        | 25.89 | 14500 | 4.4786          | 1.0517 |
| 2.6379        | 26.79 | 15000 | 4.1083          | 1.1800 |
| 2.6453        | 27.68 | 15500 | 4.5773          | 1.0365 |
| 2.6588        | 28.57 | 16000 | 4.5645          | 1.1381 |
| 2.6289        | 29.46 | 16500 | 4.4557          | 1.0985 |
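Wav2Vec2 models like this one carry a CTC head, and the WER values above come from greedy CTC decoding of the per-frame predictions: take the argmax token at each frame, collapse consecutive repeats, and drop the blank token. A minimal sketch of that collapse step (token ids and the blank id are illustrative, not this model's actual vocabulary):

```python
def ctc_greedy_decode(frame_ids, blank_id=0):
    """Collapse consecutive repeats, then drop blanks (standard CTC greedy decoding)."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# Repeats collapse unless a blank separates them:
print(ctc_greedy_decode([1, 1, 1, 2, 2]))     # [1, 2]
print(ctc_greedy_decode([1, 1, 0, 1, 2]))     # [1, 1, 2]
```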
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2