# wav2vec2-live-japanese-jdrt-30epochs-domo
This model is a fine-tuned version of [ttop324/wav2vec2-live-japanese](https://huggingface.co/ttop324/wav2vec2-live-japanese) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4493
- Wer: 0.5144
- Cer: 0.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
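No usage details are provided yet. As a placeholder, the following is a minimal transcription sketch with the `transformers` API; it assumes the checkpoint is published on the Hub (the repo id below is hypothetical) and that input audio is 16 kHz mono:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical repo id; replace with the actual location of this checkpoint.
model_id = "<user>/wav2vec2-live-japanese-jdrt-30epochs-domo"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample the audio to 16 kHz, as expected by wav2vec2.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```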
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2519
- num_epochs: 30
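For reference, these settings correspond roughly to the `TrainingArguments` sketch below. The actual training script is not part of this card; `output_dir` is a placeholder, and the optimizer fields fall back to the Adam defaults listed above.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above; not the original training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-live-japanese-jdrt-30epochs-domo",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=4,
    gradient_accumulation_steps=32,   # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=2519,
    num_train_epochs=30,
    # Adam defaults: betas=(0.9, 0.999), epsilon=1e-08
)
```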
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
13.274 | 1.0 | 503 | 13.9895 | 1.0284 | 0.9995 |
3.3113 | 2.0 | 1007 | 3.3197 | 1.0 | 1.0 |
3.1656 | 3.0 | 1511 | 3.0974 | 1.0 | 1.0 |
2.8601 | 4.0 | 2015 | 2.7746 | 0.9201 | 0.9580 |
2.6907 | 5.0 | 2519 | 2.5402 | 0.7862 | 0.7523 |
2.55 | 6.0 | 3023 | 2.3554 | 0.7553 | 0.7248 |
2.4216 | 7.0 | 3527 | 2.2204 | 0.7349 | 0.7131 |
2.3158 | 8.0 | 4031 | 1.9821 | 0.6394 | 0.5287 |
2.2588 | 9.0 | 4535 | 1.8560 | 0.5913 | 0.4918 |
2.1482 | 10.0 | 5039 | 1.7875 | 0.5845 | 0.4864 |
2.1606 | 11.0 | 5543 | 1.7326 | 0.5736 | 0.4836 |
2.1086 | 12.0 | 6047 | 1.6948 | 0.5687 | 0.4839 |
2.0778 | 13.0 | 6551 | 1.6684 | 0.5656 | 0.4891 |
2.0279 | 14.0 | 7055 | 1.6289 | 0.5560 | 0.4853 |
2.0237 | 15.0 | 7559 | 1.6029 | 0.5508 | 0.4753 |
1.999 | 16.0 | 8063 | 1.5888 | 0.5470 | 0.4862 |
2.033 | 17.0 | 8567 | 1.5589 | 0.5395 | 0.4853 |
1.9862 | 18.0 | 9071 | 1.5430 | 0.5350 | 0.4791 |
1.9313 | 19.0 | 9575 | 1.5241 | 0.5321 | 0.4781 |
1.9242 | 20.0 | 10079 | 1.5073 | 0.5265 | 0.4744 |
1.9603 | 21.0 | 10583 | 1.4961 | 0.5224 | 0.4716 |
1.9423 | 22.0 | 11087 | 1.4864 | 0.5207 | 0.4690 |
2.0109 | 23.0 | 11591 | 1.4758 | 0.5195 | 0.4714 |
1.9952 | 24.0 | 12095 | 1.4690 | 0.5178 | 0.4711 |
1.9367 | 25.0 | 12599 | 1.4607 | 0.5155 | 0.4690 |
1.9318 | 26.0 | 13103 | 1.4564 | 0.5162 | 0.4713 |
1.9309 | 27.0 | 13607 | 1.4505 | 0.5144 | 0.4703 |
1.8649 | 28.0 | 14111 | 1.4521 | 0.5137 | 0.4716 |
1.9382 | 29.0 | 14615 | 1.4494 | 0.5148 | 0.4717 |
1.9489 | 29.94 | 15090 | 1.4493 | 0.5144 | 0.4721 |
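The Wer and Cer columns are the standard word and character error rates (edit distance over the number of reference words or characters). A minimal sketch of computing them with the `evaluate` library, using placeholder transcripts in place of the actual eval-set decodes:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts; in practice these come from decoding the eval set.
# Note that WER for Japanese depends on how the text is segmented into words.
references = ["きょう は いい てんき です"]
predictions = ["きょう は いい てんき です"]

wer = wer_metric.compute(predictions=predictions, references=references)
cer = cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```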
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.0+cu116
- Datasets 2.12.0
- Tokenizers 0.13.3