# wav2vec2-large-xls-r-300m-urdu-colab-cv8

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 1.4651
- Wer: 0.7
## Model description

More information needed
## Intended uses & limitations

More information needed
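No usage details were provided with this card. As a starting point, here is a hedged sketch of loading the checkpoint for Urdu speech recognition via the `transformers` pipeline API; the repository id passed to `model=` is an assumption based on this card's title, and imports are deferred so the sketch only requires `transformers` (plus an audio backend such as `ffmpeg`) when actually called:

```python
def transcribe(audio_path: str) -> str:
    """Transcribe an Urdu audio file with this fine-tuned wav2vec2 checkpoint.

    Imports happen inside the function so merely defining this sketch
    does not require transformers to be installed.
    """
    from transformers import pipeline

    # NOTE: the model id below is assumed from this card's title; replace it
    # with the actual Hub repository path (e.g. "<user>/wav2vec2-large-...").
    asr = pipeline(
        "automatic-speech-recognition",
        model="wav2vec2-large-xls-r-300m-urdu-colab-cv8",
    )
    return asr(audio_path)["text"]
```

Given the evaluation WER of 0.7, transcriptions from this checkpoint should be treated as rough drafts rather than production-quality output.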
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
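As a cross-check on the list above, the total train batch size (32) is the per-device batch size multiplied by the gradient accumulation steps. A minimal sketch mirroring the reported values as a plain dictionary; the key names are chosen to match `transformers.TrainingArguments`, which is an assumption for illustration:

```python
# Hyperparameters as reported in this card; key names mirror
# transformers.TrainingArguments (assumed naming, for illustration only).
training_args = {
    "learning_rate": 3e-4,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_steps": 500,
    "num_train_epochs": 40,
    "fp16": True,  # "Native AMP" mixed-precision training
}

# total_train_batch_size = per-device batch size * accumulation steps
effective_batch_size = (
    training_args["per_device_train_batch_size"]
    * training_args["gradient_accumulation_steps"]
)
print(effective_batch_size)  # → 32
```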
### Training results

Training Loss | Epoch | Step | Validation Loss | Wer |
---|---|---|---|---|
20.3271 | 1.27 | 32 | 20.3487 | 1.0 |
11.0206 | 2.55 | 64 | 7.7343 | 1.0 |
5.8023 | 3.82 | 96 | 5.4188 | 1.0 |
4.5872 | 5.12 | 128 | 4.1428 | 1.0 |
3.6691 | 6.39 | 160 | 3.4557 | 1.0 |
3.3143 | 7.67 | 192 | 3.2663 | 1.0 |
3.1689 | 8.94 | 224 | 3.1022 | 0.9982 |
3.1472 | 10.24 | 256 | 3.0544 | 0.9993 |
3.1091 | 11.51 | 288 | 3.0327 | 0.9978 |
3.0437 | 12.78 | 320 | 3.0288 | 1.0 |
2.9981 | 14.08 | 352 | 2.8645 | 1.0 |
2.5244 | 15.35 | 384 | 2.0238 | 0.9686 |
1.4962 | 16.63 | 416 | 1.5885 | 0.9118 |
1.0138 | 17.9 | 448 | 1.3656 | 0.8155 |
0.7655 | 19.2 | 480 | 1.4592 | 0.8125 |
0.6267 | 20.47 | 512 | 1.4170 | 0.7867 |
0.5127 | 21.75 | 544 | 1.3200 | 0.7716 |
0.4422 | 23.04 | 576 | 1.4082 | 0.7727 |
0.3482 | 24.31 | 608 | 1.3932 | 0.7432 |
0.3128 | 25.59 | 640 | 1.4059 | 0.7432 |
0.2762 | 26.86 | 672 | 1.4689 | 0.7336 |
0.2451 | 28.16 | 704 | 1.4318 | 0.7207 |
0.2104 | 29.43 | 736 | 1.4304 | 0.7399 |
0.1858 | 30.71 | 768 | 1.4586 | 0.7225 |
0.1779 | 31.98 | 800 | 1.4948 | 0.7284 |
0.1546 | 33.27 | 832 | 1.4960 | 0.7173 |
0.1457 | 34.55 | 864 | 1.4949 | 0.7077 |
0.1333 | 35.82 | 896 | 1.4656 | 0.7085 |
0.1212 | 37.12 | 928 | 1.5061 | 0.7033 |
0.1162 | 38.39 | 960 | 1.4653 | 0.7055 |
0.1043 | 39.67 | 992 | 1.4651 | 0.7 |
### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1