# libri-alpha-1-Temp-1-processor-change

This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set (a short sketch of how the Wer metric is computed follows the list):

- Loss: 107.5632
- Wer: 0.1210
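
As a quick illustration of the Wer figure above, the snippet below computes word error rate with the `jiwer` library. The sentences are made-up examples, not data from this model's evaluation set.

```python
from jiwer import wer

# Word error rate = (substitutions + insertions + deletions) / reference words.
# Here "jumps" -> "jumped" is a substitution and "dog" is a deletion:
# 2 errors over 9 reference words.
reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over the lazy"
print(wer(reference, hypothesis))  # ~0.2222
```

A Wer of 0.1210 therefore means that roughly 12 of every 100 reference words are substituted, inserted, or deleted in the model's transcripts.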

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them appears after the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
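
For readers who want to reproduce this configuration, here is a minimal sketch of the corresponding `TrainingArguments` (Transformers 4.24.0). The `output_dir` name is an assumption, and everything outside this object (base model, dataset, processor) is not specified in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="libri-alpha-1-Temp-1-processor-change",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 32 * 2 = 64
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                      # Native AMP mixed-precision training
    evaluation_strategy="steps",    # the results table below logs every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```

The Adam betas and epsilon shown are the defaults in this Transformers version and already match the list above; they are written out only for clarity.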

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 800.4601      | 0.75  | 100  | 189.2537        | 0.1508 |
| 680.6167      | 1.49  | 200  | 151.4276        | 0.1445 |
| 596.8935      | 2.24  | 300  | 144.2152        | 0.1401 |
| 576.7275      | 2.99  | 400  | 135.5783        | 0.1354 |
| 558.4325      | 3.73  | 500  | 121.5539        | 0.1324 |
| 512.5202      | 4.48  | 600  | 118.9303        | 0.1302 |
| 480.8142      | 5.22  | 700  | 117.4402        | 0.1295 |
| 480.9125      | 5.97  | 800  | 113.6219        | 0.1266 |
| 425.4858      | 6.72  | 900  | 112.1535        | 0.1274 |
| 443.0105      | 7.46  | 1000 | 112.7184        | 0.1240 |
| 436.3363      | 8.21  | 1100 | 110.7517        | 0.1268 |
| 416.3612      | 8.96  | 1200 | 112.7510        | 0.1272 |
| 418.833       | 9.7   | 1300 | 108.8896        | 0.1208 |
| 417.0277      | 10.45 | 1400 | 108.8765        | 0.1208 |
| 432.6803      | 11.19 | 1500 | 108.7693        | 0.1213 |
| 378.0122      | 11.94 | 1600 | 107.9976        | 0.1217 |
| 407.0373      | 12.69 | 1700 | 107.8557        | 0.1211 |
| 398.8923      | 13.43 | 1800 | 107.5632        | 0.1210 |

### Framework versions

- Transformers 4.24.0
- PyTorch 1.12.1
- Datasets 2.7.0
- Tokenizers 0.11.0