# ws_whisper_small_distil_att_loss_mozilla_epochs_50_batch_2_try3
This model is a fine-tuned version of rohitp1/kkkh_whisper_small_distillation_att_loss_mozilla_epochs_100_batch_4_concat_dataset on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 2.8151
- Wer: 19.8454
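As a quick way to try the checkpoint, the snippet below is a minimal inference sketch using the `transformers` pipeline API. The repository ID and audio path are assumptions; adjust them to wherever this checkpoint is actually hosted.

```python
import torch
from transformers import pipeline

# Hypothetical repository ID; replace with the actual location of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/ws_whisper_small_distil_att_loss_mozilla_epochs_50_batch_2_try3",
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file (placeholder path).
result = asr("sample.wav")
print(result["text"])
```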
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an approximate reconstruction as `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 1024
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 50
- mixed_precision_training: Native AMP
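For reference, the list above maps roughly onto the following `Seq2SeqTrainingArguments`. This is a reconstruction, not the original training script; the output directory and the evaluation cadence are assumptions (the results table below suggests evaluation every 100 steps).

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the configuration listed above.
# output_dir, evaluation_strategy, and eval_steps are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="ws_whisper_small_distil_att_loss_mozilla_epochs_50_batch_2_try3",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=1024,  # effective batch size: 2 * 1024 = 2048
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=50,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",       # assumption, per the results table
    eval_steps=100,
)
```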
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.147         | 1.46  | 100  | 1.6545          | 16.9156 |
| 1.1459        | 2.93  | 200  | 1.6637          | 16.9591 |
| 1.1433        | 4.39  | 300  | 1.6806          | 16.9069 |
| 1.4547        | 5.85  | 400  | 2.3254          | 20.3259 |
| 2.4311        | 7.31  | 500  | 3.9239          | 19.4764 |
| 2.5513        | 8.78  | 600  | 2.7875          | 19.4555 |
| 2.2862        | 10.24 | 700  | 3.7611          | 19.7410 |
| 2.3854        | 11.7  | 800  | 2.8151          | 19.8454 |
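The Wer column reports word error rate as a percentage. A minimal sketch of how such a score can be computed with the `evaluate` library (the transcripts here are illustrative, not data from this run):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative transcripts; in practice these come from model decoding.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# compute() returns a fraction; multiply by 100 for percent, as in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```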
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.11.0