# dgx2_whisper_base_distil_att_loss_mozilla_epochs_40_batch_16_concat_dataset
This model is a fine-tuned version of [rohitp1/dgx1_whisper_base_finetune_teacher_babble_noise_mozilla_100_epochs_batch_16](https://huggingface.co/rohitp1/dgx1_whisper_base_finetune_teacher_babble_noise_mozilla_100_epochs_batch_16) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.1315
- Wer: 30.6873
## Model description
More information needed
## Intended uses & limitations
More information needed
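
No usage guidance is included in the card, but since this is a distilled Whisper base model for automatic speech recognition, a minimal inference sketch might look like the following. The repo id is an assumption inferred from the card title and the teacher model's namespace, and `sample.wav` is a placeholder audio file:

```python
from transformers import pipeline

# Assumed repo id, taken from the card title and the teacher's "rohitp1"
# namespace; adjust if the model is hosted elsewhere.
asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/dgx2_whisper_base_distil_att_loss_mozilla_epochs_40_batch_16_concat_dataset",
)

# "sample.wav" is a placeholder; 16 kHz mono audio works best with Whisper.
print(asr("sample.wav")["text"])
```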
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP
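
For orientation, the list above maps roughly onto `transformers` `TrainingArguments` as sketched below. The training script itself is not part of this card, so the exact arguments used may differ; the Adam betas and epsilon shown in the list are the library defaults:

```python
from transformers import TrainingArguments

# A sketch reconstructing the listed hyperparameters; not the actual script.
training_args = TrainingArguments(
    output_dir="dgx2_whisper_base_distil_att_loss_mozilla_epochs_40_batch_16_concat_dataset",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=128,  # 16 * 128 = 2048 total train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.2,                 # 20% of training steps used for warmup
    num_train_epochs=40,
    fp16=True,                        # "Native AMP" mixed precision
)
```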
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.182         | 2.2   | 150  | 2.1102          | 31.0198 |
| 2.0124        | 4.41  | 300  | 2.0539          | 30.7134 |
| 1.9833        | 6.61  | 450  | 2.0528          | 30.7099 |
| 1.9666        | 8.82  | 600  | 2.0768          | 30.6490 |
| 1.9756        | 11.03 | 750  | 2.0959          | 30.6699 |
| 1.9659        | 13.23 | 900  | 2.1110          | 30.6612 |
| 1.9583        | 15.44 | 1050 | 2.1150          | 30.6403 |
| 1.9524        | 17.64 | 1200 | 2.1191          | 30.6507 |
| 1.9461        | 19.85 | 1350 | 2.1222          | 30.6716 |
| 1.9448        | 22.06 | 1500 | 2.1275          | 30.6890 |
| 1.9393        | 24.26 | 1650 | 2.1297          | 30.6646 |
| 1.935         | 26.47 | 1800 | 2.1315          | 30.6873 |
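
The Wer column above is the word error rate expressed as a percentage. For reference, a score like this can be computed with the `evaluate` package (not pinned in the versions below); the strings here are toy placeholders, not data from the actual evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcripts; the real evaluation used the held-out set.
references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jumped over a lazy dog"]

# evaluate returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}")
```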
### Framework versions
- Transformers 4.25.1
- PyTorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
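
To check that a local environment matches these pins, a quick sanity check in Python:

```python
import datasets
import tokenizers
import torch
import transformers

# Expected versions from the list above.
expected = {
    "transformers": (transformers.__version__, "4.25.1"),
    "torch": (torch.__version__, "1.12.1"),
    "datasets": (datasets.__version__, "2.8.0"),
    "tokenizers": (tokenizers.__version__, "0.13.2"),
}
for name, (installed, pinned) in expected.items():
    status = "OK" if installed.startswith(pinned) else f"mismatch (pinned {pinned})"
    print(f"{name} {installed}: {status}")
```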