# whisper_input_decoder_no_lob__0015
This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 3.9595
- Train Accuracy: 0.0138
- Train Wermet: 0.7012
- Validation Loss: 3.1493
- Validation Accuracy: 0.0132
- Validation Wermet: 0.7718
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
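As a hedged sketch (not the exact training script, which is not included in this card), the logged optimizer configuration above corresponds to constructing Transformers' TF `AdamWeightDecay` optimizer like so:

```python
# Sketch only: recreates the optimizer from the hyperparameters logged above,
# assuming the transformers TF utility class AdamWeightDecay.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,       # 'learning_rate' from the logged config
    weight_decay_rate=0.01,    # 'weight_decay_rate'
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```

The model would then be compiled with this optimizer before calling `model.fit(...)` in float32 precision, matching the `training_precision` entry.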
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----:|:-----:|
| 5.4122 | 0.0107 | 0.9328 | 3.9759 | 0.0114 | 0.9606 | 0 |
| 4.7176 | 0.0116 | 0.8683 | 3.9404 | 0.0114 | 0.9334 | 1 |
| 4.6750 | 0.0117 | 0.8478 | 3.9211 | 0.0115 | 0.9237 | 2 |
| 4.6511 | 0.0117 | 0.8413 | 3.8864 | 0.0115 | 0.9331 | 3 |
| 4.6294 | 0.0118 | 0.8270 | 3.8729 | 0.0115 | 0.9228 | 4 |
| 4.6134 | 0.0118 | 0.8199 | 3.8690 | 0.0114 | 0.9451 | 5 |
| 4.5980 | 0.0118 | 0.8102 | 3.8491 | 0.0115 | 0.9152 | 6 |
| 4.5759 | 0.0119 | 0.7890 | 3.8366 | 0.0116 | 0.8691 | 7 |
| 4.5518 | 0.0120 | 0.7694 | 3.8081 | 0.0116 | 0.9013 | 8 |
| 4.5219 | 0.0121 | 0.7591 | 3.7734 | 0.0118 | 0.8383 | 9 |
| 4.4761 | 0.0122 | 0.7400 | 3.7156 | 0.0120 | 0.8125 | 10 |
| 4.4139 | 0.0125 | 0.7257 | 3.6311 | 0.0121 | 0.8188 | 11 |
| 4.3113 | 0.0128 | 0.7127 | 3.5089 | 0.0124 | 0.8008 | 12 |
| 4.1608 | 0.0132 | 0.7088 | 3.3587 | 0.0127 | 0.7742 | 13 |
| 3.9595 | 0.0138 | 0.7012 | 3.1493 | 0.0132 | 0.7718 | 14 |
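From the table above, the relative improvement in the validation Wermet metric over the 15 epochs can be computed as:

```python
# Relative validation-Wermet improvement from epoch 0 to epoch 14,
# using the values logged in the training-results table.
wermet_epoch0 = 0.9606
wermet_epoch14 = 0.7718

relative_improvement = (wermet_epoch0 - wermet_epoch14) / wermet_epoch0
print(f"{relative_improvement:.1%}")  # prints 19.7%
```

Note that both losses and the Wermet metric are still decreasing at epoch 14, suggesting training had not yet converged at this checkpoint.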
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3