# elderly_whisper-small-fr

This model is a fine-tuned version of [aviroes/whisper-small-fr](https://huggingface.co/aviroes/whisper-small-fr) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3006
- Wer: 0.3065
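
The following is a minimal inference sketch. It assumes the checkpoint is published on the Hugging Face Hub as `aviroes/elderly_whisper-small-fr` (the repo id is an assumption based on the model and author names) and that a local audio file `sample.wav` exists.

```python
# Minimal inference sketch; the repo id and audio path are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aviroes/elderly_whisper-small-fr",  # assumed Hub repo id
)

# Whisper checkpoints accept language/task hints via generate kwargs.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "french", "task": "transcribe"},
)
print(result["text"])
```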
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 6.25e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 5000
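
As a sketch, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir` and the evaluation cadence are assumptions (the results table below logs metrics every 100 steps), and the Adam betas/epsilon are the optimizer defaults, so they need no explicit arguments.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir and the eval/logging cadence are assumptions; the remaining
# values mirror the list above. Adam betas=(0.9, 0.999) and eps=1e-08 are
# the library defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./elderly_whisper-small-fr",  # assumed
    learning_rate=6.25e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=800,
    max_steps=5000,
    evaluation_strategy="steps",  # assumed from the 100-step eval table
    eval_steps=100,
    logging_steps=100,
)
```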
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5099        | 0.04  | 100  | 0.4949          | 0.2392 |
| 0.4968        | 0.08  | 200  | 0.4577          | 0.2922 |
| 0.4662        | 0.11  | 300  | 0.4336          | 0.2325 |
| 0.4241        | 0.15  | 400  | 0.4232          | 0.2545 |
| 0.3902        | 0.19  | 500  | 0.4073          | 0.3009 |
| 0.4205        | 0.23  | 600  | 0.3978          | 0.2672 |
| 0.4           | 0.27  | 700  | 0.3798          | 0.2473 |
| 0.3508        | 0.3   | 800  | 0.3860          | 0.2218 |
| 0.3601        | 0.34  | 900  | 0.3870          | 0.2509 |
| 0.3147        | 0.38  | 1000 | 0.3663          | 0.2983 |
| 0.3194        | 0.42  | 1100 | 0.3637          | 0.2285 |
| 0.3218        | 0.46  | 1200 | 0.3616          | 0.2361 |
| 0.3365        | 0.5   | 1300 | 0.3555          | 0.2091 |
| 0.3474        | 0.53  | 1400 | 0.3560          | 0.2075 |
| 0.3439        | 0.57  | 1500 | 0.3490          | 0.2228 |
| 0.3254        | 0.61  | 1600 | 0.3432          | 0.1892 |
| 0.3089        | 0.65  | 1700 | 0.3426          | 0.1979 |
| 0.3577        | 0.69  | 1800 | 0.3383          | 0.1897 |
| 0.325         | 0.72  | 1900 | 0.3402          | 0.1871 |
| 0.2855        | 0.76  | 2000 | 0.3350          | 0.2040 |
| 0.3012        | 0.8   | 2100 | 0.3309          | 0.3121 |
| 0.3677        | 0.84  | 2200 | 0.3313          | 0.2040 |
| 0.3208        | 0.88  | 2300 | 0.3301          | 0.2917 |
| 0.3459        | 0.91  | 2400 | 0.3248          | 0.2973 |
| 0.2694        | 0.95  | 2500 | 0.3146          | 0.1866 |
| 0.3347        | 0.99  | 2600 | 0.3141          | 0.1953 |
| 0.1851        | 1.03  | 2700 | 0.3159          | 0.1943 |
| 0.1691        | 1.07  | 2800 | 0.3143          | 0.1856 |
| 0.1861        | 1.1   | 2900 | 0.3135          | 0.3039 |
| 0.1525        | 1.14  | 3000 | 0.3136          | 0.3320 |
| 0.165         | 1.18  | 3100 | 0.3124          | 0.2126 |
| 0.1421        | 1.22  | 3200 | 0.3161          | 0.3565 |
| 0.1676        | 1.26  | 3300 | 0.3180          | 0.2050 |
| 0.1719        | 1.3   | 3400 | 0.3157          | 0.1984 |
| 0.1863        | 1.33  | 3500 | 0.3173          | 0.3080 |
| 0.1499        | 1.37  | 3600 | 0.3102          | 0.2438 |
| 0.1599        | 1.41  | 3700 | 0.3096          | 0.2055 |
| 0.1762        | 1.45  | 3800 | 0.3070          | 0.3157 |
| 0.1641        | 1.49  | 3900 | 0.3052          | 0.2529 |
| 0.1387        | 1.52  | 4000 | 0.3071          | 0.2009 |
| 0.1662        | 1.56  | 4100 | 0.3077          | 0.2040 |
| 0.1715        | 1.6   | 4200 | 0.3050          | 0.2315 |
| 0.1584        | 1.64  | 4300 | 0.3031          | 0.2004 |
| 0.1563        | 1.68  | 4400 | 0.3019          | 0.2035 |
| 0.1515        | 1.71  | 4500 | 0.3020          | 0.2101 |
| 0.1582        | 1.75  | 4600 | 0.3021          | 0.3075 |
| 0.1534        | 1.79  | 4700 | 0.3013          | 0.1989 |
| 0.1699        | 1.83  | 4800 | 0.3012          | 0.3055 |
| 0.1503        | 1.87  | 4900 | 0.3009          | 0.3055 |
| 0.14          | 1.9   | 5000 | 0.3006          | 0.3065 |
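
Wer above is word error rate, reported as a fraction (0.3065 ≈ 30.65%). A minimal sketch of computing it with the `evaluate` library follows; the card does not state which implementation produced these numbers, so the library choice and the example strings are illustrative assumptions.

```python
# WER sketch with the `evaluate` library (an assumption; the card does not
# say how the metric was computed). WER = (S + I + D) / reference word count.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["bonjour tout le monde"]   # hypothetical model output
references = ["bonjour à tout le monde"]  # hypothetical reference

print(wer_metric.compute(predictions=predictions, references=references))
# -> 0.2 (one deletion out of five reference words)
```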
### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3