
Whisper-large-et

This is an openai/whisper-large-v2 model finetuned on around 1200 hours of diverse Estonian data.

Model description

This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.

Intended uses & limitations

This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.

How to use

Recommended: use faster-whisper.

For example:

Limitations and bias

Since this model was trained mostly on broadcast speech and texts from the web, it might have problems correctly decoding speech from domains that are underrepresented in the training data.

Training data

Acoustic training data:

Type                   Amount (h)
Broadcast speech       991
Spontaneous speech     53
Elderly speech corpus  53
Talks, lectures        49
Parliament speeches    31
Total                  1161

Training procedure

Finetuned using ESPnet and then converted to the transformers format using this script. The finetuning procedure is similar to that of this model. Finetuning was done for 3 epochs, with model averaging at the end of training.

Update: the 2023-10-03 version of the model is trained on long segments (like the original Whisper model) and is therefore especially well suited for transcribing long recordings "end-to-end" (i.e., without any prior segmentation), e.g. with faster-whisper.

Evaluation results

WER

WER results below are obtained using greedy decoding (i.e., beam size 1).

Dataset            WER (%)
Common Voice 8.0   11.3
Common Voice 11.0  12.0
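WER is word-level edit distance divided by the number of reference words. A minimal self-contained implementation (not the evaluation script used for the numbers above) looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming table over the hypothesis words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # d[i-1][j-1] from the previous row
        d[0] = i             # distance of ref[:i] vs empty hypothesis
        for j, h in enumerate(hyp, 1):
            cur = d[j]       # d[i-1][j] before overwriting
            d[j] = min(d[j] + 1,            # deletion
                       d[j - 1] + 1,        # insertion
                       prev + (r != h))     # substitution / match
            prev = cur
    return d[-1] / len(ref)
```

With greedy decoding (beam size 1), the reported scores correspond to `wer(reference, hypothesis)` averaged over the test set, expressed as a percentage.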