Tags: pytorch, audio, speech, automatic-speech-recognition, whisper, wav2vec2

Versions:

Model Benchmarks:

Model Error Benchmarks:

Hindi to Hindi (test.tsv), Common Voice 14.0

Tested on an RTX 3060 with 2557 samples.

| Model | Time | WER | MER | WIL | WIP | CER |
|---|---|---|---|---|---|---|
| Original_Model | 54 min | 52.02 | 47.86 | 66.82 | 33.17 | 23.76 |
| This_Model | 38 min | 54.97 | 47.86 | 66.83 | 33.16 | 30.23 |

Hindi to English (test.csv), Custom Dataset

Tested on an RTX 3060 with 1000 samples.

| Model | Time | WER | MER | WIL | WIP | CER |
|---|---|---|---|---|---|---|
| Original_Model | 30 min | - | - | - | - | - |
| This_Model | 20 min | - | - | - | - | - |

English (LibriSpeech -> test-clean)

Tested on an RTX 3060 with __ samples.

| Model | WER | MER | WIL | WIP | CER |
|---|---|---|---|---|---|
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |

English (LibriSpeech -> test-other)

Tested on an RTX 3060 with __ samples.

| Model | WER | MER | WIL | WIP | CER |
|---|---|---|---|---|---|
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |
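For context on the metrics above: WER (word error rate) is the word-level edit distance between the reference transcript and the hypothesis, divided by the number of reference words; CER is the same computed over characters. The benchmarks above were presumably produced with a metrics library (e.g. jiwer — an assumption, the source does not name one), but the core calculation can be sketched in a few lines:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming table for the Levenshtein distance over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / len(r)

# One deleted word out of four reference words -> WER = 0.25
print(wer("the quick brown fox", "the quick fox"))  # 0.25
```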

Code for conversion:
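This section is empty in the source; the exact conversion script is not shown. A minimal sketch of the usual fp16-casting pattern in PyTorch follows — demonstrated on a small stand-in module so it runs without downloading the multi-gigabyte checkpoint. For the real model, the same `.half()` cast (or `torch_dtype=torch.float16` at load time) would be applied to `WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")` before saving with `save_pretrained`:

```python
import torch
from torch import nn

# Hypothetical sketch (not the exact script used for this repo):
# a fp32 module is cast to half precision in place with .half(),
# which converts every parameter and buffer to torch.float16.
net = nn.Linear(80, 80)
assert next(net.parameters()).dtype == torch.float32  # fp32 by default

net = net.half()  # cast all weights to fp16
assert next(net.parameters()).dtype == torch.float16

# For the actual model, the equivalent would be roughly:
#   model = WhisperForConditionalGeneration.from_pretrained(
#       "openai/whisper-medium", torch_dtype=torch.float16)
#   model.save_pretrained("whisper_medium_fp16_transformers")
```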

Usage

This repo contains an `__init__.py` file with all the code needed to use this model.

First, clone this repo and place all of its files inside a folder.

Make sure you have git-lfs installed (https://git-lfs.com)

```bash
git lfs install
git clone https://huggingface.co/devasheeshG/whisper_medium_fp16_transformers
```

It is recommended to try this in a Jupyter notebook:

```python
# Import the model
from whisper_medium_fp16_transformers import Model, load_audio, pad_or_trim

# Initialize the model
model = Model(
    model_name_or_path='whisper_medium_fp16_transformers',
    cuda_visible_device='0',
    device='cuda',
)

# Load and preprocess the audio
audio = load_audio('whisper_medium_fp16_transformers/test.wav')
audio = pad_or_trim(audio)

# Transcribe (the first transcription takes longer, since the model is loaded on first use)
model.transcribe(audio)
```

Credits

This is the fp16 version of openai/whisper-medium.