<style> img { display: inline; } </style>

Fine-tuned German whisper-large-v2 model for CTranslate2

This repository contains the bofenghuang/whisper-large-v2-cv11-german model converted to the CTranslate2 format, so it can be used in CTranslate2-based projects such as faster-whisper.

Usage

from faster_whisper import WhisperModel
from huggingface_hub import snapshot_download

downloaded_model_path = snapshot_download(repo_id="bofenghuang/whisper-large-v2-cv11-german-ct2")

# Run on GPU with FP16
model = WhisperModel(downloaded_model_path, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(downloaded_model_path, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(downloaded_model_path, device="cpu", compute_type="int8")

segments, info = model.transcribe("./sample.wav", beam_size=1)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

You can also use the following Google Colab notebook to run inference with the converted Whisper models.

<a href="https://colab.research.google.com/#fileId=https://huggingface.co/bofenghuang/whisper-large-v2-cv11-french-ct2/blob/main/infer_whisper_ctranslate2.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Conversion

The original model was converted with the following command:

ct2-transformers-converter --model bofenghuang/whisper-large-v2-cv11-german --output_dir bofenghuang/whisper-large-v2-cv11-german-ct2 --quantization float16
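If you prefer to run the conversion from Python instead of the CLI, CTranslate2 exposes an equivalent converter class. The sketch below is an assumption-labeled example, not part of the original card: it requires the ctranslate2 and transformers packages, and the local output directory name is arbitrary.

# Programmatic equivalent of the ct2-transformers-converter command above.
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("bofenghuang/whisper-large-v2-cv11-german")
converter.convert(
    "whisper-large-v2-cv11-german-ct2",  # hypothetical local output directory
    quantization="float16",
)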