Tags: audio, automatic-speech-recognition, hf-asr-leaderboard, ken-lm, robust-speech-event, speech

wav2vec2-xls-r-1b-ft-cy

Fine-tuned from facebook/wav2vec2-xls-r-1b on the Welsh Common Voice version 9 dataset.

Source code and scripts for training the acoustic and KenLM language models, as well as examples of inference for transcription and of running a self-hosted API service, can be found at https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy.

Usage

The wav2vec2-xls-r-1b-ft-cy (acoustic) model can be used directly (without a language model) as follows:

import torch
import librosa

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-xls-r-1b-ft-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xls-r-1b-ft-cy")

# audio_file is the path to your recording; resample to the 16 kHz expected by the model
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# greedy (argmax) decoding over the CTC output
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))

Using the Language Model

See https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/releases/tag/22.06 for more details and examples of using a KenLM language model with the Parlance PyTorch CTC decoder bindings library: https://github.com/parlance/ctcdecode
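
As a rough illustration of that setup, the sketch below reuses the processor, model and logits from the Usage example and decodes with ctcdecode's CTCBeamDecoder backed by a KenLM binary. The language-model path ("lm.binary") and the alpha, beta and beam_width values are placeholders, not the settings used in the linked release.

from ctcdecode import CTCBeamDecoder

# Vocabulary tokens in index order; wav2vec2 uses "|" as the word delimiter,
# so it is mapped back to a space after decoding.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = CTCBeamDecoder(
    labels,
    model_path="lm.binary",   # placeholder path to the Welsh KenLM binary
    alpha=0.5,                # illustrative LM weight
    beta=1.0,                 # illustrative word-insertion bonus
    beam_width=100,
    blank_id=processor.tokenizer.pad_token_id,
    log_probs_input=True,
)

log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(log_probs)

# take the best beam of the first (and only) utterance
best = beam_results[0][0][: out_lens[0][0]].tolist()
print("Prediction (with KenLM):", "".join(labels[i] for i in best).replace("|", " "))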

Evaluation

On the Welsh Common Voice version 9 test set, the standalone techiaith/wav2vec2-xls-r-1b-ft-cy acoustic model achieves a WER of 19.68%.

When decoding is assisted by the KenLM language model, the WER on the same test set drops to 12.38%.

See: https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/blob/main/train/python/evaluate.py
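
For reference, a minimal, self-contained sketch of how the standalone (greedy) WER can be estimated with the Hugging Face datasets and evaluate libraries is shown below. The dataset identifier (the gated mozilla-foundation/common_voice_9_0 set, which requires accepting its terms on the Hub), the simple lower-casing normalisation and the one-utterance-at-a-time loop are assumptions and may differ from the linked script.

import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-xls-r-1b-ft-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xls-r-1b-ft-cy")
wer_metric = evaluate.load("wer")

# Welsh test split of Common Voice 9, resampled to 16 kHz
test = load_dataset("mozilla-foundation/common_voice_9_0", "cy", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in test:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(ids)[0].lower())
    references.append(sample["sentence"].lower())

print("WER: {:.2%}".format(wer_metric.compute(predictions=predictions, references=references)))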