# wav2vec2-large-xlsr-galician

Language: Galician (`gl`)


## Model

Fine-tuned model for the Galician language.

Based on the facebook/wav2vec2-large-xlsr-53 self-supervised model, fine-tuned with labelled audio from OpenSLR and Mozilla Common Voice (both datasets previously refined).

Check the training metrics to see the results.
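Speech-recognition results of this kind are usually reported as Word Error Rate (WER). As a reference, here is a minimal, self-contained sketch of how WER is computed from a word-level edit distance; the `wer` helper is illustrative only (libraries such as `jiwer` provide a production implementation):

```python
# Hypothetical helper (not part of this model card): Word Error Rate
# computed as word-level Levenshtein distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(wer("ola mundo galego", "ola mondo galego"))  # → 0.3333333333333333 (1 substitution / 3 words)
```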

## Testing

Make sure that the input speech audio is sampled at 16 kHz (mono).
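For plain WAV files, the sample rate and channel count can be verified with the standard-library `wave` module before running inference. The `check_16k_mono` helper and the generated `clip.wav` below are illustrative, not part of the model card:

```python
import wave

def check_16k_mono(path: str) -> None:
    """Hypothetical helper: raise if the WAV clip is not 16 kHz mono."""
    with wave.open(path, "rb") as wf:
        assert wf.getframerate() == 16000, f"expected 16000 Hz, got {wf.getframerate()}"
        assert wf.getnchannels() == 1, f"expected mono, got {wf.getnchannels()} channels"

# Example: write one second of 16 kHz mono 16-bit silence and verify it.
with wave.open("clip.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)                     # 16-bit PCM
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)    # one second of silence

check_16k_mono("clip.wav")
```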

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")
processor = Wav2Vec2Processor.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")

# Read the audio clip, resampled to 16 kHz
audio, rate = librosa.load("./gl_test_1.wav", sr=16000)

# Prepare the input values
input_values = processor(audio, sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# Store the logits (non-normalized prediction values)
with torch.no_grad():
    logits = model(input_values).logits

# Store the predicted ids
prediction = torch.argmax(logits, dim=-1)

# Pass the prediction to the tokenizer decode to get the transcription
transcription = processor.batch_decode(prediction)[0]
print(transcription)
```