Swahili Automatic Speech Recognition (ASR)

Model details

The Swahili ASR model is an end-to-end automatic speech recognition system fine-tuned on the Swahili subset of the Common Voice Corpus 11.0 dataset. This repository provides the tools needed to perform ASR with this model, enabling high-quality speech-to-text conversion in Swahili.

Example Usage

Here's an example of how you can use this model for speech-to-text conversion:

from datasets import load_dataset
from transformers import pipeline

# Replace the following lines to load an audio file of your choice
commonvoice_sw = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="test")
audio_file = commonvoice_sw[0]["audio"]

# Build an ASR pipeline backed by this model
asr = pipeline(
    "automatic-speech-recognition",
    model="thinkKenya/wav2vec2-large-xls-r-300m-sw",
    feature_extractor="thinkKenya/wav2vec2-large-xls-r-300m-sw",
)

transcription = asr(audio_file)
print(transcription["text"])
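If you have a recording on disk, you can also pass a file path directly; the pipeline decodes and resamples it (via ffmpeg) to the model's expected sampling rate. The filename below is a placeholder:

# Hypothetical local file; any format ffmpeg can decode should work
result = asr("my_swahili_recording.wav")
print(result["text"])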
Evaluation results:

| Eval Loss | Eval WER | Eval Runtime (s) | Eval Samples/s | Eval Steps/s | Epoch |
|-----------|----------|------------------|----------------|--------------|-------|
| 0.3454    | 0.2602   | 578.4006         | 17.701         | 2.213        | 4.17  |

Intended Use

This model is intended for applications requiring Swahili speech-to-text conversion, including but not limited to transcription services, voice assistants, and accessibility technology. It may be particularly useful in contexts where demographic variation (age, sex, accent) matters, since the training data carries this metadata.

Dataset

The model was trained on the Swahili subset of the Common Voice Corpus 11.0, which consists of unique MP3 files with corresponding text transcriptions (the full corpus totals 16,413 validated hours across all languages). Much of the dataset also includes demographic metadata, such as age, sex, and accent, which can contribute to a more contextually aware ASR model. The dataset is available on the Hugging Face Hub as mozilla-foundation/common_voice_11_0.
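A minimal sketch of inspecting the Swahili split and its demographic fields; the column names ("sentence", "age", "gender") are assumptions based on the Common Voice 11.0 schema, so check the dataset card if they differ:

from datasets import load_dataset

# Stream the Swahili test split (requires accepting the dataset's terms on the Hub)
cv_sw = load_dataset(
    "mozilla-foundation/common_voice_11_0", "sw", split="test", streaming=True
)
sample = next(iter(cv_sw))
# Assumed Common Voice column names
print(sample["sentence"], sample.get("age"), sample.get("gender"))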

Training Procedure

Pipeline Description

The ASR system consists of two interconnected stages: a tokenizer (unigram) and an acoustic model (wav2vec 2.0 + CTC).

  1. Tokenizer (unigram): transforms words into subword units using a vocabulary extracted from the training and test datasets. The resulting Wav2Vec2CTCTokenizer is pushed to the Hugging Face model hub.
  2. Acoustic model (wav2vec 2.0 + CTC): uses a pretrained wav2vec 2.0 model (facebook/wav2vec2-base), fine-tuned on the dataset. The acoustic representations are passed through a CTC (Connectionist Temporal Classification) head, which converts them into a sequence of tokens/characters (see the sketch after this list). The trained model is likewise pushed to the Hugging Face model hub.
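The following is a minimal sketch of the inference path through Wav2Vec2ForCTC with greedy CTC decoding; it assumes 16 kHz input and resamples a Common Voice sample accordingly:

import torch
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("thinkKenya/wav2vec2-large-xls-r-300m-sw")
model = Wav2Vec2ForCTC.from_pretrained("thinkKenya/wav2vec2-large-xls-r-300m-sw")

# Load one Common Voice sample and resample it to the model's 16 kHz rate
ds = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
audio = ds[0]["audio"]["array"]

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, frames, vocab)

# Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])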

Technical Specifications

The ASR system uses the Wav2Vec2ForCTC model architecture from the Hugging Face Transformers library. This model, with a built-in Connectionist Temporal Classification (CTC) layer, is well suited to speech recognition tasks. It combines a pretrained wav2vec 2.0 model with a linear layer for CTC, trained together in an end-to-end manner. Performance is measured with the Word Error Rate (WER) during training.
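For reference, WER can be computed from predictions and reference transcripts with the evaluate library; the strings below are purely illustrative:

import evaluate

wer_metric = evaluate.load("wer")
# Illustrative strings; in practice use model outputs and gold transcripts
predictions = ["habari ya asubuhi"]
references = ["habari za asubuhi"]
print(wer_metric.compute(predictions=predictions, references=references))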

Compute Infrastructure

The training was performed using the following compute infrastructure:

| Compute                | Value          |
|------------------------|----------------|
| vCPUs                  | 32             |
| Memory (GiB)           | 128.0          |
| Memory per vCPU (GiB)  | 4.0            |
| Physical Processor     | AMD EPYC 7R32  |
| Clock Speed (GHz)      | 2.8            |
| CPU Architecture       | x86_64         |
| GPUs                   | 1              |
| GPU Architecture       | NVIDIA A10G    |
| Video Memory (GiB)     | 24             |
| GPU Compute Capability | 8.6            |
| FPGAs                  | 0              |


About THiNK

THiNK is a technology initiative driven by a community of innovators and businesses. It offers a collaborative platform providing services that assist businesses across all sectors, particularly in their digital transformation journeys.