# afrospeech-wav2vec-yor
This model is a fine-tuned version of facebook/wav2vec2-base on crowd-speech-africa, a crowd-sourced dataset collected using the afro-speech Space.
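If the checkpoint is published on the Hugging Face Hub, it can be used directly with the `transformers` audio-classification pipeline. Below is a minimal sketch; the repository id and audio path are placeholders, not values from this card:

```python
# Minimal inference sketch; MODEL_ID is a hypothetical Hub repository id.
from transformers import pipeline

MODEL_ID = "afrospeech-wav2vec-yor"  # replace with the actual repo id

classifier = pipeline("audio-classification", model=MODEL_ID)

# "digit.wav" is a placeholder path to a 16 kHz mono recording of a spoken digit.
predictions = classifier("digit.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```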
## Training and evaluation data
The model was trained on mixed audio data from Yoruba (yor).
- Size of training set: 22
- Size of validation set: 6
Below is the distribution of the dataset (training and validation).
## Evaluation performance
It achieves the following results on the validation set:
- F1: 0.83
- Accuracy: 0.83
The confusion matrix below gives a closer look at the model's performance across the digits, showing its per-class precision and recall along with other useful insights.
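For reference, here is a minimal sketch of how these metrics and the confusion matrix could be computed with scikit-learn. The labels below are placeholders, not the actual validation data, and the weighted F1 averaging is an assumption:

```python
# Sketch of metric computation; y_true and y_pred are hypothetical
# digit labels standing in for the validation examples.
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

y_true = [0, 1, 2, 3, 4, 5]  # placeholder ground-truth digits
y_pred = [0, 1, 2, 3, 4, 4]  # placeholder model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 (weighted):", f1_score(y_true, y_pred, average="weighted"))
print(confusion_matrix(y_true, y_pred))
```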
## Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 150
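A sketch of a `transformers.TrainingArguments` configuration matching the hyperparameters above; the output directory is a hypothetical name, and any setting not listed in this card is left at its default:

```python
# TrainingArguments mirroring the hyperparameters reported above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afrospeech-wav2vec-yor",  # hypothetical output directory
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=150,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```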
## Training results
| Training Loss | Epoch | Validation Accuracy |
|---|---|---|
| 0.596 | 1 | 0.5 |
| 0.0220 | 50 | 0.5 |
| 0.00305 | 100 | 0.667 |
| 0.0993 | 150 | 0.667 |
## Framework versions
- Transformers 4.21.3
- PyTorch 1.12.0
- Datasets 1.14.0
- Tokenizers 0.12.1