Aina Project's Catalan multi-speaker text-to-speech model
Model description
This model was trained from scratch using the Coqui TTS toolkit on a combination of three datasets: Festcat, Google's high-quality open Catalan speech dataset (available as OpenSLR 69), and Common Voice v8. Training used 101,460 utterances from 257 speakers, corresponding to nearly 138 hours of speech.
A live inference demo can be found in our Hugging Face Spaces, here.
Intended uses and limitations
You can use this model to generate synthetic speech in Catalan with different voices.
How to use
Required libraries:

```bash
pip install git+https://github.com/coqui-ai/TTS@dev#egg=TTS
```
Synthesize speech using Python:
```python
from TTS.utils.synthesizer import Synthesizer

model_path = ""          # Absolute path to the model checkpoint.pth
config_path = ""         # Absolute path to the model config.json
speakers_file_path = ""  # Absolute path to the speakers.pth file

text = "Text to synthesize"
speaker_idx = "Speaker ID"

synthesizer = Synthesizer(
    model_path, config_path, speakers_file_path, None, None, None,
)
wavs = synthesizer.tts(text, speaker_idx)
```
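`Synthesizer.tts` returns the waveform as a sequence of float samples; to listen to the result you will typically write it to disk (Coqui's `Synthesizer` also ships a `save_wav` helper for this). If you prefer to stay in the standard library, a minimal sketch of that step looks like the following; the 22050 Hz default is an assumption, so use the sample rate from the model's `config.json`:

```python
import struct
import wave

def save_wav(samples, path, sample_rate=22050):
    """Write a mono float waveform (values in [-1, 1]) as 16-bit PCM."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono output
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(sample_rate)
        # Clip each sample to [-1, 1] and scale to the signed 16-bit range.
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        ))

# e.g. save_wav(wavs, "output.wav")
```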
Training
Training Procedure
Data preparation
The data was processed using the script process_data.sh, which downsamples the audio files, eliminates silences, adds padding, and restructures the data into the format accepted by the framework. You can find more information here.
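The script itself operates on audio files and is not reproduced here, but the silence-elimination step can be illustrated with a small, hypothetical Python sketch that trims leading and trailing low-amplitude samples under a fixed threshold:

```python
def trim_silence(samples, threshold=0.01):
    """Drop leading and trailing samples whose magnitude is below threshold."""
    voiced = [i for i, s in enumerate(samples) if abs(s) >= threshold]
    if not voiced:
        return []  # the whole clip is silence
    # Keep everything between the first and last voiced sample,
    # so inner pauses are preserved and only the edges are trimmed.
    return samples[voiced[0]:voiced[-1] + 1]

# trim_silence([0.0, 0.0, 0.5, 0.0, -0.3, 0.0]) -> [0.5, 0.0, -0.3]
```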
Hyperparameters
The model is based on VITS, proposed by Kim et al. The following hyperparameters were set in the Coqui framework.
| Hyperparameter | Value |
|---|---|
| Model | vits |
| Batch Size | 16 |
| Eval Batch Size | 8 |
| Mixed Precision | false |
| Window Length | 1024 |
| Hop Length | 256 |
| FFT Size | 1024 |
| Num Mels | 80 |
| Phonemizer | espeak |
| Phoneme Language | ca |
| Text Cleaners | multilingual_cleaners |
| Formatter | vctk_old |
| Optimizer | adam |
| Adam betas | (0.8, 0.99) |
| Adam eps | 1e-09 |
| Adam weight decay | 0.01 |
| Learning Rate Gen | 0.0001 |
| LR Scheduler Gen | ExponentialLR |
| LR Scheduler Gamma Gen | 0.999875 |
| Learning Rate Disc | 0.0001 |
| LR Scheduler Disc | ExponentialLR |
| LR Scheduler Gamma Disc | 0.999875 |
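Both the generator and discriminator use an ExponentialLR schedule, under which the learning rate after k scheduler steps is `lr0 * gamma**k`. A quick sketch of what gamma = 0.999875 implies (note that in Coqui the scheduler is stepped once per epoch by default, not once per training step):

```python
def exponential_lr(lr0, gamma, k):
    """Learning rate after k scheduler steps of an ExponentialLR schedule."""
    return lr0 * gamma ** k

# With lr0 = 1e-4 and gamma = 0.999875, the rate roughly halves
# every ln(2) / -ln(gamma), i.e. about every 5545 scheduler steps.
lr_after_1000 = exponential_lr(1e-4, 0.999875, 1000)  # ≈ 8.825e-05
```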
The model was trained for 730962 steps.
Additional information
Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
Contact information
For further information, send an email to aina@bsc.es
Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
Licensing Information
Funding
This work was funded by the [Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of Projecte AINA.
Disclaimer
<details> <summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have biases and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.