Pre-trained checkpoints for speech representation in Japanese

The models in this repository were pre-trained for speech representation via self-supervised learning (SSL), using the fairseq toolkit.
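
Below is a minimal sketch of loading one of these checkpoints with fairseq and extracting frame-level speech representations. The checkpoint filename `checkpoint.pt` is a placeholder, and the `extract_features` call assumes a wav2vec 2.0- or HuBERT-style model; adapt it to the actual checkpoints in this repository.

```python
import torch
from fairseq import checkpoint_utils

# Placeholder path; point this at a downloaded checkpoint from this repository.
ckpt_path = "checkpoint.pt"

# Restores the model(s) together with the saved config and task.
models, cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
model = models[0]
model.eval()

# Dummy input: one second of 16 kHz mono audio as a (batch, samples) float tensor.
wav = torch.zeros(1, 16000)

with torch.no_grad():
    # wav2vec 2.0- and HuBERT-style fairseq models expose extract_features().
    out = model.extract_features(source=wav, padding_mask=None)

# wav2vec 2.0 models return a dict (features under "x");
# HuBERT models return a (features, padding_mask) tuple.
features = out["x"] if isinstance(out, dict) else out[0]
print(features.shape)  # (batch, frames, hidden_dim)
```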

If you find this repository helpful, please consider citing the following paper:

@INPROCEEDINGS{ashihara_icassp23,
  author={Takanori Ashihara and Takafumi Moriya and Kohei Matsuura and Tomohiro Tanaka},
  title={Exploration of Language Dependency for Japanese Self-Supervised Speech Representation Models},
  booktitle={ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2023}
}