---
language:
- ja
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019        | 12.65 | 1000 | 1.0510          | 0.9832 | 0.2589 |
| 1.6385        | 25.31 | 2000 | 0.6670          | 0.9915 | 0.1851 |
| 1.4344        | 37.97 | 3000 | 0.6183          | 1.0213 | 0.1797 |
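The WER and CER columns above are word and character error rates: the Levenshtein edit distance between the reference transcript and the model's hypothesis, normalized by the reference length. As a minimal sketch of how these metrics are computed (this is an illustrative implementation, not the code used by `eval.py`; for Japanese, word segmentation is non-trivial, so CER is usually the more meaningful metric):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences via dynamic programming,
    # keeping only a single row of the DP table in memory.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution (or match when r == h)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[len(hyp)]

def wer(reference, hypothesis):
    # Word error rate: edit distance over whitespace-separated tokens.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: edit distance over characters, spaces ignored.
    ref_chars = list(reference.replace(" ", ""))
    return edit_distance(ref_chars, list(hypothesis.replace(" ", ""))) / len(ref_chars)
```

For example, `cer("こんにちは", "こんにちわ")` is 0.2: one substitution out of five reference characters.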

### Framework versions

## Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation` (the original command passed `--config de`, which appears to be a leftover from a German model card; `ja` is used here to match this model):

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config ja --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```