The Relevance of the Source Language in Transfer Learning for ASR

Authors

  • Nils Hjortnæs, Indiana University
  • Niko Partanen, University of Helsinki
  • Michael Rießler, University of Eastern Finland
  • Francis M. Tyers, Indiana University

DOI:

https://doi.org/10.33011/computel.v1i.959

Abstract

This study presents new experiments on Zyrian Komi speech recognition. We use DeepSpeech to train ASR models from a language documentation corpus that contains both contemporary and archival recordings. Earlier studies have shown that transfer learning from English and using a domain-matching Komi language model both improve the character error rate (CER) and word error rate (WER). In this study we experiment with transfer learning from a more relevant source language, Russian, and with including Russian text in the language model construction. The motivation for this is that Russian and Komi are contemporary contact languages, and Russian is regularly present in the corpus. We found that despite the close contact between Russian and Komi, the larger English speech corpus yielded better performance when used as the source language. Additionally, we can report that merely updating the DeepSpeech version improved the CER by 3.9% over the earlier studies, which is an important step in the development of Komi ASR.
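To illustrate the transfer-learning setup the abstract describes, below is a minimal sketch of a DeepSpeech fine-tuning run that loads a source-language checkpoint and re-initializes the output layer for the Komi alphabet. The flags shown (--load_checkpoint_dir, --drop_source_layers, etc.) are the ones DeepSpeech 0.9.x exposes for cross-alphabet transfer learning; all file paths, checkpoint directories, and hyperparameter values are hypothetical placeholders, not the authors' actual configuration.

```python
# Hedged sketch: fine-tune a pretrained DeepSpeech model on Komi data.
# Assumes the Mozilla DeepSpeech training repository is checked out and
# its dependencies installed; every path below is a placeholder.
import subprocess

cmd = [
    "python3", "DeepSpeech.py",
    # Hypothetical Komi corpus CSVs (audio path, size, transcript).
    "--train_files", "komi_train.csv",
    "--dev_files", "komi_dev.csv",
    "--test_files", "komi_test.csv",
    # Komi uses a different alphabet than the source language, so a
    # target-language alphabet file is required.
    "--alphabet_config_path", "komi_alphabet.txt",
    # Transfer learning: load the source-language checkpoint
    # (English or Russian in the study) and drop its final layer,
    # which is tied to the source alphabet.
    "--load_checkpoint_dir", "checkpoints/source_language",
    "--drop_source_layers", "1",
    "--save_checkpoint_dir", "checkpoints/komi",
    # KenLM scorer built from Komi (optionally plus Russian) text.
    "--scorer_path", "komi.scorer",
    "--epochs", "30",
]
subprocess.run(cmd, check=True)
```

Including Russian text in the language model would happen upstream of this command, when building the KenLM scorer from the combined text corpus; the training invocation itself only consumes the finished scorer file.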


Published

2021-03-02