Evaluating and Improving Child-Directed Automatic Speech Recognition

Eric Booth, Jake Carns, Casey Kennington, Nader Rafla

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Speech recognition has seen dramatic improvements in the last decade, though those improvements have focused primarily on adult speech. In this paper, we assess child-directed speech recognition and leverage a transfer learning approach to improve it: we train the recent DeepSpeech2 model on adult data, then apply additional tuning using varied amounts of child speech data. We evaluate our model using the CMU Kids dataset as well as our own recordings of child-directed prompts. The results of our experiment show that even a small amount of child audio data yields significant improvement over baselines of adult-only or child-only trained models. We report a final general word error rate (WER) of 29%, compared to a 62% baseline from the adult-trained model. Our analyses show that our model adapts quickly using a small amount of data and that the general child model works better than school-grade-specific models. We make our trained model and our data collection tool available.
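
The recipe described in the abstract (pretrain a DeepSpeech2-style model on adult speech, then fine-tune on child speech and evaluate with word error rate) can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the paper's implementation: the DS2Like model, hyperparameters, checkpoint name, and synthetic batch below are all assumptions standing in for the actual DeepSpeech2 setup, which the abstract does not specify.

```python
"""Illustrative sketch: fine-tune a pretrained CTC acoustic model on
child speech, then score hypotheses with word error rate (WER).
Model size, data, and hyperparameters are placeholders."""
import torch
import torch.nn as nn


class DS2Like(nn.Module):
    """Toy DeepSpeech2-style model: a 2D conv front end over log-mel
    spectrogram frames, a bidirectional GRU, and a linear CTC head."""

    def __init__(self, n_mels: int = 80, vocab: int = 29, hidden: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(11, 11), stride=(2, 2), padding=(5, 5)),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(32 * (n_mels // 2), hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, vocab)  # vocab includes the CTC blank

    def forward(self, spec):                # spec: (batch, 1, time, n_mels)
        x = self.conv(spec)                 # (batch, 32, time/2, n_mels/2)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)
        return self.head(x).log_softmax(-1)  # (batch, time/2, vocab)


def fine_tune(model, batches, lr=1e-4, epochs=2):
    """Continue training a pretrained model on child-speech batches.
    All parameters stay trainable; only the learning rate is reduced."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    model.train()
    for _ in range(epochs):
        for spec, targets, in_lens, tgt_lens in batches:
            opt.zero_grad()
            log_probs = model(spec).transpose(0, 1)  # CTC wants (time, batch, vocab)
            loss = ctc(log_probs, targets, in_lens, tgt_lens)
            loss.backward()
            opt.step()


def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (rw != hw))
    return d[len(h)] / max(len(r), 1)


if __name__ == "__main__":
    model = DS2Like()
    # In practice, load weights pretrained on adult speech, e.g.:
    # model.load_state_dict(torch.load("adult_pretrained.pt"))
    # Synthetic stand-in for one small child-speech batch (placeholder data).
    spec = torch.randn(4, 1, 200, 80)                  # 4 utterances
    targets = torch.randint(1, 29, (4, 20))            # non-blank label ids
    in_lens = torch.full((4,), 100, dtype=torch.long)  # post-conv time steps
    tgt_lens = torch.full((4,), 20, dtype=torch.long)
    fine_tune(model, [(spec, targets, in_lens, tgt_lens)])
    print(wer("the cat sat", "the cat sat down"))      # 1 insertion / 3 words
```

As a design choice in this sketch, fine-tuning keeps all layers trainable at a reduced learning rate, which matches the abstract's observation that the model adapts quickly from a small amount of child data; the WER helper mirrors the reported 29% vs. 62% evaluation metric.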

Original language: American English
Title of host publication: LREC 2020, Twelfth International Conference on Language Resources and Evaluation
State: Published - 1 Jan 2020

Keywords

  • children
  • data collection
  • speech recognition
  • transfer learning

EGS Disciplines

  • Computer Sciences
