
2016 | Original Paper | Book Chapter

Automatic Speech Recognition with Deep Neural Networks for Impaired Speech

Authors: Cristina España-Bonet, José A. R. Fonollosa

Published in: Advances in Speech and Language Technologies for Iberian Languages

Publisher: Springer International Publishing


Abstract

Automatic Speech Recognition has reached almost human performance in some controlled scenarios. However, recognition of impaired speech remains a difficult task for two main reasons: data is (i) scarce and (ii) heterogeneous. In this work we train different architectures on a database of dysarthric speech. A comparison between architectures shows that, even with a small database, hybrid DNN-HMM models outperform classical GMM-HMM models in terms of word error rate. A DNN improves the recognition word error rate by 13 % for subjects with dysarthria with respect to the best classical architecture. This improvement is larger than that obtained with other deep neural networks such as CNNs, TDNNs and LSTMs. All experiments were carried out with the Kaldi speech recognition toolkit, for which we adapted several recipes to handle dysarthric speech and to work with the TORGO database. These recipes are publicly available.
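
Since the comparison above is reported in terms of word error rate, the sketch below illustrates how WER is typically computed: the word-level edit distance (substitutions, deletions, insertions) between a reference transcript and an ASR hypothesis, divided by the reference length. This is a generic Python illustration, not the authors' evaluation code (Kaldi computes WER internally via its scoring tools); the function name and the example sentences are assumptions for illustration only.

# Minimal WER sketch: word-level edit distance between a reference transcript
# and an ASR hypothesis, divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example (hypothetical sentences): one substitution over four reference words -> 0.25
print(wer("the big dog barked", "the pig dog barked"))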

Metadata
Title
Automatic Speech Recognition with Deep Neural Networks for Impaired Speech
Authors
Cristina España-Bonet
José A. R. Fonollosa
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-49169-1_10
