
Computer Speech & Language

Volume 40, November 2016, Pages 46-59

On the use of deep feedforward neural networks for automatic language identification

https://doi.org/10.1016/j.csl.2016.03.001
Open access under a Creative Commons license

Highlights

  • This work presents a comprehensive study on the use of deep neural networks for automatic language identification.

  • It includes a detailed performance analysis for different data selection strategies and DNN architectures.

  • Proposed systems are tested on the NIST Language Recognition Evaluation 2009, against a state-of-the-art i-vector baseline.

  • It also presents a novel approach that combines DNN and i-vector systems by using bottleneck features.

  • The combination of the i-vector and bottleneck systems outperforms our baseline system by 45% in EER and Cavg on the 3s and 10s conditions.

Abstract

In this work, we present a comprehensive study on the use of deep neural networks (DNNs) for automatic language identification (LID). Motivated by the recent success of using DNNs in acoustic modeling for speech recognition, we adapt DNNs to the problem of identifying the language of a given utterance from its short-term acoustic features. We propose two different DNN-based approaches. In the first one, the DNN acts as an end-to-end LID classifier, receiving as input the speech features and providing as output the estimated probabilities of the target languages. In the second approach, the DNN is used to extract bottleneck features that are then used as inputs to a state-of-the-art i-vector system. Experiments are conducted in two different scenarios: the complete NIST Language Recognition Evaluation 2009 dataset (LRE'09) and a subset of the Voice of America (VOA) data from LRE'09, in which all languages have the same amount of training data. Results for both datasets demonstrate that the DNN-based systems significantly outperform a state-of-the-art i-vector system when dealing with short-duration utterances. Furthermore, the combination of the DNN-based and the classical i-vector systems leads to additional performance improvements (up to 45% relative improvement in both EER and Cavg on the 3s and 10s conditions, respectively).
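As a rough illustration of the two roles the DNN plays in the abstract, the sketch below (not taken from the paper; the feature dimensionality, context size, layer widths, bottleneck size, and number of target languages are all assumed) defines a feedforward network over stacked short-term acoustic features whose softmax output gives per-language posteriors (first approach), and whose narrow penultimate layer can be read out as bottleneck features for a downstream i-vector system (second approach).

```python
# Minimal sketch of a frame-level LID DNN with a bottleneck layer.
# Assumptions (not from the paper): 39-dim features stacked over 21 frames,
# 1024-unit hidden layers, an 80-dim bottleneck, and 8 target languages.
import torch
import torch.nn as nn

class LidDnn(nn.Module):
    def __init__(self, input_dim=39 * 21, hidden_dim=1024,
                 bottleneck_dim=80, num_languages=8):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Narrow layer whose activations serve as bottleneck features.
        self.bottleneck = nn.Linear(hidden_dim, bottleneck_dim)
        self.output = nn.Linear(bottleneck_dim, num_languages)

    def forward(self, x):
        h = self.hidden(x)
        bn = self.bottleneck(h)              # bottleneck features (second approach)
        logits = self.output(torch.relu(bn))
        return logits, bn

model = LidDnn()
frames = torch.randn(32, 39 * 21)            # a batch of stacked feature vectors
logits, bn_feats = model(frames)
lang_post = torch.softmax(logits, dim=1)     # frame-level language posteriors (first approach)
```

In the end-to-end role, one common choice is to average the frame-level (log-)posteriors over an utterance to obtain a single score per language; in the bottleneck role, the `bn_feats` activations would simply replace the original acoustic features fed to the i-vector extractor. Both points are sketched here under the stated assumptions, not as the paper's exact configuration.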

Keywords

LID
DNN
Bottleneck
i-vectors
