Elsevier

Neurocomputing

Volume 80, 15 March 2012, Pages 83-92

Robust real-time identification of tongue movement commands from interferences

https://doi.org/10.1016/j.neucom.2011.09.018

Open access under a Creative Commons license.

Abstract

This study aimed to improve the accuracy and robustness of a real-time assistive human–machine interface system by discriminating between controlled tongue-movement ear pressure (TMEP) signals and interfering signals. The controlled-movement TMEP signals were collected during left, right, up, down, flicking and pushing tongue motions. The TMEP signals were processed through detection, segmentation, feature extraction and classification. The segmented signals were decomposed into the time-scale domain using a wavelet packet transform. The variances of the wavelet packet coefficients, together with their low-to-high-scale ratio, were defined as features, and the intended tongue-movement commands and interfering signals were classified using both a Bayesian classifier and a support vector machine (SVM) for comparison. The average classification accuracy for discriminating between the controlled movements and the interfering signals reached 97.8% (Bayesian) and 98.5% (SVM). The classifiers were robust, maintaining a similar level of performance when generalised interferences from all subjects were used. The Bayesian classifier performed better than the SVM in a real-time environment. Combining the Bayesian classifier with the wavelet packet transform therefore provides a robust and efficient method for a real-time assistive human–machine interface based on tongue-movement ear pressure signals.
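The pipeline described in the abstract — wavelet packet decomposition, band-variance features with a low-to-high-scale ratio, and Bayesian classification of commands versus interference — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Haar wavelet packet in place of the wavelet used in the study, a Gaussian naive-Bayes classifier as a simple stand-in for the paper's Bayesian classifier, and synthetic low-frequency "command" bursts and broadband "interference" in place of recorded TMEP signals.

```python
import numpy as np

def haar_wavelet_packet(x, levels):
    """Decompose a 1-D signal into 2**levels terminal wavelet packet bands
    using the Haar filter pair (illustrative stand-in for the study's wavelet)."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            b = b[: (len(b) // 2) * 2]                      # trim to even length
            nxt.append((b[0::2] + b[1::2]) / np.sqrt(2))    # low-pass (approximation)
            nxt.append((b[0::2] - b[1::2]) / np.sqrt(2))    # high-pass (detail)
        bands = nxt
    return bands

def wp_features(x, levels=3):
    """Variance of each terminal band plus the low-to-high-scale variance ratio."""
    variances = np.array([b.var() for b in haar_wavelet_packet(x, levels)])
    half = len(variances) // 2
    ratio = variances[:half].sum() / (variances[half:].sum() + 1e-12)
    return np.append(variances, ratio)

class GaussianBayes:
    """Minimal Gaussian naive-Bayes classifier (hypothetical simplification
    of the Bayesian classifier used in the paper)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.sd = {c: X[y == c].std(axis=0) + 1e-9 for c in self.classes}
        self.logprior = {c: np.log((y == c).mean()) for c in self.classes}
        return self

    def predict(self, X):
        def loglik(c):
            z = (X - self.mu[c]) / self.sd[c]
            return (-0.5 * (z ** 2).sum(axis=1)
                    - np.log(self.sd[c]).sum() + self.logprior[c])
        scores = np.stack([loglik(c) for c in self.classes], axis=1)
        return self.classes[scores.argmax(axis=1)]

# Synthetic demo: low-frequency "command" bursts vs. broadband "interference".
rng = np.random.default_rng(0)
n, L = 60, 256
t = np.arange(L)
commands = (np.sin(2 * np.pi * 3 * t / L)[None, :]
            * rng.uniform(0.8, 1.2, (n, 1))
            + 0.1 * rng.standard_normal((n, L)))
interference = 0.5 * rng.standard_normal((n, L))

X = np.array([wp_features(s) for s in np.vstack([commands, interference])])
y = np.array([1] * n + [0] * n)          # 1 = command, 0 = interference

idx = rng.permutation(2 * n)
train, test = idx[:80], idx[80:]
clf = GaussianBayes().fit(X[train], y[train])
accuracy = (clf.predict(X[test]) == y[test]).mean()
```

Because a tongue-movement burst concentrates energy in the low-scale bands while interference spreads energy across all bands, the band variances and their low-to-high ratio separate the two classes cleanly in this toy setting, mirroring the rationale of the features reported in the paper.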

Keywords

Tongue-movement ear pressure signals
Wavelet packet transform
Bayesian classifier
Human machine interface


Khondaker A. Mamun received the B.Sc. degree in Computer Science & Engineering from Ahsanullah University of Science and Technology, Dhaka, Bangladesh, in 2002, and the M.Sc. degree in Computer Science & Engineering from Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2007. He is currently a Ph.D. candidate at the Institute of Sound and Vibration Research, University of Southampton, UK. His research interests include biomedical signal processing and machine learning, rehabilitation systems, human machine interfaces, brain computer interfaces and neuroinformatics. He has published a number of papers in international journals and conferences. He is a student member of IEEE.

Michael Mace received the M.Eng. degree in mechanical engineering from the University of Bristol, in 2008. He is currently a Ph.D. candidate in the Department of Mechanical Engineering at the University of Bristol, Bristol, UK. His research interests are in pattern recognition and physiological signal processing with emphasis on bio-acoustic signal classification and assistive human–machine (robotic) interfaces.

Lalit Gupta received the B.E. (Hons.) degree in electrical engineering from Birla Institute of Technology and Science, Pilani, India, in 1976, the M.S. degree in digital systems from Brunel University, Middlesex, U.K., in 1981, and the Ph.D. degree in electrical engineering from Southern Methodist University, Dallas, TX, in 1986. He is currently a Professor of electrical and computer engineering at Southern Illinois University, Carbondale. He has been awarded contracts from the Army Research Office to conduct research in the development of smart munitions; from Seagate Technology on image compression research; from Cleveland Medical Devices on National Institutes of Health (NIH) funded projects related to brain waveform analysis and classification; from Think-A-Move, Inc., on an NIH funded project related to human–machine interfacing; from Neuronetrix on an NIH funded project on detecting neurological disorders from evoked potentials; and from the Naval Postgraduate School on research related to human–machine interfacing. His current research interests include pattern recognition, neuroinformatics, neural networks and signal processing. He has numerous publications in the areas of neural networks, evoked potential analysis and classification, and multichannel/sensor information fusion strategies. Dr. Gupta is an Associate Editor of the Pattern Recognition Journal.

Carl A. Verschuur is a Lecturer in Audiology at the Institute of Sound and Vibration Research. He has a background in clinical audiology as well as theoretical and practical applications of speech sciences. His primary research interest is in the relationship between speech processing in hearing aids and cochlear implants and speech perception. He has published work on speech perception in cochlear implant users, auditory localisation in bilateral cochlear implant users and speech perception in hearing aid users. His current work focuses on the role of low frequency speech cues in determining success with auditory prosthetic devices, and also the role of inflammatory processes in determining loss of hearing.

Mark E. Lutman is Professor of Audiology and Head of the Hearing and Balance Centre at the Institute of Sound and Vibration Research of the University of Southampton. His research interests include: signal processing for hearing aids and cochlear implants, measurement of cochlear function from otoacoustic emissions, neonatal and adult hearing screening, cochlear implantation, epidemiology of hearing impairment and balance disorder, evaluation of benefit from hearing instruments and noise-induced hearing loss. He was editor of the British Journal of Audiology from 1991 to 1995 and President of the British Academy of Audiology from 2007 to 2008.

Maria Stokes is Professor of Neuromusculoskeletal Rehabilitation in the Faculty of Health Sciences, University of Southampton, and is a physiotherapist by background, with a PhD in Neuromuscular Physiology. Her research interests cover active living, healthy ageing and the development of health technologies. Themes: 1) mechanisms of musculoskeletal function, dysfunction and recovery (from the sporting elite to the frail); 2) prevention and rehabilitation of musculoskeletal conditions. Development of technologies as assessment tools and assistive devices includes: rehabilitative ultrasound imaging (RUSI) to evaluate and re-educate muscle function; brain-computer interfacing (BCI) for communication and investigating neuromuscular mechanisms; mechanomyography (muscle sounds) and myotonometry to investigate muscle mechanical properties. A key feature is multidisciplinary collaboration, primarily with engineering scientists.

Ravi Vaidyanathan is a Senior Lecturer in Mechatronic Systems at Imperial College London, UK and a Research Assistant Professor at the US Naval Postgraduate School, USA. He earned his Ph.D. in biologically inspired systems at Case Western Reserve University in 2001, and worked in industry through 2004, holding directorships in control systems and medical engineering. His current research interests include biologically inspired robotics, human–machine interfaces and complex adaptive systems. He has led more than 20 separate research programs in the U.S.A., Singapore and the U.K., authored over 90 refereed publications, and holds two pending patents. Dr. Vaidyanathan has been the recipient of international awards from organizations such as the IEEE Robotics and Automation Society, the Robotics Society of Japan (RSJ) and the American Institute of Aeronautics and Astronautics (AIAA), including Best Paper in Conference at the IEEE International Conference on Intelligent Robots and Systems (IROS) and a Finalist for the New Technology Foundation (NTF) Award on Entertainment and Robotic Systems celebrating top innovations in robotics from 1987 to 2007. He also holds honorary academic posts at the University of Bristol (UK), Case Western Reserve University (USA) and Anna University (India).

Shouyan Wang is a Lecturer at the Hearing and Balance Centre at the Institute of Sound and Vibration Research of the University of Southampton. His research interests include: neural signal processing, brain plasticity after neural prostheses stimulation, brain connectivity modelling, neural engineering for neural prostheses, signal processing for hearing aids and cochlear implants.