2017 | Book

Sound-Based Assistive Technology

Support to Hearing, Speaking and Seeing


About this book

This book presents technology to help people with speech, hearing, and sight impairments. It explains how they can benefit from enhanced abilities to recognize and produce speech and to detect sounds in their surroundings. It also considers how sound-based assistive technology might be applied to speech recognition, speech synthesis, environmental recognition, virtual reality, and robots. The primary focus of this book is to provide an understanding of both the methodology and the basic concepts of assistive technology, rather than to catalog the variety of assistive devices developed. The book presents a number of topics that are sufficiently independent from one another that the reader may begin at any chapter without lacking background information. Much of the research quoted in this book was conducted in the author's laboratories at Hokkaido University and the University of Tokyo. The book also gives the reader a better understanding of a number of unsolved problems that still persist in the field of sound-based assistive technology.

Table of Contents

Frontmatter
Chapter 1. Basis for Sound-Based Assistive Technology
Abstract
In this chapter, the author discusses the methodology and basic concepts of assistive technology based on cybernetics, which regards the human body as a feedback system comprising the senses, the brain, and the motor functions. In the latter part of the chapter, the author explains the mechanisms of human hearing and speech production needed to understand the contents of the book. He also introduces a national project called "The Creation of Sciences, Technologies and Systems to Enrich the Lives of the Super-aged Society," which he has promoted since 2010.
Tohru Ifukube
Chapter 2. Sound Signal Processing for Auditory Aids
Abstract
In this chapter, signal-processing methods for digital hearing aids are discussed, and the author describes how auditory characteristics and speech-understanding abilities change with hearing impairment and with aging. In particular, he introduces various signal-processing approaches, such as noise-reduction methods, consonant-enhancement methods, and speech-rate and intonation-conversion methods. Past and recent artificial middle ears are also discussed as one form of hearing aid.
Tohru Ifukube
Chapter 3. Functional Electrical Stimulation to Auditory Nerves
Abstract
In this chapter, the author first discusses functional electrical stimulation (FES) of the auditory nerves using cochlear implants, covering in particular their history, principles, effects, and design concepts. Next, he presents recent progress in cochlear implants as well as auditory brainstem implants, and secondary effects such as tinnitus suppression achieved by electrical stimulation. Finally, he introduces the "artificial retina," whose development has been spurred by the success of cochlear implants.
Tohru Ifukube
Chapter 4. Tactile Stimulation Methods for the Deaf and/or Blind
Abstract
In this chapter, the author introduces tactile stimulation methods as sensory substitutes for the auditory and/or visual sense. Recent brain research shows that lost sensory functions may be compensated for by other sensory cortices, owing to the plasticity of the neural network in the human brain. Based on the latest findings regarding brain plasticity, the author discusses the effectiveness of tactile stimulation methods that support speech recognition for the deaf, vocal training for the deaf-blind, and substitutes for auditory localization. He also describes how the findings and technologies obtained from tactile stimulation studies will lead to new concepts in the design of tactile displays for virtual reality systems and robots.
Tohru Ifukube
Chapter 5. Speech Recognition Systems for the Hearing Impaired and the Elderly
Abstract
In this chapter, after describing how various attempts to convert speech sounds into visual patterns have been applied as auditory substitutes, the author introduces recent speech-recognition technologies that have been applied to captioning systems for the hearing impaired as well as for the elderly. To support elderly people with cognitive decline, whose numbers are rapidly increasing in a super-aged society, he introduces an example of communication robots that give notifications when it is time to take medication and remind elderly people of their daily schedule.
Tohru Ifukube
Chapter 6. Assistive Tool Design for Speech Production Disorders
Abstract
In this chapter, the author discusses an artificial electro-larynx that can reproduce the intonation and fluctuation of the laryngeal voice for people with speech disorders, especially laryngectomees. He emphasizes that many hints for the design of the artificial larynx were taken from the vocalization mechanism of a talking bird, the mynah. The author also introduces a voice synthesizer called "Let's talk by a finger" for people with articulation disorders who find it hard to control their vocal organs because of neuromuscular disease or speech apraxia. The synthesizer, which can produce any speech sound just by touching and stroking the touchpad of a mobile phone, was modeled after the vocalization mechanism of a ventriloquist, who can produce any speech sound without moving his lips. Furthermore, evaluation methods are presented for aids and for the treatment of speech-organ disorders caused by a cleft palate or reversed occlusion.
Tohru Ifukube
Chapter 7. Sound Information Aiding for the Visually Impaired
Abstract
In this chapter, the author mainly discusses two assistive tools for the visually impaired. One is a kind of screen reader called the "Tactile Jog-dial." It converts verbal information such as text into speech signals whose speed can be controlled by blind users, while displaying non-verbal information such as rich text to the tactile sense of a fingertip. This design is based on the experimental result that most blind people can recognize spoken language at around three times the standard speech rate. The other tools are mobility-aid devices that detect environmental information and present it to the auditory sense as sounds, so that blind users can recognize their surroundings, especially obstacles. These devices were modeled after the echolocation function of bats as well as the "obstacle sense" that blind people acquire. Moreover, the author discusses a method of controlling the balance function by means of sound localization.
Tohru Ifukube
Backmatter
Metadata
Title
Sound-Based Assistive Technology
Author
Tohru Ifukube
Copyright Year
2017
Electronic ISBN
978-3-319-47997-2
Print ISBN
978-3-319-47996-5
DOI
https://doi.org/10.1007/978-3-319-47997-2