
2014 | Book

Guide to Brain-Computer Music Interfacing

Edited by: Eduardo Reck Miranda, Julien Castet

Publisher: Springer London


About this Book

This book presents a world-class collection of Brain-Computer Music Interfacing (BCMI) tools. The text focuses on how these tools enable the extraction of meaningful control information from brain signals, and discusses how to design effective generative music techniques that respond to this information. Features:

- reviews important techniques for hands-free interaction with computers, including event-related potentials with P300 waves;
- explores questions of semiotic brain-computer interfacing (BCI), and the use of machine learning to investigate relationships between music and emotions;
- offers tutorials on signal extraction, brain electric fields, passive BCI, and applications of genetic algorithms, along with historical surveys;
- describes how BCMI research advocates the importance of a better scientific understanding of the brain for its potential impact on musical creativity;
- presents broad coverage of this emerging, interdisciplinary area, from hard-core EEG analysis to practical musical applications.

Table of Contents

Frontmatter
Chapter 1. Brain–Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering
Abstract
Research into brain–computer music interfacing (BCMI) involves three major challenges: the extraction of meaningful control information from signals emanating from the brain, the design of generative music techniques that respond to such information and the definition of ways in which such technology can effectively improve the lives of people with special needs and address therapeutic needs. This chapter discusses the first two challenges, in particular the music technology side of BCMI research, which has been largely overlooked by colleagues working in this field. After a brief historical account of the field, the author reviews the pioneering research into BCMI that has been developed at Plymouth University’s Interdisciplinary Centre for Computer Music Research (ICCMR) within the last decade or so. The chapter introduces examples illustrating ICCMR’s developments and glances at current work informed by cognitive experiments.
Eduardo Reck Miranda
Chapter 2. Electroencephalogram-based Brain–Computer Interface: An Introduction
Abstract
Electroencephalogram (EEG) signals are useful for diagnosing various mental conditions such as epilepsy, memory impairments and sleep disorders. The brain–computer interface (BCI) is a revolutionary application of EEG that is especially valuable to severely disabled individuals for hands-off device control and communication, as it creates a direct interface from the brain to the external environment, circumventing the use of peripheral muscles and limbs. However, being non-invasive, BCI designs are not necessarily limited to this user group, and applications in gaming, music, biometrics, etc. have been developed more recently. This chapter gives an introduction to EEG-based BCI and existing methodologies; specifically, those based on transient and steady-state evoked potentials, mental tasks and motor imagery are described. Two real-life scenarios of EEG-based BCI applications, in biometrics and device control, are also briefly explored. Finally, current challenges and future trends of this technology are summarised.
Ramaswamy Palaniappan
Chapter 3. Contemporary Approaches to Music BCI Using P300 Event Related Potentials
Abstract
This chapter is intended as a tutorial for those interested in exploring the use of P300 event-related potentials (ERPs) in the creation of brain–computer music interfaces (BCMIs). It also includes results of research in refining digital signal processing (DSP) approaches and models of interaction using low-cost, portable BCIs. We look at a range of designs for BCMIs using ERP techniques: the P300 Composer, the P300 Scale Player, the P300 DJ and the P300 Algorithmic Improviser. These designs have all been used in both research and performance, and are described in such a way that they should be reproducible by other researchers given the methods and guidelines indicated. The chapter is not intended to be exhaustive in its neuroscientific detail, although the systems and approaches documented here have been reproduced by many labs, which should be an indication of their quality. Instead, what follows is a basic introduction to what ERPs are, what the P300 is, and how it can be applied in the development of these BCMI designs; at best it should be thought of as an illustration designed to allow the reader to begin to understand how such approaches can be used for new instrument development. In this way, the chapter is intended to be indicative of what can be achieved, and to encourage others to think of BCMI problems in ways that focus on the measurement and understanding of signals that reveal aspects of human cognition. With this in mind, towards the end of the chapter we look at the results of our most recent research into P300 BCIs that may have an impact on the usability of future BCI systems for music.
Mick Grierson, Chris Kiefer
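The core idea the chapter builds on, exposing the P300 by averaging stimulus-locked epochs, can be illustrated with a short sketch on synthetic data (the sampling rate, amplitudes and noise level below are illustrative assumptions, not values from the chapter):

```python
import numpy as np

def average_erp(epochs):
    """Average time-locked epochs (trials x samples) to expose the ERP."""
    return epochs.mean(axis=0)

# Synthetic illustration: 250 Hz sampling, 0.6 s epochs.
fs = 250
t = np.arange(0, 0.6, 1 / fs)
rng = np.random.default_rng(0)

# A P300-like positive deflection peaking ~300 ms after the target stimulus.
p300 = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Single trials are dominated by background EEG; the ERP only emerges
# after averaging many time-locked trials (noise shrinks as 1/sqrt(N)).
targets     = np.array([p300 + rng.normal(0, 5, t.size) for _ in range(200)])
non_targets = np.array([rng.normal(0, 5, t.size)        for _ in range(200)])

erp_target = average_erp(targets)
erp_nontgt = average_erp(non_targets)

# The averaged target response peaks near 300 ms; the non-target does not.
peak_time = t[np.argmax(erp_target)]
print(f"target ERP peak at {peak_time * 1000:.0f} ms")
```

A real P300 speller or composer then classifies each epoch by how strongly it resembles the averaged target template.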
Chapter 4. Prospective View on Sound Synthesis BCI Control in Light of Two Paradigms of Cognitive Neuroscience
Abstract
This article addresses different trends and perspectives on sound synthesis control within a cognitive neuroscience framework. Two approaches to sound synthesis are presented: one based on the modelling of physical sources, and one based on the modelling of perceptual effects involving the identification of invariant sound morphologies (linked to sound semiotics). Depending on the chosen approach, we assume that the resulting synthesis models can fall under either of the theoretical frameworks inspired by the representational-computational or enactive paradigms. In particular, a change of viewpoint on the epistemological position of the end-user, from a third to a first person, inherently involves different conceptualizations of the interaction between the listener and the sounding object. This differentiation also influences the design of the control strategy, enabling either expert or intuitive sound manipulation. Finally, as a perspective on this survey, explicit and implicit brain-computer interfaces (BCI) are described with respect to the previous theoretical frameworks, and a semiotic-based BCI aimed at increasing the intuitiveness of synthesis control processes is envisaged. These interfaces may open up new applications adapted to both disabled and healthy users.
Mitsuko Aramaki, Richard Kronland-Martinet, Sølvi Ystad, Jean-Arthur Micoulaud-Franchi, Jean Vion-Dury
Chapter 5. Machine Learning to Identify Neural Correlates of Music and Emotions
Abstract
While music is widely understood to induce an emotional response in the listener, the exact nature of that response and its neural correlates are not yet fully explored. Furthermore, the large number of features which may be extracted from, and used to describe, neurological data, music stimuli, and emotional responses, means that the relationships between these datasets produced during music listening tasks or the operation of a brain–computer music interface (BCMI) are likely to be complex and multidimensional. As such, they may not be apparent from simple visual inspection of the data alone. Machine learning, which is a field of computer science that aims at extracting information from data, provides an attractive framework for uncovering stable relationships between datasets and has been suggested as a tool by which neural correlates of music and emotion may be revealed. In this chapter, we provide an introduction to the use of machine learning methods for identifying neural correlates of musical perception and emotion. We then provide examples of machine learning methods used to study the complex relationships between neurological activity, musical stimuli, and/or emotional responses.
Ian Daly, Etienne B. Roesch, James Weaver, Slawomir J. Nasuto
Chapter 6. Emotional Responses During Music Listening
Abstract
The aim of this chapter is to summarize the current knowledge about music and emotion from a multi-disciplinary perspective. Existing emotional models and their adequacy in describing emotional responses to music are described and discussed in different applications. The underlying emotion-induction mechanisms besides cognitive appraisal are presented, and their implications for the field are analyzed. Musical characteristics such as tempo, mode and loudness are inherent properties of the musical structure and have been shown to influence emotional states during music listening. The role of each individual parameter in emotional responses, as well as their interactions, is reviewed and analyzed. Different ways of measuring emotional responses to music are described, and their adequacy in accounting for those responses is discussed. The main physiological responses to music listening are briefly discussed, and their application to emotion recognition and emotional intelligence in human–machine interaction is described. Music processing in the brain involves different brain areas, and several studies have attempted to investigate brain activity in relation to emotion during music listening through EEG signals. The issues and challenges of assessing human emotion through EEG are presented and discussed. Finally, an overview of problems that remain to be addressed in future research is given.
Konstantinos Trochidis, Emmanuel Bigand
Chapter 7. A Tutorial on EEG Signal-processing Techniques for Mental-state Recognition in Brain–Computer Interfaces
Abstract
This chapter presents an introductory overview and tutorial of signal-processing techniques that can be used to recognize mental states from electroencephalographic (EEG) signals in brain–computer interfaces. More specifically, it presents how to extract relevant and robust spectral, spatial, and temporal information from noisy EEG signals (e.g., band-power features, and spatial filters such as common spatial patterns or xDAWN), as well as a few classification algorithms (e.g., linear discriminant analysis) used to map this information onto mental-state classes. It also briefly touches on alternative, but currently less used, approaches. The overall objective of this chapter is to provide the reader with practical knowledge of how to analyze EEG signals, and to stress the key points to understand when performing such an analysis.
Fabien Lotte
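The pipeline this chapter covers, band-power feature extraction followed by linear discriminant analysis, can be sketched on synthetic trials (all signal parameters below are invented for the demo; a real analysis would use measured EEG and cross-validation):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` within the frequency band (lo, hi), via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

class LDA:
    """Minimal two-class linear discriminant analysis with shared covariance."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        self.w = np.linalg.solve(Sw, m1 - m0)   # discriminant direction
        self.b = -0.5 * self.w @ (m0 + m1)      # midpoint threshold
        return self
    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Synthetic demo: class-1 trials carry extra alpha-band (8-12 Hz) activity.
fs, rng = 128, np.random.default_rng(1)
t = np.arange(fs) / fs
def trial(alpha):
    x = rng.normal(0, 1, fs)
    return x + 1.5 * np.sin(2 * np.pi * 10 * t) if alpha else x

trials = [trial(False) for _ in range(40)] + [trial(True) for _ in range(40)]
y = np.array([0] * 40 + [1] * 40)
X = np.array([[band_power(x, fs, (8, 12)),     # alpha power
               band_power(x, fs, (13, 30))]    # beta power
              for x in trials])

clf = LDA().fit(X, y)
accuracy = (clf.predict(X) == y).mean()
```

With only 40 trials per class and evaluation on the training data, the accuracy here is optimistic; the point is the shape of the feature-extraction-then-classification pipeline.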
Chapter 8. An Introduction to EEG Source Analysis with an Illustration of a Study on Error-Related Potentials
Abstract
Over the last twenty years, blind source separation (BSS) has become a fundamental signal-processing tool in the study of human electroencephalography (EEG) and other biological data, as well as in many other domains such as speech, image processing, geophysics, and wireless communications. This chapter opens with a short review of brain volume-conduction theory, demonstrating that BSS modeling is grounded in current physiological knowledge. It then illustrates a general BSS scheme requiring the estimation of second-order statistics (SOS) only. A simple and efficient implementation based on the approximate joint diagonalization of covariance matrices (AJDC) is described. The method operates in the same way in the time or frequency domain (or both at once) and is capable of explicitly modeling physiological and experimental sources of variation with remarkable flexibility. Finally, the chapter provides a specific example illustrating the analysis of a new experimental study on error-related potentials.
Marco Congedo, Sandra Rousseau, Christian Jutten
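The AJDC method described above jointly diagonalizes a whole set of covariance matrices; a minimal second-order-statistics relative (an AMUSE-style separation, which whitens the mixtures and then diagonalizes a single time-lagged covariance) conveys the idea in a few lines. The sources, lag and mixing matrix below are synthetic choices for the demo, not values from the chapter:

```python
import numpy as np

def sos_bss(X, lag=5):
    """AMUSE-style blind source separation from second-order statistics:
    whiten the mixed signals, then diagonalize one time-lagged covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]                    # zero-lag covariance
    d, E = np.linalg.eigh(C0)
    W_white = E @ np.diag(1 / np.sqrt(d)) @ E.T  # whitening matrix
    Z = W_white @ X
    C_tau = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    C_tau = (C_tau + C_tau.T) / 2                # symmetrize
    _, V = np.linalg.eigh(C_tau)                 # rotation separating sources
    return V.T @ W_white                         # full unmixing matrix

# Demo: two sinusoidal sources with distinct spectra, linearly mixed.
t = np.arange(2000) / 2000.0
S = np.vstack([np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 40 * t)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
X = A @ S

S_hat = sos_bss(X) @ X                           # recovered sources
# Each recovered row should match one source up to sign and order.
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
```

AJDC generalizes this by diagonalizing many lagged or band-wise covariance matrices at once, which is far more robust on noisy EEG than the single-matrix sketch here.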
Chapter 9. Feature Extraction and Classification of EEG Signals. The Use of a Genetic Algorithm for an Application on Alertness Prediction
Abstract
This chapter presents a method to automatically determine the alertness state of humans. Such a task is relevant in diverse domains where a person is expected or required to be in a particular state of alertness. For instance, pilots, security personnel, or medical personnel are expected to be in a highly alert state, and this method could help to confirm this or detect possible problems. In this work, electroencephalographic (EEG) data from 58 subjects in two distinct vigilance states (states of high and low alertness) was collected via a cap with 58 electrodes. Thus, a binary classification problem is considered. To apply the proposed approach in a real-world scenario, it is necessary to build a prediction method that requires only a small number of sensors (electrodes), minimizing the total cost and maintenance of the system while also reducing the time required to properly set up the EEG cap. The approach presented in this chapter applies a preprocessing method for EEG signals based on discrete wavelet decomposition (DWT) to extract the energy of each frequency in the signal. A linear regression is then performed on the energies of some of these frequencies, and the slope of this regression is retained as a feature. A genetic algorithm (GA) is used to optimize the selection of frequencies on which the regression is performed and to select the best recording electrode. Results show that the proposed strategy derives accurate predictive models of alertness.
Pierrick Legrand, Laurent Vézard, Marie Chavent, Frédérique Faïta-Aïnseba, Leonardo Trujillo
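The wavelet-energy/regression-slope feature at the heart of this approach can be sketched with a hand-rolled Haar DWT (the GA-based frequency and electrode selection is omitted; the signals and level count are illustrative assumptions):

```python
import numpy as np

def haar_dwt_energies(signal, levels):
    """Energy of the Haar DWT detail coefficients at each decomposition level."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        x = x[: len(x) // 2 * 2]                   # keep an even length
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-frequency residue
        x      = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-frequency approximation
        energies.append(float(np.sum(detail ** 2)))
    return np.array(energies)

def energy_slope(signal, levels=6):
    """Slope of a linear regression over per-level log-energies: one scalar
    summarizing how energy is distributed across frequency bands."""
    e = haar_dwt_energies(signal, levels)
    return np.polyfit(np.arange(levels), np.log(e + 1e-12), 1)[0]

# Fast oscillations put energy in the finest levels (negative slope);
# slow oscillations push it towards the coarsest levels (positive slope).
t = np.arange(1024)
s_fast = energy_slope(np.sin(2 * np.pi * 0.25 * t))
s_slow = energy_slope(np.sin(2 * np.pi * 0.004 * t))
```

A single slope per electrode is exactly the kind of low-dimensional feature a GA can then search over when picking the most informative electrode and frequency range.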
Chapter 10. On Mapping EEG Information into Music
Abstract
With the rise of ever-more affordable EEG equipment available to musicians, artists and researchers, designing and building a brain–computer music interface (BCMI) system has recently become a realistic achievement. This chapter discusses previous research in the fields of mapping, sonification and musification in the context of designing a BCMI system, and will be of particular interest to those who seek to develop their own. Designing a BCMI requires unique considerations due to the characteristics of the EEG as a human interface device (HID). This chapter analyses traditional strategies for mapping control from brainwaves alongside previous research in biofeedback musical systems. Advances in music technology have enabled more complex approaches to how music can be affected and controlled by brainwaves. This, paralleled with developments in our understanding of brainwave activity, has helped push brain–computer music interfacing into innovative realms of real-time musical performance, composition and applications for music therapy.
Joel Eaton, Eduardo Reck Miranda
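A mapping layer of the kind discussed here can be as simple as rescaling a band-power reading onto a musical scale; the sketch below (the scale choice and calibration bounds are arbitrary assumptions, not a mapping from the chapter) shows one such direct mapping to MIDI note numbers:

```python
import numpy as np

# A C-major pentatonic scale over two octaves, as MIDI note numbers.
SCALE = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

def power_to_note(power, p_min, p_max, scale=SCALE):
    """Linearly rescale a band-power reading onto the scale and pick a note.
    p_min and p_max would be calibrated per user during a baseline recording."""
    x = np.clip((power - p_min) / (p_max - p_min), 0.0, 1.0)
    return scale[int(np.round(x * (len(scale) - 1)))]
```

Feeding successive alpha-band power estimates through `power_to_note` yields a pitch stream that rises and falls with the measured activity; in practice the power estimates are smoothed over time to avoid erratic jumps.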
Chapter 11. Retroaction Between Music and Physiology: An Approach from the Point of View of Emotions
Abstract
It is well known that listening to music produces particular physiological reactions in the listener, yet the study of these relationships remains a largely unexplored field. When analyzing physiological signals measured from a person listening to music, one first has to define models to determine what information can be observed in these signals. Conversely, generating music from physiological data is an attempt to create the inverse of this naturally occurring relationship; to do so, one also has to define models enabling the control of all the parameters of a generative music system, in a coherent way, from the limited physiological information available. The notion of emotion, besides being particularly appropriate in this context, proves to be a central concept allowing the articulation between musical and physiological models. We propose in this article an experimental real-time system, based on the paradigm of emotions, for studying the interactions and retroactions between music and physiology.
Pierre-Henri Vulliard, Joseph Larralde, Myriam Desainte-Catherine
Chapter 12. Creative Music Neurotechnology with Symphony of Minds Listening
Abstract
A better understanding of the musical brain, combined with technical advances in biomedical engineering and music technology, is pivotal for the development of increasingly sophisticated brain–computer music interfacing (BCMI) systems. BCMI research has been very much motivated by its potential benefits to the health and medical sectors, as well as to the entertainment industry. However, we advocate that the potential impact on musical creativity of a better scientific understanding of the brain, and of increasingly sophisticated technology to scan its activity, should not be ignored. In this chapter, we introduce an unprecedented approach to musical composition, which combines brain-imaging technology, musical artificial intelligence and neurophilosophy. We discuss Symphony of Minds Listening, an experimental composition for orchestra in three movements, based on fMRI scans taken from three different people while they listened to the second movement of Beethoven’s Seventh Symphony.
Eduardo Reck Miranda, Dan Lloyd, Zoran Josipovic, Duncan Williams
Chapter 13. Passive Brain–Computer Interfaces
Abstract
Passive brain–computer interfaces (passive BCI), also known as implicit BCI, provide information about the user's mental activity to a computerized application without requiring the user to voluntarily control their brain activity. Passive BCI seem particularly relevant in the context of music creation, where they can provide novel information with which to adapt the music-creation process (e.g., using the user's mental concentration state to adapt the music tempo). In this chapter, we present an overview of the use of passive BCI in different contexts, describing how they are used and the commonly employed signal-processing schemes.
Laurent George, Anatole Lécuyer
Backmatter
Metadata
Title
Guide to Brain-Computer Music Interfacing
Edited by
Eduardo Reck Miranda
Julien Castet
Copyright Year
2014
Publisher
Springer London
Electronic ISBN
978-1-4471-6584-2
Print ISBN
978-1-4471-6583-5
DOI
https://doi.org/10.1007/978-1-4471-6584-2
