
About This Book

This unique reference book offers a holistic description of the multifaceted field of systematic musicology, which is the study of music, its production and perception, and its cultural, historical and philosophical background. The seven sections reflect the main topics in this interdisciplinary subject. The first two parts discuss musical acoustics and signal processing, comprehensively describing the mathematical and physical fundamentals of musical sound generation and propagation. The complex interplay of physiology and psychology involved in sound and music perception is covered in the following sections, with a particular focus on psychoacoustics and the recently evolved research on embodied music cognition. In addition, a huge variety of technical applications for professional training, music composition and consumer electronics are presented. A section on music ethnology completes this comprehensive handbook. Music theory and philosophy of music are embedded throughout. Carefully edited and written by internationally respected experts, it is an invaluable reference resource for professionals and graduate students alike.

Table of Contents

Frontmatter

1. Systematic Musicology: A Historical Interdisciplinary Perspective

A brief account of the historical development of systematic musicology as a field of interdisciplinary research is given that spans from Greek antiquity to the present. Selected topics cover the rise of music theory from the Renaissance to modern times, the issue of harmonic dualism from Zarlino and Rameau to the 20th century and the controversy about physicalism versus musical logic in music theory. Sections of this chapter further relate to the notion of a system and the concept of systematic research (which is exemplified with respect to the work of Chladni, Helmholtz, Stumpf, and Riemann), and to the concept of Gestalt quality that spawned contributions to music perception from Gestalt psychology. In addition, some developments in music psychology outside the Gestalt movement as well as in the sociology of music are sketched, followed by a paragraph on modern research trends, which include semiotic, computational, and linguistic approaches to music perception and cognition as well as contributions from the neurosciences. A final paragraph provides some of the background that led to establishing systematic musicology as an academic discipline around 1870–1910, from where further disciplinary and scientific developments defined the field in the 20th century.

Albrecht Schneider

Musical Acoustics and Signal Processing

Frontmatter

2. Vibrations and Waves

This chapter deals with vibration and wave propagation under the general assumption that amplitudes are sufficiently small to neglect nonlinear effects when vibrations or waves are superimposed. It will be shown how wave equations can be derived for strings, bars and air columns and how analytic results can be obtained for some boundary conditions. This chapter will also review techniques for the calculation of resonance frequencies. Finally, an introduction to the analysis of real musical instruments in the frequency domain will be given.

Wilfried Kausel
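The resonance-frequency calculations discussed in this chapter have a simple closed form for the ideal string fixed at both ends: f_n = (n/2L)·sqrt(T/μ). The following Python sketch is a minimal illustration of that result (it is not taken from the chapter, and the parameter values are purely illustrative):

```python
import math

def string_resonances(length_m, tension_n, lin_density_kg_m, n_modes=5):
    """Resonance frequencies (Hz) of an ideal string fixed at both ends.

    f_n = (n / 2L) * sqrt(T / mu), the standard result obtained by
    solving the 1-D wave equation with fixed boundary conditions.
    """
    c = math.sqrt(tension_n / lin_density_kg_m)  # transverse wave speed
    return [n * c / (2.0 * length_m) for n in range(1, n_modes + 1)]

# A string roughly like a violin A string (illustrative values only):
freqs = string_resonances(length_m=0.33, tension_n=60.0,
                          lin_density_kg_m=6.4e-4)
```

For real strings, stiffness and damping shift these frequencies away from the exact harmonic series, which is part of what the chapter's more general treatment addresses.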

3. Waves in Two and Three Dimensions

This chapter deals with the generalization of the wave equation to describe wave propagation on two-dimensional surfaces and sound waves in a three-dimensional space. Again linearity is postulated, which is only justified if amplitudes are sufficiently small. It will be shown how wave equations can be derived for rectangular and circular membranes, plates and disks and how analytic results can be obtained for a three-dimensional case with relatively simple boundary conditions. This chapter will also review techniques for the calculation of resonance frequencies and for the prediction of associated modal shapes.

Wilfried Kausel
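For the circular membrane treated in this chapter, the axisymmetric mode frequencies are proportional to the zeros of the Bessel function J0. The following self-contained sketch (an illustration, not the chapter's own derivation) locates the first two zeros numerically, using the power series for J0 and bisection, and shows that the resulting mode ratio is inharmonic:

```python
def j0(x):
    """Bessel function J0 via its power series (adequate for x < ~12)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)
        total += term
    return total

def bisect_zero(f, lo, hi, tol=1e-12):
    """Root of f in [lo, hi] by bisection (f must change sign there)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The first two zeros of J0 are known to lie in (2, 3) and (5, 6).
j01 = bisect_zero(j0, 2.0, 3.0)
j02 = bisect_zero(j0, 5.0, 6.0)

# Axisymmetric mode frequencies scale with these zeros, so the second
# mode is NOT an octave above the first -- the ideal membrane is inharmonic:
ratio = j02 / j01   # ~2.30 rather than 2.00
```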

4. Construction of Wooden Musical Instruments

This work aims to provide an overview of why and how wood is used in musical instruments, primarily strings, woodwind and percussion. The introduction is a description of the desirable properties of a musical instrument and how these relate to the physical properties of wood. A summary is given of the most important woods mentioned in this chapter, including common and Latin names. Section 4.2 discusses the physical properties of woods most relevant to musical instruments and how they relate to their biological taxonomy and also to organology. Sections 4.3 and 4.4 are devoted respectively to woods that make up the acoustically radiant parts of instruments (tonewoods), and those whose function is to transmit vibrations from one part to another, or are simply structural (framewoods). Section 4.5 deals with how the wood is selected, prepared, assembled into an instrument, and finished.

Chris Waltham, Shigeru Yoshikawa

5. Measurement Techniques

The measurements normally required to understand the physics of musical instruments, including the human voice, usually fall into one of three categories: measuring the airborne sound, measuring the deflection of the surface of an instrument, or measuring the input impedance. This chapter introduces the most common measurement techniques that provide information on these three physical parameters with an emphasis on the first two, which are the measurements most commonly desired by musical acousticians. The chapter begins with a discussion of airborne sound and how it is sensed. Specifically, several types of microphones are introduced followed by a discussion of some of the techniques that rely on sensing by microphones. A review of the techniques for measuring and visualizing deflection shapes is then presented. These techniques range from observing nodal lines using simple Chladni patterns to visualizing deflection shapes using electronic speckle pattern interferometry. The topic of impedance measurement is addressed next, with discussions of both measurements of the input impedance of wind instruments and the measurement of mechanical impedance. This review is not meant to be a complete analysis of each measurement technique. Instead, it is meant to serve as an introduction to the most commonly used techniques and provide references for the interested reader to pursue further study. The advent of new technologies continually changes the equipment that is available to the scientist, but the underlying physical principles remain relevant.

Thomas Moore

6. Some Observations on the Physics of Stringed Instruments

We provide a general introduction to stringed instruments, focusing on the piano, guitar, and violin. These are representative of instruments in which the strings are excited by striking (the piano), plucking (the guitar), and bowing (the violin). We begin by discussing, in a general way, the strings and soundboards, and how these couple to the surrounding air to generate sound. Important features specific to these instruments are then discussed, with particular attention to the different ways the strings are set into motion, key differences in the way the soundboards vibrate, and the effects of these differences on the resulting musical tones.

Nicholas Giordano

7. Modeling of Wind Instruments

Wind instruments driven by a constant pressure air reservoir produce a steady oscillation and associated sound waves. This self-sustained oscillation can be explained in terms of a lumped-element feedback loop composed of an exciter, such as a reed-valve or an unstable jet, coupled to an acoustical air column resonator, usually a pipe. In this chapter this simplified model is used to classify wind instruments. Five prototype wind instruments are selected: the clarinet, the oboe, the harmonica, the trombone and the modern transverse flute. The elements of this feedback loop are described for each instrument. In simplified models the player is reduced to the role of a pressure reservoir. The player's control, also called the embouchure, is however essential. This aspect is discussed briefly for each instrument.

Benoit Fabre, Joël Gilbert, Avraham Hirschberg

8. Properties of the Sound of Flue Organ Pipes

This chapter is an overview of the characteristic sound properties of flue organ pipes. The characteristic properties of the stationary spectrum and attack transient have been surveyed and assigned to properties of the physical systems (air column as acoustic resonator, air jet as hydrodynamic oscillator, and pipe wall as mechanical resonator) involved in the sound generation process. The measurements presented underline the primary role of the acoustic resonator in the stationary sound and of the edge tone in the attack.

Judit Angster, András Miklós

9. Percussion Musical Instruments

Percussion instruments are an important part of every musical culture. Although they are probably our oldest musical instruments (with the exception of the human voice), there has been less research on the acoustics of percussion instruments, as compared to wind or string instruments. Quite a number of scientists, however, continue to study these instruments. Over the years we have written several review articles on the acoustics of percussion instruments [9.1, 9.2] as well as a book [9.3]. They are also the subject of chapters in most books on musical acoustics and on musical instruments [9.4, 9.5, 9.6, 9.7, 9.8].

Andrew C. Morrison, Thomas D. Rossing

10. Musical Instruments as Synchronized Systems

Most musical instrument families have nearly perfect harmonic overtone series, for example plucked, bowed, or wind instruments. However, given the complex geometry and nonlinear driving mechanisms of many of these instruments, we would expect them to have very inharmonic overtone series. Thus, for musical instruments to play notes that we accept as harmonic sounds, synchronization needs to occur to arrive at the perfect harmonic overtone series the instruments actually produce. The reasons for this synchronization differ between the singing voice, organs, saxophones or clarinets, violin bowing, and plucked stringed instruments. However, when examining the mechanisms of synchronization further, we find general rules and suitable algorithms to understand the basic behavior of these instruments.

Rolf Bader
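A minimal way to see how synchronization pulls detuned oscillations toward a common behavior is a pair of mutually coupled phase oscillators, a generic textbook sketch rather than one of the instrument-specific mechanisms analyzed in the chapter. The two oscillators phase-lock whenever the coupling K exceeds half their angular-frequency difference, after which the phase difference settles at asin((w2 - w1)/(2K)):

```python
import math

def coupled_phase_difference(w1, w2, coupling, dt=1e-3, steps=50000):
    """Euler integration of two mutually coupled phase oscillators:
        d(theta1)/dt = w1 + K*sin(theta2 - theta1)
        d(theta2)/dt = w2 + K*sin(theta1 - theta2)
    Returns the final phase difference theta2 - theta1, which becomes
    constant (phase locking) when |w2 - w1| < 2K."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + coupling * math.sin(d))
        th2 += dt * (w2 - coupling * math.sin(d))
    return th2 - th1

# Two oscillators detuned by 2 Hz (e.g., 440 vs. 442 Hz) lock for K = 50:
diff = coupled_phase_difference(w1=2 * math.pi * 440,
                                w2=2 * math.pi * 442,
                                coupling=50.0)
```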

11. Room Acoustics – Fundamentals and Computer Simulation

In room acoustics, analytical formulas and computer simulations can be used to predict the acoustics of spaces, not only in terms of reverberation but also of other perceptual aspects related to the perception of music or speech. In this context the room impulse response is the function of main interest. It can be measured using sophisticated instrumentation and signal processing, or it can be simulated with computer models. In the process of auralization, data and signal processing enable one to listen to the simulated rooms in order to interpret the sound in the room aurally. In real-time implementations, this is a valuable extension of the technique of virtual reality.

Michael Vorländer
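The simplest of the analytical formulas alluded to here is Sabine's reverberation equation, RT60 = 0.161·V/A, with V the room volume and A the total equivalent absorption area. The sketch below applies it to a shoebox-shaped hall with purely illustrative dimensions and absorption coefficients:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's estimate of the reverberation time RT60 (seconds).

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    RT60 = 0.161 * V / A, where A = sum of area * alpha.
    """
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# A shoebox hall, 20 x 12 x 8 m, with illustrative absorption values:
v = 20 * 12 * 8
surfaces = [(2 * (20 * 8 + 12 * 8), 0.10),  # side and end walls
            (20 * 12, 0.30),                # audience / floor
            (20 * 12, 0.05)]                # ceiling
rt60 = sabine_rt60(v, surfaces)
```

Sabine's formula assumes a diffuse field and breaks down for very absorbent rooms, which is one reason the chapter turns to impulse-response-based simulation for the other perceptual aspects.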

Signal Processing

Frontmatter

12. Music Studio Technology

Music studio technology is reviewed with respect to the different tasks involved in recordings, broadcasts, and live concerts. The chapter covers microphones and microphone arrangements, signal preconditioning and sound effects, and matters of digitalization. It also covers equipment technology such as mixing consoles, synthesizers and sequencers. Historical and contemporary audio formats are reviewed including the issues of restoration. Practical matters such as signals, connectors, cables and grounding problems are addressed due to their significance to sound quality. The general trend towards audio networks is shown. Finally, speakers, reference listening and reinforcement systems are outlined, including some of the multidimensional formats.

Robert Mores

13. Delay-Lines and Digital Waveguides

A digital delay line is a particular type of finite impulse response (FIR) filter that has many useful applications in audio signal processing. Simply put, signals that are input to a delay line reappear at the output after a specified time period (in samples). Delay lines are often implemented to support delay times that vary dynamically, and delay times corresponding to noninteger sample lengths can be approximated. Time delay of signals is fundamental to signal processing systems. In this chapter, we focus on applications in digital audio signal processing and, in particular, the modeling of wave propagation in air and in strings. The fundamentals of delay lines will be introduced and their implementation detailed, including common fractional-delay filtering techniques. Feedforward and feedback comb filters are simple signal processing structures built with delay lines; they exhibit characteristics that make them interesting not only for delay-based audio effects algorithms but also as simple models of acoustic wave propagation. Finally, the use of delay lines to simulate wave propagation in one-dimensional waveguides will be introduced with a focus on the synthesis of plucked string instrument sounds.

Gary Scavone
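A classic example combining a feedback delay line with a lowpass filter in the loop is Karplus-Strong plucked-string synthesis, closely related to the one-dimensional waveguide models mentioned above. A minimal sketch (integer delay length only, with none of the fractional-delay refinements the chapter covers):

```python
import random
from collections import deque

def karplus_strong(freq_hz, sample_rate=44100, duration_s=0.5, seed=0):
    """Plucked-string synthesis with a feedback delay line (Karplus-Strong).

    A delay line of length sample_rate/freq_hz is filled with noise (the
    'pluck'); each output sample leaves the delay line, is averaged with
    its neighbor (a simple lowpass modeling string damping), and the
    result is fed back into the line.
    """
    rng = random.Random(seed)
    n = int(sample_rate / freq_hz)   # integer delay-line length (sets pitch)
    line = deque(rng.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sample_rate * duration_s)):
        sample = line.popleft()
        out.append(sample)
        line.append(0.5 * (sample + line[0]))   # lowpassed feedback
    return out

tone = karplus_strong(440.0)   # half a second of a decaying 'pluck'
```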

14. Convolution, Fourier Analysis, Cross-Correlation and Their Interrelationship

The scope of this chapter is to derive and explain three fundamental concepts of acoustical signal analysis and synthesis: convolution, Fourier transformation, and cross-correlation. Convolution is an important process in acoustics for determining how a signal is transformed by an acoustical system that can be described through an impulse response, such as a room. Fourier analysis enables us to analyze a signal's properties at different frequencies. This method is then extended to the Fourier transformation to convert signals from the time domain to the frequency domain and vice versa. Further, the method of cross-correlation is introduced by extending the orthogonality relations for trigonometric functions that were used to derive Fourier analysis. Cross-correlation is a fundamental method for comparing two signals. We will use this method to extract the impulse response of a room by comparing a signal measured with a microphone, after it has been transformed by the room, with the original measurement signal emitted into the room by a loudspeaker. Based on this and other examples, the mathematical relationships between convolution, Fourier transformation, and correlation are explained to facilitate a deeper understanding of these fundamental concepts.

Jonas Braasch
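The impulse-response extraction described here can be demonstrated in a toy setting: excite a known FIR "room" (a hypothetical three-tap response, chosen for this illustration) with white noise and recover the taps from the input-output cross-correlation, since for white noise R_xy(τ) ≈ σ²·h(τ):

```python
import random

def cross_correlate(x, y, max_lag):
    """Biased estimate of R_xy(tau) = E[x(t) * y(t + tau)], tau = 0..max_lag-1."""
    n = len(x)
    return [sum(x[i] * y[i + tau] for i in range(n - tau)) / n
            for tau in range(max_lag)]

# A toy 'room': a hypothetical three-tap FIR impulse response.
h = [1.0, 0.0, 0.6]

rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(20000)]            # measurement signal
y = [sum(h[k] * x[i - k] for k in range(len(h)) if i >= k)
     for i in range(len(x))]                               # 'microphone' signal

# For white noise the autocorrelation is a scaled impulse, so dividing
# the cross-correlation by the input power recovers the impulse response:
sigma2 = sum(v * v for v in x) / len(x)
h_est = [r / sigma2 for r in cross_correlate(x, y, len(h))]
```

In practice deterministic excitations such as sweeps are preferred over noise, but the underlying correlation argument is the same.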

15. Audio Source Separation in a Musical Context

When musical instruments are recorded in isolation, modern editing and mixing tools allow correction of small errors without requiring a group to re-record an entire passage. Isolated recording also allows rebalancing of levels between musicians without re-recording, as well as application of audio effects to individual instruments. Many of these techniques require (nearly) isolated instrumental recordings to work. Unfortunately, there are many recording situations (e. g., a stereo recording of a 10-piece ensemble) where there are many more instruments than microphones, making many editing or remixing tasks difficult or impossible. Audio source separation is the process of extracting individual sound sources (e. g., a single flute) from a mixture of sounds (e. g., a recording of a concert band made with a single microphone). Effective source separation would allow application of editing and remixing techniques to existing recordings with multiple instruments on a single track. In this chapter we focus on a pair of source separation approaches designed to work with music audio. The first seeks the repeated elements in the musical scene and separates the repeating from the nonrepeating. The second looks for melodic elements, pitch tracking the audio and streaming it into separate elements. Finally, we consider informing source separation with information from the musical score.

Bryan Pardo, Zafar Rafii, Zhiyao Duan
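The first approach, separating repeating from nonrepeating elements, can be illustrated with a deliberately simplified time-domain toy; actual systems of this kind (such as REPET) estimate the period automatically and operate on time-frequency representations. If the repetition period is known, a median across repetitions robustly estimates the repeating part, and the residual captures the sparse nonrepeating events:

```python
import random
from statistics import median

def extract_repeating(mixture, period):
    """Toy repetition-based separation: at each position within the period,
    the median across all repetitions estimates the repeating component;
    the residual captures sparse nonrepeating ('melodic') events."""
    n_reps = len(mixture) // period
    one_period = [median(mixture[r * period + i] for r in range(n_reps))
                  for i in range(period)]
    repeating = [one_period[i % period] for i in range(len(mixture))]
    residual = [m - s for m, s in zip(mixture, repeating)]
    return repeating, residual

# Toy mixture: a period-8 'accompaniment' plus 30 sparse 'melody' spikes.
rng = random.Random(7)
pattern = [rng.uniform(-1.0, 1.0) for _ in range(8)]
background = [pattern[i % 8] for i in range(8 * 50)]
melody = [0.0] * len(background)
for i in rng.sample(range(len(background)), 30):
    melody[i] = rng.uniform(2.0, 3.0)
mix = [b + m for b, m in zip(background, melody)]

rep_est, mel_est = extract_repeating(mix, period=8)
```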

16. Automatic Score Extraction with Optical Music Recognition (OMR)

Optical music recognition (OMR) describes the process of automatically transcribing music notation from a digital image. Although similar to optical character recognition (OCR), the process and procedures of OMR diverge due to the fundamental differences between text and music notation, such as the two-dimensional nature of the notation system and the overlay of music symbols on top of staff lines. The OMR process can be described as a sequence of steps, with techniques adapted from disciplines including image processing, machine learning, grammars, and notation encoding. The sequence and specific techniques used can differ depending on the condition of the image, the type of notation, and the desired output. Several commercial and open-source OMR software systems have been available since the mid-1990s. Most of them are designed to be used by individuals and recognize common (post-18th-century) Western music notation, though there have been some efforts to recognize other types of music notation such as for the lute and for earlier Western music. Even though traditional applications of OMR have focused on small-scale recognition tasks, typically as an automated method of musical entry for score editing, new applications of large-scale OMR are under development, where automated recognition is the central technology for building full-music search systems, similar to the large-scale full-text recognition efforts.

Ichiro Fujinaga, Andrew Hankinson, Laurent Pugin

17. Adaptive Musical Control of Time-Frequency Representations

In this chapter we consider control structures and mapping in the process of deciding upon the underlying sonic algorithm for a digital musical instrument. We focus on control of timbral and textural phenomena that arise from the interaction and modulation of stationary spectral components, as well as from stochastic elements of sound. Given this observation and general design criteria, we focus on a family of sound models that parameterize the stationary and stochastic components using a spectral representation that is commonly based on an underlying short-time Fourier transform (STFT) analysis. Using this as a fundamental approach we build a dynamic model of sound analysis and synthesis, focusing on a design that will simultaneously lead to musically interesting transformations of textural and noise-based sound features while allowing for control structures to be integrated into the sound dynamics. Building upon well-established adaptive algorithms such as the Kalman filter, we present a recursive-exponential implementation, and exploit a fast algorithm derivation in order to process both additive data and the full underlying phase vocoder. The model is further augmented to allow for nonlinear adaptive control, pointing towards new directions for adaptive musical control of time-frequency models.

Doug Van Nort, Phillippe Depalle

18. Wave Field Synthesis

Wave field synthesis enables acoustic control in a listening area by systematic regulation of loudspeaker signals on its boundary. This chapter starts with an overview including the history of wave field synthesis and some exemplary installations. Next, the theoretic fundamentals of wave field synthesis are detailed. Technical implementations demand drastic simplifications of the theoretical core, which come along with restrictions of the acoustic control as well as with synthesis errors. Simplifications, resulting synthesis errors as well as the working principles of compensation methods and their effects on the wave field are extensively discussed. Finally, the current state of research and development is addressed.

Tim Ziemer
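One of the drastic simplifications mentioned above reduces each loudspeaker's driving signal to a delay and a gain derived from the geometry of a virtual point source. The sketch below is only a toy delay-and-amplitude approximation of that idea, not one of the chapter's actual driving functions; the speaker positions and source location are made up for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def driving_parameters(speaker_xs, source_pos):
    """Per-loudspeaker (delay_s, gain) pairs for a virtual point source
    behind a linear array along the x-axis (toy delay-and-amplitude model):
    each speaker plays the source signal delayed by the travel time from
    the virtual source and attenuated with distance."""
    sx, sy = source_pos
    params = []
    for x in speaker_xs:
        r = math.hypot(x - sx, sy)          # source-to-speaker distance
        params.append((r / SPEED_OF_SOUND, 1.0 / r))
    return params

# Eight speakers spaced 0.2 m apart, virtual source 2 m behind the array:
xs = [0.2 * i for i in range(8)]
params = driving_parameters(xs, source_pos=(0.7, -2.0))
```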

19. Finite-Difference Schemes in Musical Acoustics: A Tutorial

The functioning of musical instruments is well described by systems of partial differential equations. Whether one's interest is in pure musical acoustics or physical modeling of sound synthesis, numerical simulation is a necessary tool, and may be carried out by a variety of means. One approach is to make use of so-called finite-difference or finite-difference time-domain methods, whereby the numerical solution is computed as a recursion operating over a grid. This chapter is intended as a basic tutorial on the design and implementation of such methods, for a variety of simple systems. The one-dimensional (1-D) wave equation and simple difference schemes are covered in Sect. 19.1, accompanied by an analysis of numerical dispersion and stability, as well as implementation details via vector-matrix representations. Similar treatments follow for the case of the ideal stiff bar in Sect. 19.2, the acoustic tube in Sect. 19.3, the two-dimensional (2-D) and three-dimensional (3-D) wave equations in Sect. 19.4, and finally the stiff plate in Sect. 19.5. Some more general nontechnical comments on more complex extensions to nonlinear systems appear in Sect. 19.6.

Stefan Bilbao, Brian Hamilton, Reginald Harrison, Alberto Torin
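As a taste of the schemes developed in this chapter, the following sketch implements the standard explicit (leapfrog) scheme for the 1-D wave equation with fixed ends. At Courant number λ = 1 the scheme is free of numerical dispersion, so for this idealized string a plucked shape recurs exactly after 2N time steps:

```python
def simulate_string_fdtd(n_grid=100, n_steps=200, courant=1.0):
    """Explicit leapfrog scheme for the 1-D wave equation u_tt = c^2 u_xx
    with fixed ends, written in terms of the Courant number
    lambda = c*dt/dx (stable for lambda <= 1):

        u[m]^{n+1} = 2u[m]^n - u[m]^{n-1}
                     + lambda^2 * (u[m+1]^n - 2u[m]^n + u[m-1]^n)
    """
    lam2 = courant * courant
    # Initial condition: triangular 'pluck', started at rest (u_prev = u).
    u = [min(m, n_grid - m) / (n_grid / 2.0) for m in range(n_grid + 1)]
    u_prev = u[:]
    for _ in range(n_steps):
        u_next = u[:]
        for m in range(1, n_grid):   # endpoints stay fixed at zero
            u_next[m] = (2.0 * u[m] - u_prev[m]
                         + lam2 * (u[m + 1] - 2.0 * u[m] + u[m - 1]))
        u_prev, u = u, u_next
    return u

# After 2*n_grid steps at lambda = 1, every mode has completed a whole
# number of cycles and the pluck shape recurs:
state = simulate_string_fdtd(n_grid=100, n_steps=200)
```

Choosing λ > 1 makes this scheme unstable, which is exactly the kind of stability analysis the chapter carries out.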

20. Real-Time Signal Processing on Field Programmable Gate Array Hardware

Over the last 50 years, advances in high-speed digital signal processing (DSP) and numerical methods for audio signal processing in general were fueled by the rising processing capabilities of personal computers (PCs). Added to this was the advent of specialized coprocessing platforms like general purpose graphics processing units (GPGPUs), central processing unit (CPU)-based accelerators like Intel's Xeon Phi platforms, as well as high-performance DSP chips like Analog Devices' TigerSHARC. Still, there are applications that are not realizable on the mentioned devices in real time or even close to real time. This chapter gives an introduction to field programmable gate array (FPGA) hardware, a flexible computing platform with massively parallel logic capability that is applicable for problems of high data throughput, high clock rates and high parallelism. After an introduction to the basic structure of FPGAs, several features that enable high-throughput DSP applications are highlighted. An introduction to development platforms as well as the development methodology is given, along with an overview of current FPGA devices and their specific capabilities. Two application examples and an outlook and summary complete this chapter.

Florian Pfeifle

Music Psychology – Physiology

Frontmatter

21. Auditory Time Perception

In this chapter, we review studies on the ability to make explicit judgments about the duration of auditory time intervals. After a brief look at the main methods used to study time perception, we focus on factors affecting sensitivity to time (e. g., discrimination levels), including repetition of intervals and the duration range under investigation, as well as whether the interval is embedded in a music or in a speech sequence. Factors affecting the perceived duration of sounds and time intervals are then described, using as examples the markers' length, space, pitch, and intensity. The last section of this chapter reviews the main theoretical perspectives in the field of temporal information processing, with an emphasis on the distinction between traditional beat-based versus interval-based mechanisms.

Simon Grondin, Emi Hasuo, Tsuyoshi Kuroda, Yoshitaka Nakajima

22. Automatic Processing of Musical Sounds in the Human Brain

This chapter introduces neurophysiological evidence on the dissociation between unconscious and conscious aspects of musical sound perception. The focus is on research conducted with the event-related potential (ERP) technique, which allows chronometric investigation of information-processing stages during music listening. Findings suggest that automatic processes are confined to the auditory cortex and might even involve the discrimination of deviations from simple musical scale rules. In turn, voluntary, cognitive processes, likely originating from the inferior prefrontal cortex, are necessary to understand more complex musical rules, such as tonality and harmony. The implications of understanding how and to what extent music is processed below the level of consciousness are discussed in rehabilitation and therapeutic settings.

Elvira Brattico, Chiara Olcese, Mari Tervaniemi

23. Long-Term Memory for Music

This chapter begins with a brief overview of traditional approaches to long-term memory in general. It highlights the memory system known as semantic memory, and then in subsequent sections explores the proposal that a purely musical semantic memory may be identified and is isolable from semantic memory in other, nonmusical, domains. Finally, neuropsychological evidence for the selective sparing of the musical semantic memory system in dementia is reviewed.

Lola L. Cuddy

24. Auditory Working Memory

This chapter reviews behavioral and neuroimaging findings on:
1. The comparison between verbal and tonal working memory (WM)
2. The impact of musical training
3. The role of sound mimicry for auditory memory
4. The influence of long-term memory (LTM) on auditory WM performance, i. e., the effect of strategy use on auditory WM.
Whereas the core structures, namely Broca's area, the premotor cortex, and the inferior parietal lobule, show a substantial overlap, results in musicians suggest that there are also different subcomponents involved during verbal and tonal WM. If confirmed, these results indicate that musicians develop either independent tonal and phonological loops or unique processing strategies that allow novel interactive use of the WM systems. We furthermore present and discuss data that provide substantial support for the hypothesis that motor-related processes assist auditory WM, and as a result we propose a strong link between sound mimicry and auditory WM.

Katrin Schulze, Stefan Koelsch, Victoria Williamson

25. Musical Syntax I: Theoretical Perspectives

The understanding of musical syntax is a topic of fundamental importance for systematic musicology and lies at the core intersection of music theory and analysis, music psychology, and computational modeling. This chapter discusses the notion of musical syntax and its potential foundations based on notions such as sequence grammaticality, expressive unboundedness, generative capacity, sequence compression and stability. Subsequently, it discusses problems concerning the choice of musical building blocks to be modeled as well as the underlying principles of sequential structure building. The remainder of the chapter reviews the main theoretical proposals that can be characterized under different mechanisms of structure building, in particular approaches using finite-context or finite-state models as well as tree-based models of context-free complexity (including the Generative Theory of Tonal Music) and beyond. The chapter concludes with a discussion of the main issues and questions driving current research and a preparation for the subsequent empirical chapter Musical Syntax II.

Martin Rohrmeier, Marcus Pearce

26. Musical Syntax II: Empirical Perspectives

Efforts to develop a formal characterization of musical structure are often framed in syntactic terms, sometimes but not always with direct inspiration from research on language. In Chap. 25, we present syntactic approaches to characterizing musical structure and survey a range of theoretical issues involved in developing formal syntactic theories of sequential structure in music. Such theories are often computational in nature, lending themselves to implementation and our first goal here is to review empirical research on computational modeling of musical structure from a syntactic point of view. We ask about the motivations for implementing a model and assess the range of approaches that have been taken to date. It is important to note that while a computational model may be capable of deriving an optimal structural description of a piece of music, human cognitive processing may not achieve this optimal performance, or may even process syntax in a different way. Therefore we emphasize the difference between developing an optimal model of syntactic processing and developing a model that simulates human syntactic processing. Furthermore, we argue that, while optimal models (e. g., optimal compression or prediction) can be useful as a benchmark or yardstick for assessing human performance, if we wish to understand human cognition then simulating human performance (including aspects that are nonoptimal or even erroneous) should be the priority. Following this principle, we survey research on processing of musical syntax from the perspective of computational modeling, experimental psychology and cognitive neuroscience. There exists a large number of computational models of musical syntax, but we limit ourselves to those that are explicitly cognitively motivated, assessing them in the context of theoretical, psychological and neuroscientific research.

Marcus Pearce, Martin Rohrmeier

27. Rhythm and Beat Perception

From established musicians to musical novices, humans perceive temporal patterns in music and respond to them. There is much that we still do not understand, however, about how the temporal patterns of music are processed in the brain. Understanding the neural mechanisms that underlie processing of temporal sequences will help us learn why humans perceive the temporal regularities, or periodicities, in musical rhythms. Therefore, in this chapter, we discuss the latest findings in beat perception research, touching on both behavioral and neuroimaging findings from studies that have used electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and transcranial magnetic stimulation (TMS). Overall, the findings establish the importance of both auditory and motor brain areas in rhythm and beat processing. We also discuss the implications of beat perception research and highlight the challenges currently facing the field.

Tram Nguyen, Aaron Gibbings, Jessica Grahn

28. Music and Action

In this chapter, the relationship between music and action is examined from two perspectives: one where individuals learn to play an instrument, and another where music induces movement in a listener. For both perspectives, we review experimental research, mostly consisting of neuroscientific studies, as well as select behavioral investigations. We first review research examining how learning to play music induces functional coupling between motor and sensory neural processes, which ultimately changes the way in which music is perceived. Next, we review research examining how certain temporal properties of music (such as the rhythm or the beat) induce motor processes in a listener, depending on or irrespective of musical training. The coupling of perceptual and motor processes underpins predictive computations that facilitate the anticipation and adaptation of one's movement to music. Such skills in turn support the capacity to coordinate one's movements with another in the context of joint musical performance. This picture emphasizes how studying the relationship between music and action will ultimately lead us to understand music's powerful social and interpersonal potential.

Giacomo Novembre, Peter E. Keller

29. Music and Emotions

The rapid rise in emotion research in psychology has brought forth a rich palette of concepts and tools for studying emotions expressed and induced by music. This chapter summarizes the current state of music and emotion research, starting with the fundamental definitions and the assumed structures of emotions (Sect. 29.2). A synthesis of the core affects, basic emotions and complex emotions is offered to clarify this complex landscape. A vital development for the field has been the introduction of a set of mechanisms and modifiers for the induction of emotion via music, which are here connected to the structures of emotions (Sects. 29.3 and 29.4). Particular attention is given to challenges that are still waiting to be resolved, such as the cultural context and the situational context of music listening (Sect. 29.5).

Tuomas Eerola

Psychophysics/Psychoacoustics

Frontmatter

30. Fundamentals

This part of the handbook deals with sensation and perception of pitch, timbre, and loudness in humans, largely from a psychoacoustic perspective. Since there is a broad range of publications available on subjects such as hearing (including anatomy and physiology) and psychoacoustics, as well as signal processing in relation to perceptual modeling (for comprehensive summaries see, e. g., [30.1, 30.2, 30.3, 30.4, 30.5, 30.6, 30.7, 30.8, 30.9, 30.10, 30.11, 30.12]), it would be difficult, if not impossible, to condense relevant matter that has found detailed discussion elsewhere so as to fit into one part of this handbook. Rather, the approach taken here is selective: phenomena that have found extensive treatment in works on psychoacoustics and audio processing (e. g., masking) will only be briefly addressed, while a number of aspects usually given less attention shall be included. The perspective chosen aims at presenting facts and models but also turns to theoretical and methodological issues deemed necessary to understand lines of development in research. To this end, Chap. 30 of this part addresses fundamental concepts such as sensation, perception, and apperception. Since such concepts have been developed in a long process of research, and from certain philosophical backgrounds, it seems adequate to refer to at least some of the discussion found in disciplines such as philosophy, psychology, and neuroscience with respect to epistemology and research strategies. Further, ideas that are of special interest in regard to the history of psychophysics, in particular concepts developed by Theodor Fechner and Stanley Stevens, are given a critical examination. To illustrate certain facts or problems, examples are provided (such as sound analyses or other empirical data). Though music perception in humans seems to be unique with respect to the involvement of cognitive factors, research involving sensation and perception (e. g., of pitch and loudness) must also make reference to other mammals with which we share basic anatomical, physiological and neuronal structures and functions [30.13, 30.14]. Sensation and perception of pitch and timbre is basically viewed as a functional relation between certain types of sound and the processing of sounds within the sensory organ as well as on several levels along the auditory pathway (AuP). Functional in this context means that sensations and perceptions in general can be related to features inherent in sounds and that, notwithstanding variability in capabilities and performance among individuals, a relation of cause and effect holds that permits us to assume intra-individual as well as interindividual similarity and consistency of sensations and perceptions for the same set of stimulus conditions. At least it seems reasonable, as a working hypothesis, to assume that the same objective causes will provoke similar effects, in human subjects, within a certain range.

Albrecht Schneider

31. Pitch and Pitch Perception

This chapter addresses sensation and perception of pitch mainly from a functional perspective. Anatomical and physiological facts concerning the auditory pathway are provided to the extent necessary to understand excitation processes resulting from sound energy in the middle ear as well as within the cochlea. Place coding and temporal coding of sound features are viewed in regard to frequency and period as two parameters relevant for pitch perception. The Wiener–Khintchine theorem is taken as a basis to explain the correspondence between temporal periodicity and spectral harmonicity as two principles fundamental to perception of pitch and timbre. The basics of some models of the auditory periphery suited to extracting pitch from complex sounds either in the time or in the frequency domain will be outlined along with examples demonstrating how such models work for certain sounds. Sections of this chapter also address tone height and tonal quality as components of pitch as well as the rather dubious nature of the so-called tone chroma. Issues such as isolating tone quality from height (as in Shepard tones) and an alleged preference of subjects for stretched octaves are covered in a critical assessment. A subchapter on psychophysics includes the just-noticeable difference (JND) and difference limen (DL) for pitch, the concept of auditory filters known as critical bands, the sensation of roughness and dissonance, as well as special pitch phenomena (the residue and the missing fundamental, the concept of virtual pitch, combination tones). Another section covers spectral fusion, Stumpf's concept of Verschmelzung, and the sensation of consonance. Further, there are sections on categorical pitch perception as well as on absolute and relative pitch, followed by a brief survey of scale types, tone systems and intonation. The chapter closes with a section on geometric pitch models and some basic features of tonality in music.
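The Wiener–Khintchine correspondence between temporal periodicity and spectral harmonicity that the chapter builds on can be illustrated with a minimal sketch: the autocorrelation function, whose strongest peak marks the pitch period, is obtained as the inverse Fourier transform of the power spectrum. The function name and the synthetic test tone below are illustrative assumptions, not material from the chapter.

```python
import numpy as np

def autocorr_pitch(signal, sr, fmin=50.0, fmax=1000.0):
    """Illustrative pitch estimator: by the Wiener-Khintchine theorem,
    the autocorrelation equals the inverse Fourier transform of the
    power spectrum; its strongest peak marks the pitch period."""
    n = len(signal)
    # Zero-pad to 2n so the IFFT yields the linear (not circular) autocorrelation
    power = np.abs(np.fft.rfft(signal, 2 * n)) ** 2
    acf = np.fft.irfft(power)[:n]
    acf /= acf[0]                                    # normalize: acf[0] = 1
    # Restrict the peak search to lags corresponding to plausible pitch periods
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    lag = lag_min + np.argmax(acf[lag_min:lag_max])
    return sr / lag                                  # period (samples) -> frequency (Hz)

# A 220 Hz complex tone: fundamental plus two harmonics
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
tone = (np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.25 * np.sin(2 * np.pi * 660 * t))
print(autocorr_pitch(tone, sr))  # close to 220 Hz
```

The same sketch hints at why periodicity and harmonicity coincide: the harmonics at 440 and 660 Hz share the 220 Hz period, so the autocorrelation peak survives even when the fundamental is removed, which is the missing-fundamental phenomenon discussed in the chapter.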

Albrecht Schneider

32. Perception of Timbre and Sound Color

This chapter deals with perception of timbre or sound color. Both concepts can be distinguished in regard to terminology as well as to their historical and factual background even though both relate to some common features for which an objective (acoustic) basis exists. Sections of this chapter review in brief developments in (traditional and electronic) musical instruments as well as in research on timbre and sound color. A subchapter on sensation and perception of timbre offers a retrospective on classical concepts of tone color or sound color and reviews some modern approaches from Schaeffer's objet sonore to semantic differentials and multidimensional scaling. Taking a functional approach, acoustical features (such as transients and modulation) and perceptual attributes of timbre as well as interrelations between pitch and timbre are discussed. In a final section, fundamentals of sound segregation and auditory streaming are outlined. For most of the phenomena covered in this chapter, examples are provided including sound analyses obtained with signal processing methods.

Albrecht Schneider

33. Sensation of Sound Intensity and Perception of Loudness

This chapter is on sensation of sound intensity and perception of loudness. Since some of the relevant matter (on scaling concepts of loudness) has been presented in Chap. 30, and because a considerable portion of research on loudness is done outside musical contexts (namely, in industrial and environmental noise control as well as in audiology), this chapter condenses facts and models more than the previous two on pitch and timbre, respectively. Section 33.1 of this chapter offers the physical and physiological basis of sound intensity sensation, while Sect. 33.2 discusses features of some models of loudness sensation that have been established in psychoacoustics over the past decades. Since these models were originally designed for stationary sound signals and levels, and have been tested mostly in lab situations, they cannot adequately cover the range of real-world sound types found in natural or technical environments. In music genres such as techno presented in discos, or heavy metal performed in live music venues or at open-air festivals to audiences at very high sound pressure levels, sound is heavily processed in regard to dynamics and spectral energy, which calls for appropriate measurement and assessment of sensory effects. Different from perception of pitch (where samples of subjects respond more or less in similar ways to certain types of sound signals), perception of loudness shows a high degree of variability even within groups of musically trained subjects, reflecting their musical background and preferences (Sect. 33.3). Recent empirical evidence demonstrates that subjects judge loudness for various musical genres on a category scale (from very soft to very loud); however, the center (relative to loudness level and loudness scales) and the range of each category differ considerably for individual subjects. Finally, there is a concluding section (Sect. 33.4) in which some of the major topics and issues discussed in Chaps. 30–33 of Part D are summed up. In addition, a tentative model of the interrelationship of pitch, timbre and loudness perception is sketched.

Albrecht Schneider

Music Embodiment

Frontmatter

34. What Is Embodied Music Cognition?

Over the past decade, embodied music cognition has become an influential paradigm in music research. The paradigm holds that music cognition is strongly determined by corporeally mediated interactions with music. These interactions determine the way in which music can be conceived in terms of goals, directions, targets, values, and reward. The chapter gives an overview of the ontological and epistemological foundations, and it introduces the core concepts that define the character of the paradigm. This is followed by an overview of some analytical and empirical studies, which illustrate contributions of the embodied music cognition approach to major topics in musical expression, timing, and prediction processing. The chapter gives a viewpoint on a music research paradigm that is in full development, both in the in-depth refinement of its foundations and in the broadening of its scope and applications.

Marc Leman, Pieter-Jan Maes, Luc Nijs, Edith Van Dyck

35. Sonic Object Cognition

We evidently have features at different timescales in music, ranging from the sub-millisecond timescale of single vibrations to the timescale of a couple of hundred milliseconds, manifesting perceptually salient features such as pitch, loudness, timbre, and various transients. At the larger timescales of several hundred milliseconds, we have features such as the overall dynamic and timbral envelopes of sonic events, and at slightly larger timescales, also various rhythmic, textural, melodic, and harmonic patterns. And at still larger timescales, we have phrases, sections, and whole works of music, often lasting several minutes, and in some cases, even hours. Features at these different timescales all contribute to our experience of music; however, the focus in the present chapter is on the salient features of what has been called sonic objects, meaning holistically perceived chunks of musical sound in the very approximately 0.5–5 s duration range. A number of known constraints in the production and perception of musical sound, as well as in human behavior and perception in general, seem to converge in designating this timescale as crucial for our experience of music. The aim of this chapter is then to try to understand how sequentially unfolding and ephemeral sound and sound-related body motion can somehow be transformed in our minds into sonic objects.

Rolf Inge Godøy

36. Investigating Embodied Music Cognition for Health and Well-Being

The aim of this chapter is to highlight challenges involved in the successful deployment of the rather young paradigm of embodied music cognition in the comprehensive domain of health and well-being. Both our current society and systematic musicology are experiencing transitions that have given rise to cross-disciplinary collaboration between researchers in musicology, the sciences, and a variety of stakeholders in health and well-being. It has been shown that the interdisciplinary, empirical approach that typifies embodied music cognition research has the potential to bring new perspectives to therapeutic approaches for well-being. However, to bring this potential to fruition, researchers have to face the many challenges that arise from the difficulties of using new methods and technologies, especially when working in unfamiliar domains and contexts. In this chapter, a framework is presented that provides support to the prominent question of how know-how from the paradigm of embodied music cognition can be efficiently transferred to the sectors of health, rehabilitation, and well-being.

Micheline Lesaffre

37. A Conceptual Framework for Music-Based Interaction Systems

Music affords a wide range of interactive behaviors involving social, cognitive, emotional, and motor skills. In this chapter, we consider the role of technologies in relation to these interactions afforded by music. A general conceptual model is introduced that forms a basis to frame and understand a vast number of music-based interactive systems. In this model, we consider the necessity of coupled action–perception processes, in combination with human reward, prediction and social interaction processes. In addition, we discuss three perspectives on how music-based interaction systems may involve users' actions (monitoring, motivation, and alteration). To conclude, we discuss two case studies of technologies to illustrate the most innovative aspects of the presented model.

Pieter-Jan Maes, Luc Nijs, Marc Leman

38. Methods for Studying Music-Related Body Motion

This chapter presents an overview of some methodological approaches and technologies that can be used in the study of music-related body motion. The aim is not to cover all possible approaches, but rather to highlight some of the ones that are most relevant from a musicological point of view. This includes methods for video-based and sensor-based motion analyses, both qualitative and quantitative. It also includes discussions of the strengths and weaknesses of the different methods, and reflections on how the methods can be used in connection with other data, such as physiological or neurological data, symbolic notation, sound recordings and contextual data.

Alexander Refsum Jensenius

Music and Media

Frontmatter

39. Content-Based Methods for Knowledge Discovery in Music

This chapter presents several computational approaches aimed at supporting knowledge discovery in music. Our work combines data mining, signal processing and data visualization techniques for the automatic analysis of digital music collections, with a focus on retrieving and understanding musical structure. We discuss the extraction of midlevel feature representations that convey musically meaningful information from audio signals, and show how such representations can be used to synchronize different instances of a musical work and enable new modes of music content browsing and navigation. Moreover, we utilize these representations to identify repetitive structures and representative patterns in the signal, via self-similarity analysis and matrix decomposition techniques that can be made invariant to changes of local tempo and key. We discuss how structural information can serve to highlight relationships within music collections, and explore the use of information visualization tools to characterize the patterns of similarity and dissimilarity that underpin such relationships. With the help of illustrative examples computed on a collection of recordings of Frédéric Chopin’s Mazurkas, we aim to show how these content-based methods can facilitate the development of novel modes of access, analysis and interaction with digital content that can empower the study and appreciation of music.
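The self-similarity analysis mentioned above can be sketched in a few lines: a feature sequence (for example, one chroma vector per audio frame) is compared against itself with a cosine measure, so repeated sections show up as high-similarity blocks in the resulting matrix. The function and the toy A–B–A sequence are illustrative assumptions, not material from the chapter.

```python
import numpy as np

def self_similarity(features):
    """Cosine self-similarity matrix of a (frames x dims) feature sequence,
    as used to expose repetitive structure in music recordings."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)  # guard against zero-energy frames
    return unit @ unit.T

# Toy example: an A-B-A form encoded as two alternating feature vectors
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
seq = np.vstack([a, a, b, b, a, a])
S = self_similarity(seq)
# Frames 0-1 and 4-5 (the repeated "A" section) are maximally similar:
print(S[0, 4])  # 1.0
```

In real recordings the off-diagonal blocks are blurred by tempo and key changes, which is why the chapter's methods add invariance to local tempo and transposition on top of this basic construction.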

Juan Pablo Bello, Peter Grosche, Meinard Müller, Ron Weiss

40. Hearing Aids and Music: Some Theoretical and Practical Issues

This chapter focuses on the hearing assessment of musicians as well as how to recommend and specify the exact parameters for hearing aid amplification for hard-of-hearing people who either play musical instruments or merely like to listen to music. Much of this is based on the differences between the acoustic features of music and of speech. Music is typically listened to, or played at, a higher sound level than speech, and there are some spectral and temporal differences between music and speech that have implications for differing electro-acoustic hearing-aid technologies for the two types of input. This involves a discussion of some hearing aid technologies best suited to amplified music as well as some clinical strategies for the hearing health care professional to optimize hearing aids for music as an input. The key limitation concerning the capability of current digital hearing aids to accommodate the more intense elements of music is the analog-to-digital (A/D) converter. Expending research and development efforts on the other elements within the hearing aid will not really improve the fidelity of music unless the limitations of the A/D converter are first solved. The topic of music as an input to hearing aids and the technologies that are available is a rapidly changing one. New technologies are on the horizon, such as better A/D converters that may be implemented by various manufacturers.

Marshall Chasin, Neil S. Hockley

41. Music Technology and Education

In this chapter, the application of music information retrieval (MIR) technologies in the development of music education tools is addressed. First, the relationship between technology and music education is described from a historical point of view, starting with the earliest attempts to use audio technology for education and ending with the latest developments and current research conducted in the field. Second, three MIR technologies used within a music education context are presented:

1. The use of pitch-informed solo and accompaniment separation as a tool for the creation of practice content
2. Drum transcription for real-time music practice
3. Guitar transcription with plucking style and expression style detection.

In each case, proposed methods are clearly described and evaluated. Objective perceptual quality metrics were used to evaluate the proposed method for solo/accompaniment separation. Mean overall perceptual scores (OPS) of 24.68 and 34.68 were obtained for the solo and accompaniment tracks respectively. These scores are on par with the state-of-the-art methods with respect to perceptual quality of separated music signals. A dataset of 17 real-world multitrack recordings was used for evaluation. In the drum sound detection task, an F-measure of 0.96 was obtained for snare drum, kick drum, and hi-hat detection. For this evaluation, a dataset of 30 manually annotated real-world drum loops with an onset tolerance of 50 ms was used. For the guitar plucking style and guitar expression style detection tasks, F-measures of 0.93 and 0.83 were obtained respectively. For this evaluation, a dataset containing 261 recordings of both isolated notes as well as monophonic and polyphonic melodies with note-wise annotations was used.
To conclude the chapter, the remaining challenges that need to be addressed to more effectively use MIR technologies in the development of music education applications are described.
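The F-measures with an onset tolerance quoted above can be illustrated with a common evaluation scheme: each estimated onset is matched to at most one reference onset within the tolerance window, and precision and recall are combined into their harmonic mean. The function and the toy onset times are a sketch of this standard procedure, not the authors' evaluation code.

```python
def f_measure(est_onsets, ref_onsets, tolerance=0.05):
    """Onset-detection F-measure: an estimated onset counts as a hit if it
    falls within `tolerance` seconds of a not-yet-matched reference onset."""
    ref = sorted(ref_onsets)
    matched = [False] * len(ref)
    hits = 0
    for e in sorted(est_onsets):
        for i, r in enumerate(ref):
            if not matched[i] and abs(e - r) <= tolerance:
                matched[i] = True   # each reference onset may be claimed once
                hits += 1
                break
    precision = hits / len(est_onsets) if est_onsets else 0.0
    recall = hits / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Three reference drum hits; the estimate misses one and is 20 ms early on another
print(round(f_measure([0.48, 1.00], [0.50, 1.00, 1.50], tolerance=0.05), 2))  # 0.8
```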

Estefanía Cano, Christian Dittmar, Jakob Abeßer, Christian Kehling, Sascha Grollmisch

42. Music Learning: Automatic Music Composition and Singing Voice Assessment

Traditionally, singing skills are learned and improved by means of the supervised rehearsal of a set of selected exercises. A music teacher evaluates the user's performance and recommends new exercises according to the user's evolution. In this chapter, the goal is to describe a virtual environment that partially resembles the traditional music learning process and the music teacher's role, allowing for a complete interactive self-learning process. An overview of the complete chain of an interactive singing-learning system, including tools for music learning and concrete techniques, will be presented. In brief, first, the system should provide a set of training exercises. Then, it should assess the user's performance. Finally, the system should be able to provide the user with new exercises selected or created according to the results of the evaluation. Following this scheme, methods for the creation of user-adapted exercises and the automatic evaluation of singing skills will be presented. A technique for the dynamic generation of musically meaningful singing exercises, adapted to the user's level, will be shown. It will be based on the proper repetition of musical structures, while assuring the correctness of harmony and rhythm. Additionally, a module for singing assessment of the user's performance, in terms of intonation and rhythm, will be shown.

Lorenzo J. Tardón, Isabel Barbancho, Carles Roig, Emilio Molina, Ana M. Barbancho

43. Computational Ethnomusicology: A Study of Flamenco and Arab-Andalusian Vocal Music

In this chapter we approach flamenco and Arab-Andalusian vocal music through the analysis of two representative pieces. We apply a hybrid methodology consisting of audio-signal processing to describe and contrast their melodic characteristics followed by musicological analysis. The use of such computational analysis tools complements a musicological-historical study with the aim of supporting the discovery and understanding of the specific characteristics of these musical traditions, their similarities and differences, while offering solutions to more general music information retrieval (MIR) research challenges.

Nadine Kroher, Emilia Gómez, Amin Chaachoo, Mohamed Sordo, José-Miguel Díaz-Báñez, Francisco Gómez, Joaquin Mora

44. The Relation Between Music Technology and Music Industry

The music industry has changed drastically over the last century and most of its changes and transformations have been technology-driven. Music technology – encompassing musical instruments, sound generators, studio equipment and software, perceptual audio coding algorithms, and reproduction software and devices – has shaped the way music is produced, performed, distributed, and consumed. The evolution of music technology enabled studios and hobbyist producers to produce music at a technical quality unthinkable decades ago and have affordable access to new effects as well as production techniques. Artists explore nontraditional ways of sound generation and sound modification to create previously unheard effects, soundscapes, or even to conceive new musical styles. The consumer has immediate access to a vast diversity of songs and styles and is able to listen to individualized playlists virtually everywhere and at any time. The most disruptive technological innovations during the past 130 years have probably been:

1. The possibility to record and distribute recordings on a large scale through the gramophone
2. The introduction of vinyl disks enabling high-quality sound reproduction
3. The compact cassette enabling individualized playlists, music sharing with friends and mobile listening
4. Digital audio technology enabling high-quality professional-grade studio equipment at low prices
5. Perceptual audio coding in combination with online distribution, streaming, and file sharing.

This text will describe these technological innovations and their impact on artists, engineers, and listeners.

Alexander Lerch

45. Enabling Interactive and Interoperable Semantic Music Applications

New interactive music services have emerged, but many of them use proprietary file formats. In order to enable interoperability among these services, the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) issued a new standard, the so-called MPEG-A: Interactive Music Application Format (IM AF). The purpose of this chapter is to review the IM AF standard and its features, and also to provide a detailed description of the design and implementation of an IM AF codec and its integration into a popular open-source audio analysis, annotation and visualization tool known as Sonic Visualiser. This is followed by a discussion highlighting the benefits of their combined features, such as automatic chord or melody extraction time-aligned with the song's lyrics. Furthermore, this integration provides the semantic music research community with a testbed enabling further development and comparison of new Sonic Visualiser plug-ins, e. g., from singing voice-to-text conversion with automatic lyrics highlighting for karaoke applications, to source separation-based music instrument extraction from a mixed song.

Jesús Corral García, Panos Kudumakis, Isabel Barbancho, Lorenzo J. Tardón, Mark Sandler

46. Digital Sensing of Musical Instruments

Acoustic musical instruments enable very rich and subtle control when used by experienced musicians. Musicology has traditionally focused on analysis of scores and more recently audio recordings. However, most music from around the world is not notated, and many nuances of music performance are hard to recover from audio recordings. In this chapter, we describe hyperinstruments, i. e., acoustic instruments that are augmented with digital sensors for capturing performance information and in some cases offering additional playing possibilities. Direct sensors are integrated onto the physical instrument, possibly requiring modifications. Indirect sensors such as cameras and microphones can be used to analyze performer gestures without requiring modifications to the instrument. We describe some representative case studies of hyperinstruments from our own research as well as some representative case studies of the types of musicological analysis one can perform using this approach, such as performer identification, microtiming analysis, and transcription. Until recently, hyperinstruments were mostly used for electroacoustic music creation, but we believe they have a lot of potential in systematic musicological applications involving music performance analysis.

Peter Driessen, George Tzanetakis

Music Ethnology

Frontmatter

47. Interaction Between Systematic Musicology and Research on Traditional Music

The origin of systematic musicology is strongly linked to the study of music cultures of non-Western origin. From the methodological point of view, folk music research applied systematic methods to collect and analyze data. Anthropology of music and later ethnomusicology had a different focus: musical phenomena should be interpreted in their cultural context. The cognitive approach was the third paradigm change in the field of systematic musicology, which again changed both the methodology and the focus of research topics. In cross-cultural music cognition, as well as in cognitive ethnomusicology, previous approaches in systematic musicology, ethnomusicology, and cognitive science of music are combined: systematic analysis of data related to human cognition, interpreted in a cultural context.

Jukka Louhivuori

48. Analytical Ethnomusicology: How We Got Out of Analysis and How to Get Back In

Analysis has had a long and somewhat tenuous history under the umbrella of ethnomusicology. In this chapter, we examine the trajectory of analytical ethnomusicology, from its parallel beginnings in late 19th-century Europe and North America through its relative obscurity in the field in the mid-20th century to its panoply of new methods in the late 20th and early 21st centuries. The aim of the chapter is threefold. Looked at in one way, it is a simple historical overview of analysis in ethnomusicology: an examination of the major players, from Erich Moritz von Hornbostel to Alan Lomax to many of today’s central scholars, as well as the major trends and intellectual frameworks influencing its execution, from cultural evolutionism to cultural relativism to interdisciplinarity. Yet it is also designed as an exploration of the myriad methods and approaches in the analytical ethnomusicologist’s toolkit, from transcription and trait listing to structural analysis, computational analysis, and the new comparative analysis. And finally, woven throughout is the story of the place of analysis in ethnomusicological research: its strengths and weaknesses, successes and mistakes, practitioners and detractors. Through these discussions, we then begin to unpack the ebbs and flows of its use, reception, and usefulness in the field.

Leslie Tilley

49. Musical Systems of Sub-Saharan Africa

The following chapter is a brief overview of Sub-Saharan music traditions based on our current knowledge. It seeks to fill a gap that exists in the studies done thus far because, to the best of our knowledge, no inventory of the essential parameters that go into the constitution of this region's musical systems has been made. This is probably due to the fact that ethnomusicologists specializing in African music, as well as many others, give priority to the cultural or anthropological context in which music practices have a function, rather than considering the subject itself – music – as a system. In light of this situation, and while I am aware of the weaknesses and imperfections such an approach will have difficulty avoiding, it nevertheless seems worthwhile to make an attempt. This chapter is not intended to be exhaustive but is rather, beyond its specific content, an attempt to provide a tool for study that is easy to use and practical for all those who are interested in and curious about the grammar that underlies the Sub-Saharan African music traditions: scholars, teachers, students, not to mention those on the creative end, notably jazz musicians and composers, and more generally all people interested in music from beyond Europe and in the making of African folk music. In my mind, the value of this work extends beyond the text to include the collections of references and the selective but very rich bibliography. These collections are divided into the following categories: general characteristics of African music, the acquisition of musical knowledge, taxonomy, scales, time organization, form and structure, variations, polyphonic techniques in general, hocket and polyrhythm. Each one lists authors whose work covers these categories. Readers can then consult the bibliography for further study of an author or subject presented in the text.

Simha Arom

50. Music Among Ethnic Minorities in Southeast Asia

In the countries of mainland Southeast Asia there are several ethnic minority groups, particularly in the mountainous inland and in forest areas. There is much variation between the customs of these peoples but it is also possible to see similarities on a metalevel. In this chapter strong traits in the village-based music culture of the ethnic minorities are presented, in some cases on purely historical grounds. These traits are in many cases paralleled in the tradition of the majority peoples. Against this background follows a discussion on musical change and matters of sustainability.

Håkan Lundström

51. Music Archaeology

Music archaeology is a rather recent field of academic studies that emerged in the 19th century and provides access to musical cultures of ancient civilizations. Modern research, characterized by expanding interdisciplinary methodological approaches, has its roots in the 1970s. Music archaeology explores ancient music and archaeoacoustic phenomena embedded in archaeological contexts and objects, iconographic representations, and written sources. After a long period of processing and classifying such data, cultural turns in the past two decades have increasingly influenced the interpretation of archaeological music evidence.

Ricardo Eichmann

52. The Complex Dynamics of Improvisation

This essay provides some general observations about the field of improvisation studies and surveys important theoretical and empirical work on the subject. It makes a distinction between referent-based and referent-free musical improvisation, placing particular emphasis on the specific issues that surround the latter, and highlighting recent research that has arisen to address them. Whether referent-based or referent-free, improvisation appears to involve a continual tension between stabilization, through communication and past experience, and instability, through fluctuations and surprise. While many issues persist about how to frame and explore musical improvisation, there is broad agreement that improvisation involves novel output (for the individual, but only optionally for society) created in nondeterministic, real-time situations by individuals and collectives involving certain affordances and constraints. The critical question is how we choose to frame these improvisatory dynamics: either as information processing struggling to keep pace with the cognitive demands of the moment, or as an ecologically sensitive engagement with one's sonic and social world.

David Borgo

53. Music of Struggle and Protest in the 20th Century

This is a description of a sound, a poetics, and a political stance in the United States of America during a turbulent century and a half, written by someone who grew up in the social milieu described. It endeavors to trace some of the historical and literary roots of 20th-century protest music and discusses the political and musical impact of certain musician-activists on the styles of protest music popular in the second half of the 20th century. These included Charles Seeger, John and Alan Lomax, and Pete Seeger, among others. The tradition of using song to express political ideas flourished in the first four decades of the century, declined due to political repression in the fifth decade, flourished again from the 1960s through the 1980s, and moved toward spoken poetry and rap at the end of the century. For a brief period, the 20th-century forms and performances of music of struggle and protest in the United States had a major impact on music, and on how music was used, in other struggles around the world.

Anthony Seeger

Backmatter
