Open Access 2024 | Original Paper | Book Chapter

Generic Motion Components for Sonic Design

Author: Rolf Inge Godøy

Published in: Sonic Design

Publisher: Springer Nature Switzerland


Abstract

Sonic design, understood as the activity of intentionally creating sound events, encompasses both musical craftsmanship and analytic reflection. It may include technologies for sound synthesis and processing, as well as traditional methods for sound generation by musical instruments or the human voice, and also principles of orchestration. Common to many instances of sonic design is having acoustic components blend with concurrent real or imagined motion sensations. Thus, sonic design can be understood as a multimodal phenomenon, yet we often lack suitable concepts for differentiating and evaluating these multimodal components. This paper aims to present work on developing a scheme to detect, and actively exploit, generic motion components in sonic design, be it as analytic or as creative tools.

1 Introduction

We may come across the expression ‘sonic design’ in different contexts, but there seems to be a consensus that it designates the activity of generating perceptually salient sound events, be that in music (instrumental, vocal, electronic), multimedia (theatre, movies, videos), human-machine interactions (computers, phones, electronic devices), or even as attributes of consumer products (cars, motorbikes, lighters), or marketing logos and branding in general (e.g., the so-called “James Bond chord”). But whereas sound logos will have significations in the direction of semiotics or narrativity, i.e., conveying some specific meaning beyond the sonic event, the focus of the present chapter will be limited to subjectively perceived sonic features, primarily to sonic design as an instance of musical creativity, be that in performance, improvisation, or composition.
To our knowledge, there have been only a few attempts to apply sonic design perspectives to so-called Western classical music, yet there is arguably a strong affinity between salient perceptual features of Western classical music and several key issues in sonic design. The lack of attention to perceptual sonic features in mainstream Western music theory is symptomatic of a general preoccupation with the symbolic representations of pitch and duration, i.e., with Western notation-based features, and usually not with features of the output sound. However, given recent technological developments, tools for sound generation and analysis are now readily available and applicable to various sonic design features, including those of Western classical music, enabling systematic research in this area.
Furthermore, we may see the noun ‘sound’ used interchangeably with the adjective ‘sonic’ and the expression ‘sound design’ used with more or less the same extension as ‘sonic design.’ Given our past work in this area, we shall continue to use ‘sonic design’ here and understand this expression to also include a number of elements in addition to what may be understood as purely acoustic components. This is actually the main point of the present chapter: Understanding sonic design as a multimodal music-related activity, comprising sensations of sound fused with sensations of motion. In more detail, I shall present an overview of salient sonic and motion features, and suggest how these can be integrated into useful tools for music creation and analysis. In view of these aims, similar motion components can be found across different instances of music and multimedia, constituting what can be called generic motion components for sonic design.
That subjective experiences of music are closely linked with sensations of motion now seems to be claimed in much music-related discourse, and we have, during the last decades, seen a deluge of publications on such links. But the main focus in these publications seems to be on whole-body motion, i.e., on how people move in synchrony with music by so-called entrainment (the process of aligning or synchronizing independent rhythmic processes), and less on the smaller-scale motion of effectors (fingers, hands, arms, tongue, lips, etc.) in sound-producing body motion. Of particular interest in our context are the details of motion we associate with specific sound features (e.g., the rapid back-and-forth shaking motion associated with a tremolo sound), and how sensations of such motion may be integral to our mental images of musical sound. From our own and other work on music-related motion, we have come to believe that what is referred to as ‘sonic design’ is also a matter of detecting and qualifying several motion components. Actually, we believe the links between sound and body motion are so extensive that we may not be able to univocally state what-is-what of sound and motion in our subjective perception of music, and should just accept that we need to explore both sound and motion components in sonic design.
Fortunately, it turns out that based on past and more recent research within music perception and associated cognitive sciences, it is indeed possible to develop some more systematic schemes for detecting and differentiating salient multimodal features that can be useful in sonic design (see, e.g., Godøy and Leman 2010 for an overview of music-related body motion). This is first of all linked with our sensations of the temporal patterns of energy in both sound and body motion, what we could collectively call energy envelopes of sound and motion, manifest, e.g., in a protracted sound linked to a sensation of protracted motion, or in a percussive sound linked to a sensation of an impulsive motion. The key issue here is the recognition of basic motion categories based on body motion constraints, such as the biomechanical and motor control differences between a sustained and an impulsive kind of body motion, as well as recognizing several other constraints that serve to shape output musical sound.
The ambition of this chapter is then to contribute to a conceptual, analytic framework for sonic design based on exploiting such sound–motion relationships, including both observable sound-producing motion of performing musicians (blowing, hitting, stroking, bowing, plucking, rubbing, etc.), and more subjectively perceived or imagined motion sensations emerging in listening, such as from various subtle timbral changes in the course of sound events, or from composite textural patterns with several concurrent layers of sound-producing motion, or also from more superordinate dynamic and spectral shapes of composite ensemble sound. And notably, the term ‘motion’ here also includes postures, as posture may be understood as a prerequisite motor control element for motion (Rosenbaum 2017), as well as a component of sound shaping, for instance, in body postures used when performing specific instruments or the posture shapes of the vocal apparatus related to specific vocal sounds.
We shall then, in the next Sect. 2, have an overview of what may be considered salient sonic features in our context, extending from the basic acoustic and signal-based to the more behaviorally and musically significant. Then follows a Sect. 3 on generative features in sonic design, encompassing both traditional instrumental and/or vocal music as well as more technology-based means for synthesis and processing, and a Sect. 4 on the related topic of multimodality, based on the abovementioned belief that sonic design involves more than just ‘pure’ sound. Then follows a Sect. 5 on analytic tools for sonic design, leading on to a Sect. 6 on textures and roles, focused on the distribution of sound events as well as individual musicians’ contributions to the output sound events. These topics lead to a Sect. 7 with some ontological reflections on the perceptual significations of different components of sonic design, as well as an encounter with challenges of what we can call musical translation in the succeeding Sect. 8, i.e., on the transfer of musical ideas from one instrumental or vocal setting to another, before a concluding Sect. 9 with a brief summary of the main ideas and some thoughts on further work on sonic design.

2 Sonic Features

The expression ‘sonic design’ seems to go back to the pioneering work of Robert Cogan and Pozzi Escot, with the publication of their highly innovative and mainly sound features-based approach to musical analysis (Cogan and Escot 1976). Further work aiming to correlate musical features more directly with concrete sonic features by the use of sonograms and thus breaking out of the confines of the Western notation-based analytic framework was presented in (Cogan 1984). As for exploring significant sonic features of musical sound, current work on sonic design can reap benefits from a very large number of relevant publications, ranging from those focused on physical-acoustic features (e.g., Rossing 2002, Loy 2007), and/or technologies for digital sound synthesis and processing (e.g., Roads 1996, Zölzer 2011), to those focused on auditory perception (e.g., Bregman 1990, Fastl and Zwicker 2007), including publications related to sonic object perception (Griffiths and Warren 2004, Bizley and Cohen 2013), and some highly relevant publications on object formation in human motion (Klapp and Jagacinski 2011, Loram et al. 2014). Concerning motion as integral to sonic design, we should mention publications on sound-motion relationships (e.g., Rocchesso and Fontana 2003, Godøy and Leman 2010, Clayton et al. 2013), as well as on methods for motion capture and subsequent data processing of sound-motion relationships (e.g., Godøy et al. 2016, Godøy et al. 2017, Gonzales-Sanchez et al. 2019), all contributing to a broad background for work on sonic design. We can also benefit from projects of so-called interactive sonic design in various artistic and/or entertainment contexts, spurred on by new technical possibilities for multimedia experiences, as well as by the need for enhanced modes of human-machine interaction (Franinovic and Serafin 2013).
But the most significant contribution to a framework for sonic design research here has been the theoretical work of Pierre Schaeffer and co-workers on so-called sound objects, theoretical work emerging from compositional activities in the so-called musique concrète genre of the late 1940s and following decades. The impetus for this theory development was experiences with musical composition based on sound fragments from a variety of sources, be that human, animal, environmental, mechanical, instrumental, or electronic, and bypassing most features of more traditional Western music, leading to a fundamental revision of the basic principles of music theory (Schaeffer 1952, 1966, Chion 2009, Schaeffer et al. 1998, Godøy 2013, 1997a, 2021a).
The result of this revision was a theory based on the subjective perception of sound objects, defined as fragments of sound, typically in the duration range of 0.5 to 5 s. The reason for this focus was initially pragmatic, with the use of looped fragments on discs in the early days of the musique concrète, before the advent of the tape recorder. These experiences of listening to innumerable repetitions of such looped sound fragments made Schaeffer and co-workers realize that their perception of the sound fragments changed. Their listening focus shifted away from the anecdotal significations (e.g., a door squeaking signaling that someone is coming) towards the sound features as such (e.g., the glissando feature of the squeaking sound), something that came to be called reduced listening. This meant focusing on the overall dynamic, timbral, and pitch-related shapes, as well as various internal details, of the entire sound object, engendering a theory based on exploring perceptually salient features at different timescales within sound objects.
Schaeffer’s approach was that of a top–down feature differentiation, following a seemingly naïve Socratic line of questioning as to what we are hearing, and progressively differentiating more and more feature dimensions based on this inquisitive listening with the long-term aim of correlating these subjective sensations with more objective acoustic features. This method ended up with a classification of the overall dynamic and pitch-related shapes of the sound objects, called the typology of sound objects, and with a more elaborate classification of the internal features of the sound objects, called the morphology of sound objects. The typology had three main categories for the overall dynamics, called facture:
  • Impulsive: a short and percussive kind of sound
  • Sustained: a prolonged and relatively stable sound
  • Iterative: a rapidly repeated sound such as in a tremolo
The typology had three main categories for pitch-related content, called mass:
  • Tonic: with a clear and stable sense of pitch
  • Complex: being strongly inharmonic or noise-dominated
  • Variable: having a changing sense of pitch
The typology was meant to be a first and coarse classification of sound objects, to be further differentiated with the help of morphological features. Any sonic object would be assigned more sub-features, and some also sub-sub-features, in sum providing a progressively more elaborate scheme for feature classification.
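To make the facture categories concrete, they can be approximated as simple amplitude envelopes imposed on a common carrier. The following is a minimal sketch in Python with NumPy, with illustrative envelope parameters of our own choosing (not drawn from Schaeffer):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def carrier(dur, freq=440.0):
    """A plain sine carrier; any spectrally richer source would also do."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def impulsive(dur=1.0, decay=30.0):
    """Sharp attack and fast exponential decay: a percussive envelope."""
    t = np.arange(int(SR * dur)) / SR
    return carrier(dur) * np.exp(-decay * t)

def sustained(dur=2.0, ramp=0.05):
    """Gentle onset, stable level, gentle release: a prolonged sound."""
    n, r = int(SR * dur), int(SR * ramp)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)    # slow attack
    env[-r:] = np.linspace(1.0, 0.0, r)   # slow release
    return carrier(dur) * env

def iterative(dur=2.0, rate=12.0, decay=40.0):
    """Rapidly repeated decaying onsets, as in a tremolo."""
    t = np.arange(int(SR * dur)) / SR
    phase = (t * rate) % 1.0              # position within each repetition cycle
    return carrier(dur) * np.exp(-decay * phase)
```

Listening to such minimal variants side by side makes the mutually exclusive character of the three facture categories quite tangible.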
Figure 1 shows a 3 × 3 typology illustration with spectra of sound objects from tracks 31, 32, and 33 on CD3 in (Schaeffer et al. 1998), sounds made with traditional Western instrumental ensembles. Interestingly, Schaeffer applied these generic motion types to several different sources on CD3, namely the sounds in tracks 31–33, 34–36, 37–39, and 40–42, i.e., in 12 rather different examples, some of which came from synthetic sources, illustrating the universality of this typological categorization scheme.
Schaeffer’s concept of mass was intended as a scheme for classifying spectral distribution, extending from single, clearly pitched sounds to complex spectral sounds, classified as harmonic, inharmonic, or noise-dominated, as well as by the distribution of components in the spectrum and in time, be that stationary or evolving (as the profile of mass). All these elements can be related to a general notion of shape, i.e., of pitch, of overall dynamics, of more internal fluctuations, or of spectral content, e.g., the wah-wah sound of the opening and closing of a straight mute on a trumpet, or the shifts between sul tasto and sul ponticello bowing on a violin.
Going further into the morphology of sonic objects, the initially most salient features are the so-called gait and grain. Gait denotes the slower motion within the object, as can be seen later in Fig. 3 as the undulating motion in violin 2, viola, and cello. Grain denotes the fast fluctuations within a sound object, also to be seen in Fig. 3 in the subsequent tremolo motion in the violas.
Concerning current and recent research on timbre, the attention devoted by Schaeffer to the evolution of spectral features within a sound object is remarkable, compared with notions of timbre associated mainly with stationary spectral features, i.e., with what was called tone color (Slawson 1985). In the context of sonic design, limiting explorations to stationary spectra is insufficient, as this will miss the rich features of musical sound due to within-sound motion of different kinds, ranging from various transients to more pronounced textural motion. In sum, the sound object theory of Schaeffer had the ambition of being able to diagnose whatever sound fragment is thrown at us by detecting its most salient perceptual features, that is, by trying to figure out why it subjectively sounds the way it does, and then later on trying to correlate these subjective features with acoustic data. We presently have readily available tools for going into various details of timbre: tools for visualization (e.g., Sonic Visualiser), for extracting defined features such as spectral flux, spectral centroid, harmonicity, etc. (e.g., the MIRtoolbox for Matlab), and for analysis-by-synthesis simulations of timbral features (e.g., in Max/MSP), to mention just some prominent tools here.
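To give a concrete idea of such feature extraction, spectral centroid and spectral flux can be computed directly from a short-time Fourier transform. The following NumPy sketch uses our own simplifying assumptions (Hann-windowed frames, magnitude spectra, no perceptual weighting) and is not the implementation of any of the toolboxes just mentioned:

```python
import numpy as np

def stft_mag(x, sr, n_fft=2048, hop=512):
    """Magnitude spectrogram from Hann-windowed frames."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i*hop : i*hop + n_fft] * win for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # shape: (frames, bins)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)    # bin center frequencies
    return mag, freqs

def spectral_centroid(mag, freqs):
    """Amplitude-weighted mean frequency per frame ('brightness')."""
    return (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)

def spectral_flux(mag):
    """Frame-to-frame spectral change: rectified magnitude differences."""
    diff = np.maximum(np.diff(mag, axis=0), 0.0)
    return np.sqrt((diff ** 2).sum(axis=1))
```

Plotted over the course of a sound object, these two curves already give a crude image of within-sound motion: flux peaks mark transients and textural onsets, while the centroid curve traces timbral brightening and darkening.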
Also, within the framework of Western music theory, there are many features, albeit at the level of tones, that arguably could be included in a sonic design framework. This goes for various kinds of voicing and/or distributions of tones in time and spectrum, significantly so when we compare tone distributions, e.g., dense “Beethoven-type” chords with more widely spaced “Chopin-type” chords. Also, chord categories (e.g., triads, fourths chords, polychords, cluster chords, etc.) and modality categories (e.g., Lydian, Phrygian, Messiaen modes, etc.), all have salient effects on the sonic design, something that clearly deserves a more extensive research effort within music theory.
From the mentioned typological and morphological features as well as tone-level features of Western music theory, we see that salient features are manifest at different, and often concurrent, timescales, making timescale differentiation a crucial topic in sonic design (Godøy 2022a, 2022b). The most important timescale for sonic design, following the seminal work of Schaeffer, is that of the sound object (as mentioned above, typically in the 0.5 to 5 s duration range), because this timescale may contain most defining features in terms of style, aesthetics, sense of motion, and affect. The typical sound object duration is also largely sufficient to contain sequentially evolving events, such as an entire tone envelope with its attack, sustain, and decay, thus enabling a cumulative and all-at-once presence of these components in echoic memory, something that is necessary for the holistic perception of a sound object. This insight came to be known through the so-called cut bell experience of Schaeffer and co-workers, i.e., that manipulating the attack and sustain segments of a bell sound would radically alter the overall sensory impression of the sound. Understanding how sequentially occurring features fuse into the sensation of a sound object as a holistic entity is a major challenge in sonic design.

3 Generative Features

The holistic nature of sound objects is also manifest when an object consists of several tones in succession, as noted by Michel Chion: “[…] a harp arpeggio on the score is a series of notes; but, to the listener, it is a single sound object.” (Chion 2009, p. 33). A series of tone events may certainly be perceived as a coherent entity by way of the proximity of tones, as suggested by gestalt theory (Tenney and Polansky 1980), but even more so if we consider the arpeggio as a single coherent motion unit. In fact, holistically conceived motion units seem also to be optimal for body motion motor control in what may be called motor gestalts (Klapp and Jagacinski 2011), as well as for exploiting the efficacy of so-called intermittent motor control (Loram et al. 2014), i.e., a point-by-point scheme for controlling upcoming motion events. These motor control elements seem to converge in suggesting the existence of what we could call motion objects, similar to sound objects, and their multimodal combination in what we could call sound–motion objects.
The basic idea here is that sound production, albeit variably so, is conditioned by various constraints. These constraints not only determine what is possible, difficult, or even impossible for sound production on various instruments and/or the human voice but, in a more positive sense, contribute to salient features of the resultant sound. Constraints range from various physiological limitations on speed, amplitude, rate, etc., of sound-producing motion, up to affecting emergent sonic features such as spectral flux, harmonicity, spectral centroid, etc. (Godøy 2021b). In particular, motion constraints manifest in the temporal unfolding, i.e., in the envelopes presented above, and in the fusion of otherwise separate events by so-called coarticulation (Godøy 2014).
Coarticulation signifies the fusion or contextual smearing of motion components due to the need to prepare for upcoming motion events, e.g., hands need to move ahead of fingers in piano performance to place the fingers in position to hit a key at the right moment in time, and there are also spillover effects from recently made motion, so coarticulation may be affected by both future and past events. Coarticulation is well known in several everyday tasks such as typing and tool use (Rosenbaum 2009), but most of all in speech (Hardcastle and Hewlett 1999), where the vocal apparatus is preparing for the production of upcoming sounds, as well as moving away from the vocal apparatus shape of the most recently produced sound. Coarticulation means that there is a constraint-based tendency to fuse small-scale motion events into larger-scale motion events, and in music, coarticulation may contribute to the fusion of not only sound-producing motion but also of the output sounds, cf. the Chion example mentioned above. It can be argued that due to coarticulation, there is often a tendency towards object formation both in motion and in output sound (Godøy 2022b).
Constraint-based motion may also result in distinct, mutually exclusive categories such as the mentioned impulsive, sustained, and iterative types of sound, e.g., a sustained sound or motion fragment will, by definition, not be impulsive. But there may be transitions between such categories by so-called phase transition (Haken, Kelso, and Bunz 1985). For instance, if a sustained sound is incrementally shortened, it may at some point turn into an impulsive sound, or if the rate of separate impulsive sounds is increased, it may at some point turn into an iterative sound. Similarly, there may be phase transitions concerning coarticulation in that decreasing the distance between tone onsets may increase the levels of both anticipatory and spillover motion, and hence also the level of contextual smearing of output sounds (Godøy 2021b).
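Such category boundaries can be probed with simple stimulus sets. Here is a minimal sketch of the sustained-to-impulsive case (durations are our own illustrative choices; where the perceptual boundary actually falls is precisely the empirical question):

```python
import numpy as np

SR = 44100

def tone(dur, freq=440.0, ramp=0.005):
    """A sine tone with short linear onset/offset ramps to avoid clicks."""
    n, r = int(SR * dur), int(SR * ramp)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return np.sin(2 * np.pi * freq * np.arange(n) / SR) * env

# Incrementally shorten a sustained tone; listeners judge at which step
# the percept flips from 'sustained' to 'impulsive'.
durations = [2.0, 1.0, 0.5, 0.25, 0.12, 0.06, 0.03]
stimuli = [tone(d) for d in durations]
```

The converse sweep, incrementally increasing the rate of separate impulsive onsets until they fuse into an iterative object, can be built in the same way.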
Including sound-producing motion in research on sonic design is challenging when we try to understand performers’ ‘tacit knowledge’ of shaping sound to their ideals. The tacit sonic design knowledge can, in part, be accessed by recording sound-producing body motion by video or with motion capture systems while also recording physiological data associated with such motion, e.g., muscle tension (EMG), pupillometry, brain activity (EEG) and other brain observation data. Such data can, after extensive pre-processing, be correlated with output sound features, something that should enhance our understanding of sound-motion objects in sonic design.
But as for sound generated by electronic means, what are the production schemas at work in such cases? Schaeffer’s (and our) response is that the same typological schemas may apply to all sound events, regardless of origin, and thus also to synthetic sound, as in the examples on tracks 40, 41, and 42 on CD3 in (Schaeffer et al. 1998), because the energy envelopes may be matched to motor schemas, i.e., so that these schemas apply to electronic sounds as if the electronic sounds were made by body motion (see Godøy 2021b for this example).
So-called physical models of sound synthesis have a special status in view of sonic design because they are designed to (variably so) simulate energy relationships, differently from more abstract synthesis models. Physical models are diverse, ranging from quite simple to highly complex. Models of the so-called source-filter type are particularly interesting in our context in that they are based on the principle of an energy source that produces an output that is passed through various filters, resulting in an output sound with musically interesting features. An instance of this is the so-called Karplus-Strong model for simulating a plucked string. A burst of white noise sent into a feedback loop through a delay line with a low-pass filter will sound like a plucked string. It is interesting as a model of a ‘real world’ instrument in that a quantum of energy (the noise burst) is reverberating and gradually dissipating its energy within a system.
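The Karplus-Strong model is simple enough to sketch in a few lines. Below is a minimal rendition, using the classic two-point averaging filter as the low-pass element in the feedback loop (the damping coefficient is our own illustrative addition):

```python
import numpy as np

def karplus_strong(freq, dur, sr=44100, damping=0.996):
    """Plucked-string synthesis: a noise burst circulating in a delay
    line with a low-pass (two-point averaging) filter in the loop."""
    n_out = int(sr * dur)
    delay = int(sr / freq)                  # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)   # the initial burst of white noise
    out = np.empty(n_out)
    for i in range(n_out):
        out[i] = buf[i % delay]
        # Feedback: average two successive samples, slightly damped, so the
        # energy of the burst gradually dissipates, as in a real string.
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

pluck = karplus_strong(freq=220.0, dur=2.0)  # a decaying 'string' around A3
```

The pitch follows from the delay-line length, and the decay from the filtering and damping in the loop, so the model directly embodies the energy-dissipation principle described above.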
With more abstract sound synthesis models, there is the challenge of navigating to intended output features, given the point of departure that, in theory, any sound, heard or unheard, may be generated by digital synthesis. The extensive work in recent decades on the control of output features in synthesis has documented that some kind of holistic input scheme would be useful, and we have seen physical models that work by simulating the physics of the sound generation as well as having an input that simulates the actual sound-producing body motion (Bouënard et al. 2010).
Given the continuous increase in computational power and widespread activity in developing more responsive interfaces for gestural control, we are probably going to see enhancements in such direct control of physical model synthesis, and hence tools more in line with what we could call an ecological and constraint-based framework for synthesis in sonic design. A number of software tools are now available for experimentation and systematic analysis-by-synthesis explorations of sound-motion features, such as Modalys and Max/MSP, as well as useful suggestions for concrete work strategies (Farnell 2010).

4 Multimodality

From the overview of sonic and generative features, it should be clear that sonic design involves more than ‘pure’ sound. The features we encounter in sound events are related to motion, and sensations of motion are, in turn, composite, involving sensations of effort, energy, quantity of motion, trajectory shapes, posture shapes, and derivatives of motion such as velocity, acceleration, jerk, and the mentioned phenomena of phase transition and coarticulation. Also, secondary and more ‘passive’ features, such as haptic, proprioceptive, visual, etc. sensations, may all (variably so) be related to sonic design. Given extensive experiences of observing sound-producing motion as well as motion in general, it seems reasonable to suspect that most people may have motion sensation components in sound perception, as has been the claim of the so-called motor theory now for more than half a century (Liberman and Mattingly 1985, Galantucci, Fowler and Turvey 2006). I coined the term motormimetic cognition to signify this sensing or mental simulation of sound-producing body motion linked to whatever sound we are hearing or imagining in music (Godøy 2001, 2003). The basic tenet here is that sonic design is a multimodal topic, not limited to any idea of ‘pure’ sound.
Looking closer at what is going on in sound events, we realize that sound onset and sustain happen because of an energy transfer from the musician to the instrument. This means that, e.g., a rapid drum fill is as much a rapid sequence of mallet-hand-arm-shoulder-etc. motion as a series of sound events (cf. Godøy et al. 2017). How the modalities of sound and motion work together, as well as which is the most important in any listening situation, is still an open question. Could even a silent choreography of sound-producing motion give us some sense of a drum fill? What is crucial in our context is that images of sound-producing motion, in turn comprising several components, e.g., sensations of muscle contraction, proprioceptive sensations, visual sensations, etc., may all contribute to giving us some salient image of the drum fill. This implies that the motion components can also become a tool for handling the otherwise ephemeral sound sensations.
The following motion features are detectable, measurable, and may be documented in motion data (Godøy 2021b); see the sketch after this list for how some of them can be estimated:
  • Quantity of motion (QoM): the overall sense of energy in sound-producing motion
  • Velocity of motion: the sensation of displacement speed and direction
  • Acceleration: the sensation of change in the displacement speed
  • Jerk: abruptness in the displacement
  • Phase transition: a qualitative categorical change due to incremental change in amplitude and/or frequency of motion
  • Coarticulation: the fusion of otherwise separate elements due to spillover and/or anticipatory smearing
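As a concrete illustration, the first four of these features can be estimated from motion capture position data by successive numerical differentiation. A minimal sketch (assuming 3-D marker positions at a fixed capture rate; real mocap pipelines would typically filter the data more carefully before differentiating):

```python
import numpy as np

def motion_features(pos, fps):
    """pos: array of shape (n_frames, 3) with marker positions; fps: capture rate.

    Returns per-frame speed, acceleration, and jerk magnitudes, plus a
    single quantity-of-motion (QoM) summary for the whole recording."""
    dt = 1.0 / fps
    vel = np.gradient(pos, dt, axis=0)    # 1st derivative: velocity
    acc = np.gradient(vel, dt, axis=0)    # 2nd derivative: acceleration
    jerk = np.gradient(acc, dt, axis=0)   # 3rd derivative: jerk
    speed = np.linalg.norm(vel, axis=1)
    qom = speed.sum() * dt                # total path length as a crude QoM
    return speed, np.linalg.norm(acc, axis=1), np.linalg.norm(jerk, axis=1), qom
```

Phase transition and coarticulation, by contrast, are emergent, relational features and require comparing such curves across incrementally varied performance conditions rather than reading them off a single recording.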
A main feature of motormimetic cognition concerns mapping energy envelopes of perceived sound to body motion energy envelopes, be these body energy envelopes based on actually seen body motion (as when attending a performance) or only imagined (headphones listening, eyes closed), so as to become integral sound features. Some examples of motormimetic elements relevant to sonic design are evident just by an enumeration of sound features and their possible corresponding sound-producing motion, as in the following:
  • Tremolo: back-and-forth hand motion
  • Trill: lower arm rotation motion
  • Gait: slower undulating paced motion
  • Grain: rapid back-and-forth or up-and-down hand motion
  • Crescendo/decrescendo: gradual increase/decrease of force and/or amplitude of motion
  • Flam: double strokes in drumming
  • Glissando: sweeps by hand, arm, whole torso, on an instrument
  • Sustained sound: slow, protracted motion
  • Impulsive sound: rapid ballistic kind of hand/arm motion
Also, when considering articulation elements in music, e.g., staccato, legato, sforzato, tenuto, and bowing types, e.g., martellato, spiccato, etc., see (Halmrast, Guettler, Bader, and Godøy 2010), we may be reminded that they all owe their existence to motion components and that such articulation elements are in fact multimodal phenomena.
Concretely, tracing the typology components of the mentioned facture and mass, and the morphology components of gait, grain, and profiles of mass as shapes, is a prominent feature of Schaeffer’s theoretical work (Schaeffer 1952, 1966), evident in the many conceptual shape images in his publications. Visualizing sonic features as shapes is something we can do in our minds, with pencil and paper, on the computer screen, or just with our fingers and hands in the air, and importantly, regard these tracings as generic images. Thinking of shapes as generic means that they may be applied across different modalities and contexts and be useful as practical tools in both analytic and generative contexts, e.g., in musical translations (see below).
In summary, sonic design is not limited to ‘tone color,’ i.e., not limited to stationary spectra, but includes motion, motion within sound objects, such as various transients, fluctuations (timbral, dynamic, pitch-related), and all sorts of textural patterns, as well as corresponding sound-producing motion. Shape cognition, in the sense of depicting all kinds of spectral features, both quasi-stationary and changing spectra, all kinds of within-spectrum motion, all kinds of dynamic envelopes, etc., becomes a prime tool for working with multimodality, with the capacity to translate from one modality to another in the analysis and generative processes of sonic design.

5 Analytic Tools

There is clearly a need to develop better tools for analysis and systematic work strategies in sonic design. Ideally, such tools should 1) help diagnose why/how particular sonic features produce specific aesthetic and affective results, and 2) help realize wished-for aesthetic outcomes.
Existing knowledge of musical acoustics and music technology is useful for grasping many sound features, as are the mentioned tools for research on music-related motion. What we have much less of, is analytic tools for sonic design in more traditional Western composition theory. Western music theory, with its focus on more abstract and symbol-based concepts of pitch and duration, does not tell us much about output sound, and is thus largely inadequate for work in sonic design. Fortunately, we have had important developments of software tools enabling explorations of different kinds of sound features, such as the MIRtoolbox for Matlab (Lartillot and Toiviainen 2007), and for visualizing sound features, such as the Sonic Visualiser, and software that enables more experimental changes to sound features, such as AudioSculpt, and the mentioned Modalys and Max/MSP software for hands-on work with analysis-by-synthesis.
The main challenge in view of analytic tools is that salient aesthetic features are emergent based on a distributed substrate in both the time and frequency domains. Thus, we need, first of all, to map out the different relevant timescales and feature dimensions and then to figure out how to represent the ephemeral emergent features relevant to sonic design. Our response is firstly to make graphic images of unfolding motion and sound, i.e., of both envelopes and spectra, as was suggested by Schaeffer’s theory of sound objects, and secondly, to carry out systematic analysis-by-synthesis experiments of holistic, sound object-level features. Also, following Schaeffer, there is converging evidence that the timescale of the sound object is the most important in view of salient features and that other timescales should be seen in relation to this timescale, either as internal features of sound objects or as features of the overall shape of the sound objects (Godøy 2021a). The main arguments in favor of the sound object timescale in our analytic approaches are as follows:
  • The object timescale, with its typological categories, is crucial for the overall emergent features of style, sense of motion, and affect, and this may also apply to musical semiotics (Delalande et al. 1996)
  • Including entire sound objects in our explorations is crucial for capturing salient features distributed in time, cf. the mentioned cut bell experience of salient features that may be non-existent at shorter timescales
  • The object timescale contains the morphology features, and various morphology patterns may be further differentiated into sub-sub-features
  • Sound fragments longer than the typical sound object duration may contain several competing overall features, making focusing on single object features difficult
As for analytic tools, Schaeffer’s approach consists of top-down feature differentiations based on subjective sensations; these subjective sensations could, however, later be correlated with acoustic features. Practically, this means:
  • Subjective tracing of overall typological shapes of facture and mass
  • Subjective tracing of salient morphological shapes, i.e., of various internal features
  • Correlating subjective tracings with signal-based representations
This shape-tracing strategy is one of the main ideas of motormimetic cognition and is arguably an extension of Schaeffer’s ideas (cf. Godøy 2006), as shape concepts are manifest in the typological facture categories presented above. Such shape tracing may include the assumed sound-producing motion as reflected in the facture of sound objects, i.e., impulsive, sustained, and iterative shapes, and also apply directly to the dynamic and spectral sound object elements.
In addition, there are methods that are only hinted at in Schaeffer’s works, as they were not easily implementable with the technologies available in the 1950s and the following couple of decades (Godøy 2021a), but which are possible now:
  1. Analysis-by-synthesis generation of incrementally different variant sound objects by incremental changes in feature dimension values (Risset 1991), to explore categorical limits of salient perceptual features
  2. Experimental explorations of phase transition and coarticulation by incremental changes in input and control parameters
The main goal of these analytic schemes is to create feature awareness in the analytic and practical work of sonic design, i.e., to make that which is present in subjective experience more explicit by sketching the subjectively perceived shapes and then naming these shapes, thus analytically differentiating salient features. We see then that images of shape, or what we could call shape cognition (Godøy 2019), become a useful part of practical analytic tools, with shape cognition also having a broader foundation in so-called morphodynamic thought (Thom 1983, Petitot 1990, Godøy 1997a).
Following the seminal ideas of Schaeffer, we thus have a foundation for analytic sonic design tools, starting with dynamic and spectral shapes applied at the object timescale and continuing downwards to progressively more detailed differentiation of features as shapes. Also, following Schaeffer’s idea of correlating these subjective features with acoustic data, we are working on a bottom-up, signal-based scheme for machine-based typological categorization, with the long-term aim of enabling studies of large collections of sound objects.
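As a first approximation of such a bottom-up scheme, one might separate the three facture types by two simple amplitude-envelope statistics: how front-loaded the energy is, and how strongly the envelope is modulated at tremolo-like rates. The following is a sketch of one possible heuristic (thresholds and band limits are our own illustrative guesses, not the categorizer referred to above):

```python
import numpy as np

def classify_facture(x, sr, frame=1024):
    """Crude typological guess from the amplitude envelope alone."""
    n = len(x) // frame
    env = np.abs(x[: n * frame]).reshape(n, frame).mean(axis=1)
    env /= env.max() + 1e-12

    # Temporal centroid of the envelope: early -> suggests impulsive facture.
    t = np.arange(n) / n
    centroid = (env * t).sum() / (env.sum() + 1e-12)

    # Envelope modulation energy at tremolo-like rates -> suggests iterative.
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(n, frame / sr)   # envelope-spectrum frequencies
    band = (freqs > 4.0) & (freqs < 20.0)
    mod_ratio = spec[band].sum() / (spec.sum() + 1e-12)

    if centroid < 0.3:
        return "impulsive"
    return "iterative" if mod_ratio > 0.5 else "sustained"
```

A serious categorizer would of course need validation against listener judgments, but even this crude sketch shows how typological categories can, in principle, be grounded in signal-based features.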

6 Textures and Roles

The term ‘texture’ is used in a rather inclusive way in musical contexts, designating the overall appearance of sound, similar to the overall appearance of a textile fabric, wood grain, or other materials. Discussions of texture in music are typically few and rather brief, which is odd considering how crucial textural features are in Western musical styles, all the way from the emergence of polyphony to present-day music culture.
That texture has been ignored in much Western music theory is probably due to texture being an emergent property of temporally distributed substrates, i.e., of successive tone events and/or internal tone features unfolding in time. There is thus no simple reduction to be made of texture, as texture rather requires a holistic approach, similar to seeing distributed patterns elsewhere, such as in clouds, in waves, in bouncing objects, or in ornate surfaces, meaning that texture in music, with its distributed basis, only exists on the sound object timescale.
Actually, one of the main points of Schaeffer’s typomorphology is the creation of what we could call a textural taxonomy, a universally applicable scheme for qualifying perceptually salient textural features, notably at both the tone event and sub-tone event timescales. Theoretically, we can think of a continuum extending from stationary sound (made by additive synthesis with perfectly harmonic spectra) to highly complex sounds with many inharmonic and/or noisy spectral components as well as many transients and fluctuations. Along such a continuum, we may find several different kinds of sonic textures; however, these are not always based on tones as the basic ingredients. That said, we do have some useful textural concepts in Western music theory that include various degrees of internal motion, which could initially be classified according to traditional categories:
  • Monody: mostly single melodic lines with intermittent accompaniment, albeit with variable degrees of embellishments and more sporadic sonic events.
  • Homophony: mostly successions of chords, but with various internal fluctuations and sonic events, e.g., as is the case in the example in Fig. 3 below.
  • Polyphony: in principle, mostly independent voices, in some cases with a fabric of voices so robust that a work can be transferred to different instrumental settings (e.g., as in J. S. Bach’s The Art of Fugue, where the individual voices have such outstanding melodic features and limited ambits that the same score may be performed by very different instruments, and with what could be considered musically acceptable results).
  • Heterophony: melodic lines in unison or at other intervals, but with deviations, found in jazz and some 20th-century Western music as well as in many non-Western kinds of music (see Godøy 1997a, pp. 219–223 for more on this).
We may also see the coexistence of textural motion features (fast) and sustained harmonic-modal features (slow) both in traditional Western music (baroque to romantic) and in more recent kinds (Messiaen, Lutoslawski, Xenakis, etc.), and in acousmatic music (sustained sounds with superposed transient or fluctuating motion). In principle, these two components, fast and slow, may be separated and explored by an analysis-by-synthesis scheme (cf. Godøy 1997b with Chopin’s C major prelude in various variant guises where the texture is kept constant across several variant pitch modes, i.e., C-major, C-minor, C-phrygian, C-lydian, etc.).
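This separation can be made operational by keeping a rhythmic-textural pattern of scale degrees fixed and only swapping the underlying mode, in the spirit of the mentioned Chopin prelude variants. A minimal sketch (the arpeggio pattern and MIDI rendering are our own illustrative choices):

```python
# Semitone patterns for some modes above a common root (C4 = MIDI 60).
MODES = {
    "major":    [0, 2, 4, 5, 7, 9, 11],
    "minor":    [0, 2, 3, 5, 7, 8, 10],
    "phrygian": [0, 1, 3, 5, 7, 8, 10],
    "lydian":   [0, 2, 4, 6, 7, 9, 11],
}

def realize(degrees, mode, root=60):
    """Map a fixed pattern of scale degrees onto a given mode (MIDI numbers)."""
    scale = MODES[mode]
    return [root + 12 * (d // 7) + scale[d % 7] for d in degrees]

# One fixed arpeggiated texture (scale degrees, extending past the octave),
# rendered in each mode: same motion shape, different modal color.
pattern = [0, 2, 4, 7, 9, 7, 4, 2] * 4
variants = {mode: realize(pattern, mode) for mode in MODES}
```

Rendered with identical rhythm, dynamics, and instrumentation, such variants isolate the modal-harmonic component from the textural-motion component for listener evaluation.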
As for textural roles, we have the classic Western scheme of a melodic foreground with a homophonic background accompaniment (with varying levels of voicing independence), as we can see in the schematic view of Fig. 2. However, there are, of course, other kinds of role distributions also in Western music, e.g., with more voicing independence, more pronounced polyphonic textures, as well as recently, also heterophonic and complex textures with various statistical distributions of sound events, e.g., as in some music of Xenakis, Lutoslawski, and Ligeti, where sonic textures may be the most prominent design element.
The analytic strategy for textural elements may then consist of a top-down differentiation of roles, going on to designating sub-roles, sub-sub-roles, etc., and then matching these roles with instrumental idioms and evaluating 1) the suitability of chosen instruments in terms of idioms, and 2) the well-formedness of instrument selections and combinations in terms of acoustic results. Figure 3 shows an excerpt from the second movement of Rimsky-Korsakoff’s Capriccio Espagnol, where some of the mentioned textural features, as well as roles and idiomatic role assignments, are represented, in addition to an acoustical distribution of sound that is remarkable in its sonority and robustness. This robustness is due to Rimsky-Korsakoff’s principle of having the mid-range chordal tones close together and the top-range and bottom-range tones more spread out, making for maximal sonority and little risk of “bad” sound: it will always sound good even if the musicians are not at their best.
Once we consider the use of instruments and orchestration as related to sonic design, we see that there has indeed been extensive, albeit not so well-articulated, knowledge of sound design issues, sometimes also linked with ergonomic issues, i.e., issues of idioms, of good and bad registers on instruments, and of what is easy, difficult, or outright impossible on different instruments. Notably, Rimsky-Korsakoff (1912) combined ideas on acoustics from Helmholtz with experimentation exploring sound combinations, as well as systematic exploitation of instrumental idioms, in view of both performer well-being and optimal acoustic output. His work also includes ideas on good voicing and good parts, and he admonished orchestrators to delete anything in a score that could not be assigned to a clear role.
Thus, what we may collectively call textural features are fundamentally multimodal in evoking sensations of motion, implying that we really can’t isolate a ‘purely sonic’ component in sonic design contexts. However, we can express something about our intentional feature focus at any time in listening and/or imagining. The different kinds of motion in textures, in particular the categories of sustained background vs. moving gait and faster grain, as well as impulsive attack reinforcements, all contribute to the sonic design, with the sustained component giving an impression of a reverberant effects processing mix (in this case, pre-electronic). This may be seen in Fig. 3, where the sustained tones of horns 3 and 4 and the double bass make up the sustained role as a background to the undulating motion in violin 2, viola, and cello, with the remaining instruments massively doubling the foreground melody with its sustained tones. In general, extensive use of sustained tones can create a sensation similar to heavy reverb use, as in the famous Mantovani string orchestra sound, made by having the string players use lots of divisi with time lags so as to create an illusion of lingering tones, e.g., in his famous recordings of Charmaine (a technique actually invented by Ronald Binge).

7 Ontological Reflections

In Western music theory, we have, with the development of music notation, acquired means for focusing on symbolic features, typically pitches and durations, organized by various abstract schemes, schemes not always significant in terms of perceptual salience. A bit simplified, we could list the main features of Western music theory as follows: pitches, durations, intervals, chords, modality, motives, and articulations, and also some composite features such as form, style, and more recently, overall ‘sound’ and sensations of affect, and then try to form some opinion as to which of these elements are significant for subjective experiences of sonic design. Such evaluations of musical features are cases of ontological reflection and can serve to reveal the importance attached to sonic design issues.
Within Western music culture, we may often encounter an inherited view of music as having a ‘core’ of melody, themes, motives, and formal schemes, in short, as what has been referred to as Formenlehre, and only a ‘periphery’ of instrumental and sound design features. In some cases, we have also seen pitch-related features being transformed into abstract relationships, e.g., as in the so-called pitch class set theory, with a loss of spatiotemporal modal-harmonic features in favor of numerical relationships that may seem unrelated to subjective perceptual features. We have also seen elaborate schemes for the organization of notation symbols in compositions with little reflection on the perceived output, e.g., often with little or no sound object-level feature awareness, as pointed out in Iannis Xenakis’ critique of serial music as lacking macroscopic feature concepts of statistical distributions of sound events (Xenakis 1992). A similar disregard for emergent perceptual issues seems to apply to some more recent and rather naïve cases of sonification, cf. the next section.
The crucial factor is to see composition, music production, and sonic design schemes as separate from output perceptual features, i.e., not to confuse production formalisms with perceptually salient features. Jean Petitot (1990) has proposed a general model that is also relevant for sonic design, consisting of a control sphere and a perception sphere, where changes in the parameters of the control sphere may, or may not, have significant effects in the perception sphere, implying that we need to critically examine the relationships between these two spheres. In sonic design, a typically salient output from control input would be the sense of attack, ranging from ‘bowed’ to ‘percussive’ in the perception sphere, based on the incremental shortening of the attack time in the control sphere, whereas a change in pitch class set distributions might not have any significant effect compared with, for instance, rhythmical-textural elements. With readily available technologies, it is indeed possible to explore such levels of salience of different structural schemes by an analysis-by-synthesis approach, starting by generating several variant sound objects with incremental changes in control input and then letting listeners evaluate significant, less significant, or insignificant changes. This could, for instance, be applied to sound objects where rhythmical-textural elements are kept unchanged and where pitch and/or modality features are systematically changed, to allow evaluations of the relative significance of texture features vs. pitch features (see Godøy 1997b on this).
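The mentioned bowed-to-percussive continuum is straightforward to realize as such a stimulus set: the control-sphere parameter is the attack duration, everything else is held constant, and listeners place each variant on the bowed-percussive axis in the perception sphere. A minimal sketch (attack times are our own illustrative choices):

```python
import numpy as np

SR = 44100

def tone_with_attack(attack_s, dur=1.0, freq=330.0, release_s=0.1):
    """Same carrier and duration throughout; only the attack time varies."""
    n = int(SR * dur)
    a, r = max(int(SR * attack_s), 1), int(SR * release_s)
    env = np.ones(n)
    env[:a] = np.linspace(0.0, 1.0, a)    # the control parameter under study
    env[-r:] *= np.linspace(1.0, 0.0, r)
    return np.sin(2 * np.pi * freq * np.arange(n) / SR) * env

# Incrementally shorter attacks, from clearly 'bowed' to clearly 'percussive'.
attacks = [0.200, 0.100, 0.050, 0.025, 0.012, 0.006, 0.003]
stimuli = [tone_with_attack(a) for a in attacks]
```

The same protocol, applied to pitch class set distributions instead of attack times, would allow a direct comparison of the perceptual salience of the two control dimensions.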
Taking the emergent features of continuous, fused sound events as primordial for sonic design, the ontological primacy is clearly at the sound object level (chunk level) of holistic shapes, not at the atomistic symbolic notation level. The sound object timescale has a fundamental ontological status, so that in sonic design, we have the primacy of the sound object. As such, there are some affinities with phenomenology in that we make sense of our continuous streams of perceptual input by segmenting these streams into chunks, chunks containing the cumulative images of a certain segment of continuous experience (Husserl 1991, Ricoeur 1981). The important message from phenomenology is that salient features emerge based on the distributed substrate of an entire object (t1 to t2), and that sonic design needs to start considering entire sound objects as coherent entities.
A general point is to think about sonic design in relation to real-world, non-abstract models of sound generation, closer to our common experiences of causality. This also includes motormimetic cognition and generic motion components as fundamental to sonic design, with ‘motion’ here also including various attributes of motion (shapes, effort, velocity, acceleration, jerk, etc.) as well as postures. In summary: ontological reflections are about sorting out what is what in sonic design, and we should be careful when mapping features from one domain to another, evaluating the validity of such mappings, something that will be at the core of what we may call musical translation in the next section.

8 Musical Translation

We can define the expression musical translation as the transfer of a musical idea from one setting, instrumental or vocal, to another, typically as is done in orchestration or arranging. The basic idea is to render an excerpt (or entire work) of music in a new ensemble setting, typically a solo or chamber music work adapted for a full symphony orchestra, with the assumption that the orchestrated music is just an alternative version of the original, hence a case of musical translation.
As is the case with language translations, the difficulty with idioms is that they will often become quite awkward, if not outright misleading, in a strictly literal translation, whereas a more reformulated version may actually be truer to the original than a literal translation. In music, this means that typical idioms for any instrument may not work well if transferred note-by-note to another instrument or sets of instruments, but could work well if transformed by either changing to an ergonomically better version or to a version with several instruments cooperating (Godøy 2018).
Musical translation will thus involve 1) an analysis of the original in view of what is the main musical idea(s), 2) a consideration of what (if any) highly peculiar idioms are embedded in the original, 3) considering whether these idioms could be transformed without doing too much harm to the overall aesthetic intention of the original, and 4) rendering this transformed idea in the new setting using optimal idioms of the new ensemble instruments. In other words, musical translation means adapting a generic motion script, with some adjustments of idioms, yet conserving the overall musical intention. Similar to natural language translation, where translating word-by-word is problematic and translating phrase-by-phrase is often much better, translating tone-by-tone in musical contexts may be problematic, whereas translating sound-object-by-sound-object is usually much better.
This is in particular the case when translating between instruments with and without sustained sound, e.g., from piano to strings, winds, or tutti orchestra. For instance, the effect of the piano sostenuto pedal needs to be taken into account in the translation, otherwise the result will be unduly dry; in terms of effects processing, getting the wet-dry balance right in different ensembles is really about making transformed, non-literal translations, cf. the mentioned Mantovani example with reverb imitated by sustained string tones.
Flexibility in translation is possible because although the sound events we are working with in sonic design may have quite salient overall perceptual features, there is also the possibility of variation of the constituent detail features, hence that the categorical boundaries may be flexible, as is one of the hallmarks of categorical perception (Harnad 1987). The limits of tolerance for such variations can be studied empirically through the analysis-by-synthesis method, i.e., by making several incrementally different variants of some sonic object and then having participating subjects judge when there is a transition from one category to another, as in the abovementioned bowed to percussive category boundary exploration.
Similar problems of translation may be found in the domain of sonification, mapping elements from one domain to another, typically with data from a different domain than music (e.g., various experimental or observational data) to musical sound, to enable listening to the data rather than having to study large collections of numerical data (Hermann, Hunt, and Neuhoff 2011).
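In its simplest, parameter-mapping form, such sonification can be sketched in a few lines (the pitch mapping below is our own illustrative choice; the preceding discussion is precisely a warning that such mappings must be examined for perceptual salience rather than taken at face value):

```python
import numpy as np

SR = 44100

def sonify(data, dur_per_point=0.1, f_lo=220.0, f_hi=880.0):
    """Map each data value to the pitch of a short tone (parameter mapping)."""
    d = np.asarray(data, dtype=float)
    d = (d - d.min()) / (np.ptp(d) + 1e-12)        # normalize to 0..1
    freqs = f_lo * (f_hi / f_lo) ** d              # log-frequency mapping
    n = int(SR * dur_per_point)
    t = np.arange(n) / SR
    env = np.hanning(n)                            # smooth each tone segment
    return np.concatenate([np.sin(2 * np.pi * f * t) * env for f in freqs])

audio = sonify(np.random.randn(50).cumsum())       # e.g., a random-walk series
```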
As for the generic, and thus translatable, features of music-related motion, we can see that the same sonic feature may apply to various similar body motion types, e.g., in the famous barber scene in Charlie Chaplin’s The Great Dictator. In that scene, Chaplin merges the sound-producing motion of Brahms’ Hungarian Dance No. 5 with the everyday motion of shaving: the sustained motion as a protracted razor-shaving motion, the impulsive motion as a rapid flick removing soap, and the iterative motion as a rapid back-and-forth motion of rubbing in the soap (Godøy 2010).
For both translation and sonification, the most important questions are: Which sonic feature(s) is (are) the most prominent? And: How can this be somehow tested with an analysis-by-synthesis approach? Musical translation and sonification may then be testbeds for sonic design features, and what survives a transfer and what does not could be a crucial test for perceptual salience and teach us more about generic motion features.

9 Conclusions

In sum, it seems reasonable to claim that perceived and/or imagined motion may be integral to sound perception and may also have the potential to be a useful element in sonic design. Furthermore, such motion sensations may leave traces of both effort (muscle contractions) and vision (postures and trajectory shapes). Common to such sensations is that they may be conceptualized as shapes, shapes unfolding in both the time domain (dynamic envelopes) and the frequency domain (spectral envelopes), shapes that may furthermore be rendered as amodal graphical figures that can enable translation between modalities.
Although many of the issues covered in this chapter remain to be more systematically explored, we have, for the moment, good reason to conclude with the following:
  • Focusing on sensations of motion in music perception is an efficient strategy to make us aware of salient features we might otherwise not be aware of
  • Exploring generic motion components in sonic design may enhance our capabilities for both systematic diagnosis and enhanced skills for the creation of musical sound
Needless to say, there are also many outstanding issues:
  • We need more systematic studies of sound-motion relationships, both of how motion shapes sound and of how listeners perceive sound–motion links
  • We should work towards developing machine-based sonic object categorization enabling large-scale studies of music collections
  • We need to supplement traditional Western music theory and composition theory with sonic design theory
Yet the current state of knowledge and skills in sonic design may be put to use now because:
  • Tracing shapes, both of sound-producing motion and postures, as well as of output sound and sound features, can be useful as generative tools in improvisation and composition
  • Generic motion components can contribute to revising teaching methods, allowing for more spontaneous and improvisation-like creation of musical sound
  • Detecting and qualifying generic motion components in sonic design can advance our understanding of why and how music affects us
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
Bizley, J.K., Cohen, Y.E.: The what, where and how of auditory-object perception. Nat. Rev. Neurosci. 14, 693–707 (2013)
Bregman, A.: Auditory Scene Analysis. The MIT Press, Cambridge (1990)
Clayton, M., Dueck, B., Leante, L.: Experience and Meaning in Music Performance. Oxford University Press, New York (2013)
Cogan, R.: New Images of Musical Sound. Harvard University Press, Cambridge (1984)
Cogan, R., Escot, P.: Sonic Design: The Nature of Sound and Music. Prentice-Hall, Englewood (1976)
Delalande, F., et al.: Les unités sémiotiques temporelles: Éléments nouveaux d'analyse musicale. Éditions MIM-Documents Musurgia, Marseille (1996)
Farnell, A.: Designing Sound. The MIT Press, Cambridge (2010)
Franinovic, K., Serafin, S.: Sonic Interaction Design. The MIT Press, Cambridge (2013)
Galantucci, B., Fowler, C.A., Turvey, M.T.: The motor theory of speech perception reviewed. Psychon. Bull. Rev. 13(3), 361–377 (2006)
Godøy, R.I.: Formalization and Epistemology. Scandinavian University Press, Oslo (1993/1997a)
Godøy, R.I.: Knowledge in music theory by shapes of musical objects and sound-producing actions. In: Leman, M. (ed.) Music, Gestalt, and Computing. JIC 1996. Lecture Notes in Computer Science, vol. 1317, pp. 106–110. Springer, Heidelberg (1997b). https://doi.org/10.1007/BFb0034109
Godøy, R.I.: Imagined action, excitation, and resonance. In: Godøy, R.I., Jørgensen, H. (eds.) Musical Imagery, pp. 239–252. Swets & Zeitlinger, Lisse (2001)
Godøy, R.I.: Motor-mimetic music cognition. Leonardo 36(4), 317–319 (2003)
Godøy, R.I.: Gestural-sonorous objects: embodied extensions of Schaeffer's conceptual apparatus. Organised Sound 11(2), 149–157 (2006)
Godøy, R.I.: Gestural affordances of musical sound. In: Godøy, R.I., Leman, M. (eds.) Musical Gestures: Sound, Movement, and Meaning. Routledge, New York (2010)
Godøy, R.I.: Understanding coarticulation in musical experience. In: Aramaki, M., Derrien, M., Kronland-Martinet, R., Ystad, S. (eds.) Sound, Music, and Motion. CMMR 2013. Lecture Notes in Computer Science, vol. 8905, pp. 535–547. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12976-1_32
Godøy, R.I., Song, M.-H., Dahl, S.: Exploring sound-motion textures in drum set performance. In: Proceedings of the SMC Conferences, pp. 145–152 (2017)
Godøy, R.I.: Musical shape cognition. In: Grimshaw, M., Walther-Hansen, M., Knakkergaard, M. (eds.) The Oxford Handbook of Sound and Imagination, vol. 2, pp. 237–258. Oxford University Press, New York (2019)
Godøy, R.I., Leman, M.: Musical Gestures: Sound, Movement, and Meaning. Routledge, New York (2010)
Griffiths, T.D., Warren, J.D.: What is an auditory object? Nat. Rev. Neurosci. 5(11), 887–892 (2004)
Halmrast, T., Guettler, K., Bader, R., Godøy, R.I.: Gesture and timbre. In: Godøy, R.I., Leman, M. (eds.) Musical Gestures: Sound, Movement, and Meaning, pp. 195–223. Routledge, New York (2010)
Hardcastle, W.J., Hewlett, N.: Coarticulation: Theory, Data, and Techniques. Cambridge University Press, Cambridge (1999)
Harnad, S.: Categorical Perception. Cambridge University Press, Cambridge (1987)
Hermann, T., Hunt, A., Neuhoff, J.G.: The Sonification Handbook. Logos Publishing House, Berlin (2011)
Husserl, E.: On the Phenomenology of the Consciousness of Internal Time (1893–1917). Brough, J.B. (trans.). Kluwer Academic Publishers, Dordrecht (1991)
Klapp, S.T., Jagacinski, R.J.: Gestalt principles in the control of motor action. Psychol. Bull. 137(3), 443–462 (2011)
Lartillot, O., Toiviainen, P.: A Matlab toolbox for musical feature extraction from audio. In: Proceedings of the International Conference on Digital Audio Effects, Bordeaux (2007)
Loram, I.D., van De Kamp, C., Lakie, M., Gollee, H., Gawthrop, P.J.: Does the motor system need intermittent control? Exerc. Sport Sci. Rev. 42(3), 117–125 (2014)
Loy, G.: Musimathics. The MIT Press, Cambridge (2007)
Petitot, J.: 'Forme'. In: Encyclopædia Universalis. Encyclopædia Universalis, Paris (1990)
Ricoeur, P.: Hermeneutics and the Human Sciences. Cambridge University Press/Éditions de la Maison des Sciences de l'Homme, Cambridge/Paris (1981)
Risset, J.-C.: Timbre analysis by synthesis: representations, imitations and variants for musical composition. In: De Poli, G., Piccialli, A., Roads, C. (eds.) Representations of Musical Signals, pp. 7–43. The MIT Press, Cambridge (1991)
Roads, C.: The Computer Music Tutorial. The MIT Press, Cambridge (1996)
Rocchesso, D., Fontana, F.: The Sounding Object. Mondo Estremo Publishing (2003)
Rossing, T.D.: The Science of Sound. Addison Wesley, San Francisco (2002)
Rosenbaum, D.A.: Human Motor Control, 2nd edn. Elsevier, Burlington (2009)
Rosenbaum, D.A.: Knowing Hands: The Cognitive Psychology of Manual Control. Cambridge University Press, Cambridge (2017)
Schaeffer, P.: À la recherche d'une musique concrète. Éditions du Seuil, Paris (1952)
Schaeffer, P.: Traité des objets musicaux. Éditions du Seuil, Paris (1966)
Schaeffer, P.: Solfège de l'objet sonore (with sound examples by Reibel, G., and Ferreyra, B.). INA/GRM, Paris (1998; first published 1967)
Thom, R.: Paraboles et catastrophes. Flammarion, Paris (1983)
Xenakis, I.: Formalized Music, revised edn. Pendragon Press, Stuyvesant (1992)
Zölzer, U.: Digital Audio Effects, 2nd edn. Wiley, Chichester (2011)