
Open Access 2024 | Original Paper | Book Chapter

Sound Design and the Theory of Self-augmented Interactions

Author: Marc Leman

Published in: Sonic Design

Publisher: Springer Nature Switzerland


Abstract

In the past decades, musicology has been evolving at a pace that matches new developments in technology. Underneath this development, a new theory of music emerged, embracing interaction states as a model for understanding how music can be empowering. In the present chapter, sound design is considered from the viewpoint of interaction states, using caregiver–infant communication as a challenging domain of application. Sound design components of interest are identified, as well as human capacities for dealing with them in terms of empowerment. These are related to the concepts of self-augmented interaction and biofeedback-based sound design.

1 Introduction

Imagine the request of a neonatologist, a medical doctor specialized in the care of newborn babies. Imagine that the neonatologist asks us, sound designers, to stimulate preterm infants with musical sounds as a way to lower stress and stimulate brain development. Stress in preterm infants is mainly due to sounds coming from the various clinical devices in the NICU (Neonatal Intensive Care Unit), as well as to a lack of body movement and of stimulation thereof. Stress is known to cause long-term emotional complications, abnormal brain development, and altered health (Beltrán et al., 2022). The effects of musical sounds could be measured via physiological outcome indicators of stress, such as heart rate, blood pressure, and oxygen levels, among others. Hence, hard evidence could be provided on whether a sound design treatment would be an improvement compared to the current situation. Can we, as sound designers, offer proper musical stimuli?
In this chapter, the neonatologist’s request will serve as a common thread in a general reflection on sound design and its underlying theory of musical action and perception, or at least on how we interpret this theory. We start by identifying the core components that are typical of music, as well as the core human capacities needed for processing these components. Using these components and capacities, we clarify a theory called “self-augmented interaction.” With this theory, we aim to explain the power of music in healing, that is, why music could be effective in particular clinical contexts. While the case of preterm infants will leave us with many unanswered questions, the neonatologist’s request is challenging and fruitful for sharpening our own premature points of view. Towards the end of this chapter, we expand our viewpoint to biofeedback as a promising direction for future sound design applications.

2 Strategies: Stimulus-Response or Interaction?

Our challenge is to understand which sound design could work for preterm infants. This promises to be a delicate and difficult task, radically diverging from the earlier but misleading claim that Mozart would make infants smarter (see https://en.wikipedia.org/wiki/Mozart_effect). We expect that a decrease in stress in infants would already be a major achievement. Making them smarter could be understood in terms of brain development through rich stimulation. However, can we be sure that music will decrease rather than increase stress, given an already stressed infant? And what can music offer in terms of brain development? We know that preterm infants, due to limited experience with (prenatal) low-pass filtered speech, have difficulties processing natural speech after birth (François et al., 2021). But would music with a proper sound design work any better?
The sound design approach should probably be inspired by the natural way in which caregiver and infant communicate with each other (Malloch, 1999; Trevarthen and Aitken, 2001; Van Puyvelde et al., 2013). Such communication is interactive, involving speech, gestures, and touch. In observations of speech, the exchange of patterns between caregiver and infant seems to establish an interaction state that is experienced as meaningful by the caregiver. Such an interaction state opens a window of opportunity for intense contact with the infant, typically described as an attending state in which synchronized sound exchanges, gesturing, and participation in narratives are important characteristics. The overall picture is that caregiver–infant communication is rooted in musicality.
Given the practice of meaningful interaction, that is, interaction meant to establish contact, attention, and intention, the original question of the neonatologist can perhaps be inverted. Rather than asking for a proper sound design that stimulates preterm infants, the question should be whether infants are capable and willing to interact with a designed musical sound. In the former case, we put the infant in a passive role, adhering to a stimulus-response paradigm in the hope that some measurable effect will be generated. In the latter case, we put the infant in an active role, adhering to an interactive paradigm in the hope that an interaction state can be established, so that opportunities are generated for creating effects. The latter could be far more effective because it simulates the interactive basis of caregiver–infant communication. Obviously, this approach suggests a strategy of sound design based on principles of interaction.

3 Components: Emergent Patterns and Expressive Gestures

Given the fact that caregiver–infant interaction can be described as “musical” (Trehub, 2013; Trevarthen, 2008), it is instructive to look at two rather peculiar components of music and see how they could be incorporated into a proper design.

3.1 Pattern Emergence

Pattern emergence implies that a pattern has structural dispositions conjugate with the human disposition for emergence. For example, the human ear/brain will transform harmonic sounds into auditory patterns experienced as pitch (see Langner, 2015). The structure could be a harmonic pattern at 600, 800, 1000, and 1200 Hz, having a disposition for subharmonics (or a common periodicity) when seen from the viewpoint of the auditory system. This pattern would elicit a clear pitch percept at 200 Hz due to “Verschmelzung” mechanisms (Schneider, 2018a). These mechanisms may also fuse multiple harmonic patterns into a chord. When such harmonic patterns are played in sequence, their fusion may elicit expectations, leading to tonal tension and relaxation dynamics. The bottom–up mechanisms of pattern emergence may compete with top–down mechanisms of patterns formed by habits, and it seems that both may influence the perception of tonal tension and relaxation, although the precise contributions of sensory (bottom–up) and cognitive (long-term memory) processing are still debated (Collins et al., 2014; Sears et al., 2019). In speech, however, “Verschmelzung” is less evident at multiple hierarchical levels. It works at the level of single pitches of a voice, but not for pitch complexes like chords, although tonal induction as an emergent outcome of the accumulation of tonal information in speech tone sequences might be considered. Clearly, the ability to form emergent patterns at frequency levels >80 Hz is more prominent in music than in speech. A similar observation can be made for lower-frequency structures (<10 Hz), where rhythms, tempi, and meters are formed. Rhythms are made of pulses that support a meter, that is, a super-structure that emerges from the lower-level pulse structure. While rhythms can be strongly present in speech as well, rhythmic regularity in music is usually more pronounced than in speech.
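The harmonic example above can be made concrete with a small sketch. Assuming idealized, noise-free partials, the common periodicity that “Verschmelzung” exploits corresponds to the greatest common divisor of the partial frequencies (the helper `common_fundamental` is our own illustration, not a model from the literature):

```python
from functools import reduce
from math import gcd

def common_fundamental(partials_hz):
    """Greatest common divisor of integer partial frequencies (in Hz):
    a crude stand-in for the common periodicity that the auditory
    system fuses into a single pitch percept."""
    return reduce(gcd, partials_hz)

# The harmonic pattern from the text: partials at 600, 800, 1000, 1200 Hz
print(common_fundamental([600, 800, 1000, 1200]))  # → 200
```

The real auditory mechanism is of course far more tolerant (it copes with mistuned or missing partials), but the sketch makes the notion of a “disposition for subharmonics” tangible: the 200 Hz percept is present in the pattern without being present as a partial.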
Similarly, timbres might blend and form emergent texture patterns, a phenomenon that is well-known in orchestration (Schneider, 2018b). Pattern emergence is less obvious for speech because it may just blur the signal, making it less apt for understanding its semantics. Moreover, in music, it often happens that different performers co-regulate their actions to generate pattern emergence at rhythmic, pitch, and timbre levels. Clearly, this phenomenon is far less prominent in speech. Accordingly, for preterm infants, some degree of pattern emergence might be considered as an ingredient of the sound design because we can assume that it will stimulate the infants’ brain disposition for pattern emergence and associated predictions.

3.2 Gestural Expression

There is another aspect in which music differs from speech, namely, in gesturing. Gestures are known to accompany speech (Goldin-Meadow, 2005). Yet, in music, gestures form a more explicit part of the sound pattern. Gestures tend to structure musical pitch as a moving sound object (e.g., with portamento and intonation). In a similar way, gestures tend to structure time (e.g., making intervals shorter and longer for adaptations of tempo and meter), articulations (e.g., legato and staccato), sound color, musical narrative, and dynamics (e.g., crescendo and diminuendo). In short, human gestures structure the sound expression, making gestures constituent of music.
Godøy (2003) used the term “motor-mimetic” to specify the gestural interaction with sound endowed with gestural cues. The term is closely related to the term “mirroring.” The basic idea is that music is endowed with expressive cues that composers and performers encode as part of the sound design, and which listeners and dancers decode through gesturing because it offers them an expression-based, and gesture-based, perspective for prediction. Motor mimesis thus means that music has traces of human-encoded gestural patterns that listeners can decode in terms of a corporeal gestural mirroring of these patterns. “Baby talk” is a good example of a kind of communication in which the expressive component is the major vehicle for interacting. Accordingly, for preterm infants, gestures may be considered a component of the sound design because we can assume that they stimulate the infant’s disposition to move in response to them.
Both pattern emergence and gestural expression are probably the core components of a proper sound design that would be capable of generating interaction states with the potential to have empowering effects. In our example, we want the sound design to resemble the caregiver–infant interaction, thus stimulating brain and body movement to become more expressive, responsive, and engaging, with a plausible transfer to other brain functions, such as those needed for speech.

4 Capacities: Affordance, Entrainment and Anticipation

What, then, would be the required capacities of an interactive sound design system? Some of the key human capacities for dealing with sound design have already been suggested. We identify them here as affordance, entrainment, and anticipation.

4.1 Affordance Capacity

An affordance is a property of the sound design, and the capacity to act upon an affordance is called the affordance capacity (Godøy, 2010). Affordances can be understood as invitations to act in a particular way rather than another. The classic example is the design of a door handle, inviting us to open the door by turning it left or right, based on our knowledge of handling doors. In musicology, the notion of affordance has sometimes been linked with the notion of “frozen emotion” and the idea that composers and performers encode “frozen emotion” in music, while listeners have the capacity to decode these emotions because they work as affordances. The affordance capacity is a decoding capacity which, in the case of music, and of “frozen emotion” in particular, is likely based on mirroring, or (overt or covert) gesturing along with the music. Would preterm infants already have an affordance capacity? That is an interesting question. If expression is indeed partly innate, then a biological response to sound cues through movement, perhaps somewhat uncontrolled, can plausibly be expected from an infant, although the time of development after birth will obviously be important for building up the knowledge needed for affordance decoding.

4.2 Entrainment Capacity

The entrainment capacity is the capacity for moving along with music, either in a continuous manner, when movement flows along with the music, or in a discrete manner, when movement marks musical events (Clayton et al., 2005). As the word suggests, entrainment implies that there is something in the music that brings the listener in sync with its temporal course through some form of dynamical attraction process. In recent years, this phenomenon has received much attention, as it also applies to (co-)regulated narratives (McGowan and Delafield-Butt, 2022). In the musical domain, entrainment has been studied in the context of (co-regulated) sensorimotor synchronization, where it has been associated with a bias to subliminally reduce prediction errors in the alignment of body movement with sound cues (Phillips-Silver and Keller, 2012). While entrainment is often defined in relation to synchronization, as the dynamic adaptation of sensorimotor behavior due to coupling, it may also be defined in a broader perspective, as the capacity to give a response to cues (Trost et al., 2017). In view of infant stimulation, the idea is that the stimulus contains cues to entrain the infant’s responses. However, capacities for synchronized responding might be limited, subject to the infant’s stage of development.
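The idea of entrainment as subliminal reduction of prediction errors can be illustrated with a toy phase-correction sketch, in the spirit of linear models of sensorimotor synchronization. All parameter values here are invented for illustration, not taken from the cited studies: each tap corrects a fraction `alpha` of the previous asynchrony between tap and beat.

```python
import random

def simulate_tapping(n_taps=50, alpha=0.5, motor_sd=10.0, start=100.0):
    """Toy phase-correction model of entrained tapping: each tap
    corrects a fraction `alpha` of the previous asynchrony (tap time
    minus beat time, in ms), plus Gaussian motor noise. For
    0 < alpha < 2 the asynchronies stay bounded, i.e. the tapper
    remains entrained to the beat."""
    asynchrony = start  # begin 100 ms off the beat
    history = []
    for _ in range(n_taps):
        asynchrony = (1 - alpha) * asynchrony + random.gauss(0, motor_sd)
        history.append(asynchrony)
    return history

random.seed(0)
taps = simulate_tapping()
# The initial 100 ms error decays geometrically; later taps hover
# around the beat at roughly the level of the motor noise.
print(round(abs(taps[0]), 1), round(abs(taps[-1]), 1))
```

With the correction switched off (`alpha=0`), the initial error would never dissipate: the coupling term is what produces the attraction towards the beat that the text describes.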

4.3 Anticipation Capacity

The anticipation capacity is the capacity to predict events. This topic has been widely studied in the context of predictions of sound structures, both in pitch and rhythm (Huron, 2008), as well as in the sensorimotor domain related to synchronization (Maes et al., 2014). Obviously, anticipation is also possible in gesturing, where sound cues engage gestures that are intrinsically predictive. When a gesture is initiated, it typically follows a spatiotemporal trajectory based on so-called forward models in the brain. Once initiated, such gestures can become vehicles for anticipating musical events, leading to the phenomenon of reverse causality (Leman, 2016). For example, when listeners are dancing to music, the music and the dance movements are correlated, and typically, the dance movements will anticipate the events that occur in the music. Given the dance–music correlation and the knowledge that dance anticipates the music, the listener may then believe that the dance is, in fact, what causes the music. Obviously, the listener knows that the counterfactual statement “no-dance thus no-music” would fail because the music will continue if the listener doesn’t dance. Nevertheless, despite the denial of the counterfactual, the illusion of reverse causality may be very strong, and it is typically associated with strong feelings of control and power.
Furthermore, we can assume that affordance, entrainment, and anticipation are tightly related to each other. For example, a musical pulse at about 2 Hz is a typical affordance, and children and adult listeners and dancers respond to it by moving along with the pulse, thus engaging an entrainment mechanism for the tempo that is based on prediction, making it possible to experience the illusion of reverse causality. However, this affordance assumes that there is a disposition for moving at 2 Hz. Is this disposition available at birth, or do humans acquire it? In relation to preterm infants, it is reasonable to believe that reverse causality (and hence gesture–sound anticipation) can only occur when a gesture–sound link is already established, and therefore, it is unlikely that a preterm infant has the anticipation capacity to act in this way. Nevertheless, developing such an anticipatory capacity is probably a key goal of caregiver–infant interactions. Indeed, after a few months, babies already understand gesture–sound relationships in the sense that they use them in overt anticipatory expressions (personal experience of the author).

5 Sound Design and the Theory of Self-augmented Interaction

Having specified musical components and human capacities, our goal is to understand their role in terms of a theory of music interaction. This theory is the backbone of our sound design for preterm infants. It gradually evolved over the past decades in the slipstream of cognitive science, and it aims at understanding why people, through interacting with music, might benefit from it. People’s compelling attraction to perform, listen, and dance to music is intriguing, and its empowering effects are documented, albeit poorly understood. The acclaimed beneficial power of music is the reason why our neonatologist wanted a sound design with musical properties. But do we understand where that power comes from?

5.1 What is Self-augmented Interaction?

In what follows, we develop the notion of self-augmentation as a distinctive feature of music interaction. Self-augmented interaction implies that the interaction becomes sustained, richer, and more empowering than interaction states that lack this empowering effect, or have it to a lesser degree. We may assume that the distinctive feature of a self-augmented interaction state is based on a more optimal functioning of its underlying constituent processes. For example, when a string quartet plays and maintains a particular stable tempo, functional for global musical processing, it is because the members of that quartet have co-regulated their actions such that the required tempo can be created and maintained. The tempo is based on the optimal functioning of the underlying constituent timing and sensorimotor mechanisms. The “self” points to the fact that the musicians want to play together and realize this tempo without any external driver. If a musician plays too fast, the spell of a stable tempo may be rapidly lost, and the enriched state may disintegrate. Self-augmented interactions require physical effort, attentional focus, and sharpness of sensorimotor activity.
Such states can be conceived from the viewpoint of complex dynamical systems (Schiavio et al., 2022). In Leman (2016), we introduced the notion of musical homeostasis. While in medicine homeostasis refers to states of systems that regulate processes in the human body, such as body temperature or sugar level, musical homeostasis typically requires physical effort and sensorimotor skills. Obviously, some training and learning may be needed before high-end self-augmentation can be achieved, especially when sensorimotor skills are required. In that sense, the self-augmented interaction state refers to a precarious level (because it can quickly disintegrate) requiring physical effort in the form of attention, physical activity, and highly skilled sensorimotor gesturing.
At this point, it may appear that we are far from our caregiver–infant narrative. However, based on what we have discussed so far, human interaction between caregiver and infant can be understood in terms of a mutual exchange of gestures which, through co-regulation, may lead to a self-augmented interaction state, or homeostasis. The hypothesis is that self-augmented interaction states facilitate empowering effects, such as longer attention spans, specific outcomes such as bonding, and other psychosocial effects, such as the formation of goal-directed and anticipatory behavior.

5.2 A Model for Self-augmented Interaction

A global model for understanding self-augmented interaction is shown in Fig. 1 (see Leman, 2016, chapter 8). In a nutshell, this model suggests that music engages sensorimotor processes of prediction, embodiment, and expression, which in turn engage emotion-related processes of control and agency, arousal and attention, and pro-social empathy, leading to reward-related processes that drive human subjects to engage more with music. As such, a cycle is created that may support the realization of self-augmented interaction states. The connection between prediction and control/agency, the connection between embodiment and arousal/attention, or the connection between expression and pro-social empathic emotions may each be the focus of separate research projects in modern musicology (e.g., Bader, 2018). Yet, the overall picture is that a network of processes, through mutual influences among streams of information, hormones, and neurotransmitters, can develop into a state that can be characterized as self-augmented because it surpasses the normal state of being. When such a state can be maintained for a while, homeostasis is established, which opens a window of opportunity for effects that are otherwise hard to obtain. Recall that pattern emergence and gestural expression will facilitate the generation of self-augmented interaction states. These components of the sound design fit with the human capacities for affordance, entrainment, and anticipation. In addition, multi-person co-regulation of actions sets a social context that can be motivating, and the formation of a musical narrative can become compelling.
Thus, when a caregiver and infant happen to maintain interaction through a co-regulated gestural and sound narrative, it is possible that this interaction develops into a self-augmented interaction state. To maintain that homeostasis, the infant and caregiver rely on their own state of co-regulated sensorimotor-emotive processes, using these with extra effort in view of homeostasis. The assumption is that such an interaction state opens a window of opportunity for training and learning, implying a reinforcement of its underlying processes. The gain, apart from developing better music interaction capabilities, should also be seen in terms of its transfer to other sensorimotor, cognitive, and social functions.

5.3 Foundations in Expression Theory

This interaction theory is a testable theory about human expressive behavior and how it contributes to psychosocial empowerment. However, given its complexity, it may be interesting to decompose the theory into sub-theories. Accordingly, the theory can be approached from the viewpoints of embodiment (e.g., Leman, 2007), predictive processing (e.g., Seth, 2014; Koelsch et al., 2019), and expression (Leman, 2016). While embodiment and predictive processing are well-known viewpoints, expression theory is often less well understood in cognitive science, despite its development by scholars who contributed to the foundations of cognitive science, such as David Hume, Adam Smith, Charles Darwin, and Erving Goffman (see Bonicco-Donato, 2016). Briefly stated, expression theory is based on the idea that expression in person A calls for an expressive response in person B, which in turn serves as a stimulus for expression in A, and so on, thus leading to a mutual exchange of expressions that might result in an interaction state, plausibly a self-augmented interaction state.
An expression can be defined as a pattern transmitted from A to B. A major source of confusion in expression theory concerns the idea that expressions are utterances of some underlying state of being. However, in many contexts, expressions do not require any inference at all about such an underlying state (cf. Kahneman, 2011). Instead, the interaction is direct and spontaneous, based on expressive responses to patterns, through alignment and mirroring, including counterpoint gesturing. Obviously, whether inferencing or gesturing is applied largely depends on the context and type of interaction. But the main point here is that expressions do not always have to point to something underlying.
Based on observations of caregiver–infant interactions, it is likely that much interaction is based on intuitive thinking (Kahneman, 2011), plausibly under the umbrella of overall analytic thinking. Therefore, rather than inferring the latent state of being (known as the theory of mind), it is more appropriate to speak of gestural responding (Leman, 2016). The real power of expression exchanges is their ability to build up and maintain self-augmented states. Expression theory may thus be understood in terms of an exchange of expressive gestures as patterns that drive the interaction towards self-augmented states.

6 Biofeedback Systems

The original neonatologist’s question was whether we, as sound designers, could develop stimuli to lower preterm infants’ stress in the NICU and stimulate their brain development. Based on the above considerations, it is straightforward to consider a sound design that would be able to create and maintain self-augmented interaction states with the preterm infant. Hence, the possibility of having an adaptive sound design, similar to a real human who is adaptive to the infant’s responses, can be considered.

6.1 Sound Design in Biofeedback Systems

The development of a sound design with biofeedback for preterm infants may find inspiration in the development of biofeedback systems in other domains. In recent years, several attempts at developing biofeedback systems for sports have been undertaken, such as in running, weightlifting, and biking (Van den Berghe et al., 2021, 2022; Lorenzoni et al., 2019a, b; Maes et al., 2019), as well as biofeedback systems for the physical rehabilitation of patients with multiple sclerosis (Moumdjian et al., 2019). Typically, the sound design interferes with the human action–perception coupling with the aim of changing the behavior.
In Van den Berghe et al. (2021, 2022), for example, a biofeedback system is used to relearn the behavior of a recreational runner. A runner’s way of running will likely cause knee injuries over time when the impact levels measured at the tibia exceed a particular threshold. The impact level is an indicator of unwanted or wanted behavior, and it can be measured with an accelerometer. Based on that information, the interactive sound design can be changed. The current system, called Low-Impact Runner, adds noise to music that is nicely synchronized with the running. Accordingly, a reinforcement learning paradigm applies in which the impact level drives the amount of noise added to the heard music. If the measured impact level is too high, a high noise level is added; if the measured impact level was high and is later lowered, the noise is lowered. As such, it is possible to drive the runner’s running style towards a new (self-chosen) running style that generates less impact. Such interactive sound designs are based on principles of intuitive and embodied responses rather than warning signals that would call for analytic thinking and inference. In our example, a balance is regulated between a highly enjoyable and motivating stimulus, that is, preferred music whose tempo is nicely aligned with the regularities of movement during running (using DJogger, see Moens et al., 2014), and a highly annoying disturbance of that same stimulus, based on different noise levels (see Lorenzoni et al., 2019a, b). While the music engages the runner in a self-augmented interaction state, the noise regulates the degree of annoyance and adaptation. The result is a powerful system with effect sizes in the order of >25%!
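The feedback logic described above can be sketched as a simple mapping from measured impact to noise level. The threshold and slope values below are hypothetical illustrations chosen for the sketch, not the parameters of the actual Low-Impact Runner system:

```python
import random

def noise_gain(impact, threshold=8.0, slope=0.25, max_gain=1.0):
    """Map a tibial impact measure (e.g. peak acceleration in g) to a
    noise gain in [0, 1]: silent below the threshold, then linearly
    more noise as the impact exceeds it -- the aversive signal that
    nudges the runner towards a softer style."""
    return max(0.0, min(max_gain, slope * (impact - threshold)))

def feedback_step(impact, music_frame):
    """One loop iteration: read the sensor value, mix scaled white
    noise into the current frame of the runner's preferred music."""
    g = noise_gain(impact)
    return [s + g * random.uniform(-1.0, 1.0) for s in music_frame]

print(noise_gain(7.0), noise_gain(10.0), noise_gain(20.0))  # → 0.0 0.5 1.0
```

The key design choice, in line with the text, is that the feedback is continuous and embodied (a graded degradation of an enjoyable stimulus) rather than a discrete warning signal that would demand analytic interpretation.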

6.2 Ethical Considerations of Sound Design

Whether a biofeedback-based interactive sound design is feasible for preterm infants in the NICU is a matter of careful research strategy. It would be possible to detect awake states in infants, and depending on the infant’s activity during that state, stimuli could be provided that afford body responses. Such responses can be monitored, and immediate sound feedback can be provided in view of establishing particular interaction states. However, further work will have to tell us whether this type of sound design is feasible and effective in terms of psychosocial empowerment. If we really push forward our theoretical ideas, an interactive design should also consider gestural design together with sound design. Based on the theoretical insights, the neonatologist’s request would imply that new types of sound design are not only interactive but also embodied. However, the essence is that they would create interaction states with a window of opportunity for generating beneficial effects.
However, whatever adaptive sound design approach is used, it will raise ethical issues. On the one hand, the failure of a sound design strategy may have severe consequences for the later development of the preterm infant. For example, when the wanted effect of a decrease in stress turns out to be an increase in stress, this may impact the infant’s development. On the other hand, if we are almost sure that a sound design would work, can we then set up an experiment where one group serves as a control (not using the sound design)? These and other issues need further consideration and development.

7 Conclusion

The request of a neonatologist to develop a sound design for preterm infants in the NICU was used here as a common thread for considerations about human–sound interactions and their effects. Using the concept of self-augmentation, it was argued that sound design components that match human capacities can lead to interaction outcomes beyond the human’s individual reach. It is assumed that such outcomes offer windows of opportunity for possible powerful effects and empowerment. This theory of self-augmented interaction evolved over several decades of research in musicology, and it was developed at pace with trends in cognitive science and technological developments. As a measure of its success, one may count the number of PhDs, large research projects, and laboratory facilities created in view of this kind of music research over two decades in Europe. Further development of this theory, or at least of our interpretation of the theory, depends on evidence-based applications in domains such as interactive arts, sports, and physiotherapy. As it stands, it seems that multimedia-based biofeedback systems are the key to testing the ultimate power of the theory.

Acknowledgment

The idea that sound design requires a theory of gestural music interaction has been advocated by Rolf Inge Godøy since the early years of the “Arbeitskreis” (called ISSM, or International Society of Systematic Musicology). Founded in the mid-1990s, the ISSM initiated numerous summer schools and conferences advocating the use of new technologies and novel empirical strategies in musicology. Rolf Inge’s contribution to a proactive musicology, in line with the concept of the “Arbeitskreis,” has been and still is much appreciated, and this chapter is a tribute to Rolf Inge’s achievements.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
Beltrán, M.I., Dudink, J., de Jong, T.M., et al.: Sensory-based interventions in the NICU: systematic review of effects on preterm brain development. Pediatr. Res. 92, 47–60 (2022)
Bonicco-Donato, D.: Une archéologie de l’interaction. De David Hume à Erving Goffman. Vrin, Paris (2016)
Clayton, M., Sager, R., Will, U.: In time with the music: the concept of entrainment and its significance for ethnomusicology. In: European Meetings in Ethnomusicology, vol. 11, pp. 1–82. Romanian Society for Ethnomusicology (2005)
Collins, T., Tillmann, B., Barrett, F.S., Delbé, C., Janata, P.: A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior. Psychol. Rev. 121(1), 33 (2014)
François, C., Rodriguez-Fornells, A., Teixidó, M., Agut, T., Bosch, L.: Attenuated brain responses to speech sounds in moderate preterm infants at term age. Dev. Sci. 24, e12990 (2021)
Godøy, R.I.: Motor-mimetic music cognition. Leonardo 36(4), 317–319 (2003)
Godøy, R.I.: Gestural affordances of musical sound. In: Musical Gestures, pp. 115–137. Routledge (2010)
Zurück zum Zitat Goldin-Meadow, S.: Hearing Gesture: How Our Hands Help Us Think. Harvard University Press, Cambridge (2005)CrossRef Goldin-Meadow, S.: Hearing Gesture: How Our Hands Help Us Think. Harvard University Press, Cambridge (2005)CrossRef
Zurück zum Zitat Huron, D.: Sweet Anticipation: Music and the Psychology of Expectation. MIT Press, Cambridge (2008) Huron, D.: Sweet Anticipation: Music and the Psychology of Expectation. MIT Press, Cambridge (2008)
Zurück zum Zitat Koelsch, S., Vuust, P., Friston, K.: Predictive processes and the peculiar case of music. Trends Cogn. Sci. 23(1), 63–77 (2019) Koelsch, S., Vuust, P., Friston, K.: Predictive processes and the peculiar case of music. Trends Cogn. Sci. 23(1), 63–77 (2019)
Zurück zum Zitat Kahneman, D.: Thinking, fast and slow. Farrar, Straus Giroux (2011) Kahneman, D.: Thinking, fast and slow. Farrar, Straus Giroux (2011)
Zurück zum Zitat Langner, G.D.: The Neural Code of Pitch and Harmony. Cambridge University Press, Cambridge (2015)CrossRef Langner, G.D.: The Neural Code of Pitch and Harmony. Cambridge University Press, Cambridge (2015)CrossRef
Zurück zum Zitat Leman, M.: Embodied Music Cognition and Mediation Technology. MIT Press, Cambridge (2007)CrossRef Leman, M.: Embodied Music Cognition and Mediation Technology. MIT Press, Cambridge (2007)CrossRef
Zurück zum Zitat Leman, M.: The Expressive Moment: How Interaction (with Music) Shapes Human Empowerment. MIT Press, Cambridge (2016) Leman, M.: The Expressive Moment: How Interaction (with Music) Shapes Human Empowerment. MIT Press, Cambridge (2016)
Zurück zum Zitat Lorenzoni, V., Van den Berghe, P., Maes, P.J., De Bie, T., De Clercq, D., Leman, M.: Design and validation of an auditory biofeedback system for modification of running parameters. J. Multimodal User Interfaces 13(3), 167–180 (2019a) Lorenzoni, V., Van den Berghe, P., Maes, P.J., De Bie, T., De Clercq, D., Leman, M.: Design and validation of an auditory biofeedback system for modification of running parameters. J. Multimodal User Interfaces 13(3), 167–180 (2019a)
Zurück zum Zitat Lorenzoni, V., Staley, J., Marchant, T., Onderdijk, K.E., Maes, P.J., Leman, M.: The sound instructor: a music-based biofeedback system for improving weightlifting technique. Plos One 14(8), e0220915 (2019b) Lorenzoni, V., Staley, J., Marchant, T., Onderdijk, K.E., Maes, P.J., Leman, M.: The sound instructor: a music-based biofeedback system for improving weightlifting technique. Plos One 14(8), e0220915 (2019b)
Zurück zum Zitat Maes, P.J., Leman, M., Palmer, C., Wanderley, M.M.: Action-based effects on music perception. Front. Psychol. 4, 1008 (2014)CrossRef Maes, P.J., Leman, M., Palmer, C., Wanderley, M.M.: Action-based effects on music perception. Front. Psychol. 4, 1008 (2014)CrossRef
Zurück zum Zitat Maes, P.J., Lorenzoni, V., Six, J.: The SoundBike: musical sonification strategies to enhance cyclists’ spontaneous synchronization to external music. J. Multimodal User Interfaces 13(3), 155–166 (2019)CrossRef Maes, P.J., Lorenzoni, V., Six, J.: The SoundBike: musical sonification strategies to enhance cyclists’ spontaneous synchronization to external music. J. Multimodal User Interfaces 13(3), 155–166 (2019)CrossRef
Zurück zum Zitat Malloch, S.: Mothers and infants and communicative musicality. Music. Sci. 3, 29–57 (1999)CrossRef Malloch, S.: Mothers and infants and communicative musicality. Music. Sci. 3, 29–57 (1999)CrossRef
Zurück zum Zitat McGowan, T., Delafield-Butt, J.: Narrative as co-regulation: a review of embodied narrative in infant development. Infant Behav. Dev. 68, 101747 (2022)CrossRef McGowan, T., Delafield-Butt, J.: Narrative as co-regulation: a review of embodied narrative in infant development. Infant Behav. Dev. 68, 101747 (2022)CrossRef
Zurück zum Zitat Moens, B., et al.: Encouraging spontaneous synchronisation with D-Jogger, an adaptive music player that aligns movement and music. PLoS ONE 9(12), e114234 (2014)CrossRef Moens, B., et al.: Encouraging spontaneous synchronisation with D-Jogger, an adaptive music player that aligns movement and music. PLoS ONE 9(12), e114234 (2014)CrossRef
Zurück zum Zitat Moumdjian, L., et al.: Walking to music and metronome at various tempi in persons with multiple sclerosis: a basis for rehabilitation. Neurorehabil. Neural Repair 33(6), 464–475 (2019)CrossRef Moumdjian, L., et al.: Walking to music and metronome at various tempi in persons with multiple sclerosis: a basis for rehabilitation. Neurorehabil. Neural Repair 33(6), 464–475 (2019)CrossRef
Zurück zum Zitat Phillips-Silver, J., Keller, P.E.: Searching for roots of entrainment and joint action in early musical interactions. Front. Hum. Neurosci. 6, 26 (2012)CrossRef Phillips-Silver, J., Keller, P.E.: Searching for roots of entrainment and joint action in early musical interactions. Front. Hum. Neurosci. 6, 26 (2012)CrossRef
Zurück zum Zitat Schiavio, A., Maes, P.J., van der Schyff, D.: The dynamics of musical participation. Music. Sci. 26(3), 604–626 (2022)CrossRef Schiavio, A., Maes, P.J., van der Schyff, D.: The dynamics of musical participation. Music. Sci. 26(3), 604–626 (2022)CrossRef
Zurück zum Zitat Seth, A.K.: The cybernetic Bayesian brain. Open mind. Frankfurt am Main: MIND Group (2014) Seth, A.K.: The cybernetic Bayesian brain. Open mind. Frankfurt am Main: MIND Group (2014)
Zurück zum Zitat Trehub, S.: Communication, music, and language in infancy. In: Arbib, M.A. (ed.) Language, Music, and the Brain. Strüngmann Forum Reports, vol. 10. MIT Press (2013) Trehub, S.: Communication, music, and language in infancy. In: Arbib, M.A. (ed.) Language, Music, and the Brain. Strüngmann Forum Reports, vol. 10. MIT Press (2013)
Zurück zum Zitat Trevarthen, C., Aitken, K.J.: Infant intersubjectivity: research, theory, and clinical applications. J. Child Psychol. Psychiatry Allied Disciplines 42(1), 3–48 (2001)CrossRef Trevarthen, C., Aitken, K.J.: Infant intersubjectivity: research, theory, and clinical applications. J. Child Psychol. Psychiatry Allied Disciplines 42(1), 3–48 (2001)CrossRef
Zurück zum Zitat Trevarthen, C.: The musical art of infant conversation: narrating in the time of sympathetic experience, without rational interpretation, before words. Music. Sci. 12(1_suppl), 15–46 (2008)CrossRef Trevarthen, C.: The musical art of infant conversation: narrating in the time of sympathetic experience, without rational interpretation, before words. Music. Sci. 12(1_suppl), 15–46 (2008)CrossRef
Zurück zum Zitat Trost, W.J., Labbé, C., Grandjean, D.: Rhythmic entrainment as a musical affect induction mechanism. Neuropsychologia 96, 96–110 (2017)CrossRef Trost, W.J., Labbé, C., Grandjean, D.: Rhythmic entrainment as a musical affect induction mechanism. Neuropsychologia 96, 96–110 (2017)CrossRef
Zurück zum Zitat Van den Berghe, P., et al.: Music-based biofeedback to reduce tibial shock in over-ground running: a proof-of-concept study. Sci. Rep. 11(1), 1–12 (2021) Van den Berghe, P., et al.: Music-based biofeedback to reduce tibial shock in over-ground running: a proof-of-concept study. Sci. Rep. 11(1), 1–12 (2021)
Zurück zum Zitat Van den Berghe, P., et al.: Reducing the peak tibial acceleration of running by music-based biofeedback: a quasi-randomized controlled trial. Scand. J. Med. Sci. Sports 32(4), 698–709 (2022)CrossRef Van den Berghe, P., et al.: Reducing the peak tibial acceleration of running by music-based biofeedback: a quasi-randomized controlled trial. Scand. J. Med. Sci. Sports 32(4), 698–709 (2022)CrossRef
Zurück zum Zitat Van Puyvelde, M., et al.: The interplay between tonal synchrony and social engagement in mother–infant interaction. Infancy 18(5), 849–872 (2013)CrossRef Van Puyvelde, M., et al.: The interplay between tonal synchrony and social engagement in mother–infant interaction. Infancy 18(5), 849–872 (2013)CrossRef
Metadata
Title: Sound Design and the Theory of Self-augmented Interactions
Author: Marc Leman
Copyright year: 2024
DOI: https://doi.org/10.1007/978-3-031-57892-2_2