Open Access 2018 | Original Paper | Book Chapter

8. The MSCI Platform: A Framework for the Design and Simulation of Multisensory Virtual Musical Instruments

Authors: James Leonard, Nicolas Castagné, Claude Cadoz, Annie Luciani

Published in: Musical Haptics

Publisher: Springer International Publishing


Abstract

This chapter presents recent work concerning physically modelled virtual musical instruments and force feedback. Firstly, we discuss fundamental differences in the gesture–sound relationship between acoustic instruments and digital musical instruments, the former being linked by dynamic physical coupling, the latter by transmission and processing of information and control signals. We then present an approach that allows experiencing physical coupling with virtual instruments, using the CORDIS-ANIMA physical modelling formalism, synchronous computation and force-feedback devices. To this end, we introduce a framework for the creation and manipulation of multisensory virtual instruments, called the MSCI platform. In particular, we elaborate on the cohabitation, within a single physical model, of sections simulated at different rates. Finally, we discuss the relevance of creating virtual musical instruments in this manner, and we consider their use in live performance.

8.1 Introduction

Computers have deeply changed our way of thinking, working, communicating and creating. The musical world is no exception to this transformation, whether in popular music—which now relies predominantly on electronic means—or in the processes of many modern composers who use software tools to address formal compositional problems, and to capture, synthesise, process and manipulate sound. The rapid advances in computer technology now enable real-time computing and interactive control of complex digital sound synthesis and processing algorithms. When coupled with interfaces that capture musical gestures and map them to the algorithms’ parameters, such systems are named digital musical instruments (DMIs). They are now widespread musical tools and allow for a true form of virtuosity.
However, a fundamental question arises as to the relationship between a musician and a DMI: is it of a similar nature to the relationship established with conventional instruments? The question is complex, especially given the panoply of available synthesis techniques and control paradigms. Moreover, digital synthesis brings forth an array of new possibilities for controlling musical timbres, as well as their arrangement at a macro-structural level. It is therefore legitimate to ask whether these tools should be envisaged by analogy to acoustic instruments, e.g. whether they should offer means of manipulation analogous to traditional instruments, or whether they call for entirely new control and interaction paradigms.
This issue ultimately questions the very definition of a musical instrument: can (and should) a digital interface controlling a real-time sound synthesis process be called an instrument, in the sense that it enables an embodiment comparable to traditional instruments? Can DMIs and conventional instruments be grouped into the same category? And is controlling digital synthesis by imitating the way we interact with traditional instruments the most effective approach?
We discuss these issues by considering that the recreation of the physical instrumental relationship between musicians and DMIs is indeed relevant (see Chap. 2). When a digital sound synthesis process is physically based (i.e. relying on physical laws to create representations of sound-producing virtual objects), a bidirectional link between gesture and sound can be established that coherently transforms mechanical energy provided by the user into airborne vibrations of the virtual instrument. Such is the case in acoustic and electroacoustic instruments; this energetic exchange, referred to by Cadoz as the ergotic function of instrumental gestures [6], has been shown to be a key factor in their expressiveness [24, 33].
The design of DMIs addressing these issues calls for:
  • Specific physical modelling and simulation paradigms for digital sound synthesis, in order to design and simulate the dynamics of virtual vibrating objects and mechanical systems.
  • The use of adequate force-feedback technologies to enable energetic coupling between the user and the simulated instrument.
  • Software and hardware solutions to run such physical models synchronously and in real time, at rates of several kHz for the user–instrument control chain, and at audio rates (typically 44.1 kHz or higher) for the acoustical components.
  • Tools to physically model the instrument and the various mechanical features that define its ergonomics and playability.
Our answer to these requirements is the Modeleur-Simulateur pour la Création Instrumentale (MSCI) platform, a complete workstation for designing and crafting physics-based multisensory virtual musical instruments and for playing them with force feedback.
The following sections will present: (a) the specifics of multisensory virtual musical instruments, (b) hardware and software design for the MSCI platform, (c) considerations for modelling the mechanics of musical instruments and their decomposition into sections simulated at different rates and (d) use of the platform as a creative tool, including the first use of the MSCI platform by Claude Cadoz in a live performance.

8.2 A Physical Approach to Digital Musical Instruments

The incorporation of haptic devices into musical applications has become a regular feature in the field of computer music, be it through force-feedback systems or vibrotactile actuators, now present in widespread consumer electronics (common actuator technologies are described in Sect. 13.2). Devices are becoming more affordable, and a growing number of studies point towards the benefits yielded by such systems in terms of control and manipulation for musical tasks [2, 3, 16, 24, 27–29] (see also Chap. 6).
Two main approaches for integrating haptics in digital instrumental performance can be distinguished: (i) augmenting DMIs with haptic feedback to enhance their control and convey information to the user, or (ii) making a virtual instrument tangible by enabling gestural interaction with a haptic representation of all or part of the instrument’s mechanical features. Concerning the latter case, at least two sub-categories can be described, namely: (ii-a) the distributed approach, in which the user interacts haptically with a model of the gestural interface of the instrument, which in turn controls the sound synthesis process through feed-forward mapping strategies (historically referred to as multimodal approach at ACROE-ICA), and (ii-b) the unitary approach, in which the entire instrument is represented by a single physical model that is used to render audio, haptic and possibly even visual feedback (we refer to this single-model scenario as multisensory).

8.2.1 Distributed Approach to Haptic Digital Musical Instruments

The distributed (or multi-model) approach to haptic DMIs follows the classic decomposition into gestural controller and sound synthesis sections [33]. The haptic, aural and sometimes visual stimuli are physically decoupled from each other, due to the distributed architecture of the instrument (see Fig. 8.1). Haptic feedback incorporated into the gestural controller enables coupling with certain components of the DMI, for instance, by programming the mechanical behaviour of the gestural control section using a local haptic model. Data extracted from the interaction between the user and this model can then be mapped to chosen sound synthesis parameters.
Some examples of this approach are the Virtual Piano Action by Gillespie [15], or the DIMPLE software [30] in which the user interacts with a rigid dynamics model, and information concerning this interaction (positions, collisions, etc.) is then mapped to an arbitrary sound synthesis process, possibly a physically based simulation.
Vibrotactile feedback inferred from the sound synthesis process itself can be provided to the user by integrating vibration actuators into the gestural controller. Such is the case of Nichols’ vBow friction-driven haptic interface [26] or Marshall’s vibrotactile feedback incorporated directly into DMI controllers [25].
Technical implementations of these systems generally rely on asynchronous computation loops for haptics and sound, employing low- to mid-priced haptic devices such as the Phantom Omni or the Novint Falcon. While these systems tend to bridge the gap between gestural control section and sound synthesis, the sound is still driven by mapping of sensor data, and the user physically interacts only with a local subsection of the instrument.
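As a rough illustration of this architecture, the following Python sketch (our own, with hypothetical names and parameter values) renders a local spring model on the haptic controller and maps the measured position to a synthesis parameter; the two loops exchange only control data, not energy.

```python
# Minimal sketch of the distributed approach: a local haptic model plus a
# feed-forward mapping. All names and values are illustrative assumptions.

def haptic_loop_step(x_device, k_spring=200.0, x_rest=0.0):
    """Local haptic model: render a virtual spring on the gestural controller."""
    return -k_spring * (x_device - x_rest)      # force sent to the actuator

def mapping(x_device, base_freq=220.0, depth=200.0):
    """Feed-forward mapping: device position drives an oscillator frequency (Hz)."""
    return base_freq + depth * x_device

# In a typical distributed DMI, the haptic loop runs locally at ~1 kHz while the
# synthesis engine runs at 44.1 kHz; only the mapped control value crosses the
# boundary, so no energetic coupling exists between player and sound model.
```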

8.2.2 Unitary Approach to Virtual Musical Instruments

An alternative approach to implementing haptic DMIs is to model the virtual instrument as a single multisensory physical object that jointly bears mechanical, acoustical and possibly visual properties, inherent to its physical nature. Physical modelling techniques are then the only viable approach. As a result, the gestural controller and sound synthesis sections are tightly interconnected: haptic interaction with one part of the instrument will affect it as a whole, and the player is haptically coupled with a complete single model (see Fig. 8.2 and Chap. 2).
Within this approach, at least two families of work can be distinguished:
  • Works such as [4, 11, 29] enable haptic interaction with a sound-producing physical model and rely on computation schemes and hardware technologies comparable to those described in Sect. 8.2.1. Generally, these works employ fairly inexpensive haptic devices, limited in terms of reactivity and peak force feedback. Also, the computation of the interaction is done in soft real time, often employing asynchronous protocols such as OSC [30] or MIDI [29]. While they do enable direct haptic interfacing with physical models, these systems do not strive for rigorous and coherent energy exchange between the musician and the virtual instrument.
  • Others [13, 19, 20, 31] aim to model and reproduce the physical coupling between musician and traditional instrument as accurately as possible, including the exchange of energy between the two. To this end, high-performance haptic interfaces and synchronous high-speed computational loops are required. Such systems aim to capture the feel, playability and expressiveness of traditional instruments, while opening to the creative possibilities of physical modelling sound synthesis, and more generally of the computer as an instrument.
MSCI fits into the latter category. The platform provides a musician-friendly physical modelling environment in which users can design virtual musical instruments, and allows unified multisensory interaction by simulating those instruments on a dedicated workstation that supplies coherent aural, visual and haptic feedback.

8.3 Hardware and Software Solutions for the MSCI Platform

8.3.1 The TGR Haptic System

The transducteur gestuel rétroactif (TGR) is a force-feedback device designed by the ACROE-ICA laboratory (Fig. 8.3). The first prototype was proposed by Florens in 1978 [12], conceived specifically for the requirements of artistic creation, in particular for instrumental arts such as music. The first goal of the TGR is to render the dynamic qualities of mechanical interactions with simulated objects with the highest possible fidelity: to this end, it offers both a high mechanical bandwidth (up to 15 kHz) and high peak force feedback (up to 200 N per degree of freedom).
Several slices (1-DoF modular electromechanical systems, each comprising a sensor and an actuator) can be combined, allowing for any number of force-feedback-enabled degrees of freedom [14]. The device employed in the MSCI platform combines 12 independent modules that can be fitted with various mechanical end-effectors, forming 1D, 2D, 3D or even 6D morphological configurations adapted to the diverse nature of instrumental gestures such as striking, bowing, plucking and grasping.

8.3.2 The CORDIS-ANIMA Formalism

CORDIS-ANIMA [5] is a modular formalism for modelling and simulating mass-interaction networks, that is, physical models described by Newtonian point-based mechanics. It defines two main module types:
  • <MAT> modules represent material points that update their position in space in response to the force they receive, according to their inertial behaviour. The simplest of these is a point mass.
  • <LIA> modules represent interactions between two <MAT> modules. The interaction can be elastic, viscous, nonlinear, etc. A <LIA> connects two <MAT> modules and calculates the interaction force between them, depending on their relative positions (for elastic interactions) or velocities (for viscous interactions). The calculated force is then applied symmetrically to each of the connected <MAT> modules.
CORDIS-ANIMA incorporates the notion of physical coupling between networks of elementary modules through the interdependence of two dual variables: position, an extensive variable that gives <MAT> modules a position in space, and force, an intensive variable that originates from interactions between <MAT> modules described by <LIA> modules. Computing the network requires a closed-loop calculation: first, of the new positions of <MAT> modules, and second, of all the forces produced by the <LIA> modules according to the new positions of the <MAT> modules that they are connected to (Fig. 8.4).
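The following Python sketch illustrates this two-phase closed loop for a point mass and a visco-elastic <LIA>; the discretisation shown (a second-order explicit scheme with a normalised time step) is a common choice for mass-interaction models, not necessarily ACROE-ICA's exact implementation.

```python
# Illustrative mass-interaction step: <MAT> positions first, <LIA> forces second.

class Mat:
    """Point mass: integrates the accumulated force into a new position."""
    def __init__(self, m, x0=0.0):
        self.m, self.x, self.x_prev, self.f = m, x0, x0, 0.0

    def step(self):
        x_new = 2.0 * self.x - self.x_prev + self.f / self.m   # dt normalised to 1
        self.x_prev, self.x, self.f = self.x, x_new, 0.0        # reset force accumulator

class Lia:
    """Visco-elastic link: force from relative position (k) and velocity (z)."""
    def __init__(self, mat_a, mat_b, k=0.0, z=0.0):
        self.a, self.b, self.k, self.z = mat_a, mat_b, k, z

    def step(self):
        dx = self.a.x - self.b.x
        dv = (self.a.x - self.a.x_prev) - (self.b.x - self.b.x_prev)
        f = -self.k * dx - self.z * dv
        self.a.f += f            # applied symmetrically ...
        self.b.f -= f            # ... to both <MAT> modules

def simulate(mats, lias, n_steps):
    for _ in range(n_steps):
        for m in mats:
            m.step()             # first: new positions from the received forces
        for l in lias:
            l.step()             # second: forces from the new positions
```

A fixed point (e.g. a wall or anchoring <MAT>) can be obtained simply by never stepping it, so it holds its position while still exchanging forces.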
Several CORDIS-ANIMA implementations exist for different geometrical spaces: 1D with scalar distances, or 1D, 2D and 3D with Euclidean distances. The 1D scalar-distance version is generally used to simulate vibroacoustic deformations in which all <MAT> modules move along a single scalar axis. Models built in this way are topological networks that may represent a first-order approach to vibratory deformations as found in musical instruments, a simplification that works well in most cases.
For sound-producing physical models, networks must be simulated at audio rate (generally 44.1 kHz) in order to faithfully represent acoustical deformations occurring in the audible range (up to 20 kHz). Non-vibrating models, designed for instance to produce visual motion or to represent mechanical systems, are often simulated in 1D, 2D or 3D geometrical spaces and at lower rates in the range of 1–10 kHz, a bandwidth suited to instrumental performance.
The TGR haptic device is represented in CORDIS-ANIMA as a <MAT> module: this module reports the positions read from the device's sensors and receives forces from the connected <LIA> modules, which are then sent to the TGR's actuators.

8.3.3 The GENESIS Software Environment

GENESIS [9] is ACROE-ICA's modelling and simulation software for musical creation. It allows users to model vibrating objects, from elementary oscillators to complex musical scenes, and to simulate them off-line at 44.1 kHz. GENESIS implements the 1D version of CORDIS-ANIMA, meaning that all <MAT> physical modules move along a single scalar axis conventionally labelled x.
The modelling interface consists of a workbench representing the y–z plane, where <MAT> modules can be placed and interconnected through <LIA> modules to form topological networks (Figs. 8.5 and 8.6). Modules are given physical parameters that dictate their behaviour, as well as initial conditions (initial position and velocity of <MAT> modules).

8.3.4 Synchronous Real-Time Computing Architecture

The vast majority of available haptic devices communicate asynchronously with physical simulations [11, 30]. Generally, the haptic loop runs locally at approximately 1 kHz, whereas other model components are computed at lower rates and with less demanding latency constraints, following a distributed approach. Current general-purpose computer architectures are perfectly suited for these applications. However, when striving for energetically coherent instrumental interaction between the user and the simulated object, the communication between the haptic device and the simulation plays a key role.
As underlined in Sect. 8.3.2, the global physical entity composed of the force-feedback device and virtual object can be defined as a physical, energy-conserving system only if the haptic position and force data streams integrate seamlessly into the CORDIS-ANIMA closed-loop simulation. To this end, the haptic loop must run synchronously at the rate of the physical simulation, with single-sample latency between its position output and force input. For simulations running at several kHz, the time step (approximately 20–100 μs) within which AD/DA conversions, bidirectional communication with the haptic device and a single computation loop for the whole physical model must occur imposes a reactive computing architecture with guaranteed response time, which is not attainable by general-purpose machines [10].
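The time-step figure follows directly from the simulation rate: the whole loop must fit within the inverse of the sampling frequency.

```latex
T = \frac{1}{f_s}: \qquad
T_{44.1\,\mathrm{kHz}} = \frac{1}{44\,100} \approx 22.7\,\mu\mathrm{s}, \qquad
T_{10\,\mathrm{kHz}} = \frac{1}{10\,000} = 100\,\mu\mathrm{s}
```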
Additionally, the simulation of physical models sufficiently complex for musical purposes is computationally demanding and therefore ill-fitted for calculation on most current embedded systems. A previous simulation architecture at ACROE-ICA [19] was based on the TORO board from Innovative Integration; while it allowed running the haptic loop synchronously at audio rate (44.1 kHz), the available processing power limited the system to small-scale physical models [20].
The hardware and software architecture of the MSCI platform (shown in Fig. 8.7) consequently addresses both the need for high computing power and reactive I/O. It is based on the RedHawk Linux real-time operating system (RTOS), where the physical simulation is computed in two sections: one running at audio rate (44.1 kHz) and the other running at control (gestural) rate (1–10 kHz). The TORO DSP board serves as a front-end for haptic I/O. Sound is handled by an external soundcard. These components are synchronised through a shared master clock (the soundcard’s wordclock). Visualisation data, on the other hand, is processed asynchronously so as to display the physical model during simulation.
This platform can simulate virtual scenes with up to 7000 interacting audio-rate physical modules: an approximate performance gain by a factor of 50 compared to the previous embedded architecture.

8.4 Multi-rate Decomposition of the Instrumental Chain

The MSCI architecture is based on the idea of decomposing a physical model into a section running at audio rate and another one running at a lower gestural rate. In what follows we discuss the motivations for this decomposition, and how it can be addressed in the CORDIS-ANIMA framework while retaining physical coupling between the two sections of the physical model.

8.4.1 Gesture–Sound Dynamics

The mechanics of traditional instruments present a natural cohabitation of multiple dynamics. In particular, instruments can be generally separated into:
  • A section that is interfaced with the musician’s gestures, which we label excitation structure. This section is mostly non-vibrating, and its frequency bandwidth is comparable to that of human instrumental gestures. Examples of this section are the piano key mechanics, the violin bow, a percussion mallet, a guitar pick, a harp or timpani pedal.
  • A section that produces vibroacoustic deformations, called the vibrating structure. This corresponds to the strings and body of the piano or violin, or the drum head of a percussion instrument. It is often complemented by other components operating at acoustic rates, such as a bridge or a sound box.
These two sections are coupled by means of nonlinear interactions (percussion, friction, plucking, etc.) that transform low-bandwidth gesture energy into high-bandwidth energy of acoustical vibrations (Fig. 8.8).
Since these two sections of an instrument operate in different frequency ranges, it is natural to simulate their discrete-time representations at different sampling rates. While this results in computational optimisation, a major issue arises: how to retain coherent physical coupling between the low-rate and high-rate sections of the instrument while at the same time meeting the constraints of synchronous simulation?

8.4.2 Multi-rate CORDIS-ANIMA Simulations

8.4.2.1 Multi-rate Closed-Loop Dynamic Systems

The physical coupling between two sections of a CORDIS-ANIMA model simulated at different rates brings forth two main questions: (i) how to ensure transparent communication of the position and force variables between the two discrete-time systems in order to represent the physical coupling between them, and (ii) how to limit the bandwidth of the position and force signals when they pass from one simulation space to the other? For instance, if no band-limiting is applied to the higher-rate signals before passing them to the low-rate section, aliasing is produced.
At first glance, the latter seems to be an elementary signal processing issue, solvable by using up- and down-sampling and low-pass digital filtering. However, the physical simulation imposes strict constraints on the operators that can be used: it is a closed-loop system in which force and position variables are coupled within a single simulation step. In other words, a maximum delay of one sample is allowed between all the inputs and outputs, while any additional delay alters the physical consistency of the system and considerably affects the numerical stability of the simulation [22]. This prevents using many standard signal processing tools for up- and down-sampling and digital filtering, as the vast majority of them introduce additional delays.

8.4.2.2 Inter-Frequency Coupling Operators

To address the above issue, up- and down-sampling of position and force variables travelling between the high- and low-rate sections must rely on delay-free (zero-order) operators, even though they necessarily introduce a trade-off in terms of quality of the reconstructed signals. The operators were chosen in accordance with the nature of the variables and their integration into the CORDIS-ANIMA computational scheme, so as to preserve the integrity of the physical quantities circulating inside the multi-rate simulation.
The two types of connections allowed by these operators are given in Fig. 8.9, where X_LF and F_LF represent, respectively, the low-rate position and force signals, whereas X_HF and F_HF represent the high-rate signals. Since no delay is introduced, the closed-loop nature of the simulation is preserved.
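The chapter does not detail the operators of Fig. 8.9, but a plausible delay-free pair can be sketched as follows: a zero-order hold for the low-rate position travelling up, and an impulse-preserving average for the high-rate force travelling down (names and choices are ours).

```python
# Assumed zero-order coupling operators between the low-rate (LF) and
# high-rate (HF) sections, for a rate ratio r = f_HF / f_LF.

def upsample_position(x_lf, r):
    """X_LF -> X_HF: hold the low-rate position for the r high-rate steps."""
    return [x_lf] * r

def downsample_force(f_hf_frame):
    """F_HF -> F_LF: average the high-rate forces of one low-rate frame, so the
    impulse (force x time) transmitted to the slow section is preserved."""
    return sum(f_hf_frame) / len(f_hf_frame)

# Both operators use only samples already available in the current frame, so no
# additional delay is inserted in the position-force closed loop.
```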
Theory and experiments demonstrate that a multi-rate model implemented in this manner behaves identically to an equivalent low-rate model in terms of numerical stability, provided that the model operates only in the lower frequency range. However, an inevitable consequence of using these operators is that high-rate signals are distorted. If left untreated, these distortions make the system completely unusable. Consequently, a solution has to be found to filter out unwanted artefacts, while once again avoiding any delay in the position–force closed loop.

8.4.2.3 Low-Pass Filtering by Means of Physical Models

Fortunately, CORDIS-ANIMA models can act as filtering structures [18]. As a basic example, a simple mass–spring oscillator excited by an input force signal can be regarded as a second-order low-pass filter whose transfer function can be expressed explicitly in terms of physical parameters [17]. This property has, for instance, been used to build small virtual physical systems that smooth noise in the position data provided by the TGR’s sensors [19].
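In continuous time, for example, a mass m attached to a reference by a spring of stiffness k and a damper z, and driven by a force F, obeys a standard second-order equation whose transfer function is low-pass; the discrete CORDIS-ANIMA version analysed in [17] behaves analogously.

```latex
m\,\ddot{x} + z\,\dot{x} + k\,x = F(t)
\;\;\Longrightarrow\;\;
H(s) = \frac{X(s)}{F(s)} = \frac{1}{m s^{2} + z s + k},
\qquad
\omega_0 = \sqrt{\tfrac{k}{m}}, \quad Q = \frac{\sqrt{k m}}{z}
```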
It is thus possible to design physical low-pass filters that are as transparent as possible within the low-rate bandwidth, and present a sharp cut-off before the low-rate Nyquist frequency. We have modelled such filters as propagation lines (mass–spring chaplets) with specific mass, stiffness and damping distribution. They are used to eliminate distortion generated by the up-sampling operators and serve as anti-aliasing filters for the circulation of high-rate signals towards low-rate sections, while preserving physical consistency. Careful tuning and scaling of the filtering structures ensure minimal impact on the mechanical properties of the simulated object (e.g., in terms of added stiffness, damping and inertia).

8.4.2.4 Complete Multi-rate Haptic Simulation Chain

Figure 8.10 presents the complete multi-rate haptic simulation chain as implemented in the MSCI platform. The instrument is decomposed into a lower-bandwidth gestural section, simulated at the lower gestural rate, and a higher-bandwidth vibrating section, simulated synchronously at audio rate. The two sections are coupled through the multi-rate operators, a filtering mechanism and a nonlinear interaction that transforms gestural energy into vibroacoustic deformations. Physical energy is conserved throughout the system, ensuring computational stability.
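A schematic outline of one gestural-rate frame of this chain is sketched below, reusing the Mat/Lia-style sections and the coupling operators introduced above; all object and method names are placeholders, and the nonlinear interaction is shown as a simple one-sided contact spring.

```python
# One gestural-rate frame of the multi-rate chain (illustrative only),
# with r audio-rate steps per gestural-rate step.

def contact_force(x_exciter, x_string, k_contact=500.0):
    """One-sided (buffered) spring: pushes only while the exciter penetrates
    the string (here: x_exciter > x_string), converting slow gestural energy
    into audio-rate vibration."""
    gap = x_exciter - x_string
    return -k_contact * gap if gap > 0.0 else 0.0

def run_frame(device, gest, audio, r):
    # The haptic device enters the gestural section as a <MAT>-like boundary:
    # it reports a position and will receive a force.
    gest.set_device_position(device.read_position())
    gest.step()                                   # low-rate section, one step
    x_exc = gest.exciter_position()               # held constant for r steps

    f_frame = []
    for _ in range(r):                            # audio-rate inner loop
        f = contact_force(x_exc, audio.contact_position())
        audio.apply_force(-f)                     # action on the vibrating structure
        audio.step()
        f_frame.append(f)                         # reaction on the exciter

    f_back = sum(f_frame) / r                     # impulse-preserving down-sampling
    gest.apply_force_on_exciter(f_back)           # closes the physical loop
    device.write_force(gest.device_force())       # force sent back to the actuators
```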
These solutions combined allow establishing a true energetic bridge between the real-world user and the simulated instrument, supporting the ergotic function of musical gestures, as defined by Cadoz and Wanderley [6, 7] (see also Chap. 2).

8.5 Virtual Instruments Created with MSCI

8.5.1 Workflow and Design Process

Creating physical models in MSCI is similar to classic modelling with GENESIS, especially concerning the design of the vibrating sections of the instrument. The haptic device is integrated directly into the CORDIS-ANIMA model as a series of TGR <MAT> modules, one for each allocated degree of freedom. However, designing haptic DMIs in this way raises a number of specific concerns:
  • Models should be designed as stable, passive physical systems. Not meeting this requirement may result in undesirable and potentially dangerous instabilities of the haptic device, although this may occasionally yield interesting musical results.
  • The feel of the instrument perceived by the player is at least as important as the sound resulting from the interaction. It is therefore necessary to adapt the mechanical impedance of the interface between the real world and the simulation, by setting the dual constraints of position and force-feedback range according to both the model and the interaction(s). See also Chap. 2 in this regard.
  • Connecting a haptic device to a virtual instrument may carry hardware-related issues over into the simulation domain. For instance, noise from the haptic device’s sensors may propagate through to the virtual instrument’s vibrating components.
Details concerning calibration and impedance matching are described in [20], and various instrument designs are discussed in [21].
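As a purely hypothetical illustration of the range adaptation mentioned above (the actual calibration procedure is the one described in [20]), two gains can map device units to simulation units and back; their ratio scales the mechanical impedance presented to the player.

```python
# Hypothetical position/force range adaptation between the TGR and the model.
K_POS   = 10.0    # simulation position units per metre of device travel (assumed)
K_FORCE = 0.05    # newtons of feedback per simulation force unit (assumed)

def to_simulation(x_device_m):
    """Device position (m) -> model position (simulation units)."""
    return K_POS * x_device_m

def to_device(f_simulation):
    """Model force (simulation units) -> actuator force (N), clamped to the
    TGR's peak of 200 N per degree of freedom."""
    return max(-200.0, min(200.0, K_FORCE * f_simulation))
```

Increasing K_FORCE or decreasing K_POS makes the same virtual object feel stiffer and heavier, so these gains are tuned jointly with the model’s parameters and the intended interaction.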

8.5.2 Specificities of MSCI Haptic Virtual Instruments

Since the first release of the MSCI platform in 2015, over 100 virtual instruments have been created by the authors, students and the general public. The computing power of modern systems has allowed, for the first time, the simulation of and haptic interaction with large-scale instruments composed of thousands of interacting modules. Figure 8.11 shows an example of such a model. Especially for large structures with nonlinear acoustical behaviour, such as membranes or cymbals, exploration through real-time manipulation greatly facilitates the iterative design and fine-tuning process.
One notable feature of MSCI’s models is their rich and complex response to different categories of musical gestures [6]. Indeed, as the entire instrument is modelled physically with CORDIS-ANIMA, the user has access to every single simulated point of physical matter. This is not possible in more encapsulated or global physical modelling techniques such as digital waveguides [32] or modal synthesis [1]. This allows for subtle and complex control of the virtual instrument using various haptic modules for different gestures. In the case of a simple string, the excitation gesture could be, e.g. plucking, striking or bowing, whereas modification gestures could include pinning the string down onto the fretboard to change its length and pitch (as shown in Fig. 8.12), gently applying pressure at specific points of the open string to obtain natural harmonics, applying pressure near the bridge to “palm mute” the string, or even dynamically moving the bridge or the tuning peg to change the string’s acoustical properties over time.
Demonstration sessions and feedback from users tend to strongly confirm the importance of tight physical interaction with the virtual instruments. Even the simplest models can yield a wide palette of sonic possibilities, often leading users to spend a fairly long time (up to 30 min) exploring the dynamics, playing modalities and haptic response of a single instrument. This fine degree of control enables an enactive learning process of getting to know an object (a virtual instrument in this case) through physical manipulation.

8.5.3 Real-Time Performance in Hélios

Hélios is an interactive musical and visual piece that was created for the AST 2015 festival (see Footnote 1). For the first time, an MSCI force-feedback station was used in a public live performance. The entire musical content and the visual scenes are created with GENESIS, associating a vast pre-calculated physical model with a real-time MSCI simulation. Video content is projected onto two screens: a large screen for the pre-calculated visual scenes and a second screen for the real-time visuals associated with the MSCI simulation. The sound projection is handled by a dome of 24 loudspeakers arranged in a semi-sphere above the audience.
The pre-calculated virtual instrumental scene in Hélios is composed of approximately 200,000 GENESIS modules (Fig. 8.13). The off-line simulation of this vast instrumental scene allows:
  • to distribute the sonic output to 24 audio channels routed to 24 loudspeakers during the concert;
  • to store the entire 3D visual scene, including the (low-rate and vibratory) motion of all the virtual instruments (using the GMDL format [23]). The scene can be navigated during playback using ordinary gestural input systems such as a mouse. This “interpreted” playback can then be recorded, edited and incorporated into the video projection during the concert.
The MSCI system incorporated into the installation uses a 12-DoF force-feedback device (Fig. 8.3). A model made of approximately 7000 physical modules is loaded onto the MSCI workstation. This model is a subset of the complete scene, which guarantees coherency between the sound textures produced by the off-line simulation and those produced during the real-time interaction with the MSCI virtual instrument. This fusion blurs the boundaries between off-line and real-time sections and offers rich possibilities for the composition and musical structure in the temporal, spatial and structural dimensions of the piece.
The described configuration illustrates one of the many possible interaction scenarios between real and virtual players, real and virtual instruments, real-time and off-line (“supra-instrumental”) instrumental situations, as previously described by Cadoz [8].

8.6 Conclusions

We have presented and discussed recent solutions developed at ACROE-ICA for designing and implementing multisensory virtual musical instruments. These converged into the MSCI platform, the first modelling and simulation environment of its kind, enabling large-scale computation of physical models and synchronous high-performance haptic interaction that retains the ergotic qualities of musical gestures in a digital context.
Several scientific and technological questions have been addressed by this work, in particular concerning the formalisation and implementation of physical models containing sections running at different rates. The models created so far and feedback from users lead us to believe that MSCI offers high potential as a musical meta-instrument, and that it is suitable for use in live performance, as demonstrated by Claude Cadoz in his two performances of Hélios.
Further developments will include incorporating mixed interaction between user manipulation and virtual agents inside the physical models. Most importantly, MSCI will be used in various creative contexts by musicians and composers and in pedagogical contexts to teach about physics, acoustics and haptics.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. Art–Science–Technologie, November 14–21, 2015, Grenoble, France.
References
1. Adrien, J.M.: The missing link: modal synthesis. In: De Poli, G., Piccialli, A., Roads, C. (eds.) Representations of Musical Signals, pp. 269–298. MIT Press (1991)
2. Ahmaniemi, T.: Gesture controlled virtual instrument with dynamic vibrotactile feedback. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 485–488 (2010)
3. Berdahl, E., Niemeyer, G., Smith, J.O.: Using haptics to assist performers in making gestures to a musical instrument. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 177–182 (2009)
4. Berdahl, E., Kontogeorgakopoulos, A.: The FireFader design: simple, open-source, and reconfigurable haptics for musicians. In: Proceedings of the Sound and Music Computing Conference (SMC), pp. 90–98 (2012)
5. Cadoz, C., Luciani, A., Florens, J.L.: CORDIS-ANIMA: a modeling and simulation system for sound and image synthesis: the general formalism. Comput. Music J. 17(1), 19–29 (1993)
6. Cadoz, C.: Le geste canal de communication homme/machine: la communication « instrumentale ». Technique et science informatiques 13(1), 31–61 (1994)
7. Cadoz, C., Wanderley, M.M.: Gesture: music. In: Wanderley, M.M., Battier, M. (eds.) Trends in Gestural Control of Music. IRCAM/Centre Pompidou, Paris, France (2000)
8. Cadoz, C.: Supra-instrumental interactions and gestures. J. New Music Res. 38(3), 215–230 (2009)
9. Castagné, N., Cadoz, C., Allaoui, A., Tache, O.: G3: GENESIS software environment update. In: Proceedings of the International Computer Music Conference (ICMC), pp. 407–410 (2009)
10. Castet, J., Florens, J.L.: A virtual reality simulator based on haptic hard constraints. In: Proceedings of the International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, pp. 918–923. Springer, Berlin, Heidelberg (2008)
11. Erkut, C., Jylhä, A., Karjalainen, M., Altinsoy, E.M.: Audio-tactile interaction at the nodes of a block-based physical sound synthesis model. In: Proceedings of the Haptic and Auditory Interaction Design Workshop, vol. 2, pp. 25–26 (2008)
12. Florens, J.L.: Coupleur gestuel interactif pour la commande et le contrôle de sons synthétisés en temps réel. Ph.D. thesis, Grenoble-INPG (1978)
13. Florens, J.L.: Real time bowed string synthesis with force feedback gesture. In: Proceedings of the Forum Acusticum (2002)
14. Florens, J.L., Luciani, A., Cadoz, C., Castagné, N.: ERGOS: multi-degrees of freedom and versatile force-feedback panoply. In: Proceedings of the EuroHaptics Conference (2004)
15. Gillespie, B.: The virtual piano action: design and implementation. In: Proceedings of the International Computer Music Conference (ICMC) (1994)
16. Giordano, B.L., Avanzini, F., Wanderley, M., McAdams, S.: Multisensory integration in percussion performance. In: Congr. Fr. d’Acoustique, pp. 12–16 (2010)
17. Incerti, E.: Synthèse de sons par modélisation physique de structures vibrantes. Applications pour la création musicale par ordinateur. Ph.D. thesis, INP Grenoble (1996)
18. Kontogeorgakopoulos, A., Cadoz, C.: Filtering within the framework of mass-interaction physical modeling and of haptic gestural interaction. In: Proceedings of the International Conference on Digital Audio Effects (DAFx), pp. 319–325 (2007)
19. Leonard, J., Castagné, N., Cadoz, C., Florens, J.L.: Interactive physical design and haptic playing of virtual musical instruments. In: Proceedings of the International Computer Music Conference (ICMC), pp. 108–115 (2013)
20. Leonard, J., Cadoz, C., Castagné, N., Florens, J.L., Luciani, A.: A virtual reality platform for musical creation: GENESIS-RT. In: Proceedings of the International Symposium on Computer Music Modeling and Retrieval (CMMR), pp. 346–371 (2013)
21. Leonard, J., Cadoz, C.: Physical modelling concepts for a collection of multisensory virtual musical instruments. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), Baton Rouge, LA, USA (2015)
22. Leonard, J., Cadoz, C., Castagné, N., Luciani, A.: Lutherie virtuelle et interaction instrumentale. Traitement du Signal 32(4), 391–416 (2015)
23. Luciani, A., Evrard, M., Couroussé, D., Castagné, N., Cadoz, C., Florens, J.L.: A basic gesture and motion format for virtual reality multisensory applications. In: Proceedings of the International Conference on Computer Graphics Theory and Applications, pp. 349–356 (2006)
24. Luciani, A., Florens, J.L., Couroussé, D., Castet, J.: Ergotic sounds: a new way to improve playability, believability and presence of virtual musical instruments. J. New Music Res. 38(3), 309–323 (2009)
25. Marshall, M.T., Wanderley, M.M.: Vibrotactile feedback in digital musical instruments. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 226–229 (2006)
26. Nichols, C.: The vBow: development of a virtual violin bow haptic human-computer interface. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 1–4 (2000)
27. O’Modhrain, S., Chafe, C.: Incorporating haptic feedback into interfaces for music applications. In: Proceedings of ISORA, World Automation Conference (2000)
28. O’Modhrain, S., Serafin, S., Chafe, C., Smith, J.O.: Influence of attack parameters on the playability of a virtual bowed string instrument: tuning the model. In: Proceedings of the International Computer Music Conference (ICMC) (2000)
29. Rimell, S., Howard, D.M., Tyrrell, A.M., Kirk, R., Hunt, A.: Cymatic: restoring the physical manifestation of digital sound using haptic interfaces to control a new computer based musical instrument. In: Proceedings of the International Computer Music Conference (ICMC) (2002)
30. Sinclair, S., Wanderley, M.M.: A run-time programmable simulator to enable multi-modal interaction with rigid-body systems. Interacting with Computers (2009)
31. Sinclair, S., Wanderley, M.M., Hayward, V., Scavone, G.: Noise-free haptic interaction with a bowed-string acoustic model. In: Proceedings of the IEEE World Haptics Conference, pp. 463–468 (2011)
32. Smith, J.O.: Physical modeling using digital waveguides. Comput. Music J. 16(4), 74–91 (1992)
33. Wanderley, M.M., Depalle, P.: Gestural control of sound synthesis. Proc. IEEE 92(4), 632–644 (2004)
Metadata
Title: The MSCI Platform: A Framework for the Design and Simulation of Multisensory Virtual Musical Instruments
Authors: James Leonard, Nicolas Castagné, Claude Cadoz, Annie Luciani
Copyright Year: 2018
DOI: https://doi.org/10.1007/978-3-319-58316-7_8
