
Table of Contents

Frontmatter

Dynamic Simulation of Note Transitions in Reed Instruments: Application to the Clarinet and the Saxophone

Abstract
This paper deals with the simulation of transitions between notes in the context of real-time sound synthesis based on physical models of reed instruments. For that purpose, both the physical and the subjective points of view are considered. From a physical time-varying tonehole model, a simple transition model is built, the parameters of which are adapted to fit measurements obtained in normal playing situations. The model is able to reproduce the main perceptive effects of a transition: from the listener's point of view, a frequency glissando with loudness and brightness variations; from the player's point of view, a reduced ease of playing during the transition.
Philippe Guillemain, Jonathan Terroir

The BRASS Project, from Physical Models to Virtual Musical Instruments: Playability Issues

Abstract
The Brass project aims to deliver software virtual musical instruments (trumpet, trombone, tenor saxophone) based on physical modelling. This requires working on several aspects of the playability of the models so that they can be played in real time through a simple keyboard: better control of the attacks, automatic tuning, and humanization.
Christophe Vergez, Patrice Tisserand

The pureCMusic (pCM++) Framework as Open-Source Music Language

Abstract
The pureCMusic (pCM++) framework makes it possible to write a piece of music as an algorithmic-composition-based program, optionally controlled by data streaming from external devices to give expressiveness in electro-acoustic music performances, together with the synthesis algorithms themselves. Everything is written following C-language syntax and compiled into machine code that runs at CPU speed. The framework provides a number of predefined functions for sound processing, for generating complex events, and for managing external data coming from standard MIDI controllers and/or other special gesture interfaces. pCM++ is proposed here as open-source code.
Leonello Tarabella

Timbre Variations as an Attribute of Naturalness in Clarinet Play

Abstract
A digital clarinet played by a human and timed by a metronome was used to record two playing control parameters, the breath control and the reed displacement, for 20 repeated performances. The regular behaviour of the parameters was extracted by averaging, and the fluctuation was quantified by the standard deviation. It was concluded that the movement of the parameters seems to follow rules. When the fluctuations of the parameters were removed by averaging over the repetitions, the result sounded less expressive, although it still seemed to be played by a human. The variation in timbre during play, in particular within a note's duration, was observed and then fixed while the natural temporal envelope was kept. The result seemed unnatural, indicating that the variation of timbre is important for naturalness.
Snorre Farner, Richard Kronland-Martinet, Thierry Voinier, Sølvi Ystad
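
A minimal sketch of the averaging and fluctuation measures described, assuming each of the 20 performances yields a control curve resampled to a common length (all data and shapes here are hypothetical):

```python
import numpy as np

# Hypothetical data: 20 repeated performances, each resampled to 1000 frames.
rng = np.random.default_rng(0)
performances = rng.normal(loc=0.6, scale=0.05, size=(20, 1000))

# Regular behaviour: the mean curve across repetitions.
mean_curve = performances.mean(axis=0)

# Fluctuation: the per-frame standard deviation across repetitions.
fluctuation = performances.std(axis=0)

# Resynthesising from mean_curve alone removes the natural fluctuations,
# which the study found made the result sound less expressive.
print(mean_curve[:5], fluctuation[:5])
```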

Scoregram: Displaying Gross Timbre Information from a Score

Abstract
This paper introduces a visualization technique for music scores using a multi-timescale aggregation that offers at-a-glance information interpretable as the global timbre resulting from a normative performance of a score.
Rodrigo Segnini, Craig Sapp

A Possible Model for Predicting Listeners’ Emotional Engagement

Abstract
This paper introduces a possible approach for evaluating and predicting listeners’ emotional engagement during particular musical performances. A set of audio parameters (cues) is extracted from recorded audio files of two contrasting movements from Bach’s Solo Violin Sonatas and Partitas and compared to listeners’ responses, obtained by moving a slider while listening to the music. The cues showing the highest correlations are then used for generating decision trees and a set of rules useful for predicting the emotional engagement experienced by potential listeners in similar pieces. The model is tested on two different movements of the Solos, showing very promising results.
Roberto Dillon
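
The cue-to-rules pipeline can be sketched: extract audio cues, keep those that correlate with the slider responses, and grow a decision tree on them. A hypothetical illustration (the cues, the data, and the use of scikit-learn's DecisionTreeClassifier are all assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Hypothetical per-frame audio cues: loudness, tempo, roughness, attack rate.
cues = rng.normal(size=(200, 4))
# Hypothetical listener engagement, binned low/high from slider positions.
engagement = (cues[:, 0] + 0.5 * cues[:, 2] + rng.normal(0, 0.3, 200) > 0).astype(int)

# Keep only cues that correlate well with the responses.
corr = [abs(np.corrcoef(cues[:, i], engagement)[0, 1]) for i in range(4)]
selected = [i for i, c in enumerate(corr) if c > 0.3]

# Fit a decision tree on the selected cues; its rules can then be read off
# and applied to predict engagement for similar pieces.
tree = DecisionTreeClassifier(max_depth=3).fit(cues[:, selected], engagement)
print(selected, tree.score(cues[:, selected], engagement))
```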

About the Determination of Key of a Musical Excerpt

Abstract
Knowledge of the key of a musical passage is a prerequisite for all analyses that require functional labelling. In the past, researchers from both musical and AI backgrounds have tended to approach the problem by implementing a computerized version of musical analysis. Previous attempts are discussed, and attention is then focused on a non-analytical solution first reported by J. A. Gabura. A practical way to carry it out is discussed, as well as its limitations, in relation to examples. References are made to the MusicXML format as needed.
Héctor Bellmann
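
Non-analytical key finding of the general kind attributed to Gabura can be sketched as template matching: correlate the passage's pitch-class distribution against rotated major and minor key profiles. A minimal sketch (the profiles below are the later Krumhansl-Kessler values, used purely as stand-ins for whatever weights the method employs):

```python
import numpy as np

# Krumhansl-Kessler major/minor profiles (stand-ins; the method discussed
# in the paper may use different weights).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(pc_hist):
    """Correlate a 12-bin pitch-class histogram against all 24 rotated profiles."""
    best = max(
        ((np.corrcoef(pc_hist, np.roll(profile, tonic))[0, 1],
          NAMES[tonic] + mode)
         for profile, mode in ((MAJOR, ' major'), (MINOR, ' minor'))
         for tonic in range(12)),
        key=lambda t: t[0])
    return best[1]

# Hypothetical excerpt heavy in C, E and G.
hist = np.array([5, 0, 2, 0, 4, 1, 0, 4, 0, 1, 0, 1], dtype=float)
print(estimate_key(hist))  # -> C major
```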

An Interactive Musical Exhibit Based on Infrared Sensors

Abstract
This paper describes the design of an exhibit for controlling real-time audio synthesis with a wireless, IR-based interface. Researching new ways of playing and controlling electronic music in real time is a hot topic in the computer music field. The goal of this specific project is an enjoyable, robust and reliable exhibit that can be operated continuously by young users (especially children) and, more generally, by non-expert people. Our effort has been focused on carefully designing the hardware/software project so that the final user interacts only with non-critical parts of the system.
Graziano Bertini, Massimo Magrini, Leonello Tarabella

Metris: A Game Environment for Music Performance

Abstract
Metris is a version of the Tetris game that uses a player’s musical response to control game performance. The game is driven by two factors: traditional game design and the player’s individual sense of music and sound. Metris uses tuning principles to determine relationships between pitch and the timbre of the sounds produced. These relationships are represented as bells synchronised with significant events in the game. Key elements of the game design control a musical environment based on just intonation tuning. This presents a scenario where the game design is enhanced by a user’s sense of sound and music. Conventional art music is subverted by responses to simple design elements in a popular game.
Mark Havryliv, Terumi Narushima
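
Just intonation assigns pitches by whole-number frequency ratios rather than equal-tempered semitones; a minimal sketch of how bell pitches could be derived from game events (the ratio set and base frequency are illustrative, not the game's actual tuning):

```python
# A 5-limit just intonation scale as ratios above a base frequency.
JI_RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2/1]
BASE_HZ = 220.0

def bell_frequency(event_index: int) -> float:
    """Map a game event (e.g. row cleared, piece landed) to a bell pitch."""
    octave, degree = divmod(event_index, len(JI_RATIOS) - 1)
    return BASE_HZ * (2 ** octave) * JI_RATIOS[degree]

for i in range(10):
    print(i, round(bell_frequency(i), 2))
```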

Strategies for the Control of Microsound Synthesis Within the “GMU” Project

Abstract
Sound synthesis using granular (or micro-sonic) techniques offers very attractive possibilities for creating musical sounds. We present the state of the research conducted at GMEM on an integrated microsound synthesis system. The system includes both synthesis generators and synthesis control programs, with control based on gestures as well as on sound analysis.
Laurent Pottier
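
Granular synthesis itself can be sketched in a few lines: short, windowed grains drawn from a source buffer and overlap-added at a controllable density. A minimal sketch (parameter names are illustrative; the GMU system's actual generators and control programs are not shown here):

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=40, density=100, dur_s=2.0, seed=0):
    """Overlap-add Hann-windowed grains drawn from random source positions."""
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000)
    window = np.hanning(glen)
    out = np.zeros(int(sr * dur_s) + glen)
    for onset in range(0, int(sr * dur_s), sr // density):
        start = rng.integers(0, len(source) - glen)
        out[onset:onset + glen] += source[start:start + glen] * window
    return out / np.max(np.abs(out))

# Hypothetical source: one second of a 440 Hz sine.
t = np.arange(44100) / 44100
cloud = granulate(np.sin(2 * np.pi * 440 * t))
print(cloud.shape)
```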

Building Low-Cost Music Controllers

Abstract
This paper presents our work on building low-cost music controllers intended for educational and creative use. The main idea was to build an electronic music controller, including sensors and a sensor interface, on a “10 euro” budget. We have experimented with turning commercially available USB game controllers into generic sensor interfaces, and making sensors from cheap conductive materials such as latex, ink, porous materials, and video tape. Our prototype controller, the CheapStick, is comparable to interfaces built with commercially available sensors and interfaces, but at a fraction of the price.
Alexander Refsum Jensenius, Rodolphe Koehly, Marcelo M. Wanderley

Evaluation of Sensors as Input Devices for Computer Music Interfaces

Abstract
This paper presents ongoing research into the design and creation of interfaces for computer music. This work concentrates on the use of sensors as the primary means of interaction for computer music, and examines the relationships between types of sensors and musical functions. Experiments are described which aim to discover the particular suitability of certain sensors for specific musical tasks. The effects of additional visual feedback on the perceived suitability of these sensors are also examined. Results are given, along with a discussion of their possible implications for computer music interface design and pointers for further work on this topic.
Mark T. Marshall, Marcelo M. Wanderley

Aspects of the Multiple Musical Gestures

Abstract
A simple-to-use 2D pointer interface for producing music is presented as a means for real-time playing and sound generation. The music is produced by simple gestures that are easily repeated. The gestures include left-to-right and right-to-left motion shapes for the spectral and temporal envelopes of the sounds, with optional backwards motion for the addition of noise, downward motion for note onset, and several other manipulation gestures. The initial position controls which parameter is being affected, the note's intensity is controlled by the speed of the downward gesture, and a sequence is finalized instantly with one upward gesture. Several synthesis methods are presented, and their control mechanisms are mapped onto the multiple-musical-gesture interface. This enables a number of performers to interact on the same interface, either by playing the same musical instrument simultaneously or by performing a number of potentially different instruments on the same interface.
Kristoffer Jensen

Gran Cassa and the Adaptive Instrument Feed-Drum

Abstract
The physical-mathematical models of orchestral instruments represent an important theoretical and experimental support for the composer and for the application of new acoustic and performance criteria. Western music has linked its evolution to the transformation of instruments and performance techniques through the constant interaction between the expressive demands of the musical language (e.g. pitch range and control), acoustic requirements (e.g. sound irradiation level and type), and sound emission techniques (e.g. ergonomics and excitation control). There is a constant interaction and reciprocal adaptation between the construction of the instrument and the composition and performance of music. For instance, consider how the tenth-century Viella evolved into the family of Renaissance Violas and then into the family of Violins. In terms of composition this coincides with the transition from monodic forms that duplicate voice and syllabic rhythm to the formal autonomy of instrumental music, with the expansion of the frequency range, and on to the grand forms and orchestral ensembles of Baroque music. Performance technique is integrated in this process, since the player fills not only the role of the agent producing the acoustic rendering, but also that of the expert demonstrating the criteria of agility and ergonomics of the instrument and inventing solutions of adaptation and virtuosity.
Michelangelo Lupone, Lorenzo Seno

Generating and Modifying Melody Using Editable Noise Function

Abstract
This paper introduces a way to generate or modify a melody using an editable noise function. The band-limited random numbers generated by the noise function are converted to various property values of notes, such as pitch and duration. Using this technique, we can modify an existing melody to produce new, similar melodies. The noise values can be edited, if necessary, while preserving the statistical characteristics of the noise function. By using this noise-editing method, the noise function can generate a melody that satisfies given constraints.
Yong-Woo Jeon, In-Kwon Lee, Jong-Chul Yoon
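
A minimal sketch of the mapping described, with smoothly interpolated value noise standing in for the paper's editable noise function (all parameter choices are illustrative):

```python
import numpy as np

def value_noise(n, lattice_step=4, seed=0):
    """Band-limited noise: random lattice values, cosine-interpolated between."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(0, 1, n // lattice_step + 2)
    x = np.arange(n) / lattice_step
    i, frac = x.astype(int), x % 1.0
    t = (1 - np.cos(np.pi * frac)) / 2          # smooth interpolation weight
    return lattice[i] * (1 - t) + lattice[i + 1] * t

# Map noise values to properties of 16 notes: MIDI pitch and duration.
noise = value_noise(16)
pitches = (60 + noise * 12).astype(int)          # one octave above middle C
durations = np.array([0.25, 0.5, 1.0])[(noise * 3).astype(int).clip(0, 2)]
print(list(zip(pitches.tolist(), durations.tolist())))
```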

Unifying Performer and Accompaniment

Abstract
A unique real-time system for correlating a vocal, musical performance with an electronic accompaniment is presented. The system has been implemented and tested extensively in performances of the author's opera 'La Quintrala', and experience with its use in practice is presented. Furthermore, the system's functionality is outlined, it is put into current research perspective, and its possibilities for further development and other usages are discussed. The system correlates voice analysis with an underlying chord structure stored in computer memory. This chord structure defines the primary supportive pitches and links the notated and electronic scores together, addressing the singer's need for tonal 'indicators' at any given moment. A computer-generated note is initiated by a combination of the singer (through the onset of a note, or through some element in the continuous spectrum of the singing) and the computer, through an accompaniment algorithm. The evolution of this relationship between singer and computer is predefined in the application according to the structural intentions of the score, and is affected by the musical and expressive efforts of the singer. The combination of singer and computer influencing the execution of the accompaniment creates a dynamic, musical interplay between the two, and is a very fertile musical area for a composer's combined computer programming and score writing.
Lars Graugaard
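
The note-triggering logic described can be sketched minimally: a detected vocal onset is answered with a pitch drawn from the stored chord structure for the current position. A toy sketch (the data and selection rule here are invented stand-ins for the system's actual accompaniment algorithm):

```python
# Hypothetical stored chord structure: bar number -> primary supportive
# pitches (MIDI numbers). The actual score data in the system is richer.
CHORDS = {1: [57, 60, 64], 2: [55, 59, 62], 3: [53, 57, 60]}

def accompany(sung_midi_pitch, bar):
    """On a detected note onset, answer with the nearest supportive pitch."""
    pitches = CHORDS[bar]
    return min(pitches, key=lambda p: abs(p - sung_midi_pitch))

# Hypothetical analysis frames: (bar, detected MIDI pitch at an onset).
for bar, sung in [(1, 62), (2, 61), (3, 58)]:
    print(bar, sung, '->', accompany(sung, bar))
```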

Recognizing Chords with EDS: Part One

Abstract
This paper presents a comparison between traditional and automatic approaches to the extraction of an audio descriptor for classifying chords. The traditional approach requires signal processing (SP) skills, constraining it to be used only by expert users. The Extractor Discovery System (EDS) [1] is a recent approach which can also be useful for non-expert users, since it aims to discover such descriptors automatically. This work compares the results of a classic approach to chord recognition, namely KNN learners over Pitch Class Profiles (PCP), with the results of EDS when operated by a non-SP expert.
Giordano Cabral, François Pachet, Jean-Pierre Briot
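
A minimal sketch of the classic baseline named above: fold spectral energy into a 12-bin Pitch Class Profile and classify with a k-nearest-neighbour learner (the data is synthetic and EDS itself is not shown):

```python
import numpy as np

def pcp(signal, sr=44100, n_fft=4096):
    """Fold spectral energy into 12 pitch classes (a basic Pitch Class Profile)."""
    spectrum = np.abs(np.fft.rfft(signal, n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    profile = np.zeros(12)
    for f, e in zip(freqs[1:], spectrum[1:]):  # skip the DC bin
        profile[int(round(12 * np.log2(f / 261.63))) % 12] += e
    return profile / profile.sum()

def knn_predict(query, train_pcps, train_labels, k=1):
    """Label a query PCP by majority vote among its k nearest training PCPs."""
    dists = np.linalg.norm(train_pcps - query, axis=1)
    votes = [train_labels[i] for i in np.argsort(dists)[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training data: C major and A minor triads as sine mixtures.
t = np.arange(44100) / 44100
chord = lambda freqs: sum(np.sin(2 * np.pi * f * t) for f in freqs)
train = np.array([pcp(chord([261.63, 329.63, 392.0])),   # C E G
                  pcp(chord([220.0, 261.63, 329.63]))])  # A C E
print(knn_predict(pcp(chord([261.63, 329.63, 392.0])), train, ['Cmaj', 'Amin']))
```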

Improving Prototypical Artist Detection by Penalizing Exorbitant Popularity

Abstract
Discovering artists that can be considered prototypes for particular genres or styles of music is a challenging and interesting task. Based on preliminary work, we elaborate an improved approach to ranking artists according to their prototypicality. To calculate such a ranking, we use asymmetric similarity matrices obtained via co-occurrence analysis of artist names on web pages. In order to avoid distortions of the ranking due to ambiguous artist names, e.g. bands whose names equal common speech words (like “Kiss” or “Bush”), we introduce a penalization function. Our approach is demonstrated on a data set containing 224 artists from 14 genres.
Markus Schedl, Peter Knees, Gerhard Widmer
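
A minimal sketch of the ranking idea under stated assumptions: asymmetric similarities are derived from co-occurrence counts of artist names on web pages, prototypicality is scored from them, and a penalization damps artists with exorbitant raw page counts (the counts and the penalty formula below are invented; the paper's actual function may differ):

```python
import numpy as np

# Hypothetical page counts: pages mentioning each artist, and joint counts.
artists = ['Kiss', 'Slayer', 'Venom']
pages = np.array([9_000_000, 400_000, 150_000])        # 'Kiss' is inflated
joint = np.array([[0, 90_000, 40_000],
                  [90_000, 0, 60_000],
                  [40_000, 60_000, 0]])

# Asymmetric similarity: P(row artist mentioned | column artist mentioned).
cond = joint / pages[np.newaxis, :]

# Raw prototypicality: how strongly an artist co-occurs given the others.
proto = cond.sum(axis=1)

# Penalize exorbitant popularity (illustrative: damp by log of page count
# relative to the genre median).
penalty = np.log1p(pages / np.median(pages))
ranked = sorted(zip(proto / penalty, artists), reverse=True)
print(ranked)
```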

Music Analysis and Modeling Through Petri Nets

Abstract
Petri Nets are a formal tool for studying systems that are concurrent, asynchronous, distributed, parallel, nondeterministic, and/or stochastic. They have been used in a number of real-world simulations and scientific problems, but have seldom been considered an effective means to describe and/or generate music. The purpose of this paper is to demonstrate that Petri Nets, enriched with some peculiar extensions, can well represent the results of a musicological analysis process.
Adriano Baratè, Goffredo Haus, Luca A. Ludovico
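
The underlying Petri net mechanics are standard and compact: places hold tokens, and a transition fires when every input place is marked, consuming and producing tokens; firings can be read as triggering musical objects. A toy sketch (the net below is illustrative, not one of the paper's analyses):

```python
# Toy Petri net: places with token counts, transitions as (inputs, outputs).
places = {'theme_ready': 1, 'variation_ready': 0, 'played': 0}
transitions = {
    'play_theme': (['theme_ready'], ['variation_ready', 'played']),
    'play_variation': (['variation_ready'], ['played']),
}

def fire(name):
    """Fire a transition if enabled: consume input tokens, produce outputs."""
    inputs, outputs = transitions[name]
    if all(places[p] > 0 for p in inputs):
        for p in inputs:
            places[p] -= 1
        for p in outputs:
            places[p] += 1
        print(f'fired {name} -> emit musical object for {name}')
        return True
    return False

fire('play_variation')  # not enabled yet: no token in variation_ready
fire('play_theme')      # enabled: the theme unlocks the variation
fire('play_variation')  # now enabled
print(places)
```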

A Review on Techniques for the Extraction of Transients in Musical Signals

Abstract
This paper presents some techniques for the extraction of transient components from a musical signal. The absence of a unique definition of what a “transient” means for signals that are by essence non-stationary implies that many methods can be used, sometimes leading to significantly different results. We have classified some of the most common methods according to the nature of their outputs. Preliminary comparative results suggest that, for sharp percussive transients, the results are roughly independent of the chosen method, but that for slower rising attacks (e.g. for bowed string or wind instruments) the choice of method is critical.
Laurent Daudet
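
One representative method from the families reviewed can be sketched: spectral-flux detection, which flags frames where the short-time spectrum rises abruptly. This is a minimal sketch of one common technique, not a summary of the paper's classification:

```python
import numpy as np

def spectral_flux_transients(signal, frame=1024, hop=512, threshold=2.0):
    """Flag frames whose positive spectral change exceeds a global threshold."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    flux = np.maximum(np.diff(mags, axis=0), 0).sum(axis=1)  # rises only
    return np.where(flux > threshold * flux.mean())[0] + 1   # frame indices

# Hypothetical test: silence, then a sharp percussive (noise) burst.
sr = 44100
sig = np.zeros(sr)
sig[sr // 2:] = np.random.default_rng(2).normal(0, 0.5, sr - sr // 2)
print(spectral_flux_transients(sig) * 512 / sr)  # transient times in seconds
```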

Dimensionality Reduction in Harmonic Modeling for Music Information Retrieval

Abstract
A 24-dimensional model for the ‘harmonic content’ of pieces of music has proved remarkably robust in the retrieval of polyphonic queries from a database of polyphonic music, in the presence of quite significant noise and errors in either the query or the database document. We have further found that higher-order (1st- to 3rd-order) models tend to work better for music retrieval than 0th-order ones, owing to the richer context they capture. However, there is a serious performance cost due to the large size of such models, and the present paper reports on some attempts to reduce dimensionality while retaining the general robustness of the method. We find that some simple reduced-dimensionality models, if their parameter settings are carefully chosen, do indeed perform almost as well as the full 24-dimensional versions. Furthermore, in terms of recall in the top 1000 documents retrieved, we find that a 6-dimensional 2nd-order model gives even better performance than the full model. This represents a potential 64-fold reduction in model size and search time, making it a suitable candidate for filtering a large database as the first stage of a two-stage retrieval system.
Tim Crawford, Jeremy Pickens, Geraint Wiggins
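
The modelling scheme can be sketched abstractly: map each chord to one of d symbols (d = 24 in the full model, fewer after reduction) and estimate an nth-order Markov model over the symbol stream. A minimal 2nd-order count model, with a deliberately naive, hypothetical 6-symbol reducer standing in for the paper's carefully chosen mappings:

```python
from collections import Counter, defaultdict

# Full model: 24 chord symbols. Hypothetical 6-dimensional reduction:
# an arbitrary many-to-one fold, purely to show the mechanics.
reduce6 = lambda chord24: chord24 % 6

def second_order_model(chords, reducer=lambda c: c):
    """Count occurrences of (next | previous two) over the reduced stream."""
    symbols = [reducer(c) for c in chords]
    counts = defaultdict(Counter)
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        counts[(a, b)][c] += 1
    return counts

# Hypothetical document: a stream of chord symbols in 0..23. Retrieval then
# compares a query's likelihood under each document's model.
doc = [0, 7, 0, 7, 5, 0, 7, 0, 16, 0, 7, 5]
print(dict(second_order_model(doc, reduce6)))
```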

Abstracting Musical Queries: Towards a Musicologist’s Workbench

Abstract
In this paper, we propose a paradigm for computer-based music retrieval and analysis systems that employs one or more explicit abstraction layers between the user and corpus- and representation-specific tools. With illustrations drawn from “battle music”, a genre popular throughout Renaissance Europe, we show how such an approach may not only be more obviously useful to a user, but also offer extra power through the ability to generalise classes of tasks across collections.
David Lewis, Tim Crawford, Geraint Wiggins, Michael Gale

An Editor for Lute Tablature

Abstract
We describe a system for the entry and editing of music in lute tablature. The editor provides instant visual and MIDI feedback, mouse and keyboard controls, a macro recording facility, and full run-time extensibility. We conclude by discussing planned future functionality and considering other potential applications for the technology.
Christophe Rhodes, David Lewis

Interdisciplinarity and Computer Music Modeling and Information Retrieval: When Will the Humanities Get into the Act?

Abstract
This paper looks at computer music modeling and information retrieval (CMMIR) from the point of view of the humanities, with emphasis on areas relevant to the philosophy of music. The desire for more interdisciplinary research involving CMMIR and the humanities is expressed, and some specific positive experiences are cited which have given this author reason to believe that such cooperation is beneficial for both sides. A short list of some contemporary areas of interest in the philosophy of music is provided, and it is suggested that these could be interesting areas for interdisciplinary work involving CMMIR. The paper concludes with some remarks proffered during a panel discussion which took place near the end of the Pisa conference on September 28, 2006, and in correspondence inspired by that discussion, together with some brief commentary on the same. An earlier, somewhat shorter version of the present paper provided the impetus for said panel discussion.
Cynthia M. Grund

Backmatter