
A comparative evaluation of auditory-visual mappings for sound visualisation

Published online by Cambridge University Press: 17 November 2006

KOSTAS GIANNAKIS
Affiliation:
P.O. Box 60572, Athens 153 05, Greece. E-mail: kgiannakis@mixedupsenses.com

Abstract

The significant role of visual communication in modern computer applications is indisputable. In the case of music, various attempts have been made over the years to translate non-visual ideas into visual codes (see Walters 1997 for a collection of graphic scores by the late computer music pioneer Iannis Xenakis, John Cage, Karlheinz Stockhausen, and others). In computer music research, most current sound design tools allow the direct manipulation of visual representations of sound, such as time-domain and frequency-domain representations, with the most notable examples being the UPIC system (Xenakis 1992), Phonogramme (Lesbros 1996), Lemur (Fitz and Haken 1997), and MetaSynth (Wenger 1998), among others. Associations between auditory and visual dimensions have also been extensively studied in other scientific domains, such as visual perception and cognitive psychology, and have inspired new forms of artistic expression (see, for example, Wells 1980; Goldberg and Schrack 1986; Whitney 1991).

Type: Articles
Copyright: Cambridge University Press 2006
