We will present some of the most important devices found in the literature following these two categories and provide a threefold taxonomy for the active tactile case.
10.2.2.2 Active Kinaesthetic Interfaces
Active kinaesthetic feedback is the response of the controller to the user’s actions, usually by means of synthesized signals supplied to motors or actuators that stimulate kinaesthetic receptors. This is most commonly referred to as force feedback.
The earliest example of a force-feedback device specifically designed for musical applications is probably the Transducteur Gestuel Rétroactif (TGR) from ACROE, whose development is described in Sect.
8.3. This device was recently used by Sinclair et al. [
46] to investigate velocity estimation methods in the haptic rendering of a bowed string.
Another classical example is the Moose, developed by O’Modhrain and Gillespie [
42], consisting of a plastic puck, manipulated by the user in a 2D space, that is attached to flexible metal bars connected to linear motors. Two encoders sense the movements of the puck, and the motors provide the corresponding force feedback. The device was used in a bowing test, using a virtual string programmed in the Synthesis ToolKit (STK) [
10], where the presence of friction between the bow and the string was simulated using the haptic device.
The vBow by Nichols [
40] is a violin-like controller that uses a series of servomotors and encoders to sense the movement of a rod, acting as the bow, connected to a metallic cable. In its latest incarnation, the vBow is capable of sensing movement in 4-DoF and producing haptic feedback accordingly.
More recently, Berdahl and Kontogeorgakopoulos [
2] developed the FireFader, a motorized fader that uses sensors and DC motors to introduce musicians to haptic controllers. Both the software and hardware used for the project are open-source, allowing musicians to customize the mapping of the interface to their specific needs. Applications of the device are described in Chap.
9.
10.2.2.4 Active Tactile Feedback and Stimulation: A Taxonomy for Musical Interaction
Active tactile feedback and stimulation are the main focus of this chapter, and for this reason we provide a more in-depth analysis of the related literature, as well as an updated taxonomy, based on Giordano and Wanderley [
19], which will help categorize examples in this field.
We propose a classification that identifies three categories of active tactile feedback and stimulation, according to the function that the tactile effects have in the interface design: tactile notification, tactile translation, and tactile languages.
Tactile Notification
The most straightforward application of tactile stimulation is to notify users about events taking place in the surrounding environment or about the results of their interaction with a system. The effects designed for this kind of application can be as simple as single, supra-threshold stimuli aimed at directing users’ attention, but they can also be more complex, implementing temporal envelopes and/or spatial patterns.
Michailidis and Berweck [
37] and Michailidis and Bullock [
38] have explored solutions to provide haptic feedback in live-electronics performance. The authors developed the Tactile Feedback Tool, a general-purpose interface using small vibrating motors embedded in a glove. The interface gave musicians information about the successful triggering of effects in a live-electronics performance, using an augmented trumpet or a foot pedal switch. This device leverages the capacity of the tactile sense to attract users’ attention, while not requiring them to lose focus on other modalities, which would have been the case with the use of onstage visual displays.
Van der Linden et al. [
49] implemented a whole-body general-purpose vibrotactile device. The authors used a motion capture system and a suit embedded with vibrating motors distributed over the body to enhance the learning process of bowing for novice violin players. A set of ideal bowing trajectories was computed using the motion capture system; when practicing, the players’ postures would be compared in real time with the predefined ideal trajectories. If the distance between any two corresponding points in the two trajectories exceeded a threshold, the motor spatially closest to that point would vibrate, notifying the users to correct their posture. The authors conducted a study in which several players used the suit during their violin lessons. Results showed improved coordination of the bowing arm, and participants reported an enhancement in their body awareness produced by the feedback.
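The notification logic described above can be sketched as follows. The function names, data layout, and threshold value are illustrative assumptions, not details of Van der Linden et al.’s implementation:

```python
import math

# Illustrative sketch of a threshold-based posture-notification scheme:
# each tracked point on the bowing arm is compared with its ideal
# counterpart, and the motor nearest to any deviating point is activated.

THRESHOLD = 0.05  # maximum tolerated deviation in metres (illustrative)


def motors_to_activate(ideal, measured, motor_positions, threshold=THRESHOLD):
    """Return sorted indices of motors to vibrate.

    ideal, measured: lists of (x, y, z) points along the trajectory.
    motor_positions: list of (x, y, z) motor positions on the suit.
    """
    active = set()
    for p_ideal, p_meas in zip(ideal, measured):
        if math.dist(p_ideal, p_meas) > threshold:
            # vibrate the motor spatially closest to the deviating point
            nearest = min(range(len(motor_positions)),
                          key=lambda i: math.dist(motor_positions[i], p_meas))
            active.add(nearest)
    return sorted(active)
```

In a real-time setting this comparison would run once per motion-capture frame, with the motor driven for as long as the deviation persists.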
A similar solution was developed by Grosshauser and Hermann [
21], who used a vibrating actuator embedded in a violin bow to correct hand posture. Using accelerometers and gyroscopes, the position of the bow could be compared in real time to a given trajectory, and the tactile feedback would automatically activate to notify users of incorrect posture.
Tactile Translation
With tactile translation, we refer to two separate classes of applications: One class implements sensory substitution techniques to convey to the sense of touch stimuli which would normally be addressed to other modalities; the other class simulates the haptic behavior of other structures whose vibrational properties have previously been characterized.
Sensory Substitution
The field of sensory substitution has been thoroughly investigated since the beginning of the last century. In 1930, von Békésy started investigating the physiology behind tactile perception by drawing a parallel between the tactile and the auditory channels in terms of the mechanisms governing the two senses [
53]. A thorough review of sensory substitution applications can be found in Visell [
52]. In a musical context, several interfaces have been produced with the aim of translating sound into perceivable vibrations delivered via vibrotactile displays.
Crossmodal mapping techniques can be utilized to perform the translation, identifying sound descriptors to be mapped to properties of vibrotactile feedback.
Karam et al. [
27] developed a general-purpose interface in the form of an augmented chair (the Emoti-Chair) embedded with an array of eight speakers disposed along the back. The authors’ aim was to create a display for deaf people to enjoy music through vibrations. They developed the Model Human Cochlea [
26]—a sensory substitution model of the cochlear critical band filter on the back—and mapped different frequency bands of a musical track, rescaled to fit into the frequency range of sensitivity of the skin (see Sect.
4.2), to each of the speakers on the chair. In a related study, Egloff et al. [
12] investigated people’s ability to differentiate between musical intervals delivered via the haptic channel, finding that, on average, the smallest perceptible difference was a major second (i.e., two semitones). It was also noted that results varied widely owing to the differing sensitivity of receptive fields across the human body. Thus, care must be taken when designing vibrotactile interfaces intended as a means for sensory substitution.
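As a rough sketch of this kind of crossmodal frequency mapping, the following splits the audible spectrum into eight bands, matching the Emoti-Chair’s eight channels, and rescales each band’s centre frequency into a nominal range of skin sensitivity. The exact ranges and the log-spaced layout are assumptions, not the published Model Human Cochlea parameters:

```python
import numpy as np

# Illustrative band-to-channel mapping: eight log-spaced audio bands,
# each assigned a carrier frequency inside an assumed vibrotactile range.

N_CHANNELS = 8
AUDIO_RANGE = (20.0, 20_000.0)  # Hz, nominal audible range
SKIN_RANGE = (40.0, 1000.0)     # Hz, assumed range of skin sensitivity


def band_edges(lo, hi, n):
    """n log-spaced bands between lo and hi (returns n + 1 edges)."""
    return np.geomspace(lo, hi, n + 1)


def vibrotactile_centres():
    """Carrier frequency driven to each of the eight actuators."""
    edges = band_edges(*AUDIO_RANGE, N_CHANNELS)
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric band centres
    # map log-position within the audio range to the same log-position
    # within the skin-sensitivity range
    log_pos = (np.log(centres) - np.log(AUDIO_RANGE[0])) / (
        np.log(AUDIO_RANGE[1]) - np.log(AUDIO_RANGE[0]))
    return SKIN_RANGE[0] * (SKIN_RANGE[1] / SKIN_RANGE[0]) ** log_pos
```

In use, the energy of each audio band would modulate the amplitude of the corresponding carrier sent to its speaker on the chair back.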
Merchel et al. [
36] developed a prototype mixer equipped with a tactile translation system to be used by sound recording technicians. A mixer augmented with an actuator would allow the user to recognize the instrument playing in the selected track only by means of tactile stimulation: A tactile preview mode would be enabled on the mixer, performing a real-time translation of the incoming audio. Preliminary results showed that users were able to recognize different instruments only via the sense of touch; better performance was obtained for instruments producing very low-frequency vibrations (bass) or strong rhythmical patterns (drums). A similar touch screen-based system and related test applications are described in Chap.
12.
Tactile Stimulation
In tactile stimulation applications, the vibrational behavior of a vibrating structure is characterized and modeled so as to be able to reproduce it in another interface. Examples in this category include physical modeling of the vibrating behavior of a musical instrument, displayed by means of actuators.
A DMI featuring tactile stimulation capability is the Viblotar by Marshall [
35]. The instrument is composed of a long, narrow wooden box equipped with sensors and embedded speakers. Sound is generated from a hybrid physical model of an electric guitar and a flute programmed in the Max/MSP environment. During performance, the instrument rests on the performer’s lap or on a stand. One hand manipulates a long linear position sensor and a matching force sensitive resistor (FSR) underneath to “pluck” a virtual string. The location, force, and speed of the motion are mapped to frequency, amplitude, and timbre parameters of the physical model. The other hand operates two small FSRs that control pitch bend up and down. The sound output from the Viblotar can be redirected to external speakers, hence allowing the embedded speakers to function primarily for generating vibrotactile feedback instead of sound output. In this configuration, the sound output is split, with one signal sent directly to the external speakers and another routed through a signal processing module that can produce a variety of customized vibrotactile effects, such as compensating for the frequency response of the loudspeakers, simulating the frequency response of another instrument, or amplifying the frequency band to which the skin is most sensitive [
34].
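Such a split signal path can be sketched as follows. The crude FFT brick-wall band-pass and the 150–350 Hz band (around the skin’s most sensitive region, near 250 Hz) are assumptions for illustration, not the Viblotar’s actual processing chain:

```python
import numpy as np

# Illustrative split path: the unprocessed signal feeds the external
# speakers, while a band-pass filtered copy drives the embedded speakers
# as vibrotactile feedback.

FS = 44_100  # sample rate in Hz


def bandpass(signal, lo, hi, fs=FS):
    """Zero out spectral content outside [lo, hi] Hz (brick-wall filter)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))


def split_paths(signal, lo=150.0, hi=350.0):
    """Return (speaker_signal, actuator_signal)."""
    return signal, bandpass(signal, lo, hi)
```

A production system would use a causal filter (e.g., a biquad) rather than an offline FFT, but the routing idea is the same.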
Tactile Languages
Tactile languages are an attempt to create compositional languages solely addressed to the sense of touch, in which tactile effects are not just simple notifications, issued from the interaction with a system, but can be units or icons for abstract communication mediated by the skin.
An early example of a tactile language is “vibratese,” proposed by Geldard [
16], who aimed at creating a complete new form of tactile communication delivered by voice coil actuators (see Sect.
13.2). The building blocks of the language were defined by parameters such as frequency, intensity, and waveform. A total of 45 unit blocks representing numbers and letters of the English alphabet were produced, allowing expert users to read at rates of up to 60 words per minute.
More recently, much research on tactile languages has been directed toward the development of tactile icons. Brewster and Brown [
6] introduced the notion of
tactons, i.e., tactile icons to be used to convey non-visual information by means of abstract or meaningful associations, which have been used to convey information about interaction with mobile phones [
8]. Enriquez and MacLean [
13] studied the learnability of tactile icons delivered to the fingertips by means of voice coil-like actuators. By modulating the frequency, amplitude, and rhythm of the vibration, they produced a set of 20 icons, which were tested in a user-based study organized in two sessions two weeks apart. Participants’ recognition rates reached 80% in the first session after 10 min of familiarization with the system and exceeded 90% during the second session.
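A tacton built from these three parameters can be sketched as a carrier sine wave gated by a rhythmic on/off pattern. The function and all parameter values below are assumptions for illustration, not Enriquez and MacLean’s actual stimuli:

```python
import numpy as np

# Illustrative tacton synthesis: a carrier (frequency, amplitude) gated
# by a rhythm expressed as a sequence of on/off beats.

FS = 2000  # sample rate in Hz; vibrotactile signals are low-frequency


def render_tacton(freq, amp, rhythm, beat_s=0.2, fs=FS):
    """rhythm: sequence of 1/0 flags, one per beat of duration beat_s."""
    t = np.arange(int(beat_s * fs)) / fs
    beat = amp * np.sin(2 * np.pi * freq * t)  # one beat of the carrier
    return np.concatenate([beat * on for on in rhythm])
```

Varying `freq`, `amp`, and `rhythm` independently yields a grid of distinguishable icons, which is essentially how a 20-icon set can be constructed.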
In a musical context, attempts to create compositional languages for the sense of touch can be found in the literature. Gunther [
22] developed the Skinscape system, a tactile compositional language whose building blocks varied in frequency, intensity, envelope, spectral content of vibrations, and spatial position on the body of the user. The language was at the base of the Cutaneous Grooves project by Gunther and O’Modhrain [
23], in which it was used to compose a musical piece to be accompanied by vibrations delivered by a custom-built set of suits embedded with various kinds of actuators.
In terms of tactons, we are not aware of any study evaluating their effectiveness in the context of music performance and practice. This is the focus of the remainder of this chapter, where we present the design and evaluation of tactile icons for expert musicians.