
Open Access 2018 | OriginalPaper | Book Chapter

13. Implementation and Characterization of Vibrotactile Interfaces

Authors: Stefano Papetti, Martin Fröhlich, Federico Fontana, Sébastien Schiesser, Federico Avanzini

Published in: Musical Haptics

Publisher: Springer International Publishing


Abstract

While a standard approach is more or less established for rendering basic vibratory cues in consumer electronics, the implementation of advanced vibrotactile feedback still requires designers and engineers to solve a number of technical issues. Several off-the-shelf vibration actuators are currently available, having different characteristics and limitations that should be considered in the design process. We suggest an iterative approach to design in which vibrotactile interfaces are validated by testing their accuracy in rendering vibratory cues and in measuring input gestures. Several examples of prototype interfaces yielding audio-haptic feedback are described, ranging from open-ended devices to musical interfaces, addressing their design and the characterization of their vibratory output.

13.1 Introduction

The use of cutaneous feedback, in place of a full-featured haptic experience, has recently received increased attention in the haptics community [5, 31], both at the research and industrial level. Indeed, enabling vibration in consumer devices—especially portable ones—is far more practical than providing motion and force feedback to the user, which would generally result in bulky and mechanically complex implementations requiring powerful motors. Recently, several studies have been conducted on the use of vibratory cues as a sensory substitution method to convey pseudo-haptic effects, e.g., to simulate textures [2, 26], moving objects [43], forces [14, 25, 29, 35], or alter the perceived nature and compliance of materials [30, 32, 41]. Other studies have assessed the intuitiveness of vibrotactile feedback with untrained subjects [21] and how it may improve user performance.
Among the approaches adopted to design vibrotactile feedback for non-visual information display, complex semantics have been investigated [20] on top of simpler vibrotactile codes [3, 22]. Focusing in particular on DMIs, the most straightforward solution is to obtain tactile signals directly from their audio output. In practice, this may be done either by rendering to the skin the vibratory by-products generated by embedded loudspeakers—for instance, this may occur as a side effect while playing some inexpensive digital pianos for home practicing—or, using a slightly more sophisticated technique, by feeding dedicated vibrotactile actuators with the same signals used for auditory feedback [12]. In spite of the minimal design effort, these approaches have the potential to result in a credible multimodal experience. Sound and vibration are in fact tightly coupled phenomena, as sound is the acoustic manifestation of a vibratory process. However, these simple solutions overlook a number of spurious and unwanted issues, such as odd coupling between the electroacoustic equipment and the rest of the instrument, and unpredictable nonlinearities in the vibrotactile response of the setup [10]. A more careful design should be adopted instead, in which vibrotactile signals are tailored to match human vibrotactile sensitivity (see Sect. 4.2) and adapted to the chosen actuator technology. In musical interfaces, this can generally be done by equalizing the original audio signal with respect to both its overall energy and frequency content, as discussed in more detail in Sect. 13.3 of this chapter.
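As a minimal sketch of this kind of processing, assuming NumPy and SciPy are available and with band edges and target level chosen purely for illustration, an audio signal can be band-limited to the vibrotactile range and rescaled to a target RMS level as follows:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def audio_to_vibrotactile(audio, fs, band=(20.0, 500.0), target_rms_db=-20.0):
    """Band-limit an audio signal to the vibrotactile range and set its RMS level.

    Illustrative only: band edges and target level are free parameters that
    should be adapted to the chosen actuator and to vibrotactile sensitivity.
    """
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase)
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    vib = sosfiltfilt(sos, audio)

    # Rescale to the desired RMS level (dB re digital full scale)
    rms = np.sqrt(np.mean(vib ** 2)) + 1e-12
    target_rms = 10.0 ** (target_rms_db / 20.0)
    return vib * (target_rms / rms)

# Example: white noise standing in for an audio signal
fs = 48000
audio = np.random.randn(fs)            # 1 s of noise
vib = audio_to_vibrotactile(audio, fs)
```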
To make sure that newly developed musical haptic devices actually render feedback as designed, we suggest that they should undergo characterization and validation procedures. The literature on touch psychophysics shows that divergent results are possible, due to the varying accuracy of haptic devices [23, 36]. As an example, when studying vibrotactile sensitivity the characterization of vibratory output would allow experimenters to compare the stimuli actually delivered to the skin with the original stimuli fed to the experimental device. Notably, a similar practice is routinely implemented in psychoacoustic studies where, e.g., the actual sound intensity reaching the participants’ ears is usually measured and reported together with other experimental data. Particular attention should also be devoted to analyzing the mechanical coupling between a vibrotactile interface and the skin, as that is ultimately how vibratory stimuli are conveyed [27]. However, as discussed in Sect. 4.1, this may turn out to be especially difficult when targeting everyday interaction involving active touch, as opposed to controlled passive settings that are only possible in a laboratory. Once characteristics have been measured, they may guide the iterative design and refinement of haptic interfaces and may offer experimenters a more insightful interpretation of experimental results.
In what follows, we first discuss readily available technology that is suitable for implementing vibrotactile feedback in musical interfaces and then describe the design and characterization of a few exemplary devices that were recently developed by the authors for various purposes.

13.2 Vibrotactile Actuators’ Technology

When selecting vibrotactile actuators, designers and engineers need to consider factors such as cost, size, shape, power and driving requirements, frequency, temporal, and amplitude response [5]. For rendering effective tactile feedback, such responses should at least be compatible with results of touch psychophysics. Also, to grant versatility in the design of vibrotactile cues, actuators’ frequency response and dynamic range should be as wide as possible, and their onset/stop time negligible. For example, while it is known that piano mechanics results in variable delay between action and audio-tactile feedback [1], to have full control over this aspect while designing keyboard-based DMIs, audio and tactile devices should offer the lowest possible latency [7, 17].
Among the currently available types of actuators suitable to convey vibrotactile stimuli, the more common ones are as follows: eccentric rotating mass (ERM) actuator, voice coil actuator (VCA), and piezoelectric actuator [5, 24].
ERM actuators make use of a direct current (DC) motor, which spins an eccentric rotating mass. They come in various designs with different form factors, ranging from cylinders to flat ‘pancakes.’ This technology has two main downsides: The first is that vibration frequency and amplitude are interdependent, as the rotational speed (frequency), which is proportional to the applied voltage, is also proportional to the generated vibration amplitude; the second is that, mainly due to its inertia, the rotating mass requires some time to reach a target speed. Overall, these issues make ERM actuators unsuitable to reproduce audio-like signals that have rich frequency content and fast transients. Despite these limitations, thanks to their simple implementation ERM actuators have been commonly used in consumer electronics such as mobile phones and game devices.
VCAs are driven by alternating current (AC) and consist of an electrically conductive coil (usually made of copper) interacting with a permanent magnet. Two main VCA types are available, using either a moving coil or a moving, suspended magnet. The functioning principle of moving coil VCAs is similar to that of a loudspeaker, except that, instead of a membrane producing sound pressure waves, there is a moving mass generating vibrations. Moving coil VCAs are generally designed to move small masses, and since their output energy in the lower frequency range is constrained by the size of the moving mass, they cannot produce substantial low-frequency vibration. Conversely, moving magnet VCAs are of greater interest for vibrotactile applications as they can generally provide higher energy in the lower frequency range. However, to keep them compact and light, the smaller moving mass must be compensated by a larger peak-to-peak excursion, complicating the suspension design [44]. Linear resonating actuators (LRAs) are particular voice coil designs that use a moving magnetic mass attached to a spring. They are meant to produce fixed-frequency vibration at the resonating frequency of the spring–mass system, and therefore they are highly power-efficient. Because of their increased power efficiency and compactness compared to ERM actuators, LRAs are becoming the preferred choice for use in consumer electronics, at the cost of a more complex driving circuit. Generally though, VCAs offer wide-band frequency operation and quick response times, making them suitable for audio-like input signals with complex frequency content and fast transients.
Piezoelectric materials deform proportionally to an applied electric field, or conversely develop an electric charge proportional to the applied mechanical stress. For this reason, they can be used both as sensors and as actuators. In the latter case, they may be driven by either DC or AC signals. Since piezoelectric actuators have no moving parts and no friction is produced, they present minimal aging effects and are generally regarded as highly robust. Variations in size, form, and cost/quality are available, ranging from ultra-cheap thin piezo disks to high-performance devices made of stacked piezoelectric elements (e.g., used for precision positioning). Piezo actuators have extremely fast response times, and their frequency range can be very wide (although not particularly in the lower band), so they may be used, e.g., as extremely compact loudspeakers or to generate ultrasounds. Since they do not generate magnetic fields while operating, they are suitable when space is tight and insulation from other electronic components is not possible. On the downside, while their current consumption is low (similar to LRAs), compared to VCAs and ERM actuators they require higher input voltages to operate, up to a few hundred volts. Therefore, they usually need special driving electronics to be used with audio signals.
Several solutions are available for controlling the above types of actuators, both in the form of hardware and software. Hardware solutions are typically driving circuits used to condition input signals to conform with target actuator specifications,1 while software solutions include libraries of pre-recorded optimized input signals to achieve different effects in interactive applications.2

13.3 Interface Examples

13.3.1 The Touch-Box

The Touch-Box is an interface originally developed for conducting experiments on human performance and psychophysics under vibrotactile feedback conditions. The device, shown in Fig. 13.1, measures normal forces applied to its top panel, which provides vibrotactile feedback. An early prototype was used to study how auditory, tactile, and audio-tactile feedback affect the accuracy of finger pressing force [18]. A more recent psychophysical experiment—described in Sect. 4.2 and making use of a more advanced prototype, described below—investigated how vibrotactile sensitivity is influenced by actively applied finger pressing forces of various intensities.

13.3.1.1 Implementation

For the latter experiment, a high-fidelity version of the Touch-Box was developed. Load cell technology was selected for force sensing, thanks to superior reliability and reproducibility of results: A CZL635 load cell was chosen, capable of measuring forces up to \(49\,\mathrm {N}\). For vibrotactile feedback, a Tactile Labs Haptuator mark II3 was used: a VCA with moving magnet suitable to render vibration up to \(1000\,\mathrm {Hz}\). An Arduino UNO computing platform4 receives the analog force signal from the load cell and samples it uniformly at \(1920\,\mathrm {Hz}\) with 10-bit resolution [6]. The board is connected via USB to ad hoc software developed in the Pure Data environment and run on a host computer. The software receives force data and uses them to synthesize vibrotactile signals in return. These are routed as audio signals through a RME Fireface 800 audio interface5 feeding an audio amplifier connected to the actuator. The device measures the area of contact of a finger touching its top surface. Similar to the technological solution described in [42], a strip of infrared LEDs was attached at one side of the top panel, which is made of transparent Plexiglas: In this way, a finger pad touching the surface is illuminated by the infrared light passing through it. A miniature infrared camera placed under the top panel captures high-resolution (\(1280 \times 960\) pixels) images at 30 fps and sends them via USB to a video processing software developed in the Max/MSP/Jitter environment, where finger contact area is estimated.
The mechanical construction of the interface was iteratively refined, so as to optimize the response of the force sensor and vibrotactile actuator. For instance, since the moving magnet of the Haptuator moves along its longitudinal direction, the actuator was suspended and mounted perpendicularly at the lower side of the Touch-Box top panel, thus maximizing the amount of energy conveyed to it. Special care was devoted to preventing coupling of the Haptuator with the rest of the structure, which could generate spurious resonances and dissipate energy. Various weight and thickness values of the Plexiglas panel were also tested, with the purpose of minimizing nonlinearities in the produced vibration, while keeping the equivalent mass of a finger pressing on top of the panel compatible with the vibratory power generated by our system.

13.3.1.2 Characterization of Force Measurement

The offset load on the force sensor due to the device construction was first measured and subtracted for subsequent processing. Force acquisition was characterized by performing measurements with a set of test weights from 50 to 5000 g, resulting in a pseudo-linear curve that maps digital data readings from the Arduino board (10-bit values) to the corresponding force values in Newtons. The obtained map was used in the Pure Data software to read force data.
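By way of illustration, the sketch below fits a first-order polynomial to hypothetical calibration data (the raw readings shown are invented, not the actual measurements) and uses it to convert raw 10-bit values to Newtons:

```python
import numpy as np

# Hypothetical calibration data: raw 10-bit readings acquired while loading
# the sensor with known test weights (50 g ... 5000 g).
test_masses_g = np.array([50, 200, 500, 1000, 2000, 5000])
raw_readings  = np.array([12, 40, 98, 195, 390, 975])   # made-up ADC values

g = 9.81
forces_N = test_masses_g / 1000.0 * g

# Least-squares linear fit: force = a * raw + b
a, b = np.polyfit(raw_readings, forces_N, deg=1)

def raw_to_newton(raw):
    """Convert a raw 10-bit reading (0-1023) to an estimated force in Newtons."""
    return a * raw + b

print(raw_to_newton(500))   # force estimate for an arbitrary reading
```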

13.3.1.3 Characterization of Contact Area Measurement

Finger contact area is obtained from the data recorded by the infrared camera. Acquired images are processed in real time to extract the contour of the finger pad portion in contact with the panel and to count the number of contained pixels.
The area corresponding to a single pixel (i.e., the resolution of the area measurement system) was calibrated by applying a set of laser-cut adhesive patches of predefined sizes on the top panel. Test weights of 200, 800, and \(1500\,\mathrm {g}\) were used to simulate the pressing forces used in the experiment described in Sect. 4.2, which result in slightly different distances of the top panel from the camera, influencing its magnification ratio. The measurements were averaged for each pressing force level, obtaining the following pixel size values: \(0.001161\,\mathrm {mm^2}\) (\(200\,\mathrm {g}\)), \(0.001125\,\mathrm {mm^2}\) (\(800\,\mathrm {g}\)), and \(0.001098\,\mathrm {mm^2}\) (\(1500\,\mathrm {g}\)).
Finger contact areas in \(\mathrm {mm^2}\) were finally obtained by multiplying the counted number of pixels by the appropriate pixel size value, depending on the applied force.
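The computation can be summarized by the following sketch, which assumes that contour extraction has already produced a binary mask of the finger pad; the pixel-size values are those reported above:

```python
import numpy as np

# Pixel size (mm^2) measured for the three calibration weights
PIXEL_SIZE_MM2 = {200: 0.001161, 800: 0.001125, 1500: 0.001098}

def contact_area_mm2(finger_mask, weight_g):
    """Contact area from a binary image mask of the finger pad.

    finger_mask: 2-D boolean array (True inside the detected contour).
    weight_g: calibration weight (200, 800 or 1500 g) closest to the applied force.
    """
    n_pixels = int(np.count_nonzero(finger_mask))
    return n_pixels * PIXEL_SIZE_MM2[weight_g]

# Example with a dummy 1280 x 960 mask
mask = np.zeros((960, 1280), dtype=bool)
mask[400:500, 600:700] = True            # fake finger-pad region
print(contact_area_mm2(mask, 800))       # -> 100*100 pixels * 0.001125 mm^2
```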

13.3.1.4 Characterization of Vibration Output

The accuracy of the device in reproducing a given vibrotactile signal was tested. The test signals were those used in the mentioned experiment: a sine wave at \(250\,\mathrm {Hz}\), and a white noise band-pass filtered with \(48\,\mathrm {dB}\)/octave cutoffs at 50 and 500 Hz. Vibration measurements were carried out with a Wilcoxon 736 T piezoelectric accelerometer6 (sensitivity \(=10.2\,\mathrm {mV/m/s^2}\) \(\pm 5\%\) at \(25\,^{\circ }\mathrm {C}\), frequency response flat within \(\pm 5\%\) in the 5–32200 Hz range) connected to a Wilcoxon iT111M transmitter.7 The accelerometer was secured to the top of the Touch-Box with double adhesive tape. The AC-coupled output of the transmitter was recorded via an RME Fireface 800 interface as audio signals at \(48\,\mathrm {kHz}\) with 24-bit resolution.
Vibrations produced by the Touch-Box were recorded at different amplitudes in \(2\,\mathrm {dB}\) steps, in the range used in the reference experiment. Measurements were repeated by placing 200, 800 and 1500 g test weights on top of the device, accounting for the pressing forces used in the experiment.
The following calculations were performed on the recorded vibration signals to extract acceleration values: (i) Digital values in the range \([-1, 1]\) were translated to a \(\mathrm {dB_{FS}}\) representation; (ii) voltage values in Volt were obtained from \(\mathrm {dB_{FS}}\) values, based on the nominal input sensitivity of the audio interface (\(+19\,\mathrm {dBu}\) @ \(0\,\mathrm {dB_{FS}}\), reference \(0.775\,\mathrm {V}\)); (iii) acceleration values in \(\mathrm {m/s}^2\) were calculated from Volt values, based on the nominal sensitivity of the accelerometer. Finally, RMS acceleration values in \(\mathrm {dB}\) (re \(10^{-6}\,\mathrm {m/s^2}\)) were computed over an observation interval of 8 seconds to minimize the contribution of unwanted external noise. Notice that the considered vibration signals are periodic or stationary.
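The three steps can be illustrated by the following sketch, which uses the nominal sensitivities mentioned above; it is a simplified reconstruction of the computation (peak/RMS subtleties of the dBu specification are ignored), not the analysis code actually used:

```python
import numpy as np

FS_TO_DBU = 19.0        # audio interface: +19 dBu at 0 dBFS
DBU_REF_V = 0.775       # 0 dBu reference voltage
ACC_SENS  = 10.2e-3     # accelerometer sensitivity: 10.2 mV per m/s^2
ACC_REF   = 1e-6        # reference acceleration for dB values (m/s^2)

def rms_acceleration_db(x):
    """RMS acceleration in dB (re 1e-6 m/s^2) from a digital signal in [-1, 1]."""
    # (i)+(ii): digital full scale -> Volts, assuming full-scale amplitude
    # corresponds to the nominal 0 dBFS voltage of the interface
    volt_per_fs = DBU_REF_V * 10.0 ** (FS_TO_DBU / 20.0)
    v = x * volt_per_fs
    # (iii): Volts -> m/s^2 via the accelerometer sensitivity
    acc = v / ACC_SENS
    # RMS over the whole observation interval, in dB re 1e-6 m/s^2
    rms = np.sqrt(np.mean(acc ** 2))
    return 20.0 * np.log10(rms / ACC_REF)

# Example: 8 s of a 250 Hz sine at -20 dBFS, sampled at 48 kHz
fs, dur = 48000, 8.0
t = np.arange(int(fs * dur)) / fs
x = 10 ** (-20 / 20) * np.sin(2 * np.pi * 250 * t)
print(rms_acceleration_db(x))
```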
Amplitude Response
The curves in Fig. 13.2a, b relate the relative amplitudes of the stimuli to the corresponding actual vibration energy produced by the Touch-Box, expressed as RMS acceleration. Vibration acceleration was measured in the range from the initial amplitude used in the reference experiment down to \(-6\,\mathrm {dB}\) below the minimum average vibrotactile threshold found. Generally, vibration amplitude varied consistently with that of the input signal, resulting in a pseudo-linear relationship. However, the three weights resulted in different amplitude offsets, due to mechanical dampening. In the analysis of experimental data, this characterization was used for mapping the experimental results to actual RMS vibration acceleration values, in this way compensating for the dampening effect of pressing forces on vibration amplitude. As shown in Table 13.1a, the effective step size of amplitude variation for the three weights is consistent across the considered range.
Table 13.1
Mean and standard deviation (in brackets) of (a) RMS acceleration amplitude variation (original step size \(2\,\mathrm {dB}\)), and (b) offsets relative to amplitudes measured for the \(200\,\mathrm {g}\) weight. Table reprinted from [33] (Appendix)

Weight (g) | Sinusoidal vibration (dB) | Noise vibration (dB)
(a)
200   | 1.98 (0.06)   | 1.79 (0.33)
800   | 1.99 (0.11)   | 2.01 (0.32)
1500  | 1.95 (0.13)   | 1.95 (0.19)
(b)
800   | −8.76 (0.09)  | −8.61 (1.13)
1500  | −10.65 (0.21) | −6.95 (0.65)
Table 13.1b shows amplitude offsets for the 800 and 1500 g weights, relative to the measured amplitudes for the \(200\,\mathrm {g}\) weight. Overall, the performed characterization shows that the device behaves consistently with regard to amplitude and energy response, with slightly higher accuracy when sinusoidal vibration is used.
Frequency Response
Fig. 13.3 shows the measured magnitude spectra of noise stimuli, for three sample amplitudes ranging from the initial level used in the experiment down to \(-6\,\mathrm {dB}\) below the minimum average threshold found. In addition to the dampening effect on RMS vibration amplitudes noted above—which is the only effect measured in the sinusoidal condition—in the case of the noise stimulus, the three weights resulted in spectral structures slightly different from the original flat spectrum in the 50–500 Hz range used as input signal. For a given weight, the spectral centroid (i.e., the amplitude-weighted average frequency, which roughly represents the ‘center of mass’ of a spectrum) of noise vibration was found to generally decrease with the signal amplitude: For the \(200\,\mathrm {g}\) weight, the spectral centroid varied from \(188\,\mathrm {Hz}\) at the initial amplitude to \(173\,\mathrm {Hz}\) at \(-6\,\mathrm {dB}\) below the minimum average threshold found. For the 800 and 1500 g weights, the spectral centroid varied, respectively, from 381.3 to \(303\,\mathrm {Hz}\) and from 374.5 to \(359.4\,\mathrm {Hz}\).
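For reference, the spectral centroid can be computed as the amplitude-weighted mean frequency of the magnitude spectrum, e.g., as in the following sketch:

```python
import numpy as np

def spectral_centroid(x, fs, fmin=20.0, fmax=500.0):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum of x."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Restrict the computation to the band of interest
    band = (freqs >= fmin) & (freqs <= fmax)
    return np.sum(freqs[band] * spectrum[band]) / np.sum(spectrum[band])

# Example: broadband noise yields a centroid near the band's centre of mass
fs = 48000
noise = np.random.randn(fs)
print(spectral_centroid(noise, fs))
```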
The characterization of vibrotactile feedback highlighted strengths and weaknesses of the Touch-Box implementation, making it possible to validate experimental results and to compensate for hardware limitations (namely, amplitude dampening and non-flat spectral response). For instance, as mentioned in Sect. 4.2.4, finding that the peak energy of the stimuli in the higher force condition shifted above the region of maximum sensitivity (200–300 Hz, [39]) suggests that the vibrotactile threshold measured in that case was likely higher than in reality.

13.3.2 The VibroPiano

Historically, the reproduction of haptic properties of the piano keyboard was first approached from a kinematic perspective, with the aim of recreating the mechanical response of the keys [4, 28], also in light of experiments emphasizing the sensitivity of pianists to the keyboard mechanics [13]. Only recently, and in parallel with industrial outcomes [16], have researchers started to analyze the role of the vibrotactile feedback component as a potential conveyor of salient cues. An early attempt by some of the present authors suggested the possible qualitative relevance of these cues while playing a digital piano [12]. A few years later, a refined digital piano prototype was implemented, capable of reproducing various types of vibrotactile feedback at the keyboard. This new prototype was used to test whether the nature of feedback can affect pianists’ performance and their perception of quality features (see Sect. 5.3.2.2).

13.3.2.1 Implementation

A digital piano was used as a platform for the development of a keyboard prototype yielding vibrotactile feedback. After some preliminary testing with different tactile actuators attached to the bottom of the original keyboard, the instrument was disassembled, and the keyboard was detached from its metal casing and screwed to a thick plywood board (see Fig. 13.4). This customization improved the reproduction of vibrations at the keys: on the one hand by avoiding hardly controllable nonlinearities arising from the metal casing, and on the other hand by conveying higher vibratory energy to the keys thanks to the stiffer wooden board. Two Clark Synthesis TST239 tactile transducers8 were attached to the bottom of the wooden board, placed respectively below the lower and middle octaves, thus conveying vibrations to the most relevant areas of the keyboard [11]. Once equipped in this way, the keyboard was laid on a stand, interposing foam rubber at the contact points to minimize the formation of additional resonances.
The transducers were driven by a high-power stereo audio amplifier set to dual mono configuration and fed with a monophonic signal sent by a host computer via a RME Fireface 800 audio interface. The audio interface received MIDI data from the keyboard and passed it to the computer, where sound and vibrotactile feedback were, respectively, generated by Modartt Pianoteq,9 a physical modeling piano whose audio feedback was delivered to the performer via earphones, and a software sampler playing back vibration samples, which were prepared beforehand as described below. A diagram of the setup is shown in Fig. 13.5.

13.3.2.2 Preparation of Vibration Samples

Recording of Piano Keyboard Vibrations
Vibrations were recorded at the keyboard of two Yamaha Disklavier pianos—a grand model DC3-M4, and an upright model DU1A with control unit DKC-850—via the same measurement setup described in Sect. 13.3.1.4. The accelerometer was secured to each measured key with double-sided tape to ensure stable coupling and easy removal. As explained in Sect. 4.3.1, Disklavier pianos can be controlled remotely by sending them MIDI control data. This made it possible to automate the recording of vibration samples by playing back MIDI ‘note ON’ messages at various MIDI velocities for each of the 88 actuated keys of the Disklaviers.
The choice of suitable MIDI velocities required an analysis of the Disklaviers’ dynamic range. The MIDI volume of the two Disklavier pianos was first set to approximate a linear response to MIDI velocity, according to Yamaha’s recommendations. The acoustic dynamic response to MIDI velocity was then measured by means of a KEMAR mannequin10 (grand Disklavier) or a sound level meter (upright Disklavier) placed above the stool, approximately at the height of a pianist’s ears [11]. The loudness of an A4 tone was measured for ten evenly spaced values of MIDI velocity in the range 2–127. Each measurement was repeated several times and averaged. Results are reported in Table 13.2. In accordance with a previous study [15] that measured the temporal and dynamic accuracy of computer-controlled grand pianos in reproducing MIDI control data, our results show a flattened dynamic response for high velocity values. Also, the upright model shows a narrower dynamic range, especially for low velocity values.
Table 13.2
Sound level of an A4 tone, generated by the two Disklavier pianos for various MIDI velocities

MIDI velocity | Grand Disklavier (DC3-M4) (dB) | Upright Disklavier (DU1A) (dB)
2   | 47.8 | 73.3
16  | 51.8 | 73.9
30  | 60.0 | 74.6
44  | 66.3 | 79.8
58  | 72.4 | 84.5
71  | 76.7 | 87.6
85  | 80.1 | 90.7
99  | 83.0 | 90.6
113 | 85.1 | 91.6
127 | 85.5 | 91.2
Based on the above results, MIDI velocities 12, 23, 34, 45, 56, 67, 78, 89, 100, 111 were selected for acquiring vibration recordings. This substantially covered the entire dynamic range of the pianos with evenly spaced velocity values. Extreme velocity values were excluded, as they result in flattened dynamics or unreliable response. For each of the selected velocity values, acceleration samples were recorded at the 88 keys of the two pianos. Recordings for each key/velocity combination lasted 16 seconds, thus amply describing the decay of vibration amplitude. Since the accelerometer was mounted on top of the measured keys, the initial part of the recorded samples represents the displacement of the keys being depressed by the actuation mechanism, until they hit the keybed and stop (see Fig. 4.4). As kinesthetic components were not of interest for the purpose of our research, these transients were manually removed from each of the samples, thus leaving only the purely vibratory part.
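A recording session of this kind can be automated along the lines of the following sketch, which uses the mido, sounddevice, and soundfile Python libraries as stand-ins for whatever MIDI and recording tools are actually employed: for each key and velocity, a MIDI ‘note ON’ is sent to the Disklavier and 16 s of accelerometer output are recorded to disk.

```python
import time
import mido                   # MIDI I/O (assumed available)
import sounddevice as sd      # audio recording (assumed available)
import soundfile as sf

FS = 48000
DURATION = 16.0               # seconds per sample
VELOCITIES = [12, 23, 34, 45, 56, 67, 78, 89, 100, 111]

out = mido.open_output()      # port connected to the Disklavier

for note in range(21, 109):   # the 88 piano keys (MIDI notes 21-108)
    for vel in VELOCITIES:
        out.send(mido.Message('note_on', note=note, velocity=vel))
        rec = sd.rec(int(FS * DURATION), samplerate=FS, channels=1)
        sd.wait()                                  # record the accelerometer signal
        out.send(mido.Message('note_off', note=note))
        sf.write(f'key{note}_vel{vel}.wav', rec, FS)
        time.sleep(1.0)                            # let vibrations decay fully
```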
Synthetic Vibration Samples
A further set of vibration samples was instead synthesized, aiming to reproduce the same amplitude envelope as the real vibration signals while changing only their spectral content. Synthetic signals for each key and each of the selected velocity values were generated as follows. First, white noise was bandlimited in the range 20–500 Hz, covering the vibrotactile bandwidth [40] while being compatible with audio equipment.11 The bandlimited noise was then passed through a second-order resonant filter centered at the fundamental frequency of the note corresponding to the key. The resulting signal was modulated by the amplitude envelope of the matching vibration sample recorded on the grand piano, which in turn was estimated from the energy decay curve of the sample via the Schroeder integral [37]. Finally, the power (RMS level) of the synthetic sample was equalized to that of the corresponding recorded sample.
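The generation of one synthetic sample can be sketched as follows; the specific filters (a Butterworth band-pass and a peaking filter standing in for the second-order resonator) and the envelope estimation are illustrative approximations of the procedure described above:

```python
import numpy as np
from scipy.signal import butter, sosfilt, iirpeak, lfilter

def schroeder_envelope(x):
    """Amplitude envelope estimated from the Schroeder (backward-integrated)
    energy decay curve; a simple approximation, normalized to a peak of 1."""
    edc = np.cumsum(x[::-1] ** 2)[::-1]
    env = np.sqrt(edc)
    return env / (env.max() + 1e-12)

def synth_vibration(recorded, fs, f0, q=10.0):
    """Synthetic vibration sample matching the envelope and RMS of `recorded`."""
    n = len(recorded)
    # 1) white noise band-limited to 20-500 Hz
    noise = np.random.randn(n)
    sos = butter(4, (20.0, 500.0), btype="bandpass", fs=fs, output="sos")
    noise = sosfilt(sos, noise)
    # 2) second-order resonant (peaking) filter centred at the note's fundamental
    b, a = iirpeak(f0, Q=q, fs=fs)
    noise = lfilter(b, a, noise)
    # 3) impose the amplitude envelope of the recorded sample
    synth = noise * schroeder_envelope(recorded)
    # 4) equalize the RMS power to that of the recorded sample
    rms_ratio = np.sqrt(np.mean(recorded ** 2) / (np.mean(synth ** 2) + 1e-12))
    return synth * rms_ratio
```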
Vibration Sample Libraries
The recorded and synthetic vibration sample sets were loaded into the software sampler, which offers sample interpolation across MIDI velocities. Overall, three sample libraries were created: two from recordings on the grand and upright Disklavier pianos, and one from the generated synthetic samples.

13.3.2.3 Characterization and Calibration

As suggested earlier in this chapter, to make sure that the piano prototype could accurately reproduce the designed audio and tactile feedback, it was subjected to a calibration procedure dealing with the following aspects: (i) auditory loudness; (ii) keyboard velocity response; (iii) amplitude and frequency response of vibrotactile feedback.
Loudness Matching
As a first step, the loudness of the piano synthesizer at the performer’s ear was matched to that of the Disklavier pianos. The piano synthesizer was set to simulate either a grand or an upright piano, to match the character of the reference Disklaviers. Measurements were taken with the KEMAR mannequin wearing earphones by having Pianoteq play back A notes in all octaves at the previously selected velocities. By using the volume mapping feature of Pianoteq—which allows one to set the volume of each key across the keyboard independently—the loudness of the piano synthesizer was then matched to the measurements taken on the Disklavier pianos as described in Sect. 13.3.2.2.
Keyboard Velocity Calibration
As expected, the keyboards of the Disklaviers and that of the Galileo digital piano have markedly different response dynamics due to their different mechanics and mass. Once the loudness of the piano synthesizer was set, the velocity response of the digital piano keyboard was matched to that of the Disklavier pianos.
The keyboard response was adjusted via the velocity calibration routine included with Pianoteq, which was performed by an experienced pianist first on the Disklavier pianos—this time used as silent MIDI controllers driving Pianoteq—and then on the digital keyboard. Fairly different velocity maps were obtained. By making use of a MIDI data filter, each point of the digital keyboard velocity map was projected onto the corresponding point of the Disklavier velocity map. Two maps were therefore created, one for each synthesizer–Disklavier pair (grand and upright models). The resulting key velocity transfer characteristics were then independently checked by two more pianists, to validate their reliability and neutrality. Such maps ensured that, when a pianist played the digital keyboard at a desired dynamic level, the generated auditory and tactile feedback was consistent with that of the corresponding Disklavier piano.
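Conceptually, the remapping can be thought of as composing the digital keyboard’s velocity map with the inverse of the Disklavier’s map, point by point. The sketch below illustrates the idea with invented map values; the actual maps were obtained with Pianoteq’s calibration routine and implemented as a MIDI data filter:

```python
import numpy as np

# Hypothetical calibration curves: MIDI velocity (x) -> normalized dynamic level (y)
digital_vel = np.array([1, 32, 64, 96, 127])
digital_dyn = np.array([0.00, 0.20, 0.55, 0.85, 1.00])   # digital keyboard
diskl_vel   = np.array([1, 32, 64, 96, 127])
diskl_dyn   = np.array([0.00, 0.35, 0.65, 0.90, 1.00])   # Disklavier

def remap_velocity(v):
    """Project a velocity from the digital keyboard onto the Disklavier map,
    so that both keyboards produce the same dynamic level."""
    dyn = np.interp(v, digital_vel, digital_dyn)             # dynamic on the digital keyboard
    return int(round(np.interp(dyn, diskl_dyn, diskl_vel)))  # velocity giving that dynamic on the Disklavier

print(remap_velocity(80))
```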
Spectral Equalization
As a final refinement, the vibratory frequency response of the setup was analyzed and then equalized for spectral flattening. Despite the optimized construction, spurious resonances were still present in the keyboard-plywood system, and additionally, the transducers’ frequency response exhibits a prominent notch around \(300\,\mathrm {Hz}\).
The overall frequency response of the transduction–transmission chain was measured at all the A keys, leading to an average magnitude spectrum that, once inverted, provided the spectral flattening equalization characteristics shown in Fig. 13.6. The \(300\,\mathrm {Hz}\) notch of the transducers was thus compensated, along with the resonances and anti-resonances of the mechanical system.
In order to prevent the generation of resonance peaks along the keyboard, the equalization curve was approximated using a software parametric equalizer in series with the software sampler that reproduced vibration signals. Focusing on the tactile bandwidth range, the approximation made use of a shelving filter providing a ramp climbing by \(18\,\mathrm {dB}\) in the range 100–600 Hz, and a 2nd-order filter block approximating the peak around \(180\,\mathrm {Hz}\).
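As an illustration of such a flattening curve, the following sketch builds a target gain characteristic (an 18 dB ramp between 100 and 600 Hz plus a bump around 180 Hz, with values chosen for illustration) and realizes it as a linear-phase FIR filter by frequency sampling; this differs from the parametric (shelving plus second-order) equalizer used in the prototype, but approximates a comparable magnitude response:

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 48000

def flattening_filter(numtaps=2048):
    """Linear-phase FIR approximating the vibrotactile equalization curve."""
    freqs = np.array([0, 50, 100, 180, 300, 600, 1000, fs / 2])
    # Gains in dB: flat below 100 Hz, ramping up by ~18 dB towards 600 Hz,
    # with an extra bump around 180 Hz (illustrative values).
    gains_db = np.array([0, 0, 0, 8, 11, 18, 18, 18])
    gains = 10.0 ** (gains_db / 20.0)
    return firwin2(numtaps, freqs, gains, fs=fs)

h = flattening_filter()
vib = np.random.randn(fs)            # vibration signal standing in for a sample
vib_eq = lfilter(h, [1.0], vib)      # equalized vibration signal
```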
At the present stage, the VibroPiano has undergone informal evaluation by several pianists, who gave very positive feedback. Moreover, as described in Sect. 5.3.2.2, it has been used to test how different types of vibrotactile feedback (namely, realistic, realistic with increased intensity, synthetic, no feedback) may influence the user experience and the perception of quality features such as control of dynamics, loudness, richness of tone, naturalness, engagement, and general preference.

13.3.3 The HSoundplane

The HSoundplane, shown in Fig. 13.7, is a multi-touch musical interface prototype offering multi-point, localized vibrotactile feedback. The main purpose of the interface is to provide an open and versatile framework allowing experimentation with different audio-tactile mappings, for testing the effectiveness of vibrotactile feedback in musical practice.

13.3.3.1 Hardware Implementation

Most current touchscreen technology still lacks finger pressure sensing12 and often does not offer satisfactory response times for use in real-time musical performance. To overcome these issues, our prototype was developed based on the Madrona Labs Soundplane: an advanced musical controller, first described in [19] and now commercially available.13 The interface allows easy disassembly and is potentially open to hacking, which was required for our purpose. The Soundplane has a large multi-touch and pressure-sensitive surface based on ultra-fast patented capacitive sensing technology, offering tracking times in the order of a few \(\mathrm {ms}\), as opposed to the lag \({\ge }50\,\mathrm {ms}\) of the current best touchscreen technology [8]. Its sensing layer uses several carrier antennas, each transporting an audio-rate signal at a different fixed frequency. Separated by a dielectric layer, transversal pickup antennas catch these signals, which are modulated by changes of thickness in the dielectric layer due to finger pressure on the Soundplane’s flexible surface. An internal DSP takes care of generating the carrier signals and decoding the touch-modulated signals for multiple fingers. The computed touch data (describing multi-finger positions and pressing forces) are sent to a host computer via a USB connection. The Soundplane’s sensing technology requires the top surface and underlying layers to be as flat and uniform as possible. A software calibration routine is provided to compensate for minor irregularities.
In the remainder of this section, we describe how the original Soundplane was augmented with vibrotactile feedback, resulting in the HSoundplane prototype (where ‘H’ stands for ‘haptic’).
Construction
The original Soundplane’s multilayered design consists of a top tiled surface—a sandwich construction made of wood veneer stuck to a thin Plexiglas plate and a natural rubber foil—resting on top of the capacitive sensing layer described above. Since these components are simply laid upon each other and kept in place with pegs built into the wooden casing, it is quite simple to disassemble the structure and replace some of its elements.
To implement a haptic layer for the Soundplane, we chose a solution based on low-cost piezoelectric elements: In addition to the advantages pointed out in Sect. 13.2, such devices are extremely thin (down to a few tenths of a millimeter) and allow scaling up thanks to their small size and low price. The proposed solution makes use of piezo actuator disks arranged in a \(30 \times 5\) matrix configuration matching the tiled pads on the Soundplane surface, so that each actuator corresponds to a tile (see Fig. 13.8).
In order to maximize the vibration energy conveyed to the fingers, vibrotactile actuators should ideally be placed as close as possible to the touch location. The actuators layer was therefore placed between the top surface and the sensing components. However, such a solution poses some serious challenges: The original flexibility, flatness, and thickness of the layers above the sensing components have to be preserved as much as possible, so as to retain the sensitivity and calibration uniformity of the Soundplane’s sensor surface. To this end, the piezo elements were wired via an ad hoc designed flexible PCB foil with SMD soldering techniques and electrically conductive adhesive transfer tape (3M 9703). The PCB with attached piezo elements was laid on top of an additional thin rubber sheet, with holes corresponding to each piezo element: This ensures enough free space to allow optimal mechanical deflection of the actuators, and also improves the overall flexibility of the construction. However thin, the addition of the actuators layer alters the overall thickness of the hardware. For this reason, we had to redesign the original top surface, replacing it with a thinner version. As a result, the thickness of the new top surface plus the actuators layer matches that of the original surface. Figure 13.9 shows an exploded view of the HSoundplane construction, consisting of a total of nine layers.
Electronics
Based on off-the-shelf components, custom amplifying and routing electronics were designed to drive piezo elements with standard audio signals.
In order to provide effective vibrotactile feedback at the HSoundplane’s surface, some key considerations were made. Driving piezo actuators requires voltage values (in our case up to \(200\,\mathrm {V_{pp}}\)) that are not compatible with standard audio equipment. This, together with the large number of actuators used in the HSoundplane (150), poses a non-trivial electrical challenge. Since the signals are in the analog domain, using a separate audio channel for each actuator would be overkill. Therefore, we considered using a maximum of one channel per column of pads, reducing the requirements to 30 separate audio channels. These are provided by a MADI system14 formed by a RME MADIface USB15 hooked to a D.O.TEC ANDIAMO 216 AD/DA converter. To comply with the electrical specifications of the piezo transducers, the analog audio signals produced by the MADI system—whose output sensitivity was set to \(9\,\mathrm {dBu}\) @ \(0\,\mathrm {dB_{FS}}\) (reference \(0.775\,\mathrm {V}\)),17 resulting in a maximum voltage of \(2.18\,\mathrm{{V}}\)—must be amplified by a factor of about 50 using a balanced signal. Routing continuous analog signals is also a delicate issue, since the end user must not notice any disturbance or delay in the feedback.
To address all the issues pointed out above, a solution was designed based on three key integrated circuit components: (1) Texas Instruments DRV266718 piezo drivers that can amplify standard audio signals up to \(200\,\mathrm {V_{pp}}\); (2) serial-to-parallel shift registers with output latches of the 74HC595 family19; (3) high-voltage MOSFET relays. For the sake of simplicity, the whole output stage of the HSoundplane was divided into four identical sections, represented in Fig. 13.8, each consisting of (a) a flexible PCB with 40 piezo actuators, connected by a flat cable to (b) a driver PCB with eight audio-to-haptic amplifiers and routing electronics. In order to address the desired actuators and synchronize their switching with audio signals, (c) a master controller parses the control data generated at the host computer and routes them to the appropriate slave drivers.
Figure 13.10 shows the detail of a slave driver board, which operates as follows: (a) Eight audio signals are routed to (b) the piezo drivers, where they are amplified to high voltage and sent to (c) an \(8 \times 5\) relay matrix that connects to each of the piezo actuators in the section. This 40-point matrix is addressed by (d) a chain of serial-to-parallel shift registers commanded by (e) a microcontroller. On start-up, the microcontroller initializes the piezo drivers, setting among other things their amplification level. When in running mode, the slave microcontrollers receive routing information from the master, set a corresponding 40-bit word—each bit corresponding to one actuator—and send it to the shift registers, which individually open or close the relays of the matrix. As shown in Fig. 13.10, each amplified audio signal feeds five points in the relay matrix; therefore, each signal path is hard-coded to five addresses. Such fixed addressing is the main limitation of the current HSoundplane prototype: Each column of five actuators can only be fed with a single vibrotactile signal.
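The addressing scheme can be illustrated by the following sketch, which builds the 40-bit word for one slave board from a list of active pad coordinates and splits it into bytes for the shift-register chain; the bit ordering and byte order are simplifying assumptions, not the actual firmware:

```python
def relay_word(active_pads):
    """Build the 40-bit relay word for one slave board.

    active_pads: iterable of (column, row) pairs, with column in 0-7
    (one audio channel per column) and row in 0-4 (five pads per column).
    Bit k corresponds to relay k = column * 5 + row (illustrative ordering).
    """
    word = 0
    for col, row in active_pads:
        assert 0 <= col < 8 and 0 <= row < 5
        word |= 1 << (col * 5 + row)
    return word

def to_shift_register_bytes(word):
    """Split the 40-bit word into five bytes, most significant byte first,
    ready to be clocked into the 74HC595 chain."""
    return bytes((word >> (8 * i)) & 0xFF for i in reversed(range(5)))

# Example: activate the pads at (column 2, row 0) and (column 2, row 3)
w = relay_word([(2, 0), (2, 3)])
print(f"{w:040b}", to_shift_register_bytes(w).hex())
```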

13.3.3.2 Software Implementation

The original Soundplane comes with a client application for Mac OS, which receives multi-touch data sensed by the interface and transmits them as OSC messages according to an original format named ‘t3d’ (for touch-3d). The t3d data represent touch information for each contacting finger, reporting absolute x and y coordinates, and normal force along the z-axis.
In the HSoundplane prototype, these data are used in real time to generate audio and vibration signals and route the latter to the piezo actuators located at the corresponding x- and y-coordinates.
Relay Matrix Control
Synchronization between vibration signals and the four relay matrices happens at the host computer level. While vibrotactile signals are output by the MADI system, control messages are sent to the master controller via USB. The master controller parses the received messages and consequently addresses the slave driver boards on a serial bus, setting the state of the relay matrices.
The choice of using a master controller, rather than addressing each driver board directly, is motivated by the following observations: First, properly interfacing several external controllers with a host computer can be complex; second, the midterm perspective of developing the HSoundplane into a self-contained musical interface would eventually require getting rid of the controlling computer and working in closed loop. For that purpose, a main processing unit would be needed, which receives touch data, processes them, and generates vibrotactile information.
Rendering of Vibrotactile Feedback
Digital musical interfaces generally enable manifold mapping possibilities between the users’ gestures and the audio output. In addition to what is offered by common musical interfaces, the HSoundplane provides vibrotactile feedback to the user, and this requires defining a further mapping strategy. Since the actuators layer is part of the interface itself, we decided to provide users with a selection of predefined vibrotactile feedback mapping strategies. Sound mapping is freely definable as in the original Soundplane. Three alternative mapping and vibration generation strategies are implemented in the current prototype:
1.
Audio signals controlled by the HSoundplane are used to feed the actuators layer. Filtering is available to make the signal dynamics and frequency range comply with the response of the piezo actuators (see Sect. 13.3.3.3). This approach is straightforward and ensures coherence between the musical output and the tactile feedback. In a way, this first strategy mimics what occurs on acoustic musical instruments, where the source of vibration coincides with that of sound.
 
2.
Sine wave signals are used, filtered as explained above. Their frequency follows the fundamental of the played tones, and their amplitude is set according to the intensity of the applied forces. When the frequency of the sine wave signals overlaps with the frequency range of the actuators, this approach results in a clear vibrotactile response of the interface.
 
3.
A simpler mapping makes use of a fixed-frequency sine wave at \(250\,\mathrm {Hz}\) for all actuators. This solution maximizes perceptual effectiveness by using a stimulus at the frequency of peak tactile sensitivity [39]. On the other hand, since the produced vibrotactile cues are independent of the sound output, they may occasionally result in a perceptual mismatch between touch and audition. At the present time, this has still to be investigated. A sketch of strategies 2 and 3 is given below.
 
In a midterm perspective, the last two mapping strategies could be implemented as a completely self-contained system by relying on the waveform memory provided by the chosen piezo driver model.
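As a minimal illustration of strategies 2 and 3, the following sketch maps t3d-style touch data, i.e., the reported normal force and, for strategy 2, the played note, to the amplitude and frequency of a sine signal feeding the actuator column under the touched pad; names and the force-to-amplitude law are illustrative, not those of the actual client software:

```python
import numpy as np

FS = 48000

def midi_note_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def vibrotactile_block(pressure, note=None, n=512, fixed_hz=250.0, phase=0.0):
    """Generate one block of the vibrotactile signal for a single touch.

    Strategy 2: frequency follows the fundamental of the played tone (note given).
    Strategy 3: fixed 250 Hz sine regardless of the tone (note is None).
    Amplitude follows the normal force ('pressure') reported by the t3d data.
    """
    freq = midi_note_to_hz(note) if note is not None else fixed_hz
    t = (np.arange(n) + phase) / FS
    amp = np.clip(pressure, 0.0, 1.0)        # simple force-to-amplitude mapping
    return amp * np.sin(2.0 * np.pi * freq * t), phase + n

block, phase = vibrotactile_block(pressure=0.6, note=60)   # strategy 2, middle C
block, phase = vibrotactile_block(pressure=0.6)            # strategy 3, 250 Hz
```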
Several other strategies for producing vibrotactile signals starting from the related audio are possible, some of which are described in Sect. 7.3.

13.3.3.3 Characterization

Vibration measurements were performed with the same setup described in Sect. 13.3.1.4. Initially, four types of piezo actuators with different specifications were selected, each with a different resonance frequency and capacitance. Since each piezo driver has to feed five actuators in parallel, particular attention was paid to current consumption and heat dissipation. A Murata Electronics 7BB-20-620 piezo actuator was eventually selected, as it had the smallest capacitance value among the considered actuators and therefore the lowest current requirements.
Once the piezo layer was finalized, vibrotactile cross talk was informally evaluated. Thanks to the holed rubber layer, which lets actuators vibrate while keeping them apart from each other, the HSoundplane is able to render localized vibrotactile feedback with imperceptible vibration spill at other locations, even when touching right next to the target feedback point.
Vibration frequency response was measured in the vibrotactile range as follows: The accelerometer was stuck with double-sided tape at several pads of the top surface, and the underlying piezo transducers were fed with a sinusoidal sweep [9] between 20 and \(1000\,\mathrm {Hz}\), at different amplitudes. Making use of the sensitivity specifications of the I/O chain, values of acceleration in \(\mathrm {m/s^2}\) and \(\mathrm {dB}\) (re \(10^{-6}\,\mathrm {m/s^2}\)) were obtained from the digital amplitude values in \(\mathrm {dB_{FS}}\). Figure 13.11 shows the results of measurements performed at four exemplary piezo transducers, for the maximum vibration level achievable without apparent distortion. Such signals are well above the vibrotactile thresholds reported in Sect. 4.2 for active touch, effectively resulting in intense tactile sensation. In general, the frequency responses measured at different locations over the surface are very similar in shape, with a pronounced peak at about \(40\,\mathrm {Hz}\). In some cases, they show minor amplitude offsets (see, e.g., the response of piezo 102 in Fig. 13.11).
Further measurements are planned in the time domain to test synchronization between audio signals and relay control, and to quantify closed-loop latency from touch events to the onset of vibrotactile feedback. Also, similar to what was done for the Touch-Box (see Sect. 13.3.1.2), we plan to characterize finger pressing force as measured by the HSoundplane.

13.4 Conclusions

A few exemplary interfaces providing vibrotactile feedback were described, which have been recently developed by the authors for the purpose of conducting various perceptual experiments, and for musical applications. Details were given on the design process and on the technological solutions adopted for rendering accurate vibratory behavior. Measurements were performed to characterize the interfaces’ input (e.g., finger pressing force, or keyboard velocity) and output (vibratory cues).
It is suggested that the characterization and validation of self-developed haptic devices is especially important when employing them in psychophysical experiments, as well as in evaluation and performance assessments (see the studies reported in Chap. 4, Sect. 5.3.2.2, and Chap. 7). On the one hand, as opposed to relying on assumptions based on components’ specifications, characterization offers objective, verified data to designers and experimenters, respectively enabling them to refine the developed devices and to better interpret experimental results. For instance, characterization data describing the actual nature of rendered haptic feedback may offer a better understanding of its perceived qualities. On the other hand, the characterization of haptic prototypes—together with their technical documentation—allows reproducible implementations and enables other users and designers to carry on research and development, rather than resulting in one-of-a-kind devices.

Acknowledgements

The authors wish to thank Randy Jones, the inventor of the original Soundplane, for providing technical support during the development of the HSoundplane prototype, and Andrea Ghirotto and Lorenzo Malavolta for their help in the preparation of the piano vibration samples. This research was pursued as part of the project AHMI (Audio-Haptic modalities in Musical Interfaces, 2014–2016), funded by the Swiss National Science Foundation.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
See, for instance, www.ti.com/haptics (last accessed on Nov 29, 2017).

2
For example, see Immersion TouchSense technology: www.immersion.com (last accessed on Nov 29, 2017).

6
https://buy.wilcoxon.com/736t.html (last accessed on Dec. 21, 2017).

8
http://clarksynthesis.com/ (last accessed on Dec. 21, 2017).

9
https://www.pianoteq.com/ (last accessed on Dec. 21, 2017).

10
http://kemar.us/ (last accessed on Dec. 21, 2017).

11
In the low range, audio amplifiers are usually meant to treat signals down to \(20\,\mathrm {Hz}\).

12
With the exception of the recent Force Touch technology by Apple.

13
www.madronalabs.com (last accessed on Nov 29, 2017).

14
Multichannel Audio Digital Interface: https://www.en.wikipedia.org/wiki/MADI (last accessed on Nov 29, 2017).

17
For further details, see https://www.en.wikipedia.org/wiki/Line_level (last accessed on Nov 29, 2017).

18
http://www.ti.com/product/drv2667 (last accessed on Dec. 21, 2017).
References
1. Askenfelt, A., Jansson, E.V.: From touch to string vibrations I: timing in the grand piano action. J. Acoust. Soc. Am. 88(1), 52–63 (1990)
2. Bensmaïa, S.J., Hollins, M.: The vibrations of texture. Somatosens. Mot. Res. 20(1), 33–43 (2003)
3. Brewster, S., Brown, L.M.: Tactons: structured tactile messages for non-visual information display. In: Proceedings of the Australasian User Interface Conference. Dunedin, New Zealand (2004)
4. Cadoz, C., Lisowski, L., Florens, J.L.: A modular feedback keyboard design. Comput. Music J. 14(2), 47–51 (1990)
5. Choi, S., Kuchenbecker, K.J.: Vibrotactile display: perception, technology, and applications. Proc. IEEE 101(9), 2093–2104 (2013)
6. Civolani, M., Fontana, F., Papetti, S.: Efficient acquisition of force data in interactive shoe designs. In: Nordahl, R., Serafin, S., Fontana, F., Brewster, S. (eds.) Haptic and Audio Interaction Design (HAID). Lecture Notes in Computer Science (LNCS), vol. 6306, pp. 129–138. Springer, Berlin, Heidelberg (2010)
7. Dahl, S., Bresin, R.: Is the player more influenced by the auditory than the tactile feedback from the instrument? In: Proceedings of the Digital Audio Effects Conference (DAFx), pp. 6–9. Limerick, Ireland (2001)
8. Deber, J., Araujo, B., Jota, R., Forlines, C., Leigh, D., Sanders, S., Wigdor, D.: Hammer time!: a low-cost, high precision, high accuracy tool to measure the latency of touchscreen devices. In: Proceedings of the CHI’16 Conference on Human Factors in Computing Systems, pp. 2857–2868. ACM Press, San Jose, CA, USA (2016)
9. Farina, A.: Advancements in impulse response measurements by sine sweeps. In: Proceedings of the Audio Engineering Society Conference, vol. 122. AES, Vienna, Austria (2007)
10. Fontana, F., Avanzini, F., Järveläinen, H., Papetti, S., Klauer, G., Malavolta, L.: Rendering and subjective evaluation of real versus synthetic vibrotactile cues on a digital piano keyboard. In: Proceedings of the Sound and Music Computing Conference (SMC), pp. 161–167. Maynooth, Ireland (2015)
11. Fontana, F., Avanzini, F., Järveläinen, H., Papetti, S., Zanini, F., Zanini, V.: Perception of interactive vibrotactile cues on the acoustic grand and upright piano. In: Proceedings of the Joint International Computer Music Conference and Sound and Music Computing Conference (ICMC–SMC). Athens, Greece (2014)
12. Fontana, F., Papetti, S., Civolani, M., dal Bello, V., Bank, B.: An exploration on the influence of vibrotactile cues during digital piano playing. In: Proceedings of the Sound and Music Computing Conference (SMC), pp. 273–278. Padua, Italy (2011)
13. Galembo, A., Askenfelt, A.: Quality assessment of musical instruments – effects of multimodality. In: Proceedings of the 5th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM). Hannover, Germany (2003)
14. Giordano, M., Sinclair, S., Wanderley, M.M.: Bowing a vibration-enhanced force feedback device. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME). Ann Arbor, Michigan, USA (2012)
15. Goebl, W., Bresin, R.: Measurement and reproduction accuracy of computer-controlled grand pianos. J. Acoust. Soc. Am. 114(4), 2273 (2003)
16.
17. Jack, R.H., Stockman, T., McPherson, A.: Effect of latency on performer interaction and subjective quality assessment of a digital musical instrument. In: Proceedings of Audio Mostly, pp. 116–123. ACM Press, New York, USA (2016)
18. Järveläinen, H., Papetti, S., Schiesser, S., Grosshauser, T.: Audio-tactile feedback in musical gesture primitives: finger pressing. In: Proceedings of the Sound and Music Computing Conference (SMC), pp. 109–114. Stockholm, Sweden (2013)
19. Jones, R., Driessen, P., Schloss, A., Tzanetakis, G.: A force-sensitive surface for intimate control. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME). Pittsburgh, Pennsylvania, USA (2009)
20. Lee, J., Choi, S.: Real-time perception-level translation from audio signals to vibrotactile effects. In: Proceedings of the CHI’13 Conference on Human Factors in Computing Systems, p. 2567. ACM Press, New York, USA (2013)
21. Lylykangas, J., Surakka, V., Rantala, J., Raisamo, R.: Intuitiveness of vibrotactile speed regulation cues. ACM Trans. Appl. Percept. 10(4), 1–15 (2013)
22. Maclean, K., Enriquez, M.: Perceptual design of haptic icons. In: Proceedings of the Eurohaptics Conference, pp. 351–363. Dublin, Ireland (2003)
23. Maeda, S., Griffin, M.J.: A comparison of vibrotactile thresholds on the finger obtained with different equipment. Ergonomics 37(8), 1391–1406 (1994)
24. Marshall, M.T., Wanderley, M.M.: Vibrotactile feedback in digital musical instruments. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 226–229. Paris, France (2006)
25. Massimino, M.J.: Improved force perception through sensory substitution. Control Eng. Pract. 3(2), 215–222 (1995)
26. McMahan, W., Romano, J.M., Abdul Rahuman, A.M., Kuchenbecker, K.J.: High frequency acceleration feedback significantly increases the realism of haptically rendered textured surfaces. In: Proceedings of the IEEE Haptics Symposium, pp. 141–148. Waltham, Massachusetts, USA (2010)
27.
Zurück zum Zitat Mortimer, B.J.P., Zets, G.A., Cholewiak, R.W.: Vibrotactile transduction and transducers. J. Acoust. Soc. Am. 121(5), 2970–2977 (2007)CrossRef Mortimer, B.J.P., Zets, G.A., Cholewiak, R.W.: Vibrotactile transduction and transducers. J. Acoust. Soc. Am. 121(5), 2970–2977 (2007)CrossRef
28.
Zurück zum Zitat Oboe, R., De Poli, G.: A multi-instrument force-feedback keyboard. Comput. Music J. 30(3), 38–52 (2006)CrossRef Oboe, R., De Poli, G.: A multi-instrument force-feedback keyboard. Comput. Music J. 30(3), 38–52 (2006)CrossRef
29.
Zurück zum Zitat Okamoto, S., Konyo, M., Tadokoro, S.: Vibrotactile stimuli applied to finger pads as biases for perceived inertial and viscous loads. IEEE Trans. Haptics 4(4), 307–315 (2011)CrossRef Okamoto, S., Konyo, M., Tadokoro, S.: Vibrotactile stimuli applied to finger pads as biases for perceived inertial and viscous loads. IEEE Trans. Haptics 4(4), 307–315 (2011)CrossRef
30.
Zurück zum Zitat Okamura, A.M., Dennerlein, J.T., Howe, R.D.: Vibration feedback models for virtual environments. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 1, pp. 674–679. Leuven, Belgium (1998) Okamura, A.M., Dennerlein, J.T., Howe, R.D.: Vibration feedback models for virtual environments. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 1, pp. 674–679. Leuven, Belgium (1998)
31.
Zurück zum Zitat Pacchierotti, C.: Cutaneous Haptic Feedback in Robotic Teleoperation. Springer Series on Touch and Haptic Systems. Springer Int. Publishing, Cham (2015)CrossRef Pacchierotti, C.: Cutaneous Haptic Feedback in Robotic Teleoperation. Springer Series on Touch and Haptic Systems. Springer Int. Publishing, Cham (2015)CrossRef
32.
Zurück zum Zitat Papetti, S., Fontana, F., Civolani, M., Berrezag, A., Hayward, V.: Audio-tactile display of ground properties using interactive shoes. In: Nordahl, R., Serafin, S., Fontana, F., Brewster, S. (eds.) Haptic and Audio Interaction Design (HAID). Lecture Notes in Computer Science (LNCS), 6306, pp. 117–128. Springer, Berlin, Heidelberg (2010)CrossRef Papetti, S., Fontana, F., Civolani, M., Berrezag, A., Hayward, V.: Audio-tactile display of ground properties using interactive shoes. In: Nordahl, R., Serafin, S., Fontana, F., Brewster, S. (eds.) Haptic and Audio Interaction Design (HAID). Lecture Notes in Computer Science (LNCS), 6306, pp. 117–128. Springer, Berlin, Heidelberg (2010)CrossRef
33.
Zurück zum Zitat Papetti, S., Järveläinen, H., Giordano, B.L., Schiesser, S., Fröhlich, M.: Vibrotactile sensitivity in active touch: effect of pressing force. IEEE Trans. Haptics 10(1), 113–122 (2017)CrossRef Papetti, S., Järveläinen, H., Giordano, B.L., Schiesser, S., Fröhlich, M.: Vibrotactile sensitivity in active touch: effect of pressing force. IEEE Trans. Haptics 10(1), 113–122 (2017)CrossRef
34.
Zurück zum Zitat Papetti, S., Schiesser, S., Fröhlich, M.: Multi-point vibrotactile feedback for an expressive musical interface. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), Baton Rouge, LA, USA (2015) Papetti, S., Schiesser, S., Fröhlich, M.: Multi-point vibrotactile feedback for an expressive musical interface. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), Baton Rouge, LA, USA (2015)
35.
Zurück zum Zitat Prattichizzo, D., Pacchierotti, C., Rosati, G.: Cutaneous force feedback as a sensory subtraction technique in Haptics. IEEE Trans. Haptics 5(4), 1–13 (2012)CrossRef Prattichizzo, D., Pacchierotti, C., Rosati, G.: Cutaneous force feedback as a sensory subtraction technique in Haptics. IEEE Trans. Haptics 5(4), 1–13 (2012)CrossRef
36.
Zurück zum Zitat Salisbury, C.M., Gillespie, R.B., Tan, H.Z., Barbagli, F., Salisbury, J.K.: What you can’t feel won’t hurt you: evaluating haptic hardware using a haptic contrast sensitivity function. IEEE Trans. Haptics 4(2), 134–146 (2011)CrossRef Salisbury, C.M., Gillespie, R.B., Tan, H.Z., Barbagli, F., Salisbury, J.K.: What you can’t feel won’t hurt you: evaluating haptic hardware using a haptic contrast sensitivity function. IEEE Trans. Haptics 4(2), 134–146 (2011)CrossRef
37.
Zurück zum Zitat Schroeder, M.R.: New method of measuring reverberation time. J. Acoust. Soc. Am. 37(6), 1187–1188 (1965)CrossRef Schroeder, M.R.: New method of measuring reverberation time. J. Acoust. Soc. Am. 37(6), 1187–1188 (1965)CrossRef
38.
Zurück zum Zitat Stepp, C.E., An, Q., Matsuoka, Y.: Repeated training with augmentative vibrotactile feedback increases object manipulation performance. PLoS One 7(2) (2012)CrossRef Stepp, C.E., An, Q., Matsuoka, Y.: Repeated training with augmentative vibrotactile feedback increases object manipulation performance. PLoS One 7(2) (2012)CrossRef
39.
Zurück zum Zitat Verrillo, R.T.: Vibration sensation in humans. Music Percept. 9(3), 281–302 (1992)CrossRef Verrillo, R.T.: Vibration sensation in humans. Music Percept. 9(3), 281–302 (1992)CrossRef
40.
Zurück zum Zitat Verrillo, T.: Vibrotactile thresholds measured at the finger. Percep. Psychophys. 9(4), 329–330 (1971)CrossRef Verrillo, T.: Vibrotactile thresholds measured at the finger. Percep. Psychophys. 9(4), 329–330 (1971)CrossRef
41.
Zurück zum Zitat Visell, Y., Giordano, B.L., Millet, G., Cooperstock, J.R.: Vibration influences haptic perception of surface compliance during walking. PLoS One 6(3), e17697 (2011)CrossRef Visell, Y., Giordano, B.L., Millet, G., Cooperstock, J.R.: Vibration influences haptic perception of surface compliance during walking. PLoS One 6(3), e17697 (2011)CrossRef
42.
Zurück zum Zitat Yamaoka, M., Yamamoto, A., Higuchi, T.: Basic analysis of stickiness sensation for tactile displays. In: Ferre, M. (ed.) Haptics: Perception, Devices and Scenarios. Lecture Notes in Computer Science (LNCS), 5024, 427–436. Springer, Berlin Heidelberg (2008) Yamaoka, M., Yamamoto, A., Higuchi, T.: Basic analysis of stickiness sensation for tactile displays. In: Ferre, M. (ed.) Haptics: Perception, Devices and Scenarios. Lecture Notes in Computer Science (LNCS), 5024, 427–436. Springer, Berlin Heidelberg (2008)
43.
Zurück zum Zitat Yao, H.Y., Hayward, V.: An experiment on length perception with a virtual rolling stone. In: Proceeding of the EuroHaptics Conference, pp. 275–278. Paris, France (2006) Yao, H.Y., Hayward, V.: An experiment on length perception with a virtual rolling stone. In: Proceeding of the EuroHaptics Conference, pp. 275–278. Paris, France (2006)
44.
Zurück zum Zitat Yao, H.Y., Hayward, V.: Design and analysis of a recoil-type vibrotactile transducer. J. Acoust. Soc. Am. 128(2), 619–627 (2010)CrossRef Yao, H.Y., Hayward, V.: Design and analysis of a recoil-type vibrotactile transducer. J. Acoust. Soc. Am. 128(2), 619–627 (2010)CrossRef
Metadata
Title: Implementation and Characterization of Vibrotactile Interfaces
Authors: Stefano Papetti, Martin Fröhlich, Federico Fontana, Sébastien Schiesser, Federico Avanzini
Copyright year: 2018
DOI: https://doi.org/10.1007/978-3-319-58316-7_13