Published by De Gruyter Mouton April 28, 2006

Decoding speech prosody in five languages

  • William Forde Thompson

    William Forde Thompson (b. 1957) is Director of Communication, Culture and Information Technology at the University of Toronto at Mississauga 〈b.thompson@utoronto.ca〉. His interests include music, gesture, and cognition. His recent publications include ‘Listening to Music’ (with G. Schellenberg, 2006); ‘A comparison of acoustic cues in music and speech for three dimensions of affect’ (with G. Ilie, 2006); and ‘The subjective size of melodic intervals over a two-octave range’ (with F. A. Russo, 2006).

  • Laura-Lee Balkwill

    Laura-Lee Balkwill (b. 1965) is a Postdoctoral Fellow at Queen's University 〈balkwill@post.queensu.ca〉. Her research interests include the expression and recognition of emotion in music and speech across cultures, and the effect of emotive tone of voice on reasoning. Her recent major publications include ‘Expectancies generated by recent exposures to melodic sequences’ (with W. F. Thompson and R. Vernescu, 2000); ‘Recognition of emotion in Japanese, North Indian, and Western music by Japanese listeners’ (with W. F. Thompson and R. Matsunaga, 2004); and ‘Neuropsychological assessment of musical difficulties: A study of “tone deafness” among university students’ (with L. L. Cuddy, I. Peretz, and R. Holden, 2005).

From the journal Semiotica

Abstract

Twenty English-speaking listeners judged the emotive intent of utterances spoken by male and female speakers of English, German, Chinese, Japanese, and Tagalog. The verbal content of utterances was neutral but prosodic elements conveyed each of four emotions: joy, anger, sadness, and fear. Identification accuracy was above chance performance levels for all emotions in all languages. Across languages, sadness and anger were more accurately recognized than joy and fear. Listeners showed an in-group advantage for decoding emotional prosody, with highest recognition rates for English utterances and lowest recognition rates for Japanese and Chinese utterances. Acoustic properties of stimuli were correlated with the intended emotion expressed. Our results support the view that emotional prosody is decoded by a combination of universal and culture-specific cues.

Published Online: 2006-04-28
Published in Print: 2006-02-20

© Walter de Gruyter
