ABSTRACT
The overall sound perception of a song is an important attribute of music. Several psychoacoustic models have been studied to extract perceptual sound qualities from audio signals. By means of listening tests, we investigate whether these sound models successfully reflect the (inter-)subjective perception of sound resemblance in music. Preliminary results show that psychoacoustic descriptors are in better accordance with subjective judgments than low-level features. Because the psychoacoustic descriptors model auditory perception, we can assume causal relationships and draw conclusions on perception directly from the listening test results. We observed that roughness is the most crucial sound resemblance criterion, followed by sharpness, loudness, spaciousness, and tonalness. These findings indicate that psychoacoustic models may be suitable for music data mining, music browsing, automatic playlist generation, and music recommendation tasks.
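To illustrate the kind of descriptor discussed above, the following sketch computes a crude stand-in for psychoacoustic sharpness. Full sharpness models (von Bismarck, Aures) weight the specific loudness distribution on the Bark scale; this simplified proxy, a linear-frequency spectral centroid, only captures the underlying intuition that more high-frequency energy yields a "sharper" sound. The function name and the centroid formulation are illustrative assumptions, not the models used in the study.

```python
import numpy as np

def sharpness_proxy(signal, sr):
    """Crude sharpness stand-in: the magnitude-weighted spectral
    centroid in Hz. Real sharpness models operate on specific
    loudness over critical bands (Bark), not on a raw spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A high-frequency tone scores higher than a low-frequency one.
sr = 16000
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)     # low-frequency tone
bright = np.sin(2 * np.pi * 4000 * t)  # high-frequency tone
print(sharpness_proxy(dull, sr), sharpness_proxy(bright, sr))
```

A ranking of songs by such a descriptor could then be compared against listeners' similarity judgments, which is the kind of evaluation the listening tests in this study perform for the full psychoacoustic models.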