2018 | Original Paper | Book Chapter

42. Music Learning: Automatic Music Composition and Singing Voice Assessment

Authors: Lorenzo J. Tardón, Isabel Barbancho, Carles Roig, Emilio Molina, Ana M. Barbancho

Published in: Springer Handbook of Systematic Musicology

Publisher: Springer Berlin Heidelberg


Abstract

Traditionally, singing skills are learned and improved through the supervised rehearsal of a set of selected exercises: a music teacher evaluates the student's performance and recommends new exercises according to the student's progress.
The goal of this chapter is to describe a virtual environment that partially reproduces this traditional learning process and the music teacher's role, allowing for a completely interactive self-learning process.
An overview of the complete chain of an interactive singing-learning system, including tools and concrete techniques, will be presented. In brief, the system should first provide a set of training exercises, then assess the user's performance, and finally provide the user with new exercises selected or created according to the results of that evaluation.
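
As a rough illustration of this chain, the following is a minimal sketch in Python of such an adaptive training loop. It is not the authors' implementation: the function names (generate_exercise, assess_performance, next_difficulty), the toy exercise generator, and the adaptation thresholds are illustrative assumptions only.

    import random

    def generate_exercise(difficulty):
        """Return a reference melody as (MIDI pitch, duration in beats) pairs.
        Illustrative rule only: harder exercises span a wider pitch range."""
        span = 2 + 2 * difficulty
        return [(60 + random.randint(0, span), 1.0) for _ in range(8)]

    def assess_performance(exercise, sung_notes):
        """Toy stand-in for the assessment module: fraction of notes sung
        within one semitone of the reference."""
        hits = sum(abs(s - r) <= 1 for (r, _), (s, _) in zip(exercise, sung_notes))
        return hits / len(exercise)

    def next_difficulty(difficulty, score):
        """Adapt the next exercise to the evaluation result, as a teacher would."""
        if score > 0.8:
            return min(10, difficulty + 1)
        if score < 0.5:
            return max(1, difficulty - 1)
        return difficulty
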
Following this scheme, methods for the creation of user-adapted exercises and for the automatic evaluation of singing skills will be presented. A technique for the dynamic generation of musically meaningful singing exercises adapted to the user's level will be shown; it is based on the appropriate repetition of musical structures while ensuring the correctness of harmony and rhythm. Additionally, a module for assessing the user's singing performance in terms of intonation and rhythm will be described.
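
As a concrete, deliberately simplified example of such an assessment, the sketch below scores intonation and rhythm at the note level, assuming the sung performance has already been transcribed into notes and aligned note by note with the reference. The 50-cent and 100-ms tolerances and the scoring rule are illustrative assumptions, not the values or the method used in the chapter.

    def assess_notes(reference, performance, cent_tol=50.0, onset_tol=0.1):
        """reference, performance: lists of (MIDI pitch, onset in s, duration in s),
        aligned note by note. Returns (intonation_score, rhythm_score) in [0, 1]."""
        if not reference or len(reference) != len(performance):
            return 0.0, 0.0
        pitch_errs, onset_errs = [], []
        for (rp, ro, _), (pp, po, _) in zip(reference, performance):
            pitch_errs.append(abs(pp - rp) * 100.0)  # pitch deviation in cents
            onset_errs.append(abs(po - ro))          # onset deviation in seconds
        # Score = fraction of notes within the tolerance.
        intonation = sum(e <= cent_tol for e in pitch_errs) / len(reference)
        rhythm = sum(e <= onset_tol for e in onset_errs) / len(reference)
        return intonation, rhythm

    # Example: a three-note reference and a slightly flat, slightly late performance.
    ref = [(60, 0.0, 0.5), (62, 0.5, 0.5), (64, 1.0, 0.5)]
    sung = [(59.8, 0.05, 0.5), (62.3, 0.55, 0.5), (63.4, 1.2, 0.5)]
    print(assess_notes(ref, sung))  # -> (0.666..., 0.666...)
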


Metadata
Title
Music Learning: Automatic Music Composition and Singing Voice Assessment
Authors
Lorenzo J. Tardón
Isabel Barbancho
Carles Roig
Emilio Molina
Ana M. Barbancho
Copyright Year
2018
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-55004-5_42
