2018 | Original Paper | Book Chapter

41. Music Technology and Education

Authors: Estefanía Cano, Christian Dittmar, Jakob Abeßer, Christian Kehling, Sascha Grollmisch

Published in: Springer Handbook of Systematic Musicology

Publisher: Springer Berlin Heidelberg


Abstract

In this chapter, the application of music information retrieval (MIR) technologies in the development of music education tools is addressed. First, the relationship between technology and music education is described from a historical point of view, starting with the earliest attempts to use audio technology for education and ending with the latest developments and current research conducted in the field. Second, three MIR technologies used within a music education context are presented:
1. The use of pitch-informed solo and accompaniment separation as a tool for the creation of practice content
2. Drum transcription for real-time music practice
3. Guitar transcription with plucking style and expression style detection.
In each case, the proposed methods are described and evaluated. Objective perceptual quality metrics were used to evaluate the proposed method for solo/accompaniment separation: mean overall perceptual scores (OPS) of 24.68 and 34.68 were obtained for the solo and accompaniment tracks, respectively. These scores are on par with state-of-the-art methods with respect to the perceptual quality of separated music signals. A dataset of 17 real-world multitrack recordings was used for this evaluation. In the drum sound detection task, an F-measure of 0.96 was obtained for snare drum, kick drum, and hi-hat detection, evaluated on a dataset of 30 manually annotated real-world drum loops with an onset tolerance of 50 ms. For the guitar plucking style and expression style detection tasks, F-measures of 0.93 and 0.83 were obtained, respectively, on a dataset containing 261 recordings of isolated notes as well as monophonic and polyphonic melodies with note-wise annotations. To conclude the chapter, the remaining challenges that need to be addressed to use MIR technologies more effectively in the development of music education applications are described.
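
The drum detection results above are reported as an F-measure under a 50 ms onset tolerance. As a minimal illustration of how such a tolerance-based F-measure can be computed (this is not the authors' evaluation code; the function name, the greedy matching strategy, and the example onset times are assumptions for demonstration only), the following Python sketch matches detected onsets to annotated onsets within the tolerance window and derives precision, recall, and F-measure:

# Illustrative sketch (not the chapter's evaluation code): onset-based
# F-measure with a 50 ms tolerance, as commonly used for drum transcription.
# All names, parameters, and example values are hypothetical.

def onset_f_measure(reference, estimated, tolerance=0.05):
    """Greedy one-to-one matching of estimated to reference onsets (in seconds)."""
    reference = sorted(reference)
    estimated = sorted(estimated)
    ref_used = [False] * len(reference)
    matched = 0
    for est in estimated:
        # Find the closest unused reference onset within the tolerance window.
        best_idx, best_dist = None, tolerance
        for i, ref in enumerate(reference):
            dist = abs(est - ref)
            if not ref_used[i] and dist <= best_dist:
                best_idx, best_dist = i, dist
        if best_idx is not None:
            ref_used[best_idx] = True
            matched += 1
    precision = matched / len(estimated) if estimated else 0.0
    recall = matched / len(reference) if reference else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Hypothetical annotated kick-drum onsets vs. detections from a transcriber.
    annotated = [0.50, 1.00, 1.50, 2.00]
    detected = [0.51, 1.04, 1.49]  # one onset missed, all others within 50 ms
    print(round(onset_f_measure(annotated, detected), 2))  # -> 0.86

Note that standard MIR evaluation toolkits typically use a globally optimal (bipartite) matching rather than the greedy matching shown here; the sketch only conveys the idea of tolerance-based onset scoring.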


Metadata
Title
Music Technology and Education
Authors
Estefanía Cano
Christian Dittmar
Jakob Abeßer
Christian Kehling
Sascha Grollmisch
Copyright year
2018
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-55004-5_41
