
2020 | OriginalPaper | Chapter

7. Dynamic Content Mining

Authors: Akka Zemmari, Jenny Benois-Pineau

Published in: Deep Learning in Mining of Visual Content

Publisher: Springer International Publishing


Abstract

Neural networks and convolutional neural networks can be regarded as functions that take a vector as input and compute a distribution over the set of possible classes. Such networks have no notion of temporal order and no memory, so they are not suitable for dynamic content mining tasks such as speech recognition or video processing. In this chapter we introduce models able to handle the temporality of visual content.
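To make the limitation concrete: a feed-forward classifier maps each input independently, whereas a recurrent model keeps a hidden state that is updated at every time step and so acts as a memory of the sequence. The sketch below is a minimal NumPy illustration of a simple (Elman-style) recurrent step applied to a sequence of frame feature vectors; it is not code from the chapter, and all names and dimensions are illustrative.

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        # One recurrent update: the new hidden state depends on the current
        # input AND on the previous state, which is what gives the model memory.
        return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

    # Illustrative dimensions: 4-d per-frame features, 8-d hidden state, 5 frames.
    input_dim, hidden_dim, T = 4, 8, 5
    rng = np.random.default_rng(0)
    W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
    W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    b_h = np.zeros(hidden_dim)

    h = np.zeros(hidden_dim)                 # initial (empty) memory
    for t in range(T):                       # iterate along the time axis
        x_t = rng.normal(size=input_dim)     # stand-in for a frame feature vector
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)

    print(h.shape)  # (8,) -- the final state summarizes the whole sequence

Gated variants such as the LSTM or GRU follow the same principle of carrying a state across time steps, but use gating to mitigate vanishing gradients during training.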


Metadata
Title
Dynamic Content Mining
Authors
Akka Zemmari
Jenny Benois-Pineau
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-34376-7_7
