
13.05.2020

A review on the long short-term memory model

Authors: Greg Van Houdt, Carlos Mosquera, Gonzalo Nápoles

Published in: Artificial Intelligence Review | Issue 8/2020


Abstract

Long short-term memory (LSTM) has transformed both machine learning and neurocomputing. According to several online sources, this model has improved Google's speech recognition, greatly improved machine translation in Google Translate, and enhanced the answers given by Amazon's Alexa. The network is also employed by Facebook, which reported over 4 billion LSTM-based translations per day as of 2017. Interestingly, recurrent neural networks had shown rather modest performance until LSTM appeared. One reason for the success of this recurrent network lies in its ability to handle the exploding/vanishing gradient problem, which remains a difficult issue to circumvent when training recurrent or very deep neural networks. In this paper, we present a comprehensive review that covers LSTM's formulation and training, relevant applications reported in the literature, and code resources implementing the model for a toy example.
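For reference, the standard LSTM cell with a forget gate can be summarized as follows. This uses common notation rather than necessarily the paper's own symbols, and the additive cell-state update in the fifth equation is the usual informal explanation for the gradient behaviour described above.

```latex
% Standard LSTM cell with a forget gate (common notation; the paper's own
% symbols may differ). \odot denotes element-wise multiplication.
\begin{align}
  f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
  i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
  o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
  \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
  c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
  h_t &= o_t \odot \tanh(c_t)
\end{align}
```

Because the cell state is updated additively and gated by f_t rather than repeatedly multiplied by a recurrent weight matrix, error signals can be carried across many time steps without being squashed, which is the property the abstract alludes to.

The abstract also mentions code resources for a toy example. As a minimal sketch of what such an example might look like, assuming a TensorFlow/Keras environment (the paper does not prescribe one), the snippet below trains a single-layer LSTM to predict the next sample of a noisy sine wave; the window size, layer width, and training settings are illustrative choices rather than the authors' configuration.

```python
# Minimal LSTM toy example (illustrative sketch, not the authors' code resources).
import numpy as np
import tensorflow as tf

# Toy dataset: predict the next sample of a noisy sine wave from the
# previous `window` samples.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
window = 10
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

# Single LSTM layer followed by a linear readout.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=64, verbose=0)

# One-step-ahead prediction for the last observed window.
print(model.predict(X[-1:], verbose=0))
```

Running the script prints a single one-step-ahead forecast for the final window of the series.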

Metadata
Title
A review on the long short-term memory model
Authors
Greg Van Houdt
Carlos Mosquera
Gonzalo Nápoles
Publication date
13.05.2020
Publisher
Springer Netherlands
Published in
Artificial Intelligence Review / Issue 8/2020
Print ISSN: 0269-2821
Electronic ISSN: 1573-7462
DOI
https://doi.org/10.1007/s10462-020-09838-1
