Published in: Neural Processing Letters 3/2021

24.03.2021

Attention-Based Deep Gated Fully Convolutional End-to-End Architectures for Time Series Classification

Authors: Mehak Khan, Hongzhi Wang, Alladoumbaye Ngueilbaye

Abstract

Time series classification (TSC) is one of the significant problems in the data mining community because time series data arise in a wide range of domains. The TSC problem has been studied separately for univariate and multivariate data using different datasets and methods. Deep learning methods, which are more robust than other techniques, have revolutionized many areas, including TSC. Therefore, in this study, we exploit the attention mechanism, deep Gated Recurrent Units (dGRU), the Squeeze-and-Excitation (SE) block, and the Fully Convolutional Network (FCN) in two end-to-end hybrid deep learning architectures, Att-dGRU-FCN and Att-dGRU-SE-FCN. The performance of the proposed models is evaluated in terms of classification test error and F1-score. Extensive experiments and an ablation study are carried out on multiple univariate and multivariate datasets from different domains to obtain the best performance of the proposed models. The proposed models perform effectively against other published methods, do not require heavy data pre-processing, and are small enough to be deployed on real-time systems.
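To make the named components concrete, the sketch below is a minimal, hypothetical Keras rendering of a two-branch Att-dGRU-SE-FCN-style model: a stacked ("deep") GRU branch with simple soft attention alongside a Conv-BN-ReLU FCN branch with SE blocks, concatenated into a softmax classifier. The layer widths, attention formulation, dropout rate, and SE reduction ratio are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a two-branch Att-dGRU-SE-FCN-style classifier (assumed layout).
import tensorflow as tf
from tensorflow.keras import layers, Model

def se_block(x, ratio=16):
    # Squeeze-and-Excitation: squeeze with global pooling, excite with two
    # dense layers, then re-weight the channels (ratio is an assumption).
    ch = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)
    s = layers.Dense(max(ch // ratio, 1), activation="relu")(s)
    s = layers.Dense(ch, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, ch))(s)])

def build_model(timesteps, channels, n_classes):
    inp = layers.Input(shape=(timesteps, channels))

    # Recurrent branch: stacked GRUs followed by simple soft attention
    # that pools the sequence into a single context vector.
    g = layers.GRU(128, return_sequences=True)(inp)
    g = layers.GRU(128, return_sequences=True)(g)
    scores = layers.Dense(1, activation="tanh")(g)      # score each timestep
    weights = layers.Softmax(axis=1)(scores)            # attention weights over time
    g = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([g, weights])
    g = layers.Dropout(0.3)(g)

    # Convolutional branch: FCN-style Conv-BN-ReLU stack with SE blocks,
    # reduced by global average pooling.
    c = inp
    for filters, k in [(128, 8), (256, 5), (128, 3)]:
        c = layers.Conv1D(filters, k, padding="same")(c)
        c = layers.BatchNormalization()(c)
        c = layers.Activation("relu")(c)
        c = se_block(c)
    c = layers.GlobalAveragePooling1D()(c)

    # Concatenate both branches and classify.
    out = layers.Dense(n_classes, activation="softmax")(layers.concatenate([g, c]))
    return Model(inp, out)
```

Dropping the `se_block` calls in the convolutional branch would yield the plain Att-dGRU-FCN variant of the same design.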

Metadata
Title
Attention-Based Deep Gated Fully Convolutional End-to-End Architectures for Time Series Classification
Authors
Mehak Khan
Hongzhi Wang
Alladoumbaye Ngueilbaye
Publication date
24.03.2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 3/2021
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-021-10484-z
