Published in: Neural Computing and Applications 1/2021

21.05.2020 | Original Article

A deeply coupled ConvNet for human activity recognition using dynamic and RGB images

Authors: Tej Singh, Dinesh Kumar Vishwakarma


Abstract

This work is motivated by the remarkable achievements of deep learning models in computer vision tasks, particularly human activity recognition. The task is gaining attention due to its numerous real-life applications, for example smart surveillance systems, human–computer interaction, sports action analysis, and elderly healthcare. In recent years, the acquisition and interfacing of multimodal data have become straightforward due to the advent of low-cost depth devices. Several approaches based on RGB-D (depth) evidence have been developed, at the cost of additional equipment setup and high complexity. Conversely, methods that utilize only RGB frames give inferior performance due to the absence of depth evidence, but they require less hardware and are simple and easy to generalize using only color cameras. In this work, a deeply coupled ConvNet for human activity recognition is proposed that processes RGB frames in its top stream with a bi-directional long short-term memory (Bi-LSTM) network, while in the bottom stream a CNN model is trained on a single dynamic motion image. For the RGB frames, the CNN-Bi-LSTM model is trained end to end to refine the features of the pre-trained CNN, while the dynamic-image stream fine-tunes the top layers of the pre-trained model to extract temporal information from videos. The scores obtained from the two data streams are fused at decision level after the softmax layer using different late fusion techniques, with max fusion achieving the highest accuracy. The performance of the model is assessed on four standard single- and multi-person activity RGB-D (depth) datasets. The highest classification accuracies achieved on these human action datasets are compared with similar state-of-the-art methods and are higher by significant margins: 2% on SBU Interaction, 4% on MIVIA Action, 1% on MSR Action Pair, and 4% on MSR Daily Activity.
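The decision-level max fusion described in the abstract can be illustrated with a minimal sketch: each stream produces per-class softmax scores, the fused score for each class is the element-wise maximum across the two streams, and the predicted label is the argmax of the fused scores. The per-class logits below are hypothetical, and this is not the authors' implementation, only an illustration of the fusion rule.

```python
import math

def softmax(logits):
    """Convert raw per-class scores to a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def max_fusion(scores_rgb, scores_dyn):
    """Decision-level max fusion: take the element-wise maximum of the
    two streams' softmax scores, then argmax for the class label."""
    fused = [max(a, b) for a, b in zip(scores_rgb, scores_dyn)]
    return fused.index(max(fused))

# Hypothetical logits for 3 classes from the two streams.
rgb_logits = [2.0, 0.5, 1.0]   # RGB CNN-Bi-LSTM stream
dyn_logits = [0.2, 2.5, 0.1]   # dynamic-image CNN stream

label = max_fusion(softmax(rgb_logits), softmax(dyn_logits))
# Here the dynamic-image stream is very confident in class 1,
# so max fusion picks class 1 even though the RGB stream prefers class 0.
```

Other late fusion rules mentioned in the paper (e.g. averaging the two score vectors) differ only in how `fused` is computed from the two score lists.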


Metadata
Title
A deeply coupled ConvNet for human activity recognition using dynamic and RGB images
Authors
Tej Singh
Dinesh Kumar Vishwakarma
Publication date
21.05.2020
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 1/2021
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-05018-y
