Published in: Machine Vision and Applications 5/2019

22-08-2018 | Special Issue Paper

Abnormal gesture recognition based on multi-model fusion strategy

Authors: Chi Lin, Xuxin Lin, Yiliang Xie, Yanyan Liang


Abstract

In this paper, we present a novel refined fusion model combining a masked Res-C3D network and a skeleton LSTM for abnormal gesture recognition in RGB-D videos. The key to our design is to learn discriminative representations of gesture sequences, in particular of abnormal gesture samples, by fusing multiple features from different models. First, deep spatiotemporal features are extracted by a 3D convolutional neural network with residual architecture (Res-C3D). Since gestures mainly derive from arm and hand movements, a masked Res-C3D network is built to reduce the effect of background and other variations by exploiting the body skeleton to retain arm regions while discarding the rest. Then, relative positions and angles of different key points are extracted and used to build a time-series model with a long short-term memory (LSTM) network. Based on these representations, a fusion scheme is developed that blends classification results and remedies each model's weakness on abnormal gestures via a weight fusion layer, in which the weight of each voting sub-classifier favoring a certain class in the ensemble is obtained adaptively by training instead of being fixed. Experimental results show that the proposed method distinguishes abnormal gesture samples effectively and achieves state-of-the-art performance on the IsoGD dataset.
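The adaptive weight fusion layer described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes each sub-classifier outputs per-class probability scores and that a learnable logit per (model, class) pair is softmax-normalized across models, so the fused score is a per-class weighted vote. The function and variable names (`fuse_scores`, `weight_logits`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_scores(scores, weight_logits):
    """Weighted fusion of sub-classifier outputs.

    scores:        (n_models, n_classes) class probabilities per sub-classifier
    weight_logits: (n_models, n_classes) learnable per-model, per-class logits;
                   in the paper these weights are obtained by training
                   rather than being fixed.
    """
    w = softmax(weight_logits, axis=0)   # normalize across models for each class
    return (w * scores).sum(axis=0)      # per-class weighted vote

# toy example: two sub-classifiers, three gesture classes
scores = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3]])
weight_logits = np.zeros((2, 3))         # equal logits -> plain averaging
fused = fuse_scores(scores, weight_logits)  # [0.4, 0.4, 0.2]
```

In training, `weight_logits` would be updated by backpropagation together with (or after) the sub-classifiers, letting the fusion layer learn which model is more reliable for each class, e.g. up-weighting the skeleton LSTM for classes it separates better than the masked Res-C3D stream.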


Metadata
Title
Abnormal gesture recognition based on multi-model fusion strategy
Authors
Chi Lin
Xuxin Lin
Yiliang Xie
Yanyan Liang
Publication date
22-08-2018
Publisher
Springer Berlin Heidelberg
Published in
Machine Vision and Applications / Issue 5/2019
Print ISSN: 0932-8092
Electronic ISSN: 1432-1769
DOI
https://doi.org/10.1007/s00138-018-0969-0
