Published in: Multimedia Systems 3/2023

30.01.2023 | Regular Paper

Structural feature representation and fusion of human spatial cooperative motion for action recognition

Authors: Xin Chao, Zhenjie Hou, Yujian Mo, Haiyong Shi, Wenjing Yao



Abstract

Motivated by the cooperative relationship among human body parts during action execution, we propose an action recognition method based on a structural feature model of human spatial cooperative motion. First, ten wearable sensors and a Kinect v2 are used to collect human motion data. Second, we analyze the relationships among the three-axis acceleration data of the multiple sensors. Third, we measure the contribution of different body parts to the completion of a movement and transform these contributions into a structural feature model of cooperative motion. Finally, we apply unsupervised, adaptive constraints to the motion features of the different body parts and, on this basis, fuse the features of the different modalities. Experimental results show that our method significantly improves the recognition rate in the open test, while remaining computationally simple and easy to implement.
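The core idea of the abstract — weighting each body part's features by its contribution to the movement before fusing them — can be sketched as follows. This is an illustrative proxy only: it uses per-sensor acceleration energy as the contribution measure and simple weighted concatenation as the fusion step, both of which are assumptions standing in for the paper's actual definitions.

```python
import numpy as np

def contribution_weights(accel, eps=1e-8):
    """Estimate each sensor's contribution to a movement from its
    three-axis acceleration energy. This energy-based measure is an
    illustrative stand-in for the paper's contribution model."""
    # accel: (n_sensors, n_frames, 3) three-axis acceleration per sensor
    energy = np.sum(accel ** 2, axis=(1, 2))   # motion energy per sensor
    return energy / (energy.sum() + eps)       # normalize so weights sum to 1

def fuse_features(features, weights):
    """Scale each sensor's feature vector by its contribution weight and
    concatenate into one fused descriptor (a simple fusion stand-in)."""
    # features: (n_sensors, d); weights: (n_sensors,)
    return np.concatenate([w * f for w, f in zip(weights, features)])

# Hypothetical data: 10 wearable sensors, 120 frames, 3 axes
rng = np.random.default_rng(0)
accel = rng.normal(size=(10, 120, 3))
weights = contribution_weights(accel)
fused = fuse_features(rng.normal(size=(10, 16)), weights)
print(weights.sum(), fused.shape)  # weights sum to ~1.0; fused shape (160,)
```

Sensors attached to limbs that move more during an action receive larger weights, so their features dominate the fused descriptor, which is the intuition behind the cooperative-motion model described above.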


Metadata
Title
Structural feature representation and fusion of human spatial cooperative motion for action recognition
Authors
Xin Chao
Zhenjie Hou
Yujian Mo
Haiyong Shi
Wenjing Yao
Publication date
30.01.2023
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 3/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-023-01054-5
