Published in: Multimedia Systems 5/2023

01-07-2023 | Special Issue Paper

SI-Net: spatial interaction network for deepfake detection

Authors: Jian Wang, Xiaoyu Du, Yu Cheng, Yunlian Sun, Jinhui Tang

Abstract

As manipulated faces become more realistic and harder to distinguish, there is high demand for detecting deepfakes efficiently and accurately. Existing CNN-based deepfake detection methods either learn a global feature representation of the whole face or learn multiple local features. However, these methods learn the global and local features independently, thus neglecting the spatial correlations between local features and global context, which are vital for identifying different forgery patterns. Therefore, in this paper, we propose the Spatial Interaction Network (SI-Net), a deepfake detection method that concurrently mines potential complementary and co-occurrent features between local texture and global context. Specifically, we first utilize a region feature extractor that distills local features from the global features, simplifying the procedure of local feature extraction. We then propose a spatial-aware transformer to learn co-occurrence features from local texture and global context concurrently. We capture the attended feature from the local regions according to their importance. The final prediction is made through the composite consideration of the aforementioned modules. Experimental results on two public datasets, FaceForensics++ and WildDeepfake, demonstrate the superior performance of SI-Net compared with state-of-the-art methods.
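The abstract describes a pipeline in which region-level tokens are distilled from a global feature map and then interact with the global context through attention. The following is a minimal pure-Python sketch of that idea, not the authors' implementation: the grid pooling, single-head attention, and all function names here are illustrative assumptions, standing in for the paper's region feature extractor and spatial-aware transformer.

```python
import math
import random

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def mean_pool(vectors):
    # average a list of C-dimensional vectors into one vector
    n = len(vectors)
    return [sum(v[c] for v in vectors) / n for c in range(len(vectors[0]))]

def extract_region_features(fmap, grid=2):
    # fmap: H x W grid of C-dim vectors (the "global" feature map);
    # each region token is the mean-pooled patch of a grid x grid split,
    # loosely mimicking distilling local features from global features
    H, W = len(fmap), len(fmap[0])
    hs, ws = H // grid, W // grid
    regions = []
    for i in range(grid):
        for j in range(grid):
            patch = [fmap[r][c]
                     for r in range(i * hs, (i + 1) * hs)
                     for c in range(j * ws, (j + 1) * ws)]
            regions.append(mean_pool(patch))
    return regions

def attend(query, tokens):
    # scaled dot-product attention of one query over a token set;
    # returns the attended feature and the attention weights
    d = len(query)
    scores = [sum(q * t for q, t in zip(query, tok)) / math.sqrt(d)
              for tok in tokens]
    w = softmax(scores)
    attended = [sum(w[k] * tokens[k][c] for k in range(len(tokens)))
                for c in range(d)]
    return attended, w

random.seed(0)
H, W, C = 8, 8, 16
fmap = [[[random.gauss(0, 1) for _ in range(C)] for _ in range(W)]
        for _ in range(H)]

local = extract_region_features(fmap, grid=2)             # 4 region tokens
global_tok = mean_pool([v for row in fmap for v in row])  # 1 global token

# each local token attends over all local tokens plus the global one,
# mixing local texture with global context before classification
tokens = local + [global_tok]
fused = [attend(q, tokens)[0] for q in local]
```

In the paper the interaction is learned (a transformer with trained projections) and the attention weights effectively score region importance for the final prediction; this sketch only shows the data flow between local tokens and the global context.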


Metadata
Publisher: Springer Berlin Heidelberg
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI: https://doi.org/10.1007/s00530-023-01114-w
