Published in: Optical Memory and Neural Networks 2/2023

01-06-2023

Improving the Performance of Human Part Segmentation Based on Swin Transformer

Authors: Juan Du, Tao Yang

Abstract

Semantic segmentation remains one of the current challenges in deep learning. Human part segmentation is a sub-task of image segmentation that differs from traditional segmentation in that it must understand the intrinsic connections of the human body. Convolutional Neural Networks (CNNs) have long been the standard feature extraction networks for human part segmentation. The recently proposed Swin Transformer surpasses CNNs in many image applications; however, few articles have examined how the Swin Transformer compares with CNNs on human part segmentation. In this paper, we carry out a comparison experiment on this question, and the results show that, even for human part segmentation and without any additional tricks, the Swin Transformer performs well compared with CNNs. We also combine the Edge Perceiving Module (EPM), commonly used with CNNs, with the Swin Transformer to show that the Swin Transformer can capture the intrinsic connections between segmented parts. This research demonstrates the feasibility of applying the Swin Transformer to image part segmentation, which should help advance image segmentation technology in the future.
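
As a rough illustration of the pipeline the abstract describes, the sketch below pairs a Swin Transformer backbone with an edge-aware segmentation head in PyTorch. The exact backbone configuration and EPM design used by the authors are not given in this abstract, so the backbone here is torchvision's swin_t, the EdgePerceivingHead and SwinPartSegmenter classes are hypothetical stand-ins, and num_parts=7 assumes a PASCAL-Person-Part-style labeling (6 body parts plus background).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import swin_t  # assumption: any Swin variant could be substituted


class EdgePerceivingHead(nn.Module):
    """Hypothetical edge branch: predicts an edge map from backbone features
    and feeds it back into the part-segmentation branch as an extra cue."""
    def __init__(self, in_channels: int, num_parts: int):
        super().__init__()
        self.edge_conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),            # edge logits
        )
        self.part_conv = nn.Sequential(
            nn.Conv2d(in_channels + 1, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_parts, kernel_size=1),   # part logits
        )

    def forward(self, feats: torch.Tensor):
        edge = self.edge_conv(feats)                    # (B, 1, h, w)
        fused = torch.cat([feats, edge], dim=1)         # inject the edge cue
        parts = self.part_conv(fused)                   # (B, num_parts, h, w)
        return parts, edge


class SwinPartSegmenter(nn.Module):
    """Swin backbone (torchvision swin_t) + edge-perceiving segmentation head."""
    def __init__(self, num_parts: int = 7):
        super().__init__()
        backbone = swin_t(weights=None)
        self.backbone = backbone.features               # NHWC features, stride 32, 768 channels
        self.head = EdgePerceivingHead(768, num_parts)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x).permute(0, 3, 1, 2)    # NHWC -> NCHW, (B, 768, H/32, W/32)
        parts, edge = self.head(feats)
        # upsample both outputs back to the input resolution
        parts = F.interpolate(parts, size=x.shape[-2:], mode="bilinear", align_corners=False)
        edge = F.interpolate(edge, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return parts, edge


if __name__ == "__main__":
    model = SwinPartSegmenter(num_parts=7)
    parts, edge = model(torch.randn(1, 3, 224, 224))
    print(parts.shape, edge.shape)  # (1, 7, 224, 224) and (1, 1, 224, 224)
```

In EPM-style training, the part logits would typically be supervised with a per-pixel cross-entropy loss and the edge logits with a binary cross-entropy loss against boundary maps derived from the part annotations; the details of the loss weighting in the paper are not specified in this abstract.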

Metadata
Title
Improving the Performance of Human Part Segmentation Based on Swin Transformer
Authors
Juan Du
Tao Yang
Publication date
01-06-2023
Publisher
Pleiades Publishing
Published in
Optical Memory and Neural Networks / Issue 2/2023
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI
https://doi.org/10.3103/S1060992X23020030
