Published in: Multimedia Systems 4/2023

25.04.2023 | Regular Paper

Face attribute recognition via end-to-end weakly supervised regional location

Authors: Jian Shi, Ge Sun, Jinyu Zhang, Zhihui Wang, Haojie Li

Abstract

Facial attributes have been successfully applied in many fields of computer vision, such as face recognition, face retrieval, and face image synthesis. Locating attribute-related facial regions is a prerequisite for predicting the presence of attributes. However, most existing face attribute recognition methods obtain specific attribute regions through face segmentation annotations or face landmarks, which are not easily available. In this paper, we propose a weakly supervised attribute location module (ALM) that can effectively detect facial regions with only image-level attribute labels and improve face attribute recognition using region-based local features. Moreover, we introduce a bottom-up skip connection structure that fuses features from multiple convolutional layers, enhancing attribute-specific region location by supplementing low-level spatial information. Our network is easy to build and can be trained in an end-to-end manner. Extensive experiments demonstrate the superior performance of our method on the LFWA and CelebA datasets.
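
The abstract outlines, but does not detail, the attribute location module (ALM) and the bottom-up skip connection. As a rough illustration of the general idea only, the PyTorch sketch below follows a common CAM-style recipe: per-attribute 1x1 convolutions produce spatial score maps that act as weakly supervised attention trainable from image-level labels alone, and a shallow feature map is projected and downsampled into a deeper one before localization. All class names, channel sizes, and pooling choices here are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (assumptions): the ALM internals are not described
# in the abstract, so this follows a generic CAM-style weakly supervised design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeLocationModule(nn.Module):
    """Hypothetical ALM: localizes each attribute using only image-level labels."""

    def __init__(self, in_channels: int, num_attributes: int):
        super().__init__()
        # One 1x1 convolution per attribute yields an attribute-specific score map.
        self.score_maps = nn.Conv2d(in_channels, num_attributes, kernel_size=1)

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W) feature map from the backbone
        maps = self.score_maps(feat)                      # (B, A, H, W)
        attn = torch.sigmoid(maps)                        # soft attribute regions
        # Region-based local features: attention-weighted pooling per attribute.
        weighted = feat.unsqueeze(1) * attn.unsqueeze(2)  # (B, A, C, H, W)
        local = weighted.mean(dim=(3, 4))                 # (B, A, C)
        # Image-level logits via global average pooling of the score maps,
        # so the module trains end-to-end with attribute labels only.
        logits = maps.mean(dim=(2, 3))                    # (B, A)
        return local, attn, logits


class BottomUpFusion(nn.Module):
    """Hypothetical bottom-up skip connection: injects low-level spatial detail
    into a deeper feature map before localization."""

    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        self.project = nn.Conv2d(low_channels, high_channels, kernel_size=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Downsample the shallow map to the deep map's resolution, then add.
        low = F.adaptive_avg_pool2d(self.project(low), high.shape[-2:])
        return high + low


if __name__ == "__main__":
    low = torch.randn(2, 256, 28, 28)    # shallow backbone features
    high = torch.randn(2, 512, 14, 14)   # deep backbone features
    fused = BottomUpFusion(256, 512)(low, high)
    local, attn, logits = AttributeLocationModule(512, 40)(fused)
    labels = torch.randint(0, 2, (2, 40)).float()
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    print(local.shape, attn.shape, logits.shape, loss.item())
```

In a design of this kind, the globally pooled score maps double as the image-level predictions, which is what lets the spatial attention be learned end-to-end from attribute labels without any region annotations.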


Metadata
Title
Face attribute recognition via end-to-end weakly supervised regional location
Authors
Jian Shi
Ge Sun
Jinyu Zhang
Zhihui Wang
Haojie Li
Publication date
25.04.2023
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 4/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-023-01095-w
