Published in: Neural Computing and Applications 1/2021

19.05.2020 | Original Article

J-LDFR: joint low-level and deep neural network feature representations for pedestrian gender classification

Authors: Muhammad Fayyaz, Mussarat Yasmin, Muhammad Sharif, Mudassar Raza


Abstract

Appearance-based gender classification is a key area in pedestrian analysis with many useful applications, such as visual surveillance, demographic statistics prediction, population prediction, and human–computer interaction. For pedestrian gender classification, traditional and deep convolutional neural network (CNN) approaches have been employed individually. However, they face issues such as weakly discriminative feature representations, low classification accuracy, and small sample sizes for model learning. To address these issues, this article proposes a framework that combines traditional and deep CNN approaches for gender classification. To realize it, HOG- and LOMO-assisted low-level features are extracted to handle rotation, viewpoint, and illumination variations in the images. Simultaneously, VGG19- and ResNet101-based standard deep CNN architectures are employed to acquire deep features that are robust against pose variations. To avoid ambiguous and redundant feature representations, entropy-controlled features are selected from both the low-level and deep feature representations, which reduces the dimensionality of the computed features. By merging the selected low-level features with the deep features, a robust joint feature representation is obtained. Extensive experiments are conducted on the PETA and MIT datasets, and the results suggest that integrating low-level and deep feature representations improves performance compared to using either representation individually. The proposed framework achieves an AU-ROC of 96% and an accuracy of 89.3% on the PETA dataset, and an AU-ROC of 86% and an accuracy of 82% on the MIT dataset. The experimental outcomes show that the proposed J-LDFR framework outperforms existing gender classification methods.
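The abstract describes a pipeline of low-level feature extraction, deep feature extraction, entropy-controlled selection, and concatenation-based fusion before classification. The following Python sketch illustrates that general idea only; the abstract does not specify the exact selection rule, fusion operator, or classifier, so the hypothetical `extract_deep_features` hook, the histogram-based entropy criterion, the chosen feature counts, and the linear SVM are illustrative assumptions rather than the authors' implementation.

# Minimal sketch of joint low-level + deep feature fusion in the spirit of
# J-LDFR. Assumptions (not given in the abstract): deep features come from a
# hypothetical extract_deep_features hook (e.g., VGG19/ResNet101
# penultimate-layer activations), "entropy-controlled" selection is
# approximated by keeping the columns with the highest histogram-based
# Shannon entropy, fusion is plain concatenation, and the classifier is a
# linear SVM.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC


def hog_features(gray_image, size=(128, 64)):
    """Low-level HOG descriptor for one grayscale pedestrian crop."""
    gray_image = resize(gray_image, size, anti_aliasing=True)
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)


def entropy_select(features, keep):
    """Return indices of the `keep` columns with the highest Shannon entropy.

    features: (n_samples, n_dims) matrix; entropy is estimated per column
    from a 32-bin histogram of that column's values.
    """
    entropies = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=32)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.argsort(entropies)[::-1][:keep]


def fuse(low_level, deep, low_idx, deep_idx):
    """Concatenate the selected low-level and deep feature columns."""
    return np.hstack([low_level[:, low_idx], deep[:, deep_idx]])


# Illustrative usage: X_low stacks hog_features() per image, X_deep stacks
# CNN activations per image, y holds the gender labels. Selection indices are
# estimated on the training split only and reused at test time.
# low_idx = entropy_select(X_low_train, keep=500)
# deep_idx = entropy_select(X_deep_train, keep=1000)
# clf = LinearSVC().fit(fuse(X_low_train, X_deep_train, low_idx, deep_idx), y_train)
# preds = clf.predict(fuse(X_low_test, X_deep_test, low_idx, deep_idx))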


Metadata
Title
J-LDFR: joint low-level and deep neural network feature representations for pedestrian gender classification
Authors
Muhammad Fayyaz
Mussarat Yasmin
Muhammad Sharif
Mudassar Raza
Publication date
19.05.2020
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 1/2021
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-05015-1
