Published in: Neural Computing and Applications 12/2021

15.10.2020 | Original Article

The Mode-Fisher pooling for time complexity optimization in deep convolutional neural networks

Authors: Dou El Kefel Mansouri, Bachir Kaddar, Seif-Eddine Benkabou, Khalid Benabdeslem


Abstract

In this paper, we aim to improve the performance, time complexity, and energy efficiency of deep convolutional neural networks (CNNs) by combining hardware and specialization techniques. Since the pooling step contributes significantly to CNN performance, we propose the Mode-Fisher pooling method. This form of pooling can offer very promising results in improving feature extraction. The proposed method significantly reduces data movement in the CNN and saves up to 10% of total energy, without any performance penalty.
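The energy claim rests on pooling shrinking the feature maps that must move between memory and compute. A back-of-envelope sketch of that reduction (the layer dimensions here are hypothetical, not taken from the paper):

```python
# 2x2 pooling with stride 2 halves each spatial dimension, so layers
# after the pooling step read and write ~4x less activation data.
h, w, c = 56, 56, 64                # hypothetical feature-map size
before = h * w * c                  # activation values before pooling
after = (h // 2) * (w // 2) * c     # activation values after 2x2/s2 pooling
print(before / after)               # -> 4.0
```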


Footnotes
1
A computer file that contains an uncompressed image; it is not directly viewable by most computer systems.
 
2
In the literature, the most commonly used filters do not exceed \(5 \times 5\).
 
3
Max pooling: \(y = \max(x_{ij})\).
 
4
Average pooling: \(y = \mathrm{mean}(x_{ij})\), where \(y\) is the output and \(i\) and \(j\) are the row and column indices of the pooling region.
 
5
All data sets are summarized in Table 1.
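Footnotes 3 and 4 can be made concrete in code. The sketch below implements max and average pooling as defined there, plus a plain mode (most-frequent-value) pooling branch for illustration. The paper's actual Mode-Fisher operator is not defined in this excerpt, so the `mode` branch is only an assumed baseline, and `pool2d` is a name chosen here, not from the paper.

```python
import numpy as np

def pool2d(x, k=2, how="max"):
    """Non-overlapping k x k pooling over a 2-D feature map x."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]                 # crop to a multiple of k
    win = (x.reshape(h // k, k, w // k, k)
            .swapaxes(1, 2)
            .reshape(h // k, w // k, k * k))      # one row of k*k values per window
    if how == "max":                              # footnote 3: y = max(x_ij)
        return win.max(axis=-1)
    if how == "avg":                              # footnote 4: y = mean(x_ij)
        return win.mean(axis=-1)
    if how == "mode":                             # most frequent value per window
        def most_frequent(v):
            vals, counts = np.unique(v, return_counts=True)
            return vals[counts.argmax()]          # ties -> smallest value
        return np.apply_along_axis(most_frequent, -1, win)
    raise ValueError(f"unknown pooling: {how}")

x = np.array([[1, 2, 1, 1],
              [2, 2, 4, 1],
              [0, 0, 3, 3],
              [0, 5, 3, 3]], dtype=float)
print(pool2d(x, 2, "max"))   # [[2. 4.] [5. 3.]]
print(pool2d(x, 2, "mode"))  # [[2. 1.] [0. 3.]]
```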
 
Metadata
Title
The Mode-Fisher pooling for time complexity optimization in deep convolutional neural networks
Authors
Dou El Kefel Mansouri
Bachir Kaddar
Seif-Eddine Benkabou
Khalid Benabdeslem
Publication date
15.10.2020
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 12/2021
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-05406-4
