
04.01.2021 | Original Article

AutoFCL: automatically tuning fully connected layers for handling small dataset

Authors: S. H. Shabbeer Basha, Sravan Kumar Vinakota, Shiv Ram Dubey, Viswanath Pulabaigari, Snehasis Mukherjee

Published in: Neural Computing and Applications | Issue 13/2021


Abstract

Deep convolutional neural networks (CNNs) have become popular machine learning models for image classification over the past few years, owing to their ability to learn problem-specific features directly from the input images. The success of deep learning has shifted the effort from hand-engineering features to engineering architectures. However, designing a state-of-the-art CNN for a given task remains non-trivial and challenging, especially when the training data are limited. Transfer learning is a widely adopted technique for addressing this problem. When transferring the learned knowledge from one task to another, fine-tuning with target-dependent fully connected (FC) layers generally produces better results on the target task. In this paper, the proposed AutoFCL model learns the structure of the FC layers of a CNN automatically using Bayesian optimization. To evaluate the performance of AutoFCL, we utilize five pre-trained CNN models, namely VGG-16, ResNet, DenseNet, MobileNet, and NASNetMobile. The experiments are conducted on three benchmark datasets: CalTech-101, Oxford-102 Flowers, and UC Merced Land Use. According to the experiments carried out in this research, fine-tuning the newly learned (target-dependent) FC layers leads to state-of-the-art performance. The proposed AutoFCL method outperforms existing methods on the CalTech-101 and Oxford-102 Flowers datasets, achieving accuracies of \(94.38\%\) and \(98.89\%\), respectively, and achieves comparable performance on the UC Merced Land Use dataset with \(96.83\%\) accuracy. The source code of this research is available at https://github.com/shabbeersh/AutoFCL.
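The core idea lends itself to a short illustration. Below is a minimal, hypothetical sketch of the approach described above: the convolutional base of an ImageNet-pre-trained network is frozen, a candidate FC head (number of FC layers, neurons per layer, dropout rate) is attached and briefly fine-tuned, and a Gaussian-process-based Bayesian optimizer with an expected-improvement acquisition function searches over head configurations. The use of Keras and scikit-optimize, the search-space ranges, the training schedule, and the `load_small_dataset` helper are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
# Hedged sketch: tune the FC head of a frozen, ImageNet-pre-trained backbone
# with GP-based Bayesian optimization (scikit-optimize). Search-space ranges,
# training schedule, and the data loader are illustrative assumptions only.
import tensorflow as tf
from skopt import gp_minimize
from skopt.space import Integer, Real
from skopt.utils import use_named_args

# Hypothetical loader for a small target dataset (e.g. CalTech-101 splits).
(x_train, y_train), (x_val, y_val), num_classes = load_small_dataset()

space = [
    Integer(1, 3, name="num_fc_layers"),   # how many FC layers to append
    Integer(64, 1024, name="units"),       # neurons per FC layer
    Real(0.0, 0.6, name="dropout"),        # dropout rate after each FC layer
]

@use_named_args(space)
def objective(num_fc_layers, units, dropout):
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                 # transfer: keep conv features fixed
    x = tf.keras.layers.Flatten()(base.output)
    for _ in range(num_fc_layers):
        x = tf.keras.layers.Dense(int(units), activation="relu")(x)
        x = tf.keras.layers.Dropout(dropout)(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=5, batch_size=32, verbose=0)
    return 1.0 - max(hist.history["val_accuracy"])   # minimize validation error

# Expected-improvement acquisition over a Gaussian-process surrogate.
result = gp_minimize(objective, space, n_calls=20, acq_func="EI", random_state=0)
print("best FC configuration:", result.x, "val. error:", result.fun)
```

Each call to `objective` trains one candidate FC head for a few epochs; the best configuration found by the optimizer would then be fine-tuned for longer on the target dataset.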


Metadata
Title
AutoFCL: automatically tuning fully connected layers for handling small dataset
Authors
S. H. Shabbeer Basha
Sravan Kumar Vinakota
Shiv Ram Dubey
Viswanath Pulabaigari
Snehasis Mukherjee
Publication date
04.01.2021
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 13/2021
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-05549-4
