
2017 | OriginalPaper | Chapter

Design and Optimization of the Model for Traffic Signs Classification Based on Convolutional Neural Networks

Authors: Jiarong Song, Zhong Yang, Tianyi Zhang, Jiaming Han

Published in: Computer Vision Systems

Publisher: Springer International Publishing

Abstract

Recently, convolutional neural networks (CNNs) have demonstrated state-of-the-art performance in computer vision tasks such as classification, recognition, and detection. In this paper, a traffic sign classification system based on CNNs is proposed. A convolutional network typically contains a large number of parameters, which require millions of training samples and a great deal of time to train. To address this problem, the strategy of transfer learning is adopted. The chosen model is further improved by replacing some fully connected layers with convolutional connections, because the weight sharing of convolutional layers reduces the number of parameters in the network. In addition, these convolutional kernels are decomposed into multi-layer stacks of smaller kernels to achieve better performance. Finally, the optimized network is compared with the unoptimized networks, and the experimental results demonstrate that the final optimized network achieves the best performance.
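
Two of the optimizations described above, replacing fully connected layers with convolutional connections and decomposing large kernels into stacks of smaller ones, are sketched below in a minimal, hypothetical PyTorch example. The layer widths, the 6x6 feature-map size, the 43-class output, and all module names are illustrative assumptions, not the authors' actual architecture.

# Minimal sketch (PyTorch), illustrative only: layer sizes, the 43-class output
# (typical of traffic-sign benchmarks), and module names are assumptions,
# not the architecture used in the paper.
import torch
import torch.nn as nn


class FullyConnectedHead(nn.Module):
    """Conventional classifier head: flatten the feature maps, then dense layers."""

    def __init__(self, in_channels=256, spatial=6, num_classes=43):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * spatial * spatial, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        return self.classifier(x)


class ConvolutionalHead(nn.Module):
    """The same role expressed with convolutions: a kernel the size of the feature
    map replaces the first dense layer and a 1x1 convolution replaces the second,
    so weights are shared across positions instead of tied to a flattened input."""

    def __init__(self, in_channels=256, spatial=6, num_classes=43):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(in_channels, 1024, kernel_size=spatial),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.classifier(x).flatten(1)  # shape (N, num_classes)


def decomposed_conv(in_channels, out_channels):
    """Replace one 5x5 convolution with two stacked 3x3 convolutions: the stack
    covers the same 5x5 receptive field with fewer weights (2 * 9 vs. 25 per
    channel pair) and adds an extra nonlinearity."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


if __name__ == "__main__":
    feats = torch.randn(4, 256, 6, 6)               # dummy feature maps from a pretrained backbone
    print(FullyConnectedHead()(feats).shape)        # torch.Size([4, 43])
    print(ConvolutionalHead()(feats).shape)         # torch.Size([4, 43])
    images = torch.randn(4, 64, 32, 32)
    print(decomposed_conv(64, 128)(images).shape)   # torch.Size([4, 128, 32, 32])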

Metadata
Title
Design and Optimization of the Model for Traffic Signs Classification Based on Convolutional Neural Networks
Authors
Jiarong Song
Zhong Yang
Tianyi Zhang
Jiaming Han
Copyright Year
2017
DOI
https://doi.org/10.1007/978-3-319-68345-4_35
