
2022 | Original Paper | Book Chapter

Classification of Objects Using Neuromorphic Camera and Convolutional Neural Networks

Authors: E. B. Gouveia, E. L. S. Gouveia, V. T. Costa, A. Nakagawa-Silva, A. B. Soares

Published in: XXVII Brazilian Congress on Biomedical Engineering

Publisher: Springer International Publishing


Abstract

Object classification is a well-explored problem in computer vision and has achieved excellent results over the last decade. The sophistication of biological systems has motivated the development of bioinspired technologies through neuromorphic engineering, which proposes robotic systems whose operation is inspired by physiological processes found in nature. Seeking to combine high-speed information processing with low-dimensional data and reduced computational cost, we pair a neuromorphic vision sensor (DVS128) with a convolutional neural network (CNN) to classify images of nine different objects. Our deep learning model achieved 75.31% accuracy when validated using the holdout method.
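The pipeline the abstract describes — turning DVS128 event streams into image-like inputs for a CNN and evaluating with a single train/test (holdout) split — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `(x, y, timestamp, polarity)` event layout, the count-based frame encoding, and the 20% test fraction are assumptions.

```python
import numpy as np


def events_to_frame(events, size=128):
    """Accumulate DVS events, given as (x, y, timestamp, polarity) tuples,
    into a single 2D event-count frame (a common, simple encoding that
    a conventional CNN can consume)."""
    frame = np.zeros((size, size), dtype=np.float32)
    for x, y, _t, _p in events:
        frame[y, x] += 1.0
    # Normalize to [0, 1] so frames are comparable across recordings.
    if frame.max() > 0:
        frame /= frame.max()
    return frame


def holdout_split(frames, labels, test_fraction=0.2, seed=0):
    """Single random train/test partition (the holdout method)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(frames))
    n_test = int(len(frames) * test_fraction)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return (frames[train_idx], labels[train_idx],
            frames[test_idx], labels[test_idx])
```

The resulting 128x128 frames would then be fed to any standard image-classification CNN; the holdout split here replaces k-fold cross-validation with one fixed partition, as in the paper's validation.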


DOI: https://doi.org/10.1007/978-3-030-70601-2_334
