2024 | OriginalPaper | Chapter

RMPE: Reducing Residual Membrane Potential Error for Enabling High-Accuracy and Ultra-low-latency Spiking Neural Networks

Authors: Yunhua Chen, Zhimin Xiong, Ren Feng, Pinghua Chen, Jinsheng Xiao

Published in: Neural Information Processing

Publisher: Springer Nature Singapore


Abstract

Spiking neural networks (SNNs) have attracted great attention due to their low power consumption and high computational efficiency on neuromorphic hardware. An effective way to obtain deep SNNs with competitive accuracy on large-scale datasets is ANN-SNN conversion. However, because of conversion error, a long time window is required to obtain an optimal mapping between the firing rates of the SNN and the activations of the source ANN; at ultra-low latency, the converted SNN usually suffers a severe loss of accuracy compared with the source ANN. In this paper, we first analyze the residual membrane potential error (RMPE) caused by the asynchronous transmission of spikes at ultra-low latency, and we derive an explicit expression relating the RMPE to the SNN parameters. We then propose a layer-by-layer calibration algorithm for these parameters to eliminate the RMPE. Finally, we propose a two-stage ANN-SNN conversion scheme that eliminates the quantization error, the truncation error, and the RMPE separately. We evaluate our method on the CIFAR datasets and ImageNet, and the experimental results show that it significantly reduces the accuracy loss at ultra-low latency. For \(T \le 64\), our method requires about half the latency of other methods with similar accuracy on ImageNet. The code is available at https://github.com/JominWink/SNN_Conversion_Phase.
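To make the source of this error concrete, the following minimal NumPy sketch (an illustration under assumed parameters, not the authors' released implementation, which is at the repository above) simulates a single integrate-and-fire neuron with soft reset over a small number of timesteps. The membrane potential left below threshold after the last step is charge that never contributed a spike, so the rate-coded output deviates from the activation it should approximate; at large T this residual is negligible, at ultra-low T it dominates.

import numpy as np

def if_neuron_rate(inputs, v_th=1.0, v_init=0.5):
    """Simulate one integrate-and-fire neuron with soft reset.

    Returns the spike-based output (spike_count * v_th / T) and the
    residual membrane potential left after the final timestep.
    """
    v = v_init * v_th          # assumed initial membrane potential
    spikes = 0
    for x in inputs:           # one weighted input per timestep
        v += x                 # integrate
        if v >= v_th:          # fire, then soft reset by subtraction
            spikes += 1
            v -= v_th
    T = len(inputs)
    return spikes * v_th / T, v

# Illustrative inputs for T = 8 timesteps (ultra-low latency).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.3, size=8)
rate, residual = if_neuron_rate(x)
# The firing rate should approximate the clipped average input,
# i.e. the corresponding ANN (ReLU) activation.
ann_act = np.clip(x.mean(), 0.0, 1.0)
print(f"ANN activation {ann_act:.3f}, SNN rate {rate:.3f}, residual {residual:.3f}")

With only 8 timesteps, a nonzero residual of up to one threshold's worth of charge is unavoidable for a single neuron; across many layers these residuals compound, which is the loss the paper's calibration and two-stage conversion scheme target.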


Metadata
Title
RMPE: Reducing Residual Membrane Potential Error for Enabling High-Accuracy and Ultra-low-latency Spiking Neural Networks
Authors
Yunhua Chen
Zhimin Xiong
Ren Feng
Pinghua Chen
Jinsheng Xiao
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-99-8067-3_7
