
2024 | Original Paper | Book Chapter

5. Digital Circuits and CIM Integrated NN Processor

Author: Jinshan Yue

Published in: High Energy Efficiency Neural Network Processor with Combined Digital and Computing-in-Memory Architecture

Publisher: Springer Nature Singapore


Abstract

This chapter first introduces the energy-efficiency advantages of CIM over NN processors built from purely digital circuits, then analyzes the deficiencies of existing CIM system chips and the system-level challenges of data reuse and sparsity optimization.
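To make the abstract's energy argument concrete, here is a minimal back-of-envelope sketch in Python. All per-operation energies are illustrative assumptions chosen for order of magnitude only, not values measured in this chapter: in a purely digital NN processor every multiply-accumulate (MAC) also pays for fetching its weight from SRAM, whereas a CIM macro computes directly on the stored weights and largely avoids that weight movement.

    # Back-of-envelope energy comparison: digital NN processor vs. CIM.
    # All energy numbers below are illustrative assumptions (pJ, roughly
    # 65 nm-class orders of magnitude), not measurements from this chapter.
    E_MAC_DIGITAL = 1.0   # pJ per 8-bit MAC in a digital array (assumed)
    E_SRAM_READ   = 5.0   # pJ per 8-bit weight fetch from SRAM (assumed)
    E_MAC_CIM     = 0.1   # pJ per 8-bit MAC inside a CIM macro (assumed)

    def layer_energy_pj(macs, weight_fetches, e_mac, e_fetch):
        """Energy of one layer: compute energy plus weight-movement energy."""
        return macs * e_mac + weight_fetches * e_fetch

    # A convolutional layer with 1e6 MACs. The naive digital design fetches
    # one weight per MAC; the CIM design computes in place (zero fetches).
    macs = 1_000_000
    digital = layer_energy_pj(macs, macs, E_MAC_DIGITAL, E_SRAM_READ)
    cim     = layer_energy_pj(macs, 0,    E_MAC_CIM,     E_SRAM_READ)
    print(f"digital: {digital/1e6:.1f} uJ, CIM: {cim/1e6:.2f} uJ, "
          f"advantage: {digital/cim:.0f}x")

Under these assumed numbers the CIM design comes out roughly 60x ahead. In practice a digital baseline narrows the gap by reusing each fetched weight across many MACs, and sparsity reduces the MAC count itself; those are exactly the system-level data-reuse and sparsity questions this chapter analyzes.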


Metadata
Title
Digital Circuits and CIM Integrated NN Processor
Author
Jinshan Yue
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-3477-1_5