research-article

Content addressable memory based binarized neural network accelerator using time-domain signal processing

Published: 24 June 2018

ABSTRACT

Binarized neural networks (BNNs) are one of the most promising solutions for low-cost convolutional neural network acceleration. Since a BNN is based on binarized bit-level operations, there are great opportunities to reduce power-hungry data transfers and complex arithmetic operations. In this paper, we propose a content addressable memory (CAM) based BNN accelerator. By using time-domain signal processing, the large convolution operations of a BNN can be effectively replaced by CAM search operations. In addition, thanks to the fully parallel search of the CAM, parallel convolution over non-overlapped filtering windows is enabled for high-throughput data processing. To verify the effectiveness of the proposed CAM-based BNN accelerator, the convolutional layers of the LeNet-5 model have been implemented in a 65nm CMOS technology. The implementation results show that the proposed BNN accelerator achieves 9.4% area savings and 38.5% energy savings. The parallel convolution operation of the proposed approach also yields a 2.4x improvement in processing time.
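The equivalence the abstract relies on can be illustrated in software. The following is a conceptual sketch only (the function names and data layout are illustrative, not from the paper): a binarized dot product reduces to XNOR plus popcount, which is an affine function of the Hamming distance between the weight word and the activation word — exactly the quantity a CAM evaluates in parallel across all stored rows.

```python
def bnn_dot(weights, activations):
    """Binarized dot product: bits in {0, 1} encode {-1, +1}.
    Equals N - 2 * HammingDistance(weights, activations)."""
    assert len(weights) == len(activations)
    n = len(weights)
    # XNOR + popcount: count bit positions where the two words agree.
    matches = sum(1 for w, x in zip(weights, activations) if w == x)
    return 2 * matches - n

def cam_search(stored_words, query):
    """CAM-style match: Hamming distance of each stored word to the query.
    Hardware evaluates every row simultaneously; here we loop."""
    return [sum(w != q for w, q in zip(word, query)) for word in stored_words]

# One binarized filter and one (non-overlapped) input window.
weights = [1, 0, 1, 1, 0, 0, 1, 0]
window  = [1, 0, 0, 1, 0, 1, 1, 0]

# The XNOR-popcount convolution and the CAM distance carry the same information.
dist = cam_search([weights], window)[0]
assert bnn_dot(weights, window) == len(weights) - 2 * dist
```

Because the distance for every stored filter is produced in one search cycle, non-overlapped windows can be matched against all filters concurrently, which is the source of the throughput gain claimed above.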


Published in

DAC '18: Proceedings of the 55th Annual Design Automation Conference
June 2018
1089 pages
ISBN: 9781450357005
DOI: 10.1145/3195970

    Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 1,770 of 5,499 submissions, 32%
