Cambricon: an instruction set architecture for neural networks


Abstract

Neural Networks (NN) are a family of models for a broad range of emerging machine learning and pattern recognition applications. NN techniques are conventionally executed on general-purpose processors (such as CPUs and GPGPUs), which are usually not energy-efficient since they invest excessive hardware resources to flexibly support various workloads. Consequently, application-specific hardware accelerators for neural networks have been proposed recently to improve energy efficiency. However, such accelerators were designed for a small set of NN techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) directly corresponding to high-level functional blocks of an NN (such as layers), or even an NN as a whole. Although straightforward and easy to implement for a limited set of similar NN techniques, the lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different NN techniques with sufficient flexibility and efficiency.

In this paper, we propose a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. Our evaluation over a total of ten representative yet distinct NN techniques demonstrates that Cambricon exhibits strong descriptive capacity over a broad range of NN techniques, and provides higher code density than general-purpose ISAs such as x86, MIPS, and GPGPU. Compared to the state-of-the-art NN accelerator design DaDianNao [5] (which can only accommodate 3 types of NN techniques), our Cambricon-based accelerator prototype implemented in TSMC 65nm technology incurs only negligible latency/power/area overheads, with a versatile coverage of 10 different NN benchmarks.
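To make the flavor of such an ISA concrete, below is a minimal sketch (ours, not the paper's code) contrasting a fully connected sigmoid layer written as a few coarse-grained matrix/vector operations, in the spirit of Cambricon instructions such as matrix-multiply-vector and vector-add, against the scalar loop nest a general-purpose load-store ISA would execute. The NumPy modeling and the function names are illustrative assumptions standing in for hardware instructions, not the actual ISA semantics.

```python
import numpy as np

def mlp_layer_matrix_isa(x, W, b):
    """Cambricon-style view: a handful of coarse-grained instructions.
    (Instruction names in comments are paraphrased, not verbatim encodings.)"""
    t = W @ x                         # one matrix-multiply-vector instruction
    t = t + b                         # one vector-add-vector instruction
    return 1.0 / (1.0 + np.exp(-t))   # a short vector sequence for sigmoid

def mlp_layer_scalar_isa(x, W, b):
    """General-purpose scalar-ISA view: the loop nest expands into many
    dynamic instructions -- one multiply and one add per weight."""
    y = np.empty(W.shape[0])
    for i in range(W.shape[0]):
        acc = b[i]
        for j in range(W.shape[1]):
            acc += W[i, j] * x[j]     # scalar MUL + ADD per weight
        y[i] = 1.0 / (1.0 + np.exp(-acc))
    return y

# Both views compute the same layer; only the instruction granularity differs.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
W = rng.standard_normal((32, 64))
b = rng.standard_normal(32)
assert np.allclose(mlp_layer_matrix_isa(x, W, b), mlp_layer_scalar_isa(x, W, b))
```

The coarse-grained version encodes the layer in a fixed handful of instructions regardless of layer size, which is the intuition behind the code-density comparison against x86, MIPS, and GPGPU ISAs.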

References

  1. Srimat Chakradhar, Murugan Sankaradas, Venkata Jakkula, and Srihari Cadambi. A Dynamically Configurable Coprocessor for Convolutional Neural Networks. In Proceedings of the 37th Annual International Symposium on Computer Architecture, 2010.
  2. Yun-Fan Chang, P. Lin, Shao-Hua Cheng, Kai-Hsuan Chan, Yi-Chong Zeng, Chia-Wei Liao, Wen-Tsung Chang, Yu-Chiang Wang, and Yu Tsao. Robust anchorperson detection based on audio streams using a hybrid I-vector and DNN system. In Proceedings of the 2014 Annual Summit and Conference of the Asia-Pacific Signal and Information Processing Association, 2014.
  3. Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. DianNao: A Small-footprint High-throughput Accelerator for Ubiquitous Machine-learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, 2014.
  4. Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. A High-Throughput Neural Network Accelerator. IEEE Micro, 2015.
  5. Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and Olivier Temam. DaDianNao: A Machine-Learning Supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, 2014.
  6. Ping Chi, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory. In Proceedings of the 43rd International Symposium on Computer Architecture, 2016.
  7. A. Coates, B. Huval, T. Wang, D. J. Wu, and A. Y. Ng. Deep learning with COTS HPC systems. In Proceedings of the 30th International Conference on Machine Learning, 2013.
  8. G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
  9. V. Eijkhout. Introduction to High Performance Scientific Computing. lulu.com, 2011.
  10. H. Esmaeilzadeh, P. Saeedi, B. N. Araabi, C. Lucas, and Sied Mehdi Fakhraie. Neural network stream processing core (NnSP) for embedded systems. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, 2006.
  11. Hadi Esmaeilzadeh, Adrian Sampson, Luis Ceze, and Doug Burger. Neural Acceleration for General-Purpose Approximate Programs. In Proceedings of the 2012 IEEE/ACM International Symposium on Microarchitecture, 2012.
  12. C. Farabet, B. Martini, B. Corda, P. Akselrod, E. Culurciello, and Y. LeCun. NeuFlow: A runtime reconfigurable dataflow processor for vision. In Proceedings of the 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2011.
  13. C. Farabet, C. Poulet, J. Y. Han, and Y. LeCun. CNP: An FPGA-based processor for Convolutional Networks. In Proceedings of the 2009 International Conference on Field Programmable Logic and Applications, 2009.
  14. V. Gokhale, Jonghoon Jin, A. Dundar, B. Martini, and E. Culurciello. A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014.
  15. A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM networks. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, 2005.
  16. Atif Hashmi, Andrew Nere, James Jamal Thomas, and Mikko Lipasti. A Case for Neuromorphic ISAs. In Proceedings of the 16th International Conference on Architectural Support for Programming Languages and Operating Systems, 2011.
  17. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning Deep Structured Semantic Models for Web Search Using Clickthrough Data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, 2013.
  18. Intel. AVX-512. https://software.intel.com/en-us/blogs/2013/avx-512-instructions.
  19. Intel. MKL. https://software.intel.com/en-us/intel-mkl.
  20. Fernando J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 1987.
  21. Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning. 2013.
  22. K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proceedings of the 12th IEEE International Conference on Computer Vision, 2009.
  23. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. arXiv:1502.01852, 2015.
  24. V. Kantabutra. On hardware for computing exponential and trigonometric functions. IEEE Transactions on Computers, 1996.
  25. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, 2012.
  26. Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. In Proceedings of the 24th International Conference on Machine Learning, 2007.
  27. Q. V. Le. Building high-level features using large scale unsupervised learning. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
  28. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
  29. Daofu Liu, Tianshi Chen, Shaoli Liu, Jinhong Zhou, Shengyuan Zhou, Olivier Temam, Xiaobing Feng, Xuehai Zhou, and Yunji Chen. PuDianNao: A Polyvalent Machine Learning Accelerator. In Proceedings of the 20th International Conference on Architectural Support for Programming Languages and Operating Systems, 2015.
  30. A. A. Maashri, M. DeBole, M. Cotter, N. Chandramoorthy, Yang Xiao, V. Narayanan, and C. Chakrabarti. Accelerating neuromorphic vision algorithms for recognition. In Proceedings of the 49th ACM/EDAC/IEEE Design Automation Conference, 2012.
  31. G. Marsaglia and W. W. Tsang. The ziggurat method for generating random variables. Journal of Statistical Software, 2000.
  32. Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 2014.
  33. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 2015.
  34. M. A. Motter. Control of the NASA Langley 16-foot transonic tunnel with the self-organizing map. In Proceedings of the 1999 American Control Conference, 1999.
  35. NVIDIA. cuBLAS. https://developer.nvidia.com/cublas.
  36. C. S. Oliveira and E. Del Hernandez. Forms of adapting patterns to Hopfield neural networks with larger number of nodes and higher storage capacity. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, 2004.
  37. David A. Patterson and Carlo H. Sequin. RISC I: A Reduced Instruction Set VLSI Computer. In Proceedings of the 8th Annual Symposium on Computer Architecture, 1981.
  38. M. Peemen, A. A. A. Setio, B. Mesman, and H. Corporaal. Memory-centric accelerator design for Convolutional Neural Networks. In Proceedings of the 31st IEEE International Conference on Computer Design, 2013.
  39. R. Salakhutdinov and G. Hinton. An Efficient Learning Procedure for Deep Boltzmann Machines. Neural Computation, 2012.
  40. M. Sankaradas, V. Jakkula, S. Cadambi, S. Chakradhar, I. Durdanovic, E. Cosatto, and H. P. Graf. A Massively Parallel Coprocessor for Convolutional Neural Networks. In Proceedings of the 20th IEEE International Conference on Application-specific Systems, Architectures and Processors, 2009.
  41. R. Sarikaya, G. E. Hinton, and A. Deoras. Application of Deep Belief Networks for Natural Language Understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2014.
  42. P. Sermanet and Y. LeCun. Traffic sign recognition with multi-scale Convolutional Networks. In Proceedings of the 2011 International Joint Conference on Neural Networks, 2011.
  43. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. arXiv:1409.4842, 2014.
  44. O. Temam. A defect-tolerant accelerator for emerging high-performance applications. In Proceedings of the 39th Annual International Symposium on Computer Architecture, 2012.
  45. V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
  46. Yu Wang, Tianqi Tang, Lixue Xia, Boxun Li, Peng Gu, Huazhong Yang, Hai Li, and Yuan Xie. Energy Efficient RRAM Spiking Neural Network for Real Time Classification. In Proceedings of the 25th Edition of the Great Lakes Symposium on VLSI, 2015.
  47. Cong Xu, Dimin Niu, Naveen Muralimanohar, Rajeev Balasubramonian, Tao Zhang, Shimeng Yu, and Yuan Xie. Overcoming the Challenges of Cross-Point Resistive Memory Architectures. In Proceedings of the 21st International Symposium on High Performance Computer Architecture, 2015.
  48. Tao Xu, Jieping Zhou, Jianhua Gong, Wenyi Sun, Liqun Fang, and Yanli Li. Improved SOM based data mining of seasonal flu in mainland China. In Proceedings of the 2012 Eighth International Conference on Natural Computation, 2012.
  49. Xian-Hua Zeng, Si-Wei Luo, and Jiao Wang. Auto-Associative Neural Network System for Recognition. In Proceedings of the 2007 International Conference on Machine Learning and Cybernetics, 2007.
  50. Zhengyou Zhang, M. Lyons, M. Schuster, and S. Akamatsu. Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron. In Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998.
  51. Jishen Zhao, Guangyu Sun, Gabriel H. Loh, and Yuan Xie. Optimizing GPU energy efficiency with 3D die-stacking graphics memory and reconfigurable memory interface. ACM Transactions on Architecture and Code Optimization, 2013.


• Published in

  ACM SIGARCH Computer Architecture News, Volume 44, Issue 3 (ISCA'16), June 2016, 730 pages. ISSN: 0163-5964. DOI: 10.1145/3007787.

  Also in ISCA '16: Proceedings of the 43rd International Symposium on Computer Architecture, June 2016, 756 pages. ISBN: 9781467389471.

      Copyright © 2016 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 18 June 2016


      Qualifiers

      • research-article
