Model inversion attacks against collaborative inference

Research article · Artifacts Evaluated & Functional
DOI: 10.1145/3359789.3359824
Published: 9 December 2019

ABSTRACT

The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been demonstrated, in which an adversary can steal model owners' private data. Meanwhile, countermeasures have been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and ignored privacy during inference.

In this paper, we devise a new set of attacks to compromise inference data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed to different participants, one malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants' data or computations, or to prediction APIs for querying the system. We evaluate our attacks under different settings, models and datasets to show their effectiveness and generalization. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats. This provides insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
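The core idea of the attack can be illustrated with a minimal sketch: in collaborative inference, the edge device runs the first layers of the model and ships intermediate features to the cloud; a malicious cloud participant who knows the edge layers (the white-box setting) can recover the private input by gradient descent, optimizing a candidate input until its features match what it observed. The toy single linear layer, its weights, and all dimensions below are illustrative assumptions, not the paper's actual models.

```python
# Toy "edge" layer of a split model: z = W x.
# The cloud participant observes z and recovers the private input x
# by minimizing || W x' - z ||^2 over x' (white-box inversion).
W = [[0.9, 0.2], [0.1, 1.1]]   # hypothetical edge-layer weights (known to attacker)
x_true = [0.7, -0.3]           # private input, held by the edge device

def forward(W, x):
    """Compute the edge layer's output W x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

z = forward(W, x_true)         # intermediate features sent to the cloud

# Attacker side: plain gradient descent on the feature-matching loss.
x = [0.0, 0.0]                 # initial guess
lr = 0.1
for _ in range(2000):
    r = [zh - zi for zh, zi in zip(forward(W, x), z)]   # residual W x - z
    # gradient of ||W x - z||^2 w.r.t. x is 2 W^T r
    grad = [2 * sum(W[i][j] * r[i] for i in range(2)) for j in range(2)]
    x = [xj - lr * g for xj, g in zip(x, grad)]

print([round(v, 3) for v in x])  # → [0.7, -0.3], matching x_true
```

For deep, non-invertible edge partitions the same optimization is run through the full known prefix of the network, typically with an image prior (e.g. total variation) added to the loss to regularize the recovered input.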


Published in

ACSAC '19: Proceedings of the 35th Annual Computer Security Applications Conference
December 2019, 821 pages
ISBN: 9781450376280
DOI: 10.1145/3359789
Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

ACSAC '19 paper acceptance rate: 60 of 266 submissions, 23%. Overall acceptance rate: 104 of 497 submissions, 21%.
