DOI: 10.1145/3319535.3363201
research-article
Public Access

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

Published: 06 November 2019

ABSTRACT

In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as input and predicts the data sample to be a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the training process of the target classifier. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees on the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and that this classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker's classifier into making a random guess at member or non-member. We find such a carefully crafted noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector. In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieve better privacy-utility tradeoffs than existing defenses. Our work is the first one to show that adversarial examples can be used as defensive mechanisms to defend against membership inference attacks.
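
The two-phase mechanism can be made concrete with a short sketch. The snippet below is a minimal, illustrative NumPy implementation, not the paper's actual algorithm: the hand-crafted surrogate attack classifier, the random search used in Phase I (the paper designs a gradient-based method against the attacker's classifier), the L1 measure of confidence-score distortion, and all parameter values are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of MemGuard's two phases (NumPy only).
# Everything below is an assumption for illustration, not the paper's method.
import numpy as np

def surrogate_attack_score(conf):
    """Hypothetical defender-side surrogate for the attacker's membership
    classifier: higher max-confidence looks more 'member-like'.
    Returns a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-10.0 * (conf.max() - 0.8)))

def phase_one_find_noise(conf, steps=2000, step_size=0.05, seed=0):
    """Phase I (sketch): search for a noise vector r such that conf + r
    (i) stays a valid probability vector, (ii) keeps the predicted label,
    and (iii) drives the surrogate attack score toward 0.5 (a random guess).
    A simple random search is used here purely for illustration."""
    rng = np.random.default_rng(seed)
    label = conf.argmax()
    best = conf.copy()
    best_gap = abs(surrogate_attack_score(conf) - 0.5)
    for _ in range(steps):
        cand = best + step_size * rng.normal(size=conf.shape)
        cand = np.clip(cand, 1e-6, None)
        cand = cand / cand.sum()          # stay on the probability simplex
        if cand.argmax() != label:        # utility constraint: label unchanged
            continue
        gap = abs(surrogate_attack_score(cand) - 0.5)
        if gap < best_gap:
            best, best_gap = cand, gap
    return best - conf                    # the crafted noise vector

def phase_two_randomize(conf, noise, budget):
    """Phase II (sketch): release conf + noise with probability p, where p is
    chosen so the expected confidence-score distortion (here, assumed to be
    L1 distance) stays within the utility-loss budget."""
    distortion = np.abs(noise).sum()
    p = 1.0 if distortion == 0 else min(1.0, budget / distortion)
    return conf + noise if np.random.random() < p else conf

if __name__ == "__main__":
    conf = np.array([0.92, 0.05, 0.03])   # a confident, member-like prediction
    noise = phase_one_find_noise(conf)
    released = phase_two_randomize(conf, noise, budget=0.3)
    print("original:", conf)
    print("released:", released)
    print("attack score on noisy vector:",
          round(surrogate_attack_score(conf + noise), 3))
```

The two constraints enforced in the Phase I sketch (the output remains a probability vector and the predicted label never changes) are one concrete reading of the utility-loss constraints mentioned above; Phase II then trades off how often the noise is applied against the allowed expected distortion of the confidence scores.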


Supplemental Material

p259-jia.webm (webm, 81.1 MB)


  • Published in

    CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
    November 2019
    2755 pages
    ISBN: 9781450367479
    DOI: 10.1145/3319535

    Copyright © 2019 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States




    Acceptance Rates

    CCS '19 Paper Acceptance Rate: 149 of 934 submissions, 16%
    Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%

