
2021 | OriginalPaper | Chapter

Comparing Local and Central Differential Privacy Using Membership Inference Attacks

Authors : Daniel Bernau, Jonas Robl, Philip W. Grassal, Steffen Schneider, Florian Kerschbaum

Published in: Data and Applications Security and Privacy XXXV

Publisher: Springer International Publishing

Abstract

Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is offered by anonymizing either the training data or the training function with differential privacy. However, data scientists can choose between local and central differential privacy and need to select meaningful privacy parameters \(\epsilon \). Comparing local and central differential privacy on the basis of these privacy parameters can furthermore lead data scientists to incorrect conclusions, since the parameters reflect different types of mechanisms.
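
For orientation, the two notions can be stated with the standard definitions from the differential privacy literature; the formulation below is the textbook one and is included only for context, as the chapter's own notation may differ. A randomized mechanism \(\mathcal{M}\) satisfies \(\epsilon \)-differential privacy in the central model if for all datasets \(D, D'\) differing in a single record and all sets of outputs \(S\)
\[\Pr[\mathcal{M}(D) \in S] \le e^{\epsilon } \cdot \Pr[\mathcal{M}(D') \in S],\]
and \(\epsilon \)-local differential privacy if the same inequality holds for every pair of individual inputs \(v, v'\),
\[\Pr[\mathcal{M}(v) \in S] \le e^{\epsilon } \cdot \Pr[\mathcal{M}(v') \in S].\]
In the central model the guarantee ranges over whole datasets that differ in one record, whereas in the local model every pair of individual values must be indistinguishable; the same numerical \(\epsilon \) therefore implies much stronger per-record perturbation locally, so the parameter values of the two models are not directly comparable.
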
Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bounds. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, a small \(\epsilon \) in central differential privacy and a large \(\epsilon \) in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.
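
To make the operational difference between the two models concrete, here is a minimal, purely illustrative Python sketch (not the mechanisms evaluated in the chapter): randomized response as a classic \(\epsilon \)-local mechanism and the Laplace mechanism for a counting query as an \(\epsilon \)-central mechanism.

import numpy as np

rng = np.random.default_rng(seed=0)

def randomized_response(bit, epsilon):
    # Local model: each individual perturbs their own binary value before reporting;
    # the true value is kept with probability e^eps / (e^eps + 1).
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def debiased_local_count(bits, epsilon):
    # Aggregate the perturbed reports and correct for the known flip probability.
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    reported_mean = np.mean([randomized_response(b, epsilon) for b in bits])
    return len(bits) * (reported_mean - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)

def central_laplace_count(bits, epsilon):
    # Central model: a trusted curator sees the raw data and adds Laplace noise
    # calibrated to the sensitivity of the counting query (which is 1).
    return sum(bits) + rng.laplace(scale=1.0 / epsilon)

bits = [1, 0, 1, 1, 0, 1, 0, 1]                    # toy data, true count = 5
print(debiased_local_count(bits, epsilon=1.0))     # per-record noise, then debiased
print(central_laplace_count(bits, epsilon=1.0))    # noise only on the aggregate

With the same \(\epsilon \), the locally perturbed estimate carries far more noise than the centrally perturbed one, which mirrors why equal parameter values do not translate into equal utility or equal inference risk.
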

Metadata
Title
Comparing Local and Central Differential Privacy Using Membership Inference Attacks
Authors
Daniel Bernau
Jonas Robl
Philip W. Grassal
Steffen Schneider
Florian Kerschbaum
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-81242-3_2
