Published in: Automatic Control and Computer Sciences 8/2023

01-12-2023

Confidentiality of Machine Learning Models

Authors: M. A. Poltavtseva, E. A. Rudnitskaya

Abstract

This article addresses the confidentiality of models in machine learning systems. The study analyzes attacks aimed at violating model confidentiality and the existing methods of protection against them, and on this basis formulates the protection task as the detection of anomalies in the input data. A method is proposed for detecting such anomalies from statistical properties of the incoming queries, taking into account that an intruder may resume an interrupted attack under a different account. The results obtained can serve as a basis for designing components of machine learning security systems.
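The abstract describes the defense only at a high level; the article's own algorithm is not reproduced on this page. The following is a minimal sketch of the general idea under stated assumptions: each query is reduced to a numeric feature vector, per-feature z-scores against known-benign statistics flag anomalous accounts, and a new account whose average query profile closely matches an already-blocked one is treated as a resumed attack. All names (QueryAnomalyDetector), thresholds, and feature choices here are hypothetical illustrations, not the authors' method.

```python
import numpy as np
from collections import defaultdict


class QueryAnomalyDetector:
    """Sketch: statistical anomaly detection over model queries,
    with a cross-account check for attack resumption.

    fit_benign() must be called on known-benign query features
    before observe() is used. Thresholds are illustrative.
    """

    def __init__(self, z_threshold=3.0, resume_distance=0.5):
        self.z_threshold = z_threshold          # per-feature z-score cutoff
        self.resume_distance = resume_distance  # max distance to a blocked profile
        self.benign_mean = None
        self.benign_std = None
        self.blocked_profiles = []              # mean feature vectors of blocked accounts
        self.history = defaultdict(list)        # account_id -> list of feature vectors

    def fit_benign(self, X):
        """Estimate per-feature mean/std from known-benign queries."""
        X = np.asarray(X, dtype=float)
        self.benign_mean = X.mean(axis=0)
        self.benign_std = X.std(axis=0) + 1e-9  # avoid division by zero

    def observe(self, account_id, x):
        """Score one query; return True if the account should be blocked."""
        x = np.asarray(x, dtype=float)
        self.history[account_id].append(x)

        # Statistical anomaly: any feature far outside benign statistics.
        z = np.abs((x - self.benign_mean) / self.benign_std)
        if z.max() > self.z_threshold:
            self.blocked_profiles.append(self._profile(account_id))
            return True

        # Attack resumption: a fresh account whose average query profile
        # is close to that of an already-blocked account is also flagged.
        profile = self._profile(account_id)
        for blocked in self.blocked_profiles:
            if np.linalg.norm(profile - blocked) < self.resume_distance:
                return True
        return False

    def _profile(self, account_id):
        """Mean feature vector of all queries seen from this account."""
        return np.mean(self.history[account_id], axis=0)
```

As a usage sketch, the detector would be fit once on a sample of benign query features and then invoked on every incoming (account_id, features) pair, blocking the account when observe() returns True.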
Metadata
Title
Confidentiality of Machine Learning Models
Authors
M. A. Poltavtseva
E. A. Rudnitskaya
Publication date
01-12-2023
Publisher
Pleiades Publishing
Published in
Automatic Control and Computer Sciences / Issue 8/2023
Print ISSN: 0146-4116
Electronic ISSN: 1558-108X
DOI
https://doi.org/10.3103/S0146411623080242
