Published in: Automatic Control and Computer Sciences 8/2022

01-12-2022

Adversarial Machine Learning Protection Using the Example of Evasion Attacks on Medical Images

Authors: E. A. Rudnitskaya, M. A. Poltavtseva

Abstract

This work considers evasion attacks on machine learning (ML) systems that analyze medical images. The attacks are systematized and their practical feasibility is assessed. Existing protection techniques against ML evasion attacks are presented and analyzed. The distinctive features of medical images are described, and the problem of protecting such images against evasion attacks using a combination of protective methods is formulated. The authors identify, implement, and test the most relevant protection methods on a practical example: the analysis of images of patients with COVID-19.
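As an illustration of the kind of evasion attack studied in this work, the following sketch applies an FGSM-style perturbation (the sign of the input gradient, scaled by a budget eps) to a toy logistic "classifier". The model, weights, and data here are hypothetical stand-ins for a real medical-image network, not the authors' setup.

```python
import numpy as np

# Minimal FGSM-style evasion sketch on a toy logistic-regression classifier.
# Everything below (weights, image, labels) is illustrative, not from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a flattened 8x8 grayscale "image".
w = rng.normal(size=64)
b = 0.0

def predict(x):
    # Probability of the positive class (e.g., "pathology present").
    return sigmoid(x @ w + b)

def input_gradient(x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input pixels.
    return (predict(x) - y) * w

x = rng.uniform(0.0, 1.0, size=64)   # clean input in [0, 1]
y = 1.0                              # true label

eps = 0.1                            # L-infinity perturbation budget
# FGSM step: move each pixel by eps in the direction that increases the loss,
# then clip back to the valid pixel range.
x_adv = np.clip(x + eps * np.sign(input_gradient(x, y)), 0.0, 1.0)

print(predict(x), predict(x_adv))    # the adversarial score should drop
```

The same one-step, gradient-sign construction underlies the stronger iterative attacks (and the adversarial-training defenses) evaluated in the paper; there it is applied to a deep network's loss gradient rather than a linear model's.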
Metadata
Title
Adversarial Machine Learning Protection Using the Example of Evasion Attacks on Medical Images
Authors
E. A. Rudnitskaya
M. A. Poltavtseva
Publication date
01-12-2022
Publisher
Pleiades Publishing
Published in
Automatic Control and Computer Sciences / Issue 8/2022
Print ISSN: 0146-4116
Electronic ISSN: 1558-108X
DOI
https://doi.org/10.3103/S0146411622080211
