Published in: Automatic Control and Computer Sciences 8/2022

01.12.2022

Adversarial Machine Learning Protection Using the Example of Evasion Attacks on Medical Images

Authors: E. A. Rudnitskaya, M. A. Poltavtseva


Abstract

This work considers evasion attacks on machine learning (ML) systems that analyze medical images. The attacks are systematized and their practical feasibility is assessed. Existing techniques for protecting against ML evasion attacks are presented and analyzed. The features of medical images are described, and the problem of protecting these images against evasion attacks using a combination of protective methods is formulated. The authors identify, implement, and test the most relevant protection methods on a practical example: the analysis of chest X-ray images of patients with COVID-19.
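As a concrete illustration of the attack class studied here and of one protection family, the sketch below is a minimal TensorFlow example, not taken from the paper itself: the model, loss function, and perturbation budget `epsilon` are illustrative assumptions. `fgsm_examples` implements the fast gradient sign method (FGSM), a canonical evasion attack, and `adversarial_train_step` shows adversarial training, which refits the classifier on a mix of clean and perturbed images.

```python
import tensorflow as tf

# Assumes the model outputs class probabilities (softmax)
# and labels are integer class ids.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm_examples(model, images, labels, epsilon=0.01):
    """Craft FGSM evasion examples: one signed-gradient step per pixel."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(labels, model(images))
    # Move each pixel a fixed step in the direction that increases the loss.
    signed_grad = tf.sign(tape.gradient(loss, images))
    return tf.clip_by_value(images + epsilon * signed_grad, 0.0, 1.0)

def adversarial_train_step(model, optimizer, images, labels, epsilon=0.01):
    """One adversarial-training step: fit on a clean + adversarial batch."""
    adv = fgsm_examples(model, images, labels, epsilon)
    batch = tf.concat([images, adv], axis=0)
    targets = tf.concat([labels, labels], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(targets, model(batch))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Perturbations bounded by a small epsilon leave a chest X-ray visually unchanged to a radiologist yet can flip the classifier's prediction; adversarial training counters this by making such perturbed images part of the training distribution.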
Metadata
Title
Adversarial Machine Learning Protection Using the Example of Evasion Attacks on Medical Images
Authors
E. A. Rudnitskaya
M. A. Poltavtseva
Publication date
01.12.2022
Publisher
Pleiades Publishing
Published in
Automatic Control and Computer Sciences / Issue 8/2022
Print ISSN: 0146-4116
Electronic ISSN: 1558-108X
DOI
https://doi.org/10.3103/S0146411622080211
