Published in: Wireless Personal Communications 4/2021

13.02.2021

Security Threats and Defensive Approaches in Machine Learning System Under Big Data Environment

Authors: Chen Hongsong, Zhang Yongpeng, Cao Yongrui, Bharat Bhargava

Abstract

In the big data environment, machine learning has developed rapidly and is widely used, with successful applications in computer vision, natural language processing, computer security, and other fields. However, machine learning under big data also faces many security problems. For example, attackers can add "poisoned" samples to the data source; the big data processing system then ingests these samples and uses machine learning methods to train a model, which directly leads to wrong prediction results. In this paper, a machine learning system and a machine learning pipeline are proposed. The security problems that may occur at each stage of the machine learning system under the big data processing pipeline are analyzed comprehensively. We use four different attack methods and compare their experimental results. The security problems are classified comprehensively, and the defense approaches to each security problem are analyzed. Drone-deploy MapEngine is selected as a case study, and we analyze the security threats and defense approaches in the Drone-Cloud machine learning application environment. Finally, future development directions for security issues and challenges in machine learning systems are proposed.
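The poisoning scenario sketched in the abstract can be made concrete. The following is a minimal, hypothetical illustration (not the paper's experimental code) of a label-flipping poisoning attack: an attacker flips the labels of a fraction of training samples before the pipeline trains on them, degrading the model's predictions. The classifier choice, dataset, and poisoning rates here are illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's code): label-flipping
# data poisoning against a classifier trained downstream of a data source.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Clean data source feeding the (simulated) training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, rate, rng):
    """Attacker flips the labels of a fraction `rate` of training samples."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    return y_poisoned

# Train on increasingly poisoned data and observe test accuracy degrade.
for rate in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, rate, rng))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

A typical countermeasure at the data-collection stage is to sanitize the training set before it reaches the learner, for example by filtering samples whose labels disagree with those of their nearest neighbors.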


Metadata
Title
Security Threats and Defensive Approaches in Machine Learning System Under Big Data Environment
Authors
Chen Hongsong
Zhang Yongpeng
Cao Yongrui
Bharat Bhargava
Publication date
13.02.2021
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 4/2021
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-021-08284-8
