Published in: Wireless Personal Communications 4/2021

13-02-2021

Security Threats and Defensive Approaches in Machine Learning System Under Big Data Environment

Authors: Chen Hongsong, Zhang Yongpeng, Cao Yongrui, Bharat Bhargava


Abstract

In the big data environment, machine learning has developed rapidly and been widely adopted, with successful applications in computer vision, natural language processing, computer security, and other fields. However, machine learning in the big data environment faces many security problems. For example, an attacker can add "poisoned" samples to the data source; the big data processing system will ingest these samples and use them to train the machine learning model, which directly leads to wrong prediction results. In this paper, a machine learning system and a machine learning pipeline are proposed. The security problems that may occur at each stage of the machine learning system under the big data processing pipeline are analyzed comprehensively. We use four different attack methods and compare their experimental results. The security problems are classified comprehensively, and the defensive approaches to each problem are analyzed. DroneDeploy MapEngine is selected as a case study, and we analyze the security threats and defensive approaches in the Drone-Cloud machine learning application environment. Finally, future research directions for security issues and challenges in machine learning systems are proposed.
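To make the poisoning scenario in the abstract concrete, the following is a minimal sketch of a label-flipping poisoning attack on a training pipeline. The dataset, model, and 30% poisoning rate are illustrative assumptions chosen for this sketch, not the experimental setup used in the paper:

```python
# Sketch: an attacker flips the labels of a fraction of training samples
# before the pipeline trains its model (hypothetical setup for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(y, rate, rng):
    """Flip the labels of a random `rate` fraction of the training set."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary label flip
    return y_poisoned

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_tr, poison_labels(y_tr, rate=0.3, rng=rng))

print("clean accuracy:   ", accuracy_score(y_te, clean_model.predict(X_te)))
print("poisoned accuracy:", accuracy_score(y_te, poisoned_model.predict(X_te)))
```

Running the sketch typically shows the poisoned model's test accuracy falling well below the clean baseline, which illustrates why defenses at the data-collection stage of the pipeline matter.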


Metadata
Title
Security Threats and Defensive Approaches in Machine Learning System Under Big Data Environment
Authors
Chen Hongsong
Zhang Yongpeng
Cao Yongrui
Bharat Bhargava
Publication date
13-02-2021
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 4/2021
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-021-08284-8
