2021 | Book

Adversary-Aware Learning Techniques and Trends in Cybersecurity


About this Book

This book is intended to give researchers and practitioners in the cross-cutting fields of artificial intelligence/machine learning (AI/ML) and cybersecurity up-to-date, in-depth knowledge of recent techniques for mitigating the vulnerabilities of AI/ML systems to attacks by malicious adversaries. The ten chapters in this book, written by eminent researchers in AI/ML and cybersecurity, span diverse yet inter-related topics, including game-playing AI and game theory as defenses against attacks on AI/ML systems; methods for effectively addressing the vulnerabilities of AI/ML systems operating in large, distributed environments, such as the Internet of Things (IoT), with diverse data modalities; and techniques that enable AI/ML systems to interact intelligently with humans who may be malicious adversaries or benign teammates. Readers of this book will be equipped with definitive information on recent developments suitable for countering adversarial threats in AI/ML systems, toward making these systems operate in a safe, reliable, and seamless manner.

Table of Contents

Frontmatter

Game-Playing AI and Game Theory-Based Techniques for Cyber Defenses

Frontmatter
Rethinking Intelligent Behavior as Competitive Games for Handling Adversarial Challenges to Machine Learning
Abstract
Adversarial machine learning necessitates revisiting conventional machine learning paradigms and how they embody intelligent behavior. Effectively countering adversarial challenges adds a new dimension to intelligent behavior, above and beyond that exemplified by widely used machine learning techniques such as supervised learning. For a learner to be resistant to adversarial attack, it must have two capabilities: the primary capability of performing its normal task, and a secondary capability of resisting attacks from adversaries. A possible means of achieving the secondary capability is to develop an understanding of different attack-related attributes, such as who generates attacks and why, how and when attacks are generated, and what previously unseen attacks might look like. We trace the idea that this involves an additional dimension of intelligent behavior down to the basic structure by which the problem may be solved. We posit that modeling this scenario as a competitive, multi-player game comprising strategic interactions between players with contradictory, competing objectives provides a systematic, structured means of understanding and analyzing the problem. Exploring further in this direction, we discuss relevant features of different multi-player gaming environments that are being investigated as research platforms for addressing open problems and challenges in developing artificial intelligence algorithms capable of superhuman intelligence.
Joseph B. Collins, Prithviraj Dasgupta
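
To make the competitive-game framing concrete, here is a minimal sketch (not from the chapter; all payoff values are hypothetical) of a learner and an adversary as players in a two-player zero-sum matrix game, with the learner choosing a maximin defense:

```python
# Minimal sketch (not from the chapter): the learner/adversary interaction
# framed as a two-player zero-sum matrix game. Payoff values are hypothetical.
import numpy as np

# Rows: learner strategies (e.g., defense configurations).
# Columns: adversary strategies (e.g., attack types).
# Entries: the learner's payoff (higher is better for the learner).
payoff = np.array([
    [0.9, 0.2, 0.4],   # defense A vs. attacks 1-3
    [0.5, 0.7, 0.3],   # defense B
    [0.4, 0.5, 0.6],   # defense C
])

# Pure-strategy maximin: the learner picks the defense whose worst-case
# payoff (over all adversary responses) is largest.
worst_case = payoff.min(axis=1)     # adversary best-responds per row
best_row = int(worst_case.argmax())
print(f"maximin defense: {best_row}, guaranteed payoff: {worst_case[best_row]:.2f}")
```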
Security of Distributed Machine Learning
A Game-Theoretic Approach to Design Secure DSVM
Abstract
Distributed machine learning algorithms play a significant role in processing massive data sets over large networks. However, machine learning's increasing reliance on information and communication technologies (ICTs) makes it inherently vulnerable to cyber threats. This work aims to develop secure distributed algorithms that protect learning from data poisoning and network attacks. We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels. We develop a fully distributed, iterative algorithm that captures the real-time reactions of the learner at each node to adversarial behaviors. The numerical results show that distributed SVMs are prone to failure under different types of attacks, and that the attacks' impact depends strongly on the network structure and the attacker's capabilities.
Rui Zhang, Quanyan Zhu
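
As a rough illustration of the threat model this chapter studies, the following sketch shows a training-label poisoning attack against a single (non-distributed) SVM; the chapter's game-theoretic distributed defense is not reproduced here, and the dataset and flip rate are invented for the example:

```python
# Minimal sketch of the threat model: an attacker poisons training labels
# before an SVM is fit. Dataset and 25% flip rate are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LinearSVC(dual=False).fit(X_tr, y_tr)

# The attacker flips the labels of 25% of the training points.
y_poison = y_tr.copy()
idx = rng.choice(len(y_poison), size=len(y_poison) // 4, replace=False)
y_poison[idx] = 1 - y_poison[idx]
poisoned = LinearSVC(dual=False).fit(X_tr, y_poison)

print(f"clean test accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned test accuracy: {poisoned.score(X_te, y_te):.3f}")
```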
Be Careful When Learning Against Adversaries: Imitative Attacker Deception in Stackelberg Security Games
Abstract
A key challenge in the influential research field of Stackelberg security games (SSGs) is addressing uncertainty regarding the attacker's payoffs, capabilities, and other characteristics. An extensive line of recent work in SSGs has focused on learning the optimal defense strategy from observed attack data. This, however, raises the concern that a strategic attacker may mislead the defender by reacting deceptively to the learning algorithm, which is particularly natural in such competitive strategic interactions. This paper focuses on understanding how such attacker deception affects the equilibrium of the game. We examine a basic deception strategy termed imitative deception, in which the attacker simply pretends to have a different payoff, given that his true payoff is unknown to the defender. We provide a clean characterization of the game equilibrium under an unconstrained deception strategy space, as well as optimal algorithms to compute the equilibrium in the constrained case. Our numerical experiments illustrate significant defender loss due to imitative attacker deception, suggesting a potential side effect of learning from the attacker.
Haifeng Xu, Thanh H. Nguyen
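
The following sketch (with hypothetical payoffs and a simple brute-force solver, not the paper's algorithms) illustrates imitative deception in a two-target security game: the defender optimizes coverage against the payoffs the attacker claims, and suffers a loss when the claimed payoffs differ from the true ones:

```python
# Hypothetical two-target Stackelberg security game. The attacker misreports
# only its reward for a successful (uncovered) attack; all values invented.
import numpy as np

def solve_coverage(att_cov, att_unc, def_cov, def_unc, grid=1001):
    """Grid-search the split of one defender resource over two targets."""
    best = (-np.inf, None)
    for c0 in np.linspace(0, 1, grid):
        c = np.array([c0, 1 - c0])
        att_eu = c * att_cov + (1 - c) * att_unc  # attacker expected utility
        t = int(att_eu.argmax())                  # attacker best response
        def_eu = c[t] * def_cov[t] + (1 - c[t]) * def_unc[t]
        if def_eu > best[0]:
            best = (def_eu, c)
    return best

def_cov = np.array([2.0, 1.0]);  def_unc = np.array([-5.0, -3.0])
true_att_cov = np.array([-2.0, -1.0]); true_att_unc = np.array([4.0, 3.0])
# Imitative deception: the attacker pretends the second target is far
# more valuable to it than it really is.
fake_att_unc = np.array([1.0, 6.0])

_, c_honest = solve_coverage(true_att_cov, true_att_unc, def_cov, def_unc)
_, c_fooled = solve_coverage(true_att_cov, fake_att_unc, def_cov, def_unc)

def true_def_payoff(c):
    """Defender payoff when the REAL attacker best-responds to coverage c."""
    att_eu = c * true_att_cov + (1 - c) * true_att_unc
    t = int(att_eu.argmax())
    return c[t] * def_cov[t] + (1 - c[t]) * def_unc[t]

print(f"defender payoff vs. honest attacker:    {true_def_payoff(c_honest):.2f}")
print(f"defender payoff vs. deceptive attacker: {true_def_payoff(c_fooled):.2f}")
```

With these numbers the deceived defender shifts coverage toward the falsely inflated target, and its realized payoff drops sharply, mirroring the defender loss the paper reports.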

Data Modalities and Distributed Architectures for Countering Adversarial Cyber Attacks

Frontmatter
Adversarial Machine Learning in Text: A Case Study of Phishing Email Detection with RCNN Model
Abstract
With the exponential increase in processing power and the availability of big data, deep learning has pushed performance on problems previously considered hard. Deep learning refers to complex neural networks with many layers, whose learnable parameters number in the millions, sometimes billions. These massive models are trained by leveraging GPUs and large amounts of data.
Studies have shown that these highly accurate models are nevertheless susceptible to adversarial environments. This chapter explores the different facets of the adversarial environment and how they affect the domain of text and natural language processing.
Daniel Lee, Rakesh M. Verma
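
A minimal sketch of the kind of text-domain attack the chapter examines, using a toy bag-of-words classifier rather than the chapter's RCNN; the training messages and character swaps are invented for illustration:

```python
# Toy sketch: character-level perturbations push a phishing message's
# trigger words out of a bag-of-words model's vocabulary, sharply
# lowering its phishing score. Data and model are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "verify your account password immediately",  # phishing
    "urgent update your bank login now",         # phishing
    "click to confirm your password today",      # phishing
    "meeting agenda attached for review",        # benign
    "lunch on friday works for me",              # benign
    "quarterly report draft attached",           # benign
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_texts, labels)

original = "update your password now"
# Visually similar character swaps ('a' -> '@', 'o' -> '0') make every
# token out-of-vocabulary, so the model sees no phishing evidence at all.
perturbed = "upd@te y0ur p@ssw0rd n0w"

probs = clf.predict_proba([original, perturbed])[:, 1]
print(f"P(phishing) original:  {probs[0]:.2f}")
print(f"P(phishing) perturbed: {probs[1]:.2f}")  # drops toward chance
```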
Overview of GANs for Image Synthesis and Detection Methods
Abstract
This chapter provides an overview of the Generative Adversarial Network (GAN) architecture, the use of conditional GANs in image synthesis, and detection methods for facial manipulation in images and videos. GANs are a type of neural network architecture that uses adversarial competition between a generator and a discriminator to optimize its output. Recently, conditional GANs have been shown to achieve realistic image synthesis; these computer-generated images are difficult to distinguish from photographs, even for human observers. Effective detection methods are important to combat the malicious spread and use of damaging fake media. Among the detection methods, Convolutional Neural Networks (CNNs) have been adopted to classify images taken from videos and to decide whether images are real or fake. Current models are able to detect facial manipulations with acceptable accuracy. The chapter also discusses future research directions, including benchmarks, challenges, and competitions for improving detection methods against new attacks.
Eric Tjon, Melody Moh, Teng-Sheng Moh
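
For readers unfamiliar with the adversarial competition described above, here is a minimal, self-contained GAN training loop (illustrative only; it learns a one-dimensional Gaussian rather than images):

```python
# Minimal GAN sketch (PyTorch; illustrative, not a chapter implementation):
# a generator learns to mimic a 1-D Gaussian while a discriminator learns
# to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)   # real data: N(3, 0.5^2)
    fake = G(torch.randn(64, 8))            # generator samples from noise

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f} (target 3.0, 0.5)")
```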
Robust Machine Learning Using Diversity and Blockchain
Abstract
Machine learning (ML) algorithms are used in several smart-city applications. However, ML is vulnerable to adversarial examples that significantly alter its intended output. Making ML safe and secure is therefore an important research problem for enabling smart-city applications. This chapter describes a mechanism for making ML robust against adversarial examples in predictive-analytics applications. It introduces the concept of diversity, in which a single predictive-analytics task is performed separately using heterogeneous datasets, heterogeneous ML algorithms, or both. The diversity components are implemented on distributed platforms using federated learning and edge computing, and they use blockchain to ensure that data is transferred safely and securely between distributed components such as edge and federated-learning devices. The chapter also describes some of the challenges that must be met when adopting the diversity mechanism, distributed computation, and blockchain to secure ML.
Raj Mani Shukla, Shahriar Badsha, Deepak Tosh, Shamik Sengupta
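
A minimal sketch of the diversity idea in isolation (the blockchain and federated-learning layers are omitted, and the dataset and model choices are invented): heterogeneous models are trained on disjoint data partitions and combined by majority vote, so a single poisoned partition or fooled model cannot dictate the outcome:

```python
# Diversity sketch: the same prediction task is carried out by heterogeneous
# algorithms on disjoint data partitions (stand-ins for separate edge nodes),
# with outputs combined by majority vote. All choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=900, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Heterogeneous algorithms, each trained on its own data partition.
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          KNeighborsClassifier()]
parts = np.array_split(np.arange(len(X_tr)), len(models))
for model, idx in zip(models, parts):
    model.fit(X_tr[idx], y_tr[idx])

# Majority vote across the diverse components.
votes = np.stack([m.predict(X_te) for m in models])
ensemble = (votes.mean(axis=0) > 0.5).astype(int)
print(f"majority-vote accuracy: {(ensemble == y_te).mean():.3f}")
```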

Human Machine Interactions and Roles in Automated Cyber Defenses

Frontmatter
Automating the Investigation of Sophisticated Cyber Threats with Cognitive Agents
Abstract
This chapter presents an approach to orchestrating security-incident-response investigations using cognitive agents that are trained to detect sophisticated cyber threats and integrated into cybersecurity operations centers. After briefly introducing advanced persistent threats (APTs), it overviews the APT detection model and how the agents are trained. It then describes how hypotheses that may explain security alerts are generated using collected data and threat intelligence; how the analyses of these hypotheses guide the collection of additional evidence; the design of the Collection Manager software, used to integrate cognitive agents with selected collection agents; how search results are added to the knowledge base as evidence; and how the generated hypotheses are tested against this evidence. These concepts are illustrated with an example of detecting an APT attack. We conclude with an overview of our experimental method and results.
Steven Meckl, Gheorghe Tecuci, Dorin Marcu, Mihai Boicu
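
As a loose, hypothetical illustration of the hypothesis-testing step (the hypothesis names, evidence items, and scoring rule are invented and are not the chapter's knowledge base), consider scoring competing explanations of an alert by how much of their predicted evidence was actually collected:

```python
# Hypothetical sketch: rank competing hypotheses for a security alert by the
# fraction of their predicted evidence that collection agents actually found.
# All names, evidence items, and the scoring rule are invented.
alert = "beaconing traffic from host-42 to rare external domain"

hypotheses = {
    "APT C2 channel":        {"scheduled_task", "signed_binary_sideload", "dns_tunnel"},
    "misconfigured updater": {"vendor_domain_whois", "regular_interval", "http_200_only"},
    "benign CDN traffic":    {"vendor_domain_whois", "http_200_only"},
}
observed = {"scheduled_task", "dns_tunnel", "regular_interval"}

def score(predicted: set, found: set) -> float:
    """Fraction of the hypothesis's predicted evidence actually collected."""
    return len(predicted & found) / len(predicted)

print(f"alert: {alert}")
for name, predicted in sorted(hypotheses.items(),
                              key=lambda kv: -score(kv[1], observed)):
    print(f"  {score(predicted, observed):.2f}  {name}")
```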
Integrating Human Reasoning and Machine Learning to Classify Cyber Attacks
Abstract
The US Department of Defense (DoD) computer networks, like many other enterprise networks, require strong cybersecurity because adversaries deploy increasingly sophisticated malicious activities against them. Human cyber analysts currently have to sift through big data manually to look for suspicious activities. New big-data analytical tools and technologies offer fresh hope for helping human analysts automate the data analysis. The authors present a use case and a process that integrate human reasoning and domain knowledge to identify the critical effects of cyber attacks and effectively narrow down and isolate infected computers. The authors first applied exploratory visualization to view big cyber-event data. They then applied an unsupervised learning method, lexical link analysis (LLA), to compute associations, statistics, and centrality measures for nodes (computers), and derived metrics to predict hacked or hacking nodes. Human reasoning combined with LLA identified a single best metric: the top 14% of the nodes sorted by this metric account for 62% of the hacked or hacking nodes, while the bottom 40% of the nodes are 100% normal and can be eliminated from examination. Human reasoning and machine learning need to be integrated systematically to handle cyber security and rapidly isolate the effects of attacks. The use case is causality analysis, or causal learning in action, where causes can be learned from a small number of effects because human knowledge is applied.
Ying Zhao, Lauren Jones
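
A minimal sketch of the triage idea on a synthetic graph (the graph model, centrality metric, and threshold logic are illustrative; the chapter's LLA metric is not reproduced): rank hosts by centrality in the cyber-event graph and examine only the top slice first:

```python
# Illustrative sketch: rank hosts by centrality over a synthetic cyber-event
# graph and triage only the top 14%, in the spirit of the chapter's finding.
# Graph model and metric choice are assumptions, not the chapter's LLA.
import networkx as nx

# Synthetic event graph: an edge means two hosts appear in a shared event.
G = nx.barabasi_albert_graph(n=200, m=2, seed=0)

centrality = nx.degree_centrality(G)
ranked = sorted(centrality, key=centrality.get, reverse=True)

top = ranked[: int(0.14 * len(ranked))]   # top 14% of hosts by centrality
print(f"hosts to triage first: {len(top)} of {len(ranked)}")
print("highest-centrality hosts:", top[:5])
```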
Homology as an Adversarial Attack Indicator
Abstract
In this paper, we show how the use of classical topological information can be automated with machine learning to lessen the threat of adversarial attacks. The paper is a proof of concept that lays the groundwork for future research in this area.
Ira S. Moskowitz, Nolan Bay, Brian Jalaian, Arnold Tunick
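
A heavily hedged sketch of the general idea (not the paper's construction): compare persistent-homology summaries of a reference point cloud and a suspect one, and flag large topological change as a possible adversarial perturbation. It assumes the ripser and persim packages; the data and threshold are invented:

```python
# Hedged sketch: a large bottleneck distance between persistence diagrams of
# clean vs. suspect data is flagged as possible adversarial perturbation.
# Data, noise scale, and threshold are invented for illustration.
import numpy as np
import persim
from ripser import ripser

rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 2))                           # reference cloud
suspect = clean + rng.normal(scale=0.4, size=clean.shape)   # perturbed copy

# 0-dimensional persistence diagrams of each point cloud.
dgm_clean = ripser(clean)['dgms'][0]
dgm_suspect = ripser(suspect)['dgms'][0]

def finite(dgm):
    """Drop the infinite bar so the bottleneck distance is well-defined."""
    return dgm[np.isfinite(dgm).all(axis=1)]

dist = persim.bottleneck(finite(dgm_clean), finite(dgm_suspect))
print(f"bottleneck distance: {dist:.3f}")
if dist > 0.25:   # illustrative threshold, not from the paper
    print("flag: possible adversarial perturbation")
```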
Cyber-(in)Security, Revisited: Proactive Cyber-Defenses, Interdependence and Autonomous Human-Machine Teams (A-HMTs)
Abstract
The risks from cyber threats are increasing. Cyber threats propagate primarily through deception, and threats from deception derive from both insider and outsider attacks. These threats are countered by reactive and proactive cyber defenses. Alarmingly, escalating proactive cyber defenses increasingly resemble the conflicts of the Wild West of yesteryear. However, deception is inadequately addressed by traditional theories based on methodological individualism (MI); worse, MI's rational choice theory breaks down in the presence of social conflict. In contrast, interdependence theory thrives on exactly these topics: barriers, the deception used to penetrate their vulnerabilities, and the conflict that ensues. Interdependence includes the effects of the constructive or destructive interference that constitutes every social interaction. Our research primarily addresses the application of interdependence theory to autonomous human-machine teams (A-HMTs), which entail artificial intelligence (AI) and AI's sub-field of machine learning (ML). A-HMTs require defenses that protect a team from cyber threats, adverse interference, and other vulnerabilities, while affording the team the opportunity to become autonomous. In this chapter, we begin with an introduction that reviews traditional methodological individualism and rational choice theory. The introduction is followed by a discussion of deception and its implications for reactive cyber defenses. We next cover proactive cyber defenses in a separate section, then review our research on interdependence theory and its application to A-HMTs. We conclude that while cyber risks are increasing, so too is the teamwork that strengthens cyber defenses. The future belongs to a theory of interdependence that improves cyber defenses, teams, and the science that generalizes to A-HMTs.
William F. Lawless, Ranjeev Mittu, Ira S. Moskowitz, Donald A. Sofge, Stephen Russell
Backmatter
Metadata
Title
Adversary-Aware Learning Techniques and Trends in Cybersecurity
Edited by
Prithviraj Dasgupta
Joseph B. Collins
Ranjeev Mittu
Copyright Year
2021
Electronic ISBN
978-3-030-55692-1
Print ISBN
978-3-030-55691-4
DOI
https://doi.org/10.1007/978-3-030-55692-1
