
2023 | Book

Autonomous Intelligent Cyber Defense Agent (AICA)

A Comprehensive Guide


About this book

This book offers a structured overview and a comprehensive guide to the emerging field of Autonomous Intelligent Cyber Defense Agents (AICA). The book discusses the current technical issues in autonomous cyber defense and offers information on practical design approaches. The material is presented in a way that is accessible to non-specialists, with tutorial information provided in the initial chapters and as needed throughout the book. The reader is provided with clear and comprehensive background and reference material for each aspect of AICA.

Today’s cyber defense tools are mostly watchers. They are not active doers. They do little to plan and execute responses to attacks, and they don’t plan and execute recovery activities. Response and recovery – core elements of cyber resilience – are left to human cyber analysts, incident responders and system administrators. This is about to change. The authors advocate this vision, provide a detailed guide to how it can be realized in practice, and describe its current state of the art.

This book also covers key topics relevant to the field, including functional requirements and alternative architectures of AICA, how it perceives and understands threats and the overall situation, how it plans and executes response and recovery, how it survives threats, and how human operators deploy and control AICA. Additionally, this book covers issues of testing, risk, and policy pertinent to AICA, and provides a roadmap towards future R&D in this field.

This book targets researchers and advanced students in the field of cyber defense and resilience. Professionals working in this field as well as developers of practical products for cyber autonomy will also want to purchase this book.

Table of contents

Frontmatter
Chapter 1. Autonomous Intelligent Cyber-defense Agent: Introduction and Overview
Abstract
This chapter introduces the concept of Autonomous Intelligent Cyber-defense Agents (AICAs), and briefly explains the importance of this field and the motivation for its emergence. AICA is a software agent that resides on a system, and is responsible for defending the system from cyber compromises and enabling the response and recovery of the system, usually autonomously. The autonomy of the agent is a necessity because of the growing scarcity of human cyber-experts who could defend systems, either remotely or onsite, and because sophisticated malware could degrade or spoof the communications of a system that uses a remote monitoring center. An AICA Reference Architecture has been proposed and defines five main functions: (1) sensing and world state identification, (2) planning and action selection, (3) collaboration and negotiation, (4) action execution and (5) learning and knowledge improvement. The chapter reviews the details of AICA’s environment, functions and operations. As AICA is intended to make changes within its environment, there is a risk that an agent’s action could harm a friendly computer. This risk must be balanced against the losses that could occur if the agent does not act. The chapter discusses means by which this risk can be managed and how AICA’s design features could help build trust among its users.
Alexander Kott
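As an illustrative aside (not part of the book), the five functions of the AICA Reference Architecture listed above can be pictured as a single agent loop. The sketch below is a minimal, hypothetical Python skeleton; the class and method names, and the toy isolate-on-suspicion logic, are invented for illustration and are not prescribed by the reference architecture.

```python
# Hypothetical skeleton of the five AICA reference-architecture functions as a
# single agent loop. All names and the toy decision logic are illustrative.

from dataclasses import dataclass, field
from typing import Any


@dataclass
class WorldState:
    """Agent's current belief about its host and environment."""
    observations: list = field(default_factory=list)
    assessed_threats: list = field(default_factory=list)


class AICASkeleton:
    def __init__(self):
        self.state = WorldState()
        self.knowledge: dict[str, Any] = {}

    # (1) Sensing and world state identification
    def sense_and_identify(self, raw_observations: list) -> None:
        self.state.observations.extend(raw_observations)
        self.state.assessed_threats = [o for o in raw_observations
                                       if o.get("suspicious", False)]

    # (2) Planning and action selection
    def plan_and_select(self) -> dict | None:
        if self.state.assessed_threats:
            return {"action": "isolate_process",
                    "target": self.state.assessed_threats[0]}
        return None

    # (3) Collaboration and negotiation (stubbed: would consult peer agents)
    def collaborate(self, proposed_action: dict | None) -> dict | None:
        return proposed_action

    # (4) Action execution
    def execute(self, action: dict | None) -> dict:
        if action is None:
            return {"outcome": "no_action"}
        return {"outcome": "executed", "action": action}

    # (5) Learning and knowledge improvement
    def learn(self, result: dict) -> None:
        self.knowledge.setdefault("history", []).append(result)

    def step(self, raw_observations: list) -> dict:
        self.sense_and_identify(raw_observations)
        action = self.collaborate(self.plan_and_select())
        result = self.execute(action)
        self.learn(result)
        return result


if __name__ == "__main__":
    agent = AICASkeleton()
    print(agent.step([{"event": "new_process", "suspicious": True}]))
```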
Chapter 2. Alternative Architectural Approaches
Abstract
An Autonomous Intelligent Cyber-defence Agent (AICA) is essentially a software agent embedded within a device, system or network, alone or as part of a team of AICAs, to perform on its own all the actions required to combat enemy malware, from monitoring its host perimeter to piloting the countermeasures that will defeat it. An AICA will make decisions by itself, or collectively when working in a team. AICAs will be embedded within a wide variety of systems and threat landscapes. An AICA’s architecture is pivotal to guaranteeing its adequacy to the variety of its contexts of use. This chapter presents an architectural approach to AICAs conceived to answer the multiple challenges and requirements associated with their implementation. The Multi Agent System Centric AICA Reference Architecture (MASCARA) relies on the Multi-Agent System (MAS) paradigm. Each AICA is thus conceived as a MAS, and each function of an AICA as a “MicroAgent”; an AICA is then a Multi MicroAgent System. Three layers of definition of MASCARA have been identified: general, detailed and technical. Just as MicroAgents must interoperate with one another, a team of AICAs requires interoperability via cooperation and messaging protocols. The internal, MAS-based architecture of AICAs and the global organisation of a team of AICAs are discussed in this chapter. We present reflections on MASCARA drawn from various seminal research works conducted between 2019 and 2021. Lessons drawn from early AICA prototypes, such as the container-based AICAproto21, help to highlight the streams of R&D in which the AICA community should now engage.
Paul Theron
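To make the Multi MicroAgent System idea concrete, here is a minimal sketch (not taken from MASCARA) in which each AICA function is a MicroAgent reacting to messages on an internal bus. The bus, topic names and the quarantine action are all hypothetical.

```python
# Hypothetical sketch of an AICA composed of MicroAgents exchanging messages
# on an internal bus. MASCARA defines layers and protocols in far more detail;
# the names below are illustrative only.

from collections import defaultdict
from typing import Callable


class MessageBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


class SensingMicroAgent:
    def __init__(self, bus: MessageBus):
        self.bus = bus

    def observe(self, event: dict) -> None:
        # Forward anything suspicious to the planning MicroAgent.
        if event.get("suspicious"):
            self.bus.publish("threat.detected", event)


class PlanningMicroAgent:
    def __init__(self, bus: MessageBus):
        bus.subscribe("threat.detected", self.on_threat)
        self.bus = bus

    def on_threat(self, event: dict) -> None:
        self.bus.publish("action.selected",
                         {"action": "quarantine", "target": event["source"]})


class ExecutionMicroAgent:
    def __init__(self, bus: MessageBus):
        bus.subscribe("action.selected", self.on_action)

    def on_action(self, action: dict) -> None:
        print(f"executing {action['action']} on {action['target']}")


if __name__ == "__main__":
    bus = MessageBus()
    sensing = SensingMicroAgent(bus)
    PlanningMicroAgent(bus)
    ExecutionMicroAgent(bus)
    sensing.observe({"source": "pid-4242", "suspicious": True})
```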
Chapter 3. Perception of the Environment
Abstract
This chapter discusses the intricacies of cybersecurity agents’ perception. It addresses the complexity of perception and illuminates how perception is shaping and influencing the decision-making process. It then explores the necessary considerations when crafting the world representation and discusses the power and bandwidth constraints of perception and the underlying issues of AICA’s trust in perception. On these foundations, it provides the reader with a guide to developing perception models for AICA, discussing the trade-offs of each objective state approximation. The guide is written in the context of the CYST cybersecurity simulation engine, which aims to closely model cybersecurity interactions and can be used as a basis for developing AICA. Because CYST is freely available, the reader is welcome to try implementing and evaluating the proposed methods for themselves.
Martin Drasar
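As a loose illustration of the resource trade-offs discussed above, the following sketch bounds the world representation to a fixed number of retained events per host. It is not based on CYST’s API; all names and the crude suspicion score are invented.

```python
# Illustrative (non-CYST) perception sketch: an approximate world model kept
# under a bandwidth budget by retaining only a bounded number of recent
# observations per host.

from collections import defaultdict, deque


class BoundedWorldModel:
    def __init__(self, max_events_per_host: int = 50):
        self.max_events = max_events_per_host
        self.hosts: dict[str, deque] = defaultdict(
            lambda: deque(maxlen=self.max_events))

    def ingest(self, observation: dict) -> None:
        """Fold one sensed event into the per-host view."""
        self.hosts[observation["host"]].append(observation)

    def suspicion_score(self, host: str) -> float:
        """Crude estimate: fraction of retained events flagged as suspicious."""
        events = self.hosts[host]
        if not events:
            return 0.0
        return sum(e.get("suspicious", False) for e in events) / len(events)


if __name__ == "__main__":
    model = BoundedWorldModel(max_events_per_host=10)
    model.ingest({"host": "ws-01", "type": "login", "suspicious": False})
    model.ingest({"host": "ws-01", "type": "new_service", "suspicious": True})
    print(round(model.suspicion_score("ws-01"), 2))  # 0.5
```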
Chapter 4. Perception of Cyber Threats
Abstract
This chapter presents an approach to improving cyber threat perception using Autonomous Intelligent Cyber-defence Agents (AICA). Recent research has surveyed the potential benefits of leveraging artificial intelligence (AI) and machine learning (ML) approaches to train AICA. A discussion of different AI/ML-based AICA architectures for perceiving cyber threats is presented. In some instances, a centralized AICA architecture is reasonable for smaller or homogeneous cyber networks. However, for large, heterogeneous networks, a hierarchical and distributed architecture would provide better cyber threat perception. In this scenario, teams of lower-level and higher-level agents can collaborate to perform perception tasks. There is increasing research into integrating AI/ML algorithms into these agents to improve their autonomous capabilities. Early research into AICA prototypes, including defensive cyber deception agents, is explored, providing motivation for the continued research required for adoption in real-world cyber-defense solutions. The chapter also includes a discussion of the combination of automation, in the form of Security Orchestration and Automated Response (SOAR), and AI/ML to further enhance AICA perception capabilities through such tasks as diverse cyber data collection and correlation. The chapter concludes with a short discussion of future research questions to further the adoption of AICA into regular cyber defense operations and practice.
Kevin Kornegay, Kofi Nyarko, Jeffrey S. Chavis, Ahmad Ridley
Chapter 5. Situational Understanding and Diagnostics
Abstract
This chapter describes situational understanding and diagnostics for autonomous cyber-defense agents. It covers architectural patterns, functional aspects, and interfaces with other agent capabilities. It motivates the need for situational understanding and diagnostics, outlines the major challenges to be met, and considers several illustrative examples. The material centers on the core requirements of situational understanding: diagnosing the nature of a situation, projecting possible future states, assessing associated risks, and triggering responses when situations warrant them. From a functional standpoint, this chapter describes how an agent processes sensed information, continually updates its knowledge base, and assesses a given situation. It examines how these functions must consider adversarial presence, system vulnerabilities, potential adversarial movement, and cyber-to-mission dependencies. This chapter describes agent world models to be instantiated for a given situation, which span the agent, the defended system, perceived threats, and mission context. It also considers methods for managing model and algorithmic complexity and for adapting to new situations, along with practical concerns for agents deployed within operational environments.
Steven Noel, Vipin Swarup
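The diagnose, project, assess and trigger cycle described above can be reduced, for illustration only, to a single expected-loss check. The probabilities, asset value and threshold below are placeholders, not values from the chapter.

```python
# Hypothetical sketch of diagnose -> project -> assess -> trigger, reduced to
# one expected-loss calculation. All numbers are illustrative assumptions.

def expected_loss(compromise_prob: float, lateral_move_prob: float,
                  asset_value: float) -> float:
    """Project one adversary step ahead and compute expected mission loss."""
    return compromise_prob * lateral_move_prob * asset_value


def assess_and_trigger(diagnosis: dict, threshold: float = 5.0) -> str:
    loss = expected_loss(diagnosis["compromise_prob"],
                         diagnosis["lateral_move_prob"],
                         diagnosis["asset_value"])
    # Trigger a response only when the projected risk justifies intervention.
    return "trigger_response" if loss >= threshold else "continue_monitoring"


if __name__ == "__main__":
    print(assess_and_trigger({"compromise_prob": 0.6,
                              "lateral_move_prob": 0.5,
                              "asset_value": 20.0}))  # trigger_response
```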
Chapter 6. Learning About the Adversary
Abstract
The evolving nature of the tactics, techniques, and procedures used by cyber adversaries has made signature- and template-based methods of modeling adversary behavior almost infeasible. We are moving into an era of data-driven autonomous cyber defense agents that learn contextually meaningful adversary behaviors from observables. In this chapter, we explore what can be learnt about cyber adversaries from observable data, such as intrusion alerts, network traffic, and threat intelligence feeds. We describe the challenges of building autonomous cyber defense agents, such as learning from noisy observables with no ground truth, and the brittle nature of deep-learning-based agents that can be easily evaded by adversaries. We illustrate three state-of-the-art autonomous cyber defense agents that model adversary behavior from traffic-induced observables without a priori expert knowledge or ground truth labels. We close with recommendations and directions for future work.
Azqa Nadeem, Sicco Verwer, Shanchieh Jay Yang
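As a toy example of learning adversary behaviour from unlabelled observables, the sketch below fits a first-order Markov model over alert categories. This is far simpler than the agents the chapter surveys and is only meant to convey the idea; the alert categories are invented.

```python
# Minimal, unlabelled-data sketch: estimate a first-order Markov model of alert
# categories from observed intrusion-alert sequences, with no ground truth.

from collections import Counter, defaultdict


def fit_transition_model(sequences: list[list[str]]) -> dict[str, dict[str, float]]:
    counts: dict[str, Counter] = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}


if __name__ == "__main__":
    alert_sequences = [
        ["scan", "bruteforce", "exploit", "exfil"],
        ["scan", "exploit", "exfil"],
        ["scan", "bruteforce", "exfil"],
    ]
    model = fit_transition_model(alert_sequences)
    print(model["scan"])  # e.g. {'bruteforce': 0.67, 'exploit': 0.33}
```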
Chapter 7. Response Planning
Abstract
Given a perception of the environment and threat, and an overall assessment of the situation, cyber response planning is invoked to generate a course of action (COA) or multiple COAs intended to defeat the threat and minimize damage to the system. This chapter describes the characteristics of the response planning problem, which provide the context needed to understand the requirements for a response planner. A response planner must represent the system being defended. It must also understand the capabilities of the attacker. It must represent the system processes and functions and understand how changes to the system state can result in operational impacts. And it must represent how the set of response actions affects the state of the system and/or the ability of an attacker to compromise components. This chapter then provides an overview of the variety of computational techniques that can be employed toward accomplishing response planning goals and discusses their advantages and limitations. The chapter finishes with a description of an operational cyber response planner prototype that has been developed to bring together a set of capabilities that fulfill the described response planner requirements.
Scott Musman, Lashon Booker
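For illustration, a response planner’s core choice can be caricatured as scoring candidate COAs by the exposure they remove minus the operational impact they cause. The COA names, scores and weighting below are invented; real planners reason over much richer models of system state, attacker capability and mission dependencies.

```python
# Hedged COA-selection sketch: score each candidate response by exposure
# removed minus mission impact, then pick the best. Toy model only.

from dataclasses import dataclass


@dataclass
class COA:
    name: str
    exposure_removed: float   # how much of the attacker's access it cuts off
    mission_impact: float     # cost to ongoing operations (e.g. downtime)


def select_coa(candidates: list[COA], impact_weight: float = 1.0) -> COA:
    """Pick the course of action with the best net benefit."""
    return max(candidates,
               key=lambda c: c.exposure_removed - impact_weight * c.mission_impact)


if __name__ == "__main__":
    options = [
        COA("block_c2_traffic", exposure_removed=0.7, mission_impact=0.1),
        COA("isolate_host", exposure_removed=0.9, mission_impact=0.6),
        COA("reimage_host", exposure_removed=1.0, mission_impact=0.9),
    ]
    print(select_coa(options).name)  # block_c2_traffic
```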
Chapter 8. Recovery Planning
Abstract
Despite the rapid development of cybersecurity, recovery of the operation of an impacted cyber-physical system (CPS) after a cyber-attack, a core element of cyber resilience, is often left to human decision-makers. There is high demand for an autonomous intelligent cyber defense agent (AICA) capable of planning rapid recovery. In this chapter, we first give an overview of state-of-the-art work in recovery planning. Then, we introduce and demonstrate a system for recovery planning that uses simulation-based predictive monitoring to recover the system from attacks (cyber, physical, or hardware) and disruptions automatically. The recovery planning system first evaluates the impact of system degradation and efficiently generates courses of action (COAs) for recovery. It then evaluates these COAs through integrated heterogeneous simulations that account for unavoidable uncertainty. By formalizing security and safety requirements, it formally verifies recovery COAs with confidence guarantees and identifies the optimal recovery COAs. We present two recovery scenarios in smart cities to demonstrate the effectiveness of our recovery planning system.
Meiyi Ma, Himanshu Neema, Janos Sztipanovits
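The simulation-based screening of recovery COAs can be sketched, in a highly simplified form, as Monte Carlo evaluation against a formalized deadline requirement with an empirical confidence level. The recovery-time distributions, deadline and confidence below are assumptions for illustration, not values from the chapter.

```python
# Illustrative Monte Carlo screening of recovery COAs: keep only COAs that
# meet a formalized requirement (recovery within a deadline) with high
# empirical confidence. Distributions and the requirement are invented.

import random


def simulate_recovery_time(coa: dict, rng: random.Random) -> float:
    """Toy simulation: base duration plus uncertainty from disruptions."""
    return coa["base_minutes"] + rng.gauss(0.0, coa["uncertainty_minutes"])


def satisfies_requirement(coa: dict, deadline: float = 30.0,
                          confidence: float = 0.95, runs: int = 2000) -> bool:
    rng = random.Random(0)
    ok = sum(simulate_recovery_time(coa, rng) <= deadline for _ in range(runs))
    return ok / runs >= confidence


if __name__ == "__main__":
    coas = [
        {"name": "restart_degraded_services", "base_minutes": 20, "uncertainty_minutes": 4},
        {"name": "failover_to_backup_site", "base_minutes": 28, "uncertainty_minutes": 6},
    ]
    feasible = [c["name"] for c in coas if satisfies_requirement(c)]
    print(feasible)  # likely ['restart_degraded_services']
```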
Chapter 9. Strategic Cyber Camouflage
Abstract
One of the most fundamental tasks for an AICA agent will be to manipulate the information that an adversary can observe, either about a network or about the AICA agent itself. This includes taking actions to conceal or camouflage the agent or specific network assets, and taking actions to deceive or otherwise affect the beliefs of an adversary conducting reconnaissance activities. In this chapter we provide an overview of tactics that have been proposed in the literature for implementing cyber camouflage and deception actions, as well as some foundational models in AI from game theory and machine learning that have been used to deploy these tactics strategically. We go into detail on three particular models: the first uses game theory to optimize the use of decoys or modified signals, the second uses game theory to consider the modification of features of both real and fake objects to confuse attackers, and the third applies machine learning methods to scale up feature modifications and create more effective deceptive objects. All of these models can be customized to different types of strategic questions around effectively deploying camouflage to affect an adversary, and they serve as a starting point for implementing autonomous strategies that use camouflage tactics. We end by discussing some of the ways that camouflage and deception have been evaluated so far in the literature, noting that more work is needed to assess AICA agents using these strategies in realistic environments.
Christopher Kiekintveld, Aron Laszka, Mohammad Sujan Miah, Shanto Roy, Nazia Sharmin
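A minimal, invented example of strategic camouflage (not one of the three models described above): the defender places a limited number of decoys so as to minimize the payoff of a best-responding attacker. Host values and the decoy discount are arbitrary assumptions.

```python
# Toy strategic-camouflage model: defender chooses decoy placement to minimize
# the payoff of an attacker who probes the most valuable-looking host.

from itertools import combinations


def attacker_best_payoff(values: dict[str, float], decoys: set[str],
                         decoy_discount: float = 0.2) -> float:
    """Attacker probes the host with the highest expected value to them."""
    return max(v * (decoy_discount if h in decoys else 1.0)
               for h, v in values.items())


def best_decoy_placement(values: dict[str, float], budget: int) -> set[str]:
    """Defender minimizes the attacker's best-response payoff (maximin)."""
    return min((set(c) for c in combinations(values, budget)),
               key=lambda d: attacker_best_payoff(values, d))


if __name__ == "__main__":
    host_values = {"db-server": 10.0, "file-server": 6.0, "workstation": 2.0}
    placement = best_decoy_placement(host_values, budget=1)
    print(placement)  # {'db-server'}: covering the highest-value target
```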
Chapter 10. Adaptivity and Antifragility
Abstract
A resilient system can survive attacks and failures by autonomously adapting and managing its own functionality. An antifragile system is not only resilient but is also able to enhance its capabilities and become more resilient as a result of endogenous and exogenous stressors. This makes antifragility a highly desirable property of cyber defense systems operating in dynamic, contested environments. In this chapter, we outline how antifragility can be achieved in AICA systems through self-management (self-adaptivity) and self-improvement. We introduce the concept of a self-* (S*) agent and – after elucidating the various self-* properties which such agents would be capable of realizing – we present a conceptual framework for S* multi-agent systems, encompassing S* agent architectures and macro-/micro-level design concepts, and describe a corresponding generic self-management/improvement approach. We then present an overview of AWaRE 2.0 as a concrete example of an S* system and middleware framework for antifragility. Throughout our exposition, we explain how S* agents and S* multi-agent systems are related to AICA agents and AICA cyber defense systems, and how the former can help the latter achieve resilience and antifragility. Finally, we discuss several key challenges related to coordination and agent organization for self-management and learning for self-improvement.
Anton V. Uzunov, Bao Vo, Hoa Khanh Dam, Charles Harold, Mohan Baruwal Chhetri, Alan Colman, Saad Sajid Hashmi
Chapter 11. Collaboration and Negotiation
Abstract
Collaboration and Negotiation is a critical high-level function of an Autonomous Intelligent Cyber-Defense Agent (AICA) that enables communication among agents, central cyber command-and-control (C2), and human operators. Maintaining the Confidentiality, Integrity, and Availability (CIA) triad while achieving mission goals requires stealthy AICA agents to exercise (1) minimal communication, as needed to avoid detection, (2) verification of information received with limited resources, and (3) active learning during operations to address dynamic conditions. Moreover, negotiations to jointly identify and execute a Course of Action (COA) will require building consensus under distributed and/or decentralized multi-agent settings with information uncertainties. This chapter presents algorithmic approaches for enabling the collaboration and negotiation function. Strengths and limitations of potential techniques are identified, and a representative example is illustrated. Specifically, a two-tier Multi-Agent Reinforcement Learning (MARL) algorithm was implemented to learn joint strategies among agents in a navigation and communication simulation environment. Based on simulation experiments, emergent collaborative team behaviors among agents were observed under information uncertainties. Recommendations for future development are also discussed.
Samrat Chatterjee, Arnab Bhattacharya, Ashutosh Dutta, Aowabin Rahman, Thiagarajan Ramachandran, Satish Chikkagoudar, Ramesh Bharadwaj
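The emergence of joint strategies can be illustrated, far more simply than the two-tier MARL implementation described above, with two independent Q-learners that are rewarded only when they pick the same communication channel. Everything below (channels, learning rate, reward) is a toy assumption, not the chapter’s algorithm.

```python
# Minimal multi-agent RL sketch: two independent Q-learners in a repeated
# coordination game must settle on a common channel without a central
# controller. All parameters are illustrative.

import random

N_CHANNELS = 3
EPISODES = 5000
ALPHA, EPSILON = 0.1, 0.1
rng = random.Random(0)

q = [[0.0] * N_CHANNELS for _ in range(2)]   # one Q-table per agent


def choose(agent: int) -> int:
    if rng.random() < EPSILON:
        return rng.randrange(N_CHANNELS)
    row = q[agent]
    return max(range(N_CHANNELS), key=row.__getitem__)


for _ in range(EPISODES):
    a0, a1 = choose(0), choose(1)
    reward = 1.0 if a0 == a1 else 0.0          # shared coordination reward
    q[0][a0] += ALPHA * (reward - q[0][a0])
    q[1][a1] += ALPHA * (reward - q[1][a1])

greedy = [max(range(N_CHANNELS), key=q[i].__getitem__) for i in range(2)]
print("learned channels:", greedy)   # typically both agents settle on one channel
```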
Chapter 12. Human Interactions
Abstract
Human interactions with an Autonomous Intelligent Cyber-defense Agent (AICA) need to be systematically considered and evaluated across all stages of design and employment. Although some cyber- and AI-knowledgeable users will interact with AICAs at various touchpoints, the primary AICA users are frontline operators who are likely to have little to no cyber or artificial intelligence (AI) expertise. These users operate equipment and systems with one or more embedded AICAs programmed to defend those systems from attacks, and they will need to understand and effectively interact with AICAs whose output and decisions can impact their ability to complete their tasks. This requires interaction and measurement models that account for cyber, AI and operational considerations, supporting iterative human-centered design and testing to ensure safe, efficient, effective and satisfying operations that support task and overall outcomes. Human interaction considerations and methods relevant to AICA design are provided. The Human-Agent Mental Model and the observe-orient-decide-act trace framework are described in reference to the current AICA reference architecture as an example of how to capture information requirements for AICAs and design them in a way that both humans and AI can understand.
Eric Holder, Jessie Y. C. Chen, Kristen Liggett, Phillip Bridgham, Neil Briscombe, Thomas Eskridge, Marco Carvalho, Lavinia Burski
Chapter 13. Testing and Measurements
Abstract
One of the key issues in developing autonomous cyber agents is the difficulty of the cybersecurity problem domain. It combines issues of navigating a complex environment with those of responding to a clever human adversary. A concern that must be addressed before using a cyber agent in real-world situations is its resilience in response to successful attacks and its robustness to environments and situations it has not been trained to handle. Testing and measuring the effectiveness of cyber agents faces issues similar to those encountered in autonomous robotics, where the ‘reality gap’, or the extent to which behaviour in simulation translates to real-world systems, must be addressed. This chapter summarises the key issues in testing and measuring AICA and presents a survey of existing approaches. The chapter finishes with a case study of the CybORG Cyber Operations Research Gym, a framework for training and testing autonomous cyber defence agents that attempts to address some of these issues.
Toby J. Richer, Maxwell Standen
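The train-then-test pattern used by gym-style frameworks such as CybORG can be sketched generically as follows. The reset/step interface here is the common RL convention rather than CybORG’s actual API, and the toy environment, scenario variants and policy are invented; the point is evaluating a fixed policy on a variant it was not tuned for, which is one way to probe the robustness issues discussed above.

```python
# Generic, hypothetical train/test harness sketch (not CybORG's API): evaluate
# a fixed policy on a training-like scenario and on an unseen, more aggressive
# variant to expose the drop in performance.

import random


class ToyCyberEnv:
    """Stand-in environment: defend a host against a scripted attacker."""

    def __init__(self, variant: str = "trained_on", seed: int = 0):
        self.rng = random.Random(seed)
        # The unseen variant attacks more aggressively than the training one.
        self.attack_prob = 0.3 if variant == "trained_on" else 0.6

    def reset(self) -> int:
        self.compromised = False
        return 0

    def step(self, action: str) -> tuple[int, float, bool]:
        attacked = self.rng.random() < self.attack_prob
        if attacked and action != "harden":
            self.compromised = True
        return 0, (-1.0 if self.compromised else 0.0), self.compromised


def evaluate(policy, variant: str, episodes: int = 1000, horizon: int = 5) -> float:
    total = 0.0
    for ep in range(episodes):
        env = ToyCyberEnv(variant, seed=ep)
        obs, done, steps = env.reset(), False, 0
        while not done and steps < horizon:
            obs, reward, done = env.step(policy(obs, env.rng))
            total += reward
            steps += 1
    return total / episodes


if __name__ == "__main__":
    # A fixed stochastic policy standing in for a trained agent.
    def policy(obs, rng):
        return "harden" if rng.random() < 0.7 else "monitor"

    for variant in ("trained_on", "unseen_variant"):
        print(variant, round(evaluate(policy, variant), 2))
    # Expect a noticeably worse score on the unseen variant (the 'reality gap').
```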
Chapter 14. Deployment and Operation
Abstract
Autonomous Intelligent Cyber-defense Agents are expected to operate in a continuous, unmanned, collaborative capacity in a variety of target network or battlefield environments. They should be able to maintain situational awareness of the nature of the cyber environment and other “agents” within it, monitor for activity that presents a potential threat or advantage, incorporate new knowledge into their environmental model, share parameters of such a model with peers, and take appropriate actions to maximize their own mission success and/or survival (potentially in a collaborative manner). In this chapter, we analyze several scenarios to consider the types of threats such agents might be expected to encounter and what actions would potentially be beneficial for them to take in response. These scenarios include an unmanned automated system (UAS, or “drone”), solo or as part of a swarm; an electrical distribution grid; an orbital or deep-space communication network; and a large-scale computational array (such as found in a cloud vendor offering or high-performance computing).
Benjamin Blakely, William Horsthemke, Daniel Harkness, Nate Evans
Chapter 15. Command in AICA-Intensive Operations
Abstract
AICA-intensive Operations – be they military missions, crisis response, or counter-terrorism – comprise complex, intractable and risk-laden tasks, requiring resolute and determined action under extreme conditions. Operational characteristics are highly dynamic and non-linear; minor events, decisions and actions may have serious and irreversible consequences for the entire operation. A central part of managing these challenges is recognizing and accepting complexity as a driver of these critical mission characteristics, and developing a Multi-Domain Operations perspective at the individual, team and organizational levels. Additionally, success in AICA-intensive Operations requires highly capable situation understanding: the ability to perceive and interpret a particular situation so as to provide the situational awareness, context, insight, foresight and task knowledge required for effective command and execution. Finally, the turbulent operational environment in which these units operate stresses the need for organizational agility, ensuring that internal operations match the degree of turmoil in the external environment, a principle known as requisite variety. This requires adaptive and versatile command and execution principles, supported by cross-hierarchical decision-making and agile, high-performance organizational structures.
Arne Norlander
Chapter 16. Risk Management
Abstract
AICA represent a unique opportunity to mitigate cybersecurity risks. This is because autonomous intelligent agents can defend a system against cyberattacks with a time to detect, response time, and scale of action that are “superhuman,” i.e., with time and scale that would be impossible or impractical if performed by humans alone. However, AICA introduce new types of risks that may be greater than the risks the agents are intended to mitigate – in other words, the cure might be more harmful than the disease. Here we show the main types of risks introduced by AICA, their potential consequences, and strategies to mitigate these risks. Types of AICA-specific risks include erroneous predictions or actions, tampering, and coordination failures. The consequences of these risks may be of a functional, safety, security, ethical, moral, or fairness nature. Mitigation strategies may be categorized as human-centric, software-centric, or based on specific approaches such as simulation. Understanding and controlling AICA-specific risks whenever possible will result in agents that are not only more effective but also more trustworthy, since they will be able to perform their cyber-defense functions with minimal risk of causing unintended side effects.
Alexandre K. Ligo, Alexander Kott, Haley Dozier, Igor Linkov
Chapter 17. Policy Issues
Abstract
The application of Artificial Intelligence Cyber-defense Agents (or AICA) poses policy challenges to civil and military doctrine- and policy-makers. These challenges are made more complex by the changing nature of the global information technology eco-system in which AICA tools may be deployed. This eco-system, consisting of billions of Internet of Things (IoT) devices, global 5G (and even next-G and space-based) backbones and cloud architectures, and interdependent infrastructures (e.g., energy, transportation, water, policing, civil governance, etc.), will make it difficult to employ AICA tools whose effects are limited and precise. New civil and military operational doctrine, including “persistent engagement,” an approach under which the United States engages in cyberspace with adversaries outside our borders, and “defend forward,” in which the United States disrupts adversary operations on their networks, means that AICA use must be calibrated to achieve only the desired effects in a world in which miscalculation and escalation are possible. This chapter explores the new, complex information technology eco-system in which AICA will operate, and some of the policy and doctrinal concepts the United States and its allies are using to defend the cyberspace on which we depend.
Samuel Sanders Visner
Chapter 18. AICA Development Challenges
Abstract
In this chapter we explore the development challenges that must be tackled to fulfill the great potential of Autonomous Intelligent Cyberdefense Agents (AICAs). We propose dividing development challenges into two kinds: those associated with the AICA engineering ecosystem and those associated with the AICA research ecosystem. This division is reasonable because adequately addressing the engineering challenges requires tackling a range of research challenges. Moreover, engineering and research have different ways of thinking; in general, engineering focuses on narrower aspects and is often built on technical breakthroughs resulting from fundamental research. The engineering ecosystem has six components: design; implementation; individual test & certification; composition; composite test & certification; and deployment. The research ecosystem also comprises six components: models; architectures; mechanisms; testing and certification; operations; and social, ethical, and legal aspects. To show how the challenges associated with these components are related to each other, we connect the two ecosystems by describing how tackling challenges in the research ecosystem would contribute to tackling the challenges encountered when engineering AICAs. We draw insights into the gaps between state-of-the-art technology and the desired ultimate goals and propose research directions to bridge them. We hope this chapter will serve as a milestone in guiding the development (i.e., engineering and research) activities needed to fulfill the vision of AICAs.
Shouhuai Xu
Chapter 19. Case Study A: A Prototype Autonomous Intelligent Cyber-Defense Agent
Abstract
The AICA International Working Group (IWG) spent 2021 collaboratively developing an initial prototype implementation of the AICA reference architecture, AICAproto21. This prototype was built using open-source software components in a containerized manner to allow for the quickest time-to-completion with maximum flexibility for future capabilities. This prototype was a fully self-contained demonstration of the ability of the agent to respond to an indicated attack with a defensive action, though the scope of scenarios was constrained due to the primary focus on the construction of the framework itself. Future work would include incorporation of computational intelligence (i.e., knowledge representation and automated reasoning components) and additional scenarios. The authors found that the chosen approach did lead to a very easy-to-scale solution that is likely to work in a cross-platform manner. Complicating factors encountered include the difficulty in constructing the framework to operate with various external systems in a generalizable way, and the likely host-system impact of needing to run multiple containers simultaneously to achieve desired functionality, especially when host systems could be low-power “things” such as drones, weapons platforms, et cetera. A critical question to answer as work on AICAproto21 and related experimentation continues is whether the effort required to build a more “ground-up” monolithic application is justified by the potential savings in resource consumption and optimization for the specified purpose.
Benjamin Blakely, William Horsthemke, Nate Evans, Daniel Harkness
Chapter 20. Case Study B: AI Agents for the Tactical Edge
Abstract
This chapter explores military aspects of building and deploying AI Agents for cyber defense. We concentrate on those aspects which are particularly characteristic of AI Agent deployment at the “tactical edge,” by which we mean warfighters directly involved in executing the mission. This choice of emphasis is motivated by two observations. First, the tactical edge is clearly of preeminent importance in warfare, as it is where actual fighting takes place. Second, and not coincidentally, it is where the general environment for an AI cyber defense agent is least similar to its classic, and still typical, habitat – the enterprise-scale network. The tactical edge presents manifold challenges to the deployment of AI agents as conventionally considered. These challenges stem from generally austere conditions, characterized by low availability of computing power, poor to nonexistent connectivity to the cloud and to enterprise-scale resources in general, and porous borders between the cyber domain as conventionally considered and the physical and electronic warfare (EW) domains. These challenges suggest a minimalist approach to AI agents at the tactical edge, emphasizing resilience, graceful degradation, and close cooperation with human warfighters. We elaborate examples of such AI agents derived from our experience applying AI to Blue Force Spectrum Awareness (BFSA).
Pierre Trepagnier, Allan Wollaber
Chapter 21. Case Study C: Sentinels for Cyber Resilience
Abstract
This chapter describes an approach to cyber resilience in which dedicated hardware/software subsystems called sentinels are used to monitor critical system functions for abnormal or unacceptable performance, as defined by the resilience engineer. The detection of abnormal or unacceptable function then provides the trigger for control actions, such as switching to redundant functions or components, that form the basis for recovery of system function. This chapter discusses how sentinels detect attacks, what they do after detection, and how to choose where to put them. The discussion covers several specific engineering patterns for sentinels and resilient models of operation, as well as more general topics including operational and life cycle issues in sentinel-based cyber resilience and the roles of humans vs. autonomy (e.g. manual, semi-automated, automated) in controlling sentinels and reconfiguration actions. The chapter concludes with a case study on a hypothetical weapons system.
Peter A. Beling, Tim Sherburne, Barry Horowitz
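The sentinel pattern described above can be sketched as a monitor that checks a critical metric against an acceptable envelope and triggers a reconfiguration after sustained violations. The metric, thresholds and action name below are illustrative assumptions, not from the chapter.

```python
# Illustrative sentinel sketch: compare a critical metric against an envelope
# defined by the resilience engineer and, on a sustained violation, trigger a
# switch to a redundant component.

from collections import deque


class Sentinel:
    def __init__(self, low: float, high: float, violations_to_trigger: int = 3):
        self.low, self.high = low, high
        self.recent = deque(maxlen=violations_to_trigger)

    def observe(self, value: float) -> str:
        self.recent.append(not (self.low <= value <= self.high))
        if len(self.recent) == self.recent.maxlen and all(self.recent):
            return "switch_to_redundant_unit"     # recovery trigger
        return "nominal"


if __name__ == "__main__":
    # Envelope for, e.g., actuator response time in milliseconds.
    sentinel = Sentinel(low=5.0, high=20.0)
    for reading in [12.0, 14.0, 35.0, 40.0, 38.0]:
        print(reading, sentinel.observe(reading))
    # The third consecutive out-of-envelope reading triggers reconfiguration.
```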
Metadata
Title
Autonomous Intelligent Cyber Defense Agent (AICA)
Edited by
Alexander Kott
Copyright year
2023
Electronic ISBN
978-3-031-29269-9
Print ISBN
978-3-031-29268-2
DOI
https://doi.org/10.1007/978-3-031-29269-9