
About this Book

This book features a wide spectrum of the latest computer science research relating to cyber warfare, including military and policy dimensions. It is the first book to explore the scientific foundation of cyber warfare and features research from the areas of artificial intelligence, game theory, programming languages, graph theory and more. The high-level approach and emphasis on scientific rigor provide insights on ways to improve cyber warfare defense worldwide. Cyber Warfare: Building the Scientific Foundation targets researchers and practitioners working in cyber security, especially government employees or contractors. Advanced-level students in computer science and electrical engineering with an interest in security will also find this content valuable as a secondary textbook or reference.

Table of Contents

Frontmatter

1. Cyber War Games: Strategic Jostling Among Traditional Adversaries

Abstract
Cyber warfare has been simmering for a long time and has gradually morphed into a key strategic weapon in international conflicts. The doctrines of several countries treat cyber warfare capability as essential to gaining strategic superiority or as a counterbalance to military inferiority. Countries are attempting to reach consensus on confidence-building measures in cyber space while racing with each other to acquire cyber weaponry. These attempts are strongly influenced by the difficulty of clearly attributing cyber incidents as well as by political imperatives. Game theory has been used in the past for such problems in international relations, where players compete with each other and their actions are interdependent. Problems in cyber warfare can benefit from similar game-theoretic concepts. In this book chapter we discuss the state of cyber warfare and the key imperatives for countries, and articulate how countries are jostling with each other in the cyber domain, especially given poor attribution and verification. We present game-theoretic models for a few representative problems in the cyber warfare domain.
Sanjay Goel, Yuan Hong
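The chapter's game-theoretic framing can be illustrated with a minimal sketch: a hypothetical 2x2 "restrain vs. attack" game between two countries, with pure Nash equilibria found by brute force. The payoff values here are invented for illustration and are not taken from the chapter's models.

```python
from itertools import product

# Hypothetical 2x2 cyber-conflict game (prisoner's-dilemma-shaped):
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "attack"):   (0, 4),
    ("attack",   "restrain"): (4, 0),
    ("attack",   "attack"):   (1, 1),
}
actions = ["restrain", "attack"]

def pure_nash(payoffs, actions):
    """Return action profiles where neither player gains by deviating."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        row_pay, col_pay = payoffs[(a, b)]
        row_best = all(payoffs[(a2, b)][0] <= row_pay for a2 in actions)
        col_best = all(payoffs[(a, b2)][1] <= col_pay for b2 in actions)
        if row_best and col_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash(payoffs, actions))  # [('attack', 'attack')]
```

With these (assumed) payoffs, mutual attack is the lone equilibrium even though mutual restraint would leave both players better off, which is one way to formalize the arms-race dynamic the abstract describes.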

2. Alternatives to Cyber Warfare: Deterrence and Assurance

Abstract
Deterrence as practiced during the Cold War was largely defined in terms of capabilities to impose punishment in response to an attack; however, with growing concern over the proliferation of cyber technologies, deterrence has evolved to be understood more generally in terms of cost/benefit calculi, viewed not only from a national perspective, but also recognizing the importance of both friendly and adversary perspectives. With this approach, the primary instruments used for deterrence are those which encourage restraint on the part of all affected parties. The use of a multiple-lever approach to deterrence offers a path to an integrated strategy that not only addresses the cost/benefit calculus of the primary attacker, but also provides opportunities to influence the calculus of mercenary cyber armies for hire, patriotic hackers, or other groups. For this multiple-lever approach to be effective, a capability to assess the effects of cyber attacks on operations is needed. Such a capability, based on multi-formalism modeling to model, analyze, and evaluate the effect of cyber exploits on coordination in decision-making organizations, is presented. The focus is on the effect that cyber exploits, such as availability and integrity attacks, have on information sharing and task synchronization. Colored Petri Nets are used to model the decision makers in the organization, and computer network models to represent their interactions. Two new measures of performance are then introduced: information consistency and synchronization. The approach and the computation of the measures of performance are illustrated through a simple example based on a variation of the Pacifica scenario.
Robert J. Elder, Alexander H. Levis, Bahram Yousefi

3. Identifying and Exploiting the Cyber High Ground for Botnets

Abstract
For over 2000 years, military strategists have recognized the importance of capturing and holding the physical “high ground.” As cyber warfare strategy and tactics mature, it is important to explore the counterpart of “high ground” in the cyber domain. To this end, we develop the concept for botnet operations. Botnets have gained a great deal of attention in recent years due to their use in criminal activities. The criminal goal is typically focused on stealing information, hijacking resources, or denying service from legitimate users. In such situations, the scale of the botnet is of key importance. Bigger is better. However, several recent botnets have been designed for industrial or national espionage. These attacks highlight the importance of where the bots are located, not only how many there are. Just as in kinetic warfare, there is a distinct advantage to identifying, controlling, and exploiting an appropriately defined high ground. For targeted denial of confidentiality, integrity, and availability attacks, the cyber high ground can be defined and realized in a physical network topology. An attacker who controls this cyber high ground gains a superior capability to achieve his mission objectives. Our results show that such an attacker may reduce their botnet’s footprint and increase its dwell time by up to 87 % and 155× respectively over a random or ill-informed attacker.
Patrick Sweeney, George Cybenko

4. Attribution, Temptation, and Expectation: A Formal Framework for Defense-by-Deception in Cyberwarfare

Abstract
Defense-by-deception is an effective technique to address the asymmetry challenges in cyberwarfare. It allows not only for misleading attackers toward non-harmful goals but also for systematically depleting attacker resources. In this paper, we develop a game-theoretic framework that considers attribution, temptation, and expectation as the major components for planning a successful deception plan. As a case study, we develop a game strategy to proactively deceive remote fingerprinting attackers without causing significant performance degradation to benign clients. We model and analyze the interaction between a fingerprinter and a target as a signaling game. We derive the Nash equilibrium strategy profiles based on information gain analysis. Based on our game results, we design DeceiveGame, a mechanism to prevent or to significantly slow down fingerprinting attacks. Our performance analysis shows that DeceiveGame can reduce the probability of success of the fingerprinter significantly, without deteriorating the overall performance of other clients. Beyond the DeceiveGame application, our formal framework can be used generally to synthesize correct-by-construction cyber deception plans against other attacks.
Ehab Al-Shaer, Mohammad Ashiqur Rahman

5. Game-Theoretic Foundations for the Strategic Use of Honeypots in Network Security

Abstract
An important element in the mathematical and scientific foundations for security is modeling the strategic use of deception and information manipulation. We argue that game theory provides an important theoretical framework for reasoning about information manipulation in adversarial settings, including deception and randomization strategies. In addition, game theory has practical uses in determining optimal strategies for randomized patrolling and resource allocation. We discuss three game-theoretic models that capture aspects of how honeypots can be used in network security. Honeypots are fake hosts introduced into a network to gather information about attackers and to distract them from real targets. They are a limited resource, so there are important strategic questions about how to deploy them to the greatest effect, which is fundamentally about deceiving attackers into choosing fake targets instead of real ones to attack. We describe several game models that address strategies for deploying honeypots, including a basic honeypot selection game, an extension of this game that allows additional probing actions by the attacker, and finally a version in which attacker strategies are represented using attack graphs. We conclude with a discussion of the strengths and limitations of game theory in the context of network security.
Christopher Kiekintveld, Viliam Lisý, Radek Píbil
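The strategic value of randomized honeypot placement described above can be sketched in a few lines. The host names, values, honeypot budget, and penalty below are invented assumptions, not the chapter's model; the point is only that an attacker who knows the placement *distribution* (but not the realized placement) earns less against a randomized defender than against a fixed one.

```python
from itertools import combinations

# Hypothetical network: host -> value to the attacker (illustrative)
values = {"web": 10, "db": 8, "mail": 5, "dev": 2}
K = 2            # honeypot budget
PENALTY = -5     # attacker payoff for hitting a honeypot

def attacker_value(target, prob_honeypot):
    """Attacker's expected payoff for a target, given honeypot odds."""
    p = prob_honeypot[target]
    return p * PENALTY + (1 - p) * values[target]

def evaluate(distribution):
    """distribution: {frozenset placement: probability}.
    Returns the attacker's best expected payoff against it."""
    prob_hp = {t: sum(p for s, p in distribution.items() if t in s)
               for t in values}
    return max(attacker_value(t, prob_hp) for t in values)

placements = [frozenset(c) for c in combinations(values, K)]
pure = {placements[0]: 1.0}                        # always {web, db}
uniform = {s: 1 / len(placements) for s in placements}
print(evaluate(pure), evaluate(uniform))           # 5.0 vs ~2.5
```

Against the fixed placement the attacker simply takes the best unprotected host (mail, value 5); against uniform randomization every host is a honeypot with probability 1/2, driving the attacker's best expected payoff down to 2.5.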

6. Cyber Counterdeception: How to Detect Denial & Deception (D&D)

Abstract
In this chapter we explore cyber-counterdeception (cyber-CD): what it is, how it works, and how to incorporate counterdeception into cyber defenses. We review existing theories and techniques of counterdeception and relate counterdeception to the concepts of cyber attack kill chains and intrusion campaigns. We adapt theories and techniques of counterdeception to the concepts of cyber defenders’ deception chains and deception campaigns. We describe the utility of conducting cyber wargames and exercises to develop the techniques of cyber-denial & deception (cyber-D&D) and cyber-CD. Our goal is to suggest how cyber defenders can use cyber-CD, in conjunction with defensive cyber-D&D campaigns, to detect and counter cyber attackers.
Kristin E. Heckman, Frank J. Stech

7. Automated Adversary Profiling

Abstract
Cyber warfare is currently an information-poor environment, where knowledge of adversary identity, goals, and resources is critical, yet difficult to come by. Reliably identifying adversaries through direct attribution of cyber activities is not currently a realistic option, but it may be possible to deduce the presence of an adversary within a collection of network observables, and build a profile consistent with those observations. In this paper, we explore the challenges of automatically generating cyber adversary profiles from network observations in the face of highly sophisticated adversaries whose goals, objectives, and perceptions may be very different from ours, and who may be using deception to disguise their activities and intentions.
Samuel N. Hamilton

8. Cyber Attribution: An Argumentation-Based Approach

Abstract
Attributing a cyber-operation through the use of multiple pieces of technical evidence (i.e., malware reverse-engineering and source tracking) and conventional intelligence sources (i.e., human or signals intelligence) is a difficult problem not only due to the effort required to obtain evidence, but the ease with which an adversary can plant false evidence. In this paper, we introduce a formal reasoning system called the InCA (Intelligent Cyber Attribution) framework that is designed to aid an analyst in the attribution of a cyber-operation even when the available information is conflicting and/or uncertain. Our approach combines argumentation-based reasoning, logic programming, and probabilistic models to not only attribute an operation but also explain to the analyst why the system reaches its conclusions.
Paulo Shakarian, Gerardo I. Simari, Geoffrey Moores, Simon Parsons

9. The Human Factor in Cybersecurity: Robust & Intelligent Defense

Abstract
In this chapter, we review the pervasiveness of cyber threats and the roles of both attackers and cyber users (i.e., the targets of the attackers); the lack of user awareness of cyber threats; the complexity of the new cyber environment, including cyber risks; engineering approaches and tools to mitigate cyber threats; and current research to identify proactive steps that users and groups can take to reduce cyber threats. In addition, we review the research needed on the psychology of users as it relates to the risks they face from cyber-attacks. For the latter, we review the available theory at the individual and group levels that may help individual users, groups, and organizations take action against cyber threats. We end with future research needs and conclusions. In our discussion, we first agree that cyber threats are making cyber environments more complex and uncomfortable for average users. Second, we conclude that various factors are important; for example, timely action is often necessary to counter attacks that occur at internet speeds, but also the ‘slow and low’ attacks that are difficult to detect and the threats that trigger only after pre-specified conditions have been satisfied. Third, we conclude that advanced persistent threats (APTs) pose a risk not only to users but also to national security (viz., the persistent threats posed by other nations). Fourth, we contend that using “red” teams to search cyber defenses for vulnerabilities encourages users and organizations to better defend themselves. Fifth, the current state of theory leaves many questions unanswered that researchers must pursue to mitigate or neutralize present and future threats. Lastly, we agree with the literature that cyber space has had a dramatic impact on American life and that the cyber domain is a breeding ground for disorder. However, we also believe that users and researchers can take actions to stay safe and ahead of existing and future threats.
Julie L. Marble, W. F. Lawless, Ranjeev Mittu, Joseph Coyne, Myriam Abramson, Ciara Sibley

10. CyberWar Game: A Paradigm for Understanding New Challenges of Cyber War

Abstract
Cyber-war is a growing form of threat to our society that involves multiple players simultaneously executing offensive and defensive operations. Given that cyber space is hyper-dimensional and dynamic, human decision making must incorporate numerous attributes and must be agile and adaptive. In this chapter, we review how computational models of human cognition can be scaled up from an individual model of a defender operating in a hostile environment, through a pair of models representing a defender and an attacker, to multiple agents in a cyber-war. We then propose to study the decision-making processes that drive the dynamics of cyber-war using a multi-agent model comprising cognitive agents that learn to make decisions according to Instance-Based Learning Theory (IBLT). In this paradigm, the CyberWar game, assets and power are the two key attributes that influence the decisions of agents. Assets represent the key resource that an agent is protecting from attacks, while power represents the technical prowess of an agent’s cyber security. All agents share the same goal of maximizing their assets, and they learn from experience to attack other agents and defend themselves in order to meet this goal. Importantly, they do not learn by using predefined strategies, as many multi-agent models do; instead, they learn from experience according to the situation and the actions of others, as suggested by IBLT. This chapter contributes to current research by proposing a novel paradigm to study behavior in cyber-war, using a well-known cognitive model of decisions from experience to predict likely human behavior in a simulated cyber-war, and demonstrating novel predictions regarding the effects of power and assets, two main contributors to cyber-war.
Noam Ben-Asher, Cleotilde Gonzalez
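The IBLT mechanism the chapter builds on (experience stored as instances, retrieval weighted by memory activation, options compared by "blended" value) can be sketched as follows. This assumes the standard ACT-R-style activation and Boltzmann blending equations; the memory contents, payoffs, and parameter values are illustrative only, not the chapter's calibration.

```python
import math
import random

random.seed(1)
D = 0.5                      # memory decay parameter
SIGMA = 0.25                 # activation noise
TAU = SIGMA * math.sqrt(2)   # Boltzmann temperature
NOW = 5                      # current decision time

# memory: option -> list of (timestamp, observed_payoff) instances
memory = {"attack": [(1, 10), (3, -5)], "defend": [(2, 2), (4, 2)]}

def activation(timestamps):
    """Base-level activation: recent/frequent instances are stronger."""
    base = math.log(sum((NOW - t) ** -D for t in timestamps))
    return base + random.gauss(0, SIGMA)

def blended_value(instances):
    """Payoffs weighted by activation-based retrieval probability."""
    acts = {payoff: activation([t for t, p in instances if p == payoff])
            for payoff in {p for _, p in instances}}
    z = sum(math.exp(a / TAU) for a in acts.values())
    return sum(payoff * math.exp(a / TAU) / z for payoff, a in acts.items())

choice = max(memory, key=lambda opt: blended_value(memory[opt]))
print(choice)
```

An agent facing the safe "defend" option (blended value exactly 2) versus the risky "attack" option (a noisy mixture of 10 and -5) will drift toward whichever payoff its recent experience makes most retrievable, which is how the chapter's agents adapt without predefined strategies.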

11. Active Discovery of Hidden Profiles in Social Networks Using Malware

Abstract
In this study we investigate the problem of diffusion in social networks, an issue relevant in areas such as cyber intelligence. Contrary to related work that focuses on the identification of invisible areas of a social network, our work focuses on finding the most effective nodes for placing seeds in order to reveal hidden nodes in a focused manner. The seeds may consist of malware that propagates through social networks and is capable of revealing hidden nodes. The malware has only limited time to function and must operate in stealth mode so as not to alert the hidden nodes; thus there is a need to identify and utilize the visible nodes that are most effective at spreading the malware to the hidden nodes with minimal effect on the visible nodes. We empirically evaluate the ability of the Weighted Closeness (WC) metric among visible nodes to improve diffusion focus and reach invisible nodes in a social network. Experiments performed with a variety of social network topologies validated the effectiveness of the proposed method.
Rami Puzis, Yuval Elovici
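The seeding intuition can be sketched with plain closeness centrality on a toy graph; the chapter's Weighted Closeness (WC) metric is richer (it weights toward hidden regions), so treat this as a simplified stand-in on an invented topology: a seed placed at the node with the smallest average distance to everyone else spreads fastest.

```python
from collections import deque

# Toy visible subgraph as adjacency lists (illustrative, not from data)
graph = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d"],
    "d": ["b", "c", "e"], "e": ["d"],
}

def closeness(graph, src):
    """(n-1) / sum of BFS hop distances from src to reachable nodes."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

best_seed = max(graph, key=lambda n: closeness(graph, n))
print(best_seed, closeness(graph, best_seed))
```

On this graph the peripheral nodes "a" and "e" score lowest, while the well-connected core ("b", "c", "d") ties at 0.8, so a core node is the natural seed; WC refines this ranking by how well each visible node reaches the *hidden* part of the network.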

12. A Survey of Community Detection Algorithms Based On Analysis-Intent

Abstract
There has been a significant amount of research dedicated to identifying community structures within graphs. Most of these studies have focused on partitioning techniques and the resultant quality of discovered groupings (communities) without regard for the intent of the analysis being conducted (analysis-intent). In many cases, a given network community can be composed of significantly different elements depending upon the context in which a partitioning technique is applied. Moreover, the number of communities within a network will vary greatly depending on the analysis-intent, and thus the detection quality and performance of algorithms will similarly vary. In this survey we review several algorithms from the literature developed to discover community structure within networks. We review these approaches from two analysis perspectives: role/process focused (category-based methods) and topological structure or connection focused (event-based methods). We discuss the strengths and weaknesses of each algorithm and provide suggestions on the algorithms’ use depending on analysis context.
Napoleon C. Paxton, Stephen Russell, Ira S. Moskowitz, Paul Hyden

13. Understanding the Vulnerability Lifecycle for Risk Assessment and Defense Against Sophisticated Cyber Attacks

Abstract
The security of deployed and actively used systems is a moving target, influenced by factors that are not captured in the existing security models and metrics. For example, estimating the number of vulnerabilities in source code does not account for the fact that cyber attackers never exploit some of the discovered vulnerabilities, in the presence of reduced attack surfaces and of technologies that render exploits less likely to succeed. Conversely, some vulnerabilities are exploited stealthily before their public disclosure, in zero-day attacks, and old vulnerabilities continue to impact security in the wild until all vulnerable hosts are patched. As such, we currently do not know how to assess the security of systems in active use. In this chapter, we report on empirical studies of security in the real world, using field data collected on 10+ million real hosts that are targeted by cyber attacks (rather than on honeypots or in small-scale lab settings). Our empirical findings and the novel metrics we evaluate on this field data will enable a more accurate assessment of the risk of cyber attacks, by taking into account the vulnerabilities and attacks that matter most in practice.
Tudor Dumitraş

14. Graph Mining for Cyber Security

Abstract
How does malware propagate? How do software patches propagate? Given a set of malware samples, how to identify all malware variants that exist in a database? Which human behaviors may lead to increased malware attacks? These are challenging problems in their own respect, especially as they depend on having access to extensive, field-gathered data that highlight the current trends. These datasets are increasingly easier to collect, are large in size, and also high in complexity. Hence data mining can play an important role in cyber-security by answering these questions in an empirical data-driven manner. In this chapter, we discuss how related problems in cyber-security can be tackled via techniques from graph mining (specifically mining network propagation) on large field datasets collected on millions of hosts.
B. Aditya Prakash
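The propagation questions posed above can be explored with a toy susceptible-infected (SI) cascade simulated on a small graph, a standard building block in propagation-based graph mining. The topology and infection probability below are invented for illustration, not drawn from the chapter's field data.

```python
import random

random.seed(7)
BETA = 0.4  # per-edge, per-step infection probability (illustrative)

# Toy host contact graph as adjacency lists
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}

def si_spread(graph, seed, steps):
    """Simulate an SI cascade; return infected-count per time step."""
    infected = {seed}
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in graph[u]:
                if v not in infected and random.random() < BETA:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

print(si_spread(graph, seed=0, steps=5))
```

Running many such cascades from different seeds, and comparing against observed infection telemetry, is one empirical way to ask the chapter's questions (how malware or patches propagate, and which topologies amplify them) in a data-driven manner.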

15. Programming Language Theoretic Security in the Real World: A Mirage or the Future?

Abstract
The last decade has seen computer security rise from a niche field to a household term. Previously, executive-level responses to computer security were disbelief and dismissal, while today the responses are questions of budget and risk. Computer security is a complicated issue with many moving parts, and it is difficult to present a coherent view of its issues and problems. We believe that computer security issues have their root in programming languages and language runtime decisions. We argue that computer intrusion, malware, and network security issues all fundamentally arise from tradeoffs made in programming language design and the structure of the benign programs that are exploited. We present a case for addressing fundamental computer security problems at this root, by using advancements in programming language technology. We also present a case against relying on advancements in programming language technology, arguing that even when using the most sophisticated programming language technology available today, attacks are still possible, and that the current state of research is insufficient to guarantee security. We also discuss practical issues relating to the implementation of large-scale reforms in software development based on advancements in programming language technology.
Andrew Ruef, Chris Rohlf