
About this Book

This Festschrift is in honor of Chris Hankin, Professor at Imperial College London, UK, on the occasion of his 65th birthday.

Chris Hankin is a Fellow of the Institute for Security Science and Technology and a Professor of Computing Science.

His research is in cyber security, data analytics and semantics-based program analysis. He leads multidisciplinary projects focused on developing advanced visual analytics and providing better decision support to defend against cyber attacks.

This Festschrift is a collection of scientific contributions related to the topics that have marked the research career of Professor Chris Hankin. The contributions were written to honour Chris's career on the occasion of his retirement.





Cables, Trains and Types

Many concepts of computing science can be illustrated in ways that do not require programming. CS Unplugged is a well-known resource for that purpose. However, the examples in CS Unplugged and elsewhere focus on topics such as algorithmics, cryptography, logic and data representation, to the neglect of topics in programming language foundations, such as semantics and type theory.
This paper begins to redress the balance by illustrating the principles of static type systems in two non-programming scenarios where there are physical constraints on forming connections between components. The first scenario involves serial cables and the ways in which they can be connected. The second example involves model railway layouts and the ways in which they can be constructed from individual pieces of track. In both cases, the physical constraints can be viewed as a type system, such that typable systems satisfy desirable semantic properties.
Simon J. Gay
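The cables scenario can be sketched as a tiny "type system" in code. The encoding below is my own toy illustration, not the paper's formalism (the `End`/`Cable` classes and connector names are invented): each cable end carries a type, and composition is rejected exactly when the physical connection is impossible, the way a type checker rejects an ill-formed program.

```python
# Toy sketch: physical connector compatibility as a typing rule.
from dataclasses import dataclass

@dataclass(frozen=True)
class End:
    connector: str   # e.g. "DB9", "DB25"
    male: bool

@dataclass(frozen=True)
class Cable:
    left: End
    right: End

def pluggable(a: End, b: End) -> bool:
    """Physical constraint = typing rule: same connector, opposite gender."""
    return a.connector == b.connector and a.male != b.male

def compose(c1: Cable, c2: Cable) -> Cable:
    """Join two cables end-to-end, failing like a type error if they don't fit."""
    if not pluggable(c1.right, c2.left):
        raise TypeError(f"cannot connect {c1.right} to {c2.left}")
    return Cable(c1.left, c2.right)

straight = Cable(End("DB9", True), End("DB9", False))
adapter  = Cable(End("DB9", True), End("DB25", False))
chain = compose(straight, adapter)   # a well-typed chain of cables
```

A "typable" chain here trivially satisfies the semantic property that every junction is physically connectable, mirroring the paper's theme that typability implies desirable properties.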

Cathoristic Logic

A Logic for Capturing Inferences Between Atomic Sentences
Cathoristic logic is a multi-modal logic where negation is replaced by a novel operator allowing the expression of incompatible sentences. We present the syntax and semantics of the logic including complete proof rules, and establish a number of results such as compactness, a semantic characterisation of elementary equivalence, the existence of a quadratic-time decision procedure, and Brandom’s incompatibility semantics property. We demonstrate the usefulness of the logic as a language for knowledge representation.
Richard Evans, Martin Berger
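The core incompatibility idea can be illustrated with a toy check. This is my own heavily simplified sketch, not the paper's proof system: the "tantum" operator !A asserts that only actions in A are possible, so !A clashes with any may-assertion about an action outside A.

```python
# Toy sketch of one incompatibility pattern in a cathoristic-style logic.
# Formulas are encoded as tuples: ("tantum", frozenset_of_labels),
# ("may", label, subformula), ("top",). Only the tantum/may clash is modelled.

def incompatible(f, g):
    """!A is incompatible with <b>phi whenever b is not in A."""
    for a, b in ((f, g), (g, f)):
        if a[0] == "tantum" and b[0] == "may" and b[1] not in a[1]:
            return True
    return False

# "The light shows only red" vs "the light may show green":
only_red  = ("tantum", frozenset({"red"}))
may_green = ("may", "green", ("top",))
may_red   = ("may", "red", ("top",))
```

Note the contrast with classical logic: the incompatibility is expressed without any negation operator, which is the point of the paper's design.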

A Type Theory for Probabilistic λ-calculus

We present a theory of types where formulas may contain a choice constructor. This constructor allows for the selection of a particular type among a finite set of options, each corresponding to a given probabilistic term. We show that this theory induces a type assignment system for the probabilistic λ-calculus introduced in earlier work by Chris Hankin, Herbert Wiklicky and the author, where probabilistic terms represent probability distributions on classical terms of the simply typed λ-calculus. We prove the soundness of the type assignment with respect to a probabilistic term reduction and a normalization property of the latter.
Alessandra Di Pierro
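The flavour of typing a distribution over terms can be sketched concretely. The encoding below is my own (term and type representations, and the name `choice_type`, are invented, not the paper's system): a probabilistic term is a distribution over simply typed λ-terms, and its "choice type" collects each branch's type with its weight.

```python
# Hedged sketch: type a distribution over simply typed lambda-terms.
# Terms: ("var", x) | ("lam", x, ty, body) | ("app", f, a)
# Types: "int" | ("->", t1, t2)

def typeof(term, env=None):
    """Standard simply typed lambda-calculus type checking."""
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, ty, body = term
        return ("->", ty, typeof(body, {**env, x: ty}))
    if tag == "app":
        ft, at = typeof(term[1], env), typeof(term[2], env)
        assert ft[0] == "->" and ft[1] == at, "ill-typed application"
        return ft[2]
    raise ValueError(tag)

def choice_type(dist):
    """Type a distribution {term: probability} as a weighted choice of types."""
    out = {}
    for term, p in dist.items():
        ty = typeof(term)
        out[ty] = out.get(ty, 0.0) + p
    return out

ident = ("lam", "x", "int", ("var", "x"))
const = ("lam", "x", "int", ("lam", "y", "int", ("var", "x")))
dist = {ident: 0.5, const: 0.5}
```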

Program Analysis


Galois Connections for Recursive Types

Building a static analyser for a real language involves modelling large domains capturing the many available data types. To scale domain design and support efficient development of project-specific analysers, it is desirable to be able to build, extend, and change abstractions in a systematic and modular fashion. We present a framework for modular design of abstract domains for recursive types and higher-order functions, based on the theory of solving recursive domain equations. We show how to relate computable abstract domains to our framework, and illustrate the potential of the construction by modularizing a monolithic domain for regular tree grammars. A prototype implementation in the dependently typed functional language Agda shows how the theoretical solution can be used in practice to construct static analysers.
Ahmad Salim Al-Sibahi, Thomas Jensen, Rasmus Ejlers Møgelberg, Andrzej Wąsowski
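For readers new to Galois connections, the classic textbook instance (not the paper's recursive-type construction) is the sign domain: abstraction α maps a set of integers to its best sign description, concretization γ maps a sign back to the integers it describes, and the pair satisfies the adjunction α(X) ⊑ s iff X ⊆ γ(s).

```python
# The standard sign-domain Galois connection, as a runnable sketch.
SIGNS = {"bot", "neg", "zero", "pos", "top"}
ORDER = ({("bot", s) for s in SIGNS} | {(s, s) for s in SIGNS}
         | {(s, "top") for s in SIGNS})

def leq(a, b):
    return (a, b) in ORDER

def alpha(xs):
    """Best abstract description of a concrete set of integers."""
    if not xs:
        return "bot"
    kinds = {"neg" if x < 0 else "zero" if x == 0 else "pos" for x in xs}
    return kinds.pop() if len(kinds) == 1 else "top"

def gamma(s, universe):
    """Concretization, restricted to a finite universe for illustration."""
    pred = {"bot": lambda x: False, "neg": lambda x: x < 0,
            "zero": lambda x: x == 0, "pos": lambda x: x > 0,
            "top": lambda x: True}[s]
    return {x for x in universe if pred(x)}

U = set(range(-5, 6))
# Adjunction law, checked pointwise: leq(alpha(X), s) == (X <= gamma(s, U))
```

The paper's contribution is to build such connections systematically for recursive types; this flat example only shows the shape of the interface being composed.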

Incremental Abstract Interpretation

Non-incremental static analysis by abstract interpretation has to be rerun every time the code to be analyzed changes. For large code bases, this incurs a significant overhead, in particular if the individual changes to the code are small. In order to accelerate the analysis on changing code bases, incremental static analysis reuses analysis results computed for earlier versions of the source code where possible. We show that this behavior can seamlessly be achieved for the analysis of C programs if a local generic solver such as the top-down solver is used as the fixed-point engine. This solver maintains a set of stable unknowns for which fixed-point iteration has already stabilized, and it recursively destabilizes dependent unknowns on change. We indicate how this machinery can be applied to selectively invalidate results for those unknowns that may be directly or indirectly affected by program changes. We explain the technical difficulties faced when realizing this basic idea within an analysis infrastructure such as Goblint, and report the results of a preliminary experimental evaluation of the impact of incrementalization on analysis performance.
Helmut Seidl, Julian Erhard, Ralf Vogler
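The destabilization mechanism can be sketched as a toy solver. This is my own simplification (it handles only acyclic dependency graphs and omits real fixed-point iteration and widening), not Goblint's actual top-down solver: evaluating an unknown records which unknowns it read, and changing an equation recursively destabilizes everything that transitively depended on it, so only the affected part is recomputed.

```python
# Toy dependency-tracking solver with recursive destabilization.
class Solver:
    def __init__(self, rhs):
        self.rhs = rhs          # unknown -> function(get) computing its value
        self.value = {}
        self.stable = set()     # unknowns whose value is up to date
        self.infl = {}          # unknown -> set of unknowns that read it

    def get(self, u, reader=None):
        if reader is not None:
            self.infl.setdefault(u, set()).add(reader)
        if u not in self.stable:
            self.stable.add(u)  # fine for acyclic systems
            self.value[u] = self.rhs[u](lambda v: self.get(v, reader=u))
        return self.value[u]

    def destabilize(self, u):
        for v in self.infl.pop(u, set()):
            self.stable.discard(v)
            self.destabilize(v)

    def change(self, u, new_rhs):
        """Program change: replace u's equation, invalidate its dependents."""
        self.rhs[u] = new_rhs
        self.stable.discard(u)
        self.destabilize(u)

s = Solver({"a": lambda get: 1,
            "b": lambda get: get("a") + 1,
            "c": lambda get: 5})
first = s.get("b")               # computes a, records that b depends on a
s.change("a", lambda get: 10)    # destabilizes a and, transitively, b
second = s.get("b")              # recomputed; c was never destabilized
```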

Correctly Slicing Extended Finite State Machines

We consider slicing extended finite state machines. Extended finite state machines (EFSMs) combine a finite state machine with a store and can model a range of computational phenomena, from high-level software to cyber-physical systems. EFSMs are essentially interactive, possibly non-terminating or with multiple exit states and may be nondeterministic, so standard techniques for slicing, developed for control flow graphs of programs with a functional semantics, are not immediately applicable.
This paper addresses the various aspects of correctness for slicing of EFSMs, and provides syntactic criteria that we prove are sufficient for our proposed notions of semantic correctness. The syntactic criteria are based on the “weak commitment” and “strong commitment” properties highlighted by Danicic et al. We provide polynomial-time algorithms to compute the least sets satisfying each of these two properties. We have conducted experiments using widely-studied benchmark and industrial EFSMs that compare our slicing algorithms with those using existing definitions of control dependence. We found that our algorithms produce the smallest average slice sizes: 21% of the original EFSM when “weak commitment” is sufficient and 58% when “strong commitment” is needed (to preserve termination properties).
Torben Amtoft, Kelly Androutsopoulos, David Clark
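To make the object of study concrete, an EFSM can be encoded in a few lines. This encoding is my own illustration, not the paper's formalism (and it fires transitions deterministically, whereas the paper's EFSMs may be nondeterministic): a transition pairs a control-state change with a guard and an update over a shared store, which is exactly the interplay that makes deciding what a slice must keep non-trivial.

```python
# Toy EFSM: transitions are (src_state, guard, update, dst_state)
# over a mutable store (a dict of variables).

def step(transitions, state, store):
    """Fire the first enabled transition (a deterministic simplification)."""
    for src, guard, update, dst in transitions:
        if src == state and guard(store):
            update(store)
            return dst
    return state  # no enabled transition: stay put

# A counter machine: count up to a limit, then reset.
efsm = [
    ("counting", lambda s: s["n"] < s["limit"],
                 lambda s: s.update(n=s["n"] + 1), "counting"),
    ("counting", lambda s: s["n"] >= s["limit"],
                 lambda s: s.update(n=0), "reset"),
    ("reset",    lambda s: True, lambda s: None, "counting"),
]

store = {"n": 0, "limit": 2}
state = "counting"
trace = []
for _ in range(5):
    state = step(efsm, state, store)
    trace.append((state, store["n"]))
```

A slice with respect to, say, the variable n must keep every transition whose guard or update can affect n, which here is all of them; the paper's commitment-based criteria make that kind of judgement precise.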



Secure Guarded Commands

We develop a lightweight approach to information flow control that interacts with the use of cryptographic schemes. The language is a version of Dijkstra’s Guarded Commands language extended with parallelism, communication and symmetric cryptography. Information flow is modelled using security labels that are sets of hashed symmetric keys expressing the capabilities needed for access to data. In essence, encryption is used to encapsulate the protection offered by the information flow policy. We develop a type system aimed at tracking explicit, implicit, bypassing and correlation flows arising due to the parallel processes and the internal non-determinism inherent in Guarded Commands. The development is facilitated by the parallel processes having disjoint memories and is illustrated on a multiplexer scenario previously addressed using content-dependent information flow policies.
Flemming Nielson, Hanne Riis Nielson
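The labels-as-key-sets idea can be sketched directly. The code below is my own simplification of the setup described above (function names and the flow rule are invented): a label is the set of hashed keys whose possession is needed to access the data, combining data joins labels, encryption adds a capability, and decryption with the matching key discharges it.

```python
# Sketch: security labels as sets of hashed symmetric keys.
import hashlib

def h(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

def join(l1: frozenset, l2: frozenset) -> frozenset:
    """An expression combining two pieces of data needs both capability sets."""
    return l1 | l2

def encrypt_label(label: frozenset, key: bytes) -> frozenset:
    """Encrypting under key k adds the capability hash(k) to the label."""
    return label | {h(key)}

def decrypt_label(label: frozenset, key: bytes) -> frozenset:
    """Decrypting with the matching key discharges that capability."""
    assert h(key) in label, "key does not match any required capability"
    return label - {h(key)}

def flow_ok(label: frozenset, held_keys) -> bool:
    """Data may flow to an observer exactly when the observer's keys
    cover every capability in the label."""
    return label <= {h(k) for k in held_keys}
```

This captures only explicit flows; the type system in the paper additionally tracks implicit, bypassing and correlation flows, which a set-inclusion check alone cannot express.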

Modelling the Impact of Threat Intelligence on Advanced Persistent Threat Using Games

System administrator time is not dedicated solely to cyber security tasks. With a wide variety of other activities to undertake, monitoring and responding to cyber security incidents is not always possible. Advanced persistent threats to critical systems make this even harder to manage.
The model presented in this paper uses the Lockheed Martin Cyber Kill Chain as a method of representing advanced persistent threats to a system. The model identifies the advantage that threat intelligence gathered over multiple attacks provides in better defending a system.
Presented as a game between a persistent attacker and a dedicated defender, findings are established by utilising simulations of repeated attacks. Experimental methods are used to identify the impact that threat intelligence has on the capability for the defender to reduce the likelihood of harm to the system.
Andrew Fielder
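A back-of-envelope version of this setting can be written down directly (all numbers below are invented for illustration and are not from the paper): an attack must traverse every stage of the Lockheed Martin Cyber Kill Chain, and threat intelligence from earlier attacks raises the per-stage detection probability, lowering the chance that the attacker completes the whole chain.

```python
# Sketch: probability an attacker traverses the full kill chain undetected.
KILL_CHAIN = ["reconnaissance", "weaponization", "delivery", "exploitation",
              "installation", "command-and-control", "actions-on-objectives"]

def p_compromise(base_detect: float, intel_bonus: float = 0.0) -> float:
    """Probability the attacker passes all stages without being detected,
    assuming independent detection at each stage (a strong simplification)."""
    p = 1.0
    for _stage in KILL_CHAIN:
        p *= 1.0 - min(1.0, base_detect + intel_bonus)
    return p

no_intel   = p_compromise(0.10)        # defender without threat intelligence
with_intel = p_compromise(0.10, 0.10)  # intelligence raises per-stage detection
```

Even a modest per-stage improvement compounds across the seven stages, which is the qualitative effect the paper's simulations quantify under a proper game-theoretic model.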

Security Metrics at Work on the Things in IoT Systems

The Internet of Things (IoT) is deeply changing our society. Daily we use smart devices that automatically collect, aggregate and exchange data about our lives. These data are often pivotal, e.g. when they are used to train learning algorithms, to control cyber-physical systems, or to guide administrators in taking crucial decisions. As a consequence, security attacks on devices can cause severe damage to IoT systems that provide essential services, such as delivering power, water and transport. The difficulty of preventing intrusions or attacks is magnified by the large number of devices and components that IoT systems are composed of. Therefore, it is crucial to identify the most critical components in a network of devices and to understand their level of vulnerability, so as to detect where it is best to intervene to improve security. In this paper, we start from the modelling language IoT-LySa and from the results of a Control Flow Analysis that statically predicts the manipulation of data and their possible trajectories. On this basis, we aim to derive graphs of how data move and of their dependencies. These graphs can be analysed by exploiting security metrics, among them those introduced by Barrere, Hankin et al., offering system administrators different estimates of the security level of their systems.
Chiara Bodei, Pierpaolo Degano, Gian-Luigi Ferrari, Letterio Galletta
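One way to picture the approach is to model the predicted data trajectories as a directed graph and score components by how many source-to-sink trajectories pass through them. The graph, names, and metric below are my own crude stand-in for illustration, not the metrics cited in the paper.

```python
# Sketch: rank IoT components by the data trajectories flowing through them.
from collections import defaultdict

def all_paths(graph, src, dst, path=None):
    """Enumerate simple paths from src to dst (cycles ignored for this sketch)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from all_paths(graph, nxt, dst, path)

def criticality(graph, sources, sinks):
    """Count, for each interior node, the trajectories passing through it."""
    score = defaultdict(int)
    for s in sources:
        for t in sinks:
            for p in all_paths(graph, s, t):
                for node in p[1:-1]:
                    score[node] += 1
    return dict(score)

iot = {"sensorA": ["gateway"], "sensorB": ["gateway"],
       "gateway": ["aggregator"], "aggregator": ["cloud"]}
scores = criticality(iot, ["sensorA", "sensorB"], ["cloud"])
```

A node like the gateway, which every trajectory traverses, scores highest and is where an administrator would intervene first; real security metrics refine this with vulnerability and attack-cost information.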

New Program Abstractions for Privacy

Static program analysis, once seen primarily as a tool for optimising programs, is now increasingly important as a means to provide quality guarantees about programs. One measure of quality is the extent to which programs respect the privacy of user data. Differential privacy is a rigorous, quantified definition of privacy which guarantees a bound on the loss of privacy due to the release of statistical queries. Among the benefits enjoyed by the definition of differential privacy are compositionality properties that allow differentially private analyses to be built from pieces and combined in various ways. This has led to the development of frameworks for the construction of differentially private program analyses which are private-by-construction. Past frameworks assume that the sensitive data is collected centrally and processed by a trusted curator. However, the main examples of differential privacy applied in practice (for example, Google Chrome’s collection of browsing statistics, or Apple’s training of predictive messaging in iOS 10) use a purely local mechanism applied at the data source, thus avoiding the collection of sensitive data altogether. While this is a benefit of the local approach, with systems like Apple’s, users are required to completely trust that the analysis running on their system has the claimed privacy properties.
In this position paper we outline some key challenges in developing static analyses for analysing differential privacy, and propose novel abstractions for describing the behaviour of probabilistic programs not previously used in static analyses.
Sebastian Hunt, David Sands
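The local mechanism mentioned above can be illustrated with classic randomized response, the idea underlying systems such as Google's RAPPOR (the parameters and helper names below are my own choices): each user randomizes their own bit before it leaves the device, so the curator never sees raw data yet can still estimate the population frequency.

```python
# Sketch: randomized response, a local differential-privacy mechanism.
import random

def randomized_response(truth: bool, rng) -> bool:
    """With probability 1/2 answer honestly, otherwise answer a fair coin
    flip. Reported 'yes' has probability 0.75 if true, 0.25 if false,
    giving epsilon = ln(3) differential privacy for the single bit."""
    return truth if rng.random() < 0.5 else rng.random() < 0.5

def estimate_frequency(responses) -> float:
    """Invert the noise: E[yes-rate] = 0.25 + 0.5 * true_frequency."""
    yes = sum(responses) / len(responses)
    return (yes - 0.25) / 0.5

rng = random.Random(0)
true_bits = [i < 300 for i in range(1000)]   # true frequency: 0.3
noisy = [randomized_response(b, rng) for b in true_bits]
est = estimate_frequency(noisy)              # close to 0.3, never exact
```

Verifying statically that a program like this actually has the claimed privacy property, rather than trusting the vendor, is precisely the challenge the position paper targets.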

Optimizing Investments in Cyber Hygiene for Protecting Healthcare Users

Cyber hygiene measures are often recommended for strengthening an organization’s security posture, especially for protecting against social engineering attacks that target the human element. However, the related recommendations are typically the same for all organizations and their employees, regardless of the nature and the level of risk for different groups of users. Building upon an existing cybersecurity investment model, this paper presents a tool for the optimal selection of cyber hygiene safeguards, which we refer to as the Optimal Safeguards Tool (OST). The model combines game theory and combinatorial optimization (0-1 Knapsack), taking into account the probability of each user group being attacked, the value of the assets accessible by each group, and the efficacy of each control for a particular group. The model also accounts for indirect costs, such as the time employees require for learning and training when a control is implemented. Utilizing a game-theoretic framework to support the Knapsack optimization problem permits us to optimally select safeguard application levels, minimizing the aggregated expected damage within a security investment budget.
We evaluate OST in a healthcare use case; in particular, on Center for Internet Security (CIS) Control 17, which concerns security awareness and training programs for employees belonging to the ICT, clinical and administration personnel of a hospital. We compare the strategies implemented by OST against alternative common-sense defending approaches for three different types of attackers: Nash, Weighted and Opportunistic. Our results show that Nash defending strategies are consistently better than the competing strategies for all attacker types, with a minor exception where the Nash defending strategy for a specific game performs at least as well as the other common-sense approaches. Finally, we illustrate the alternative investment strategies on different Nash equilibria (called plans) and discuss the optimal choice using the framework of 0-1 Knapsack optimization.
Sakshyam Panda, Emmanouil Panaousis, George Loukas, Christos Laoudias
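The 0-1 Knapsack step at the heart of such a tool is easy to sketch. The safeguards, costs, and benefit figures below are invented for illustration (they are not the paper's data): given each safeguard's cost and expected-damage reduction, choose the subset maximizing total benefit within the budget.

```python
# Sketch: 0-1 Knapsack over safeguards, by dynamic programming on spend.
def knapsack(items, budget):
    """items: list of (name, cost, benefit); returns (best_benefit, chosen)."""
    best = {0: (0.0, [])}                 # spent -> (total benefit, names)
    for name, cost, benefit in items:
        for spent, (b, names) in list(best.items()):  # snapshot: 0-1 use
            new_spent = spent + cost
            if new_spent <= budget and (new_spent not in best
                                        or best[new_spent][0] < b + benefit):
                best[new_spent] = (b + benefit, names + [name])
    return max(best.values())

# Hypothetical safeguards: (name, cost, expected-damage reduction).
safeguards = [("awareness-training", 4, 7.0),
              ("phishing-drills",    3, 5.0),
              ("mfa-rollout",        5, 9.0)]
value, chosen = knapsack(safeguards, budget=8)
```

In OST the benefit figures would themselves come from the game-theoretic analysis of each user group, rather than being fixed constants as here.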

