
About this book

This book constitutes the refereed proceedings of the workshops which complemented the 23rd Symposium on Formal Methods, FM 2019, held in Porto, Portugal, in October 2019. This volume presents the papers that have been accepted for the following workshops: Third Workshop on Practical Formal Verification for Software Dependability, AFFORD 2019; 8th International Symposium From Data to Models and Back, DataMod 2019; First Formal Methods for Autonomous Systems Workshop, FMAS 2019; First Workshop on Formal Methods for Blockchains, FMBC 2019; 8th International Workshop on Formal Methods for Interactive Systems, FMIS 2019; First History of Formal Methods Workshop, HFM 2019; 8th International Workshop on Numerical and Symbolic Abstract Domains, NSAD 2019; 9th International Workshop on Open Community Approaches to Education, Research and Technology, OpenCERT 2019; 17th Overture Workshop, Overture 2019; 19th Refinement Workshop, Refine 2019; First International Workshop on Reversibility in Programming, Languages, and Automata, RPLA 2019; 10th International Workshop on Static Analysis and Systems Biology, SASB 2019; and the 10th Workshop on Tools for Automatic Program Analysis, TAPAS 2019.

Table of Contents

Frontmatter

HFM 2019 - History of Formal Methods Workshop

Flow Diagrams, Assertions, and Formal Methods

This paper examines the early history of the flow diagram notation developed by Herman Goldstine and John von Neumann in the mid-1940s. It emphasizes the motivation for the notation’s mathematical aspects and the provision made for formally checking the consistency of diagrams. Goldstine and von Neumann’s introduction of assertion boxes is considered in detail. The practical use of flow diagrams is considered briefly, and the paper then reads Turing’s 1949 essay on “Checking a large routine” in the light of his likely knowledge of the Goldstine/von Neumann notation. In particular, his different use of the term “assertion” is considered, and related to the earlier work.

Mark Priestley

The School of Squiggol

A History of the Bird–Meertens Formalism

The Bird–Meertens Formalism, colloquially known as “Squiggol”, is a calculus for program transformation by equational reasoning in a functional style, developed by Richard Bird, Lambert Meertens, and other members of IFIP Working Group 2.1 over about two decades from the mid-1970s. One particular characteristic of the development of the Formalism is its fluctuating emphasis on novel ‘squiggly’ notation: sometimes favouring notational exploration in the quest for conciseness and precision, and sometimes reverting to simpler and more rigid notational conventions in the interests of accessibility. This paper explores that historical ebb and flow.

Jeremy Gibbons

Reasoning About Shared-Variable Concurrency: Interactions Between Research Threads

Most research on concurrency either adopts communication-based approaches or accepts the shared-variable model. This paper addresses the latter approach and traces the research from Hoare’s axiomatic approach, through Sue Owicki’s work, up to separation logics and rely/guarantee methods. Researchers in these last two approaches have been engaged in a friendly rivalry and cooperation. The focus is on the insights that have arisen rather than on technical details.

Cliff B. Jones

Specification with Class: A Brief History of Object-Z

The end of the 1980s saw a growing interest in object orientation as both a design and programming methodology with the advent of programming languages like C++ and Eiffel. The trend was taken up by some in the formal methods community, including a team of researchers in Australia. Their contribution was a formal specification language, Object-Z, which had immediate industrial impact, gained rapid international recognition, and then two decades later began to fade, along with some of its contemporaries, from the formal methods scene. This paper details the rise and fall of Object-Z from the perspective of two of its original developers.

Graeme Smith, David J. Duke

Formal Specifications and Software Testing, a Fruitful Convergence

This paper gives some account of the evolution of ideas and the main advances in the domain of software testing based on formal specifications, and reports some personal anecdotes from my activity in this field. Going back to the seventies, and at the risk of slight caricature, software testing was perceived, on the one hand, by its practitioners as an empirical activity that had nothing to gain from formal methods, and on the other hand, by the advocates of those methods as doomed to disappear, based on the belief that in the long run programs would be correct by construction. These two communities have not yet reached a complete consensus. Fortunately, however, there have been significant moves from both sides, and various success stories, that justify speaking of a fruitful convergence toward testing methods based on formal specifications.

Marie-Claude Gaudel

From Manuscripts to Programming Languages: An Archivist Perspective

This paper presents the archival treatment of documentation produced by IFIP Working Group 2.1, Algorithmic Languages and Calculi, as kept by former chairman Willem van der Poel (1962-69) and Chris Cheney. It highlights the importance of archival treatment based on standards such as the International Standard for Archival Description, ISAD(G), and the International Standard Archival Authority Record for Corporate Bodies, Persons and Families, ISAAR(CPF), both issued by the International Council on Archives, as well as the Portuguese Guidelines for Archival Description (ODA) and the Encoded Archival Description (EAD). The archived collection enables dissemination of and effective access to the information, supporting research and in-depth knowledge of computer history, specifically of programming languages and formal methods. The paper also addresses the issues of long-term preservation of archival records produced today in their various formats, and the importance of contributing to preserving collective memory and enriching knowledge about human society.

Alexandra Vidal, Ana Sandra Meneses, António Sousa

What Have Formal Methods Ever Done for Us? An Audience Discussion

The History of Formal Methods 2019 workshop ended with a discussion reflecting on the discipline of formal methods. An initial prompting question, “What have formal methods ever done for us?”, was presented, but the discussion evolved from there into consideration of applicability, education, and even Star Trek. The session was chaired and curated by Troy Astarte, who has also prepared this summary of the discussion. It is not a perfect transcription but rather a report on the interesting and stimulating conversation that resulted.

Troy Kaighin Astarte

NSAD 2019 - 8th Workshop on Numerical and Symbolic Abstract Domains

Frontmatter

Combination of Boxes and Polyhedra Abstractions for Constraint Solving

This paper investigates the use of abstract domains from Abstract Interpretation (AI) in the field of Constraint Programming (CP). CP solvers are generally very efficient on a specific constraint language, but can hardly be extended to tackle more general languages, both in terms of variable representation (discrete or continuous) and constraint type (global, arithmetic, etc.). For instance, linear constraints are usually solved with linear programming techniques, but non-linear ones have to be either linearized, reformulated, or sent to an external solver. We approach this problem by adapting to CP a popular domain construction used in AI to combine the power of several analyses: the reduced product. We apply this product to two well-known abstract domains, Boxes and Polyhedra, which we lift to constraint solving. Finally, we define general metrics for the quality of solver results and present a benchmark accordingly. Experiments show promising results and good performance.

Ghiles Ziat, Alexandre Maréchal, Marie Pelleau, Antoine Miné, Charlotte Truchet
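
As a rough illustration of the reduction idea (a minimal sketch in ordinary Python, not the authors' solver; the variable names and single-constraint setting are invented for the example), polyhedral information such as a linear constraint can be propagated back into the box domain by tightening each variable's interval:

```python
# Sketch of a box/polyhedra-style reduction: tighten the interval of each
# variable against a linear constraint a1*x1 + ... + an*xn <= b, as a
# reduced product would propagate polyhedral facts back into the boxes.

def tighten_box(box, coeffs, bound):
    """box: dict var -> (lo, hi); constraint: sum(coeffs[v]*v) <= bound."""
    new_box = dict(box)
    for v, a in coeffs.items():
        if a == 0:
            continue
        # minimal possible contribution of the other variables
        rest = sum(min(c * lo, c * hi)
                   for w, c in coeffs.items() if w != v
                   for lo, hi in [box[w]])
        lo, hi = new_box[v]
        if a > 0:
            hi = min(hi, (bound - rest) / a)   # x_v <= (b - rest)/a
        else:
            lo = max(lo, (bound - rest) / a)   # dividing by a < 0 flips it
        if lo > hi:
            return None                        # empty box: unsatisfiable
        new_box[v] = (lo, hi)
    return new_box

# x in [0, 10], y in [0, 10], constraint x + 2y <= 6
print(tighten_box({'x': (0, 10), 'y': (0, 10)}, {'x': 1, 'y': 2}, 6))
# -> {'x': (0, 6.0), 'y': (0, 3.0)}
```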

An Abstract Domain for Objects in Dynamic Programming Languages

Dynamic languages, such as JavaScript, PHP, Python or Ruby, provide a memory model for object data structures that allows programmers to dynamically create, manipulate, and delete objects’ properties. Moreover, in dynamic languages it is possible to access and update properties by using strings: this represents a hard challenge for static analysis. In this paper, we exploit the finite state automata abstract domain, which approximates strings, to define a novel abstract domain for objects. We design an abstract interpreter for analyzing objects in a toy language, inspired by real-world dynamic programming languages. We then show, by means of minimal yet expressive examples, the precision of the proposed abstract domain.

Vincenzo Arceri, Michele Pasqua, Isabella Mastroeni
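
To give a flavour of the construction (an illustrative sketch only: a finite set of possible property names stands in for the paper's finite state automata, and abstract values are just sets of type names):

```python
# Abstract objects map an abstraction of property-name strings to abstract
# values. Writes through an uncertain name are weak updates; reads join the
# abstract values of every property the name may denote.

def write_prop(obj, names, value_types):
    """Weak update: the written property is some name in `names`."""
    for n in names:
        obj[n] = obj.get(n, set()) | value_types
    return obj

def read_prop(obj, names):
    """Join over all properties the abstract name may denote."""
    result = set()
    for n in names:
        result |= obj.get(n, {'undefined'})
    return result

o = {}
o = write_prop(o, {'foo'}, {'int'})           # o.foo = 42
o = write_prop(o, {'foo', 'bar'}, {'str'})    # o[s] = "x", s in {"foo","bar"}
print(read_prop(o, {'foo'}))   # {'int', 'str'}
print(read_prop(o, {'baz'}))   # {'undefined'}
```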

OpenCERT 2019 - 9th International Workshop on Open Community Approaches to Education, Research and Technology

Frontmatter

A Survey of Learning Methods in Open Source Software

Open source software (OSS) is usually developed by heterogeneous groups of people, each with their own interests, motivations and abilities. It is therefore important to establish the best software development and contributing practices early in the lifetime of the project. Such practices should foster the contributors’ involvement in the OSS project as quickly as possible. The sustainability of an OSS project rests heavily on the underlying community of contributors and on the knowledge and skills they bring to the project and acquire and develop through their participation in it and their interaction with the project community. Identifying and investigating contributors’ learning processes is therefore an important research area in OSS. This survey paper presents an overview of open source learning methods in order to explore how community interaction impacts the development and application of OSS learning processes in other areas, especially in education. It is argued that collaboration with peers and consistent code contributions result in learning progress in OSS. Typical research in this area is based on case-by-case analysis, whereas this survey tries to highlight and combine the outcomes of several research contributions from the literature.

Aidarbek Suleimenov, Assiya Khuzyakhmetova, Antonio Cerone

A Calculus of Chaos in Stochastic Compilation

Engineering in the Cause of Mathematics

An unexpected outcome of an open project to develop a ‘chaotic’ compiler for ANSI C is described here: a trace information entropy calculus for stochastically compiled programs. A stochastic compiler produces randomly different object code every time it is applied to the same source code. This calculus quantifies the entropy introduced into run-time program traces by a compiler that aims for the maximal possible entropy, furnishing a definition and proof of security for encrypted computing (Turing-complete computation in which data remains in encrypted form throughout), where the status was formerly unknown.

Peter T. Breuer, Simon J. Pickin

Runtime Verification of Linux Kernel Security Module

The Linux kernel is one of the most important Free/Libre Open Source Software (FLOSS) projects. It is installed on billions of devices all over the world, which process various sensitive, confidential or simply private data. It is crucial to establish and prove its security properties. This work-in-progress paper presents a method to verify the Linux kernel for conformance with an abstract security policy model written in the Event-B specification language. The method is based on system call tracing and aims at checking that the results of system call execution do not lead to accesses that violate security policy requirements. As a basis, we use an additional Event-B specification of the Linux system call interface that is formally proved to satisfy all requirements of the security policy model. To perform the conformance checks, we use it to reproduce intercepted system calls and verify accesses.

Denis Efremov, Ilya Shchepetkov
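
A hypothetical sketch of the trace-checking idea (the labels, calls, and two-level policy below are invented for illustration; the paper itself uses an Event-B security policy model and the Linux system call interface):

```python
# Replay intercepted system calls against an abstract policy state and flag
# results that the security policy model would forbid. Here the policy is a
# toy two-level no-read-up rule.

POLICY_LEVELS = {'public': 0, 'secret': 1}

def check_trace(trace, subject_level, object_levels):
    """trace: list of (syscall, path, result). Flag reads above clearance."""
    violations = []
    for syscall, path, result in trace:
        if syscall == 'open_read' and result == 'success':
            if object_levels[path] > subject_level:
                violations.append((syscall, path))  # read-up: forbidden
    return violations

trace = [('open_read', '/etc/motd', 'success'),
         ('open_read', '/secret/key', 'success')]
levels = {'/etc/motd': POLICY_LEVELS['public'],
          '/secret/key': POLICY_LEVELS['secret']}
print(check_trace(trace, POLICY_LEVELS['public'], levels))
# -> [('open_read', '/secret/key')]
```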

Open and Interactive Learning Resources for Algorithmic Problem Solving

Algorithmic problem solving is a way of approaching and solving problems by using the advances that have been made in the principles of correct-by-construction algorithm design. The approach has been taught at first-year undergraduate level since September 2003 and, since then, a substantial amount of learning material has been developed. However, the existing materials are distributed in a conventional and static way (e.g. as a textbook and as several documents in PDF format available online), not leveraging the capabilities provided by modern collaborative and open-source platforms. In this paper, we propose the creation of an online, open-source repository of interactive learning materials on algorithmic problem solving. We show how the existing framework Mathigon can be used to support such a repository. By being open and hosted on a platform such as GitHub, the repository enables collaboration, and anyone can create and submit new material. Furthermore, by making the material interactive, we hope to encourage engagement with, and a better understanding of, the materials.

João F. Ferreira, Alexandra Mendes

Challenges Faced by Students in an Open Source Software Undergraduate Course

Open Source Software (OSS) development is gaining popularity from year to year; however, entering the OSS community remains a challenging task. In this work, we describe the challenges faced by a beginner OSS code developer during their first contribution. Additionally, we analyze our experience and offer hints for potential newcomers. The whole work was done as a project of the Open Source Software undergraduate course at the Computer Department of Nazarbayev University.

Dias Issa

Open Source Software as a Learning Tool for Computer Science Students

In this paper, the authors’ experience of contributing to Open Source Software (OSS) is described. Contributions were made as part of the OSS course taken at Nazarbayev University during the Spring 2019 term. Two junior bachelor’s degree students describe their experience, their motivations to contribute to OSS, the selected projects, the course structure, and the activities they performed. Assessments of this experience by other community members and the course instructor are also reported in this publication. This paper also studies how the course structure can affect people’s ability to make contributions in general.

Assiya Khuzyakhmetova, Aidarbek Suleimenov

Overture 2019 - 17th Overture Workshop

Frontmatter

Exploring Human Behaviour in Cyber-Physical Systems with Multi-modelling and Co-simulation

Definitions of cyber-physical systems often include humans within the system boundary; however, design techniques often focus on the technical aspects and ignore this important part of the system. Multi-modelling and co-simulation offer a way to bring together models from different disciplines to better capture the behaviours of the overall system. In this paper we present some initial results of incorporating ergonomic models of human behaviours within a cyber-physical multi-model. We present three case studies, from the autonomous aircraft and railway sectors, including initial experiments, and discuss future directions.

Ken Pierce, Carl Gamble, David Golightly, Roberto Palacín

Migrating the INTO-CPS Application to the Cloud

The INTO-CPS Application is a common interface used to access and manipulate the different model-based artefacts produced by the INTO-CPS tool chain during the development of a cyber-physical system. The application was developed during the INTO-CPS project. It uses web technologies on top of the Electron platform, and it requires local installation and configuration on each user’s machine. In this paper, we present a cloud-based version of the INTO-CPS Application, developed while researching the potential of cloud technologies to support the INTO-CPS tool chain environment. The proposed application has the advantage that no configuration or installation on a local machine is needed. It makes full use of cloud resource management, and its architecture allows for a local-machine version, keeping the current local approach open as an option.

Hugo Daniel Macedo, Mikkel Bayard Rasmussen, Casper Thule, Peter Gorm Larsen

Towards a Static Check of FMUs in VDM-SL

In order to ensure that the co-simulation of Cyber-Physical Systems (CPSs) is possible with as wide a variety of tools as possible, a standard called the Functional Mockup Interface (FMI) has been defined. The FMI provides the means to compute the overall behaviour of a coupled system by the coordination and communication of simulators, each responsible for a part of the system. The contribution presented in this paper is an initial formal model of the FMI standard using the VDM Specification Language. Early results suggest that the FMI standard defines a number of FMU static constraints that are not enforced by many of the tools that are able to export such FMUs.

Nick Battle, Casper Thule, Cláudio Gomes, Hugo Daniel Macedo, Peter Gorm Larsen

ViennaDoc: An Animatable and Testable Specification Documentation Tool

An obstacle to applying formal specification techniques in industrial projects is that stakeholders with little engineering background may have difficulty comprehending the specification. Forming a common understanding of a specification is essential in software development because a specification is consulted by many kinds of stakeholders, including those who do not necessarily have an engineering background. This paper introduces ViennaDoc, a specification documentation tool that interleaves animation of a formal specification with informal text written in natural language. ViennaDoc helps readers understand the behaviour of the specified system by providing opportunities to verify their understanding by executing the specification in the context of the informal explanation. ViennaDoc also helps maintainers of the specification by enabling unit testing that asserts equality between values embedded in the informal text and formal expressions.

Tomohiro Oda, Keijiro Araki, Yasuhiro Yamamoto, Kumiyo Nakakoji, Hiroshi Sako, Han-Myung Chang, Peter Gorm Larsen

Refine 2019 - 19th Refinement Workshop

Frontmatter

A Map of Asynchronous Communication Models

Asynchronous communication encompasses a variety of features besides the decoupling of send and receive events. These include message-ordering policies, which are often crucial to the correctness of a distributed algorithm. This paper establishes a map of communication models that exhibits the relations between them along two axes of comparison: the strength of the ordering property and the level of abstraction of the specification. This shows which models can be substituted for others without breaking any safety property. Furthermore, it provides flexibility and ready-to-use modules when developing correct-by-construction distributed systems where model decomposition exposes the communication component. Both criteria of comparison are covered by refinement. We consider seven ordering policies and model these communication models in Event-B at three levels of abstraction. The proofs of refinement between all of them are mechanized in Rodin.

Florent Chevrou, Aurélie Hurault, Shin Nakajima, Philippe Quéinnec
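
As a small illustration of one such ordering policy (a sketch unrelated to the paper's Event-B developments; the event encoding is invented), FIFO ordering over an observed execution can be checked per channel:

```python
# Events are (kind, sender, receiver, seq), with per-channel send sequence
# numbers; FIFO requires deliveries on a channel to follow the send order.

def respects_fifo(events):
    last_delivered = {}                      # (sender, receiver) -> seq
    for kind, sender, receiver, seq in events:
        if kind == 'deliver':
            chan = (sender, receiver)
            if seq < last_delivered.get(chan, -1):
                return False                 # an older message overtook a newer one
            last_delivered[chan] = seq
    return True

ok  = [('send', 'a', 'b', 0), ('send', 'a', 'b', 1),
       ('deliver', 'a', 'b', 0), ('deliver', 'a', 'b', 1)]
bad = [('send', 'a', 'b', 0), ('send', 'a', 'b', 1),
       ('deliver', 'a', 'b', 1), ('deliver', 'a', 'b', 0)]
print(respects_fifo(ok), respects_fifo(bad))   # True False
```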

An Abstract Semantics of Speculative Execution for Reasoning About Security Vulnerabilities

Reasoning about correctness and security of software is increasingly difficult due to the complexity of modern microarchitectural features such as out-of-order execution. A class of security vulnerabilities termed Spectre that exploits side effects of speculative, out-of-order execution was announced in 2018 and has since drawn much attention. In this paper we formalise speculative execution and its side effects as an extension of a framework for reasoning about out-of-order execution in weak memory models. Our goal is to allow speculation to be reasoned about abstractly at the software level. To this end we encode speculative execution explicitly using a novel language construct and modify the definition of conditional statements correspondingly. Underlying this extension is a model that has sufficient detail to enable specification of the relevant microarchitectural features. We add an abstract cache to the global state of the system, and derive some general refinement rules that expose cache side effects due to speculative loads. The rules are encoded in a simulation tool, which we use to analyse an abstract specification of a Spectre attack and vulnerable code fragments.

Robert J. Colvin, Kirsten Winter
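
A toy illustration of the phenomenon being formalised (not the paper's semantics; the data layout is invented): a mispredicted bounds check speculatively performs a secret-dependent load, the architectural state is rolled back, but the cache retains the footprint:

```python
# A mispredicted branch executes ahead, touches the cache, and is rolled
# back architecturally; the cache line survives as a Spectre-style channel.

def run(index, array, secret, cache):
    regs = {'x': None}
    snapshot = dict(regs)
    if True:  # predicted branch outcome: "index < len(array)" (may be wrong)
        addr = secret[index % len(secret)]   # secret-dependent address
        cache.add(addr)                      # microarchitectural side effect
        regs['x'] = array[index % len(array)]
    if not (index < len(array)):             # actual branch outcome
        regs = snapshot                      # architectural rollback only
    return regs, cache

cache = set()
regs, cache = run(index=7, array=[1, 2], secret=[0xA0, 0xB0, 0xC0], cache=cache)
print(regs)    # {'x': None}: architecturally, nothing happened
print(cache)   # {176}, i.e. 0xB0: the secret-dependent access is still visible
```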

Weakening Correctness and Linearizability for Concurrent Objects on Multicore Processors

In this paper, we argue that there are two fundamental ways of defining correctness of concurrent objects on the weak memory models of multicore processors: we can abstract from both concurrent interleaving and weak memory effects at the specification level, or we can abstract from concurrent interleaving only, leaving weak memory effects visible at the specification level. The first allows us to employ standard linearizability as the correctness criterion, a result proved in earlier work. The second requires a weakening of linearizability. We provide such a weakening and prove it sound and complete with respect to this notion of correctness.

Graeme Smith, Lindsay Groves

Towards a Method for the Decomposition by Refinement in Event-B

Refinement consists of detailing a specification in order to obtain a more concrete model. However, this technique leads to large models, so model decomposition is used to reduce model complexity. In this paper, we present the main decomposition methods and their limitations. We then define the decomposition-by-refinement method, which addresses these limitations, and set out the rules to follow in order to obtain a correct decomposed model.

Kenza Kraibi, Rahma Ben Ayed, Joris Rehm, Simon Collart-Dutilleul, Philippe Bon, Dorian Petit

Transformations for Generating Type Refinements

We present transformations for incrementally defining both inductive sum/variant types and coinductive product/record types in a formal refinement setting. Inductive types are built by incrementally accumulating constructors. Coinductive types are built by incrementally accumulating observers. In each case, when the developer decides that the constructor (resp. observer) set is complete, a transformation is applied that generates a canonical definition for the type. It also generates definitions for functions that have been characterized in terms of patterns over the constructors (resp. copatterns over the observers). Functions that input a possibly-recursive sum/variant type are defined inductively via patterns on the input data. Dually, functions that output a possibly-recursive record type are defined coinductively via copatterns on the function’s output. The transformations have been implemented in the Specware system [4] and have been used extensively in the automated synthesis of concurrent garbage collection algorithms [9, 12] and families of protocol-processing codes for distributed vehicle control [5].

Douglas R. Smith, Stephen J. Westfold

Comparing Correctness-by-Construction with Post-Hoc Verification—A Qualitative User Study

Correctness-by-construction (CbC) is a refinement-based methodology to incrementally create formally correct programs. Programs are constructed using refinement rules which guarantee that the resulting implementation is correct with respect to a pre-/postcondition specification. In contrast, with post-hoc verification (PhV), a specification and a program are created, and it is afterwards verified that the program satisfies the specification. In the literature, both methods are discussed with specific advantages and disadvantages. By letting participants construct and verify programs using CbC and PhV in a controlled experiment, we analyzed the claims made in the literature. We evaluated defects in intermediate code snapshots and discovered a trial-and-error construction process in which code and specification are altered. The participants appreciated the good feedback of CbC and stated that CbC is better than PhV at helping to find defects. Nevertheless, some defects in the programs constructed with CbC indicate that the participants need more time to adapt to the CbC process.

Tobias Runge, Thomas Thüm, Loek Cleophas, Ina Schaefer, Bruce W. Watson

RPLA 2019 - Workshop on Reversibility in Programming, Languages, and Automata

Frontmatter

Reversible Programs Have Reversible Semantics

During the past decade, reversible programming languages have been formalized using various established semantic frameworks. However, these semantics fail to effectively specify the distinct properties of reversible languages at the metalevel, and even neglect the central question of whether the defined language is reversible. In this paper, we build on a metalanguage foundation for reversible languages based on the category of sets and partial injective functions. We exemplify our approach through step-by-step development of the full semantics of an r-Turing complete reversible while-language with recursive procedures. This yields a formalization of the semantics in which the reversibility of the language and its inverse semantics are immediate, as well as the inversion of programs written in the language. We further discuss applications and future research directions for reversible semantics.

Robert Glück, Robin Kaarsgaard, Tetsuo Yokoyama
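
The central idea, that reversible programs denote partial injective functions composed forwards or backwards, can be sketched minimally as follows (illustrative only; the paper works in the category of sets and partial injections, with a full while-language and recursive procedures):

```python
# Each statement carries a forward and a backward meaning; inverting a
# program is reversing the statement list and swapping the two directions.

def add(var, expr):
    """x += e is injective as long as e does not depend on x."""
    fwd = lambda s: {**s, var: s[var] + expr(s)}
    bwd = lambda s: {**s, var: s[var] - expr(s)}
    return fwd, bwd

def swap(a, b):
    f = lambda s: {**s, a: s[b], b: s[a]}
    return f, f                              # self-inverse

def run(prog, state):
    for fwd, _ in prog:
        state = fwd(state)
    return state

def invert(prog):
    return [(bwd, fwd) for fwd, bwd in reversed(prog)]

prog = [add('x', lambda s: s['y']), swap('x', 'y')]
s1 = run(prog, {'x': 1, 'y': 2})
print(s1)                                    # {'x': 2, 'y': 3}
print(run(invert(prog), s1))                 # {'x': 1, 'y': 2}: round trip
```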

Two-Way Quantum and Classical Automata with Advice for Online Minimization Problems

We consider online algorithms, a model typically investigated with respect to the competitive ratio. In this paper, we explore two-way automata as a model for online algorithms, focusing on quantum and classical online algorithms. We show that there are problems that can be solved more efficiently by two-way automata with quantum and classical states than by classical two-way automata in the case of sublogarithmic memory (sublinear size), even if the classical automata receive advice bits.

Kamil Khadiev, Aliya Khadieva

Quotients and Atoms of Reversible Languages

We consider several reversible finite automaton models introduced over the decades and study some properties of their languages. In particular, we ask whether the quotients and atoms of a specific class of reversible languages also belong to that class. We consider bideterministic automata, reversible deterministic finite automata (REV-DFAs), reversible multiple-entry DFAs (REV-MeDFAs), and several variants of reversible nondeterministic finite automata (REV-NFAs). It is known that the class of REV-DFA languages is strictly included in the class of REV-MeDFA languages. We show that the classes of complete REV-DFA languages and complete REV-MeDFA languages are the same. We also show that, differently from the general case of REV-DFA languages, the minimal DFA of a complete REV-DFA language is a complete REV-DFA. Finally, we show that the atoms of any regular language are accepted by REV-NFAs with a single initial and a single final state.

Hellis Tamm
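
For intuition, the defining property of a REV-DFA can be checked directly (a minimal sketch: a DFA is reversible when every input symbol induces an injective partial map on states, so each transition can be undone deterministically):

```python
def is_reversible(delta):
    """delta: dict (state, symbol) -> state."""
    seen = set()
    for (state, symbol), target in delta.items():
        if (symbol, target) in seen:
            return False        # two states reach `target` on `symbol`
        seen.add((symbol, target))
    return True

rev   = {('q0', 'a'): 'q1', ('q1', 'a'): 'q2'}
irrev = {('q0', 'a'): 'q2', ('q1', 'a'): 'q2'}
print(is_reversible(rev), is_reversible(irrev))   # True False
```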

SASB 2019 - 10th International Workshop on Static Analysis and Systems Biology

Frontmatter

Bayesian Verification of Chemical Reaction Networks

We present a data-driven verification approach that determines whether or not a given chemical reaction network (CRN) satisfies a given property, expressed as a formula in a modal logic. Our approach consists of three phases, integrating formal verification over models with learning from data. First, we consider a parametric set of possible models based on a known stoichiometry and classify them against the property of interest. Secondly, we utilise Bayesian inference to update a probability distribution of the parameters within a parametric model with data gathered from the underlying CRN. In the third and final stage, we combine the results of both steps to compute the probability that the underlying CRN satisfies the given property. We apply the new approach to a case study and compare it to Bayesian statistical model checking.

Gareth W. Molyneux, Viraj B. Wijesuriya, Alessandro Abate
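
A heavily simplified stand-in for the Bayesian phase (illustrative only: it collapses the paper's posterior over kinetic parameters into a Beta-Bernoulli update over "this observation is consistent with a property-satisfying parameter region"):

```python
# Update a Beta prior from per-observation consistency flags; the posterior
# mean then estimates the probability that the underlying CRN satisfies the
# property. The real method infers a posterior over the CRN's parameters.

def posterior(alpha, beta, observations):
    for consistent in observations:          # True/False per observation
        if consistent:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

a, b = posterior(1, 1, [True, True, True, False, True])
print(a / (a + b))                           # posterior mean ~= 0.71
```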

Nested Event Representation for Automated Assembly of Cell Signaling Network Models

The rate at which biological literature is published far outpaces the current capabilities of modeling experts. In order to facilitate the automation of model assembly, we improve upon methods for converting machine reading output obtained from papers studying intracellular networks into discrete element rule-based models. We introduce a graph representation that can capture the complicated semantics found in machine reading output. Specifically, we focus on extracting change-of-rate information available when network elements are found to inhibit or catalyze other interactions (nested events). We demonstrate the viability of this approach by measuring the prevalence of these nested events in cancer literature, as well as the success rates of two machine readers in capturing them. Finally, we show how our algorithm can translate between machine reading output and the new graphical form. By incorporating these more detailed interactions into the model, we can more accurately predict cellular dynamics on a broad scale, leading to improvements in experimental design and disease treatment discovery.

Evan W. Becker, Kara N. Bocan, Natasa Miskov-Zivanov
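
A hypothetical sketch of the nested-event idea (the encoding and names are invented for illustration): interactions become nodes in their own right, so a regulator can target another interaction rather than an element, capturing change-of-rate statements:

```python
# Represent "A inhibits the activation of C by B" as an event whose target
# is itself an event, rather than flattening it to pairwise interactions.

def event(kind, source, target):
    return {'kind': kind, 'source': source, 'target': target}

inner = event('activates', 'B', 'C')        # B activates C
outer = event('inhibits', 'A', inner)       # A slows that activation down

def describe(e):
    t = e['target']
    t = describe(t) if isinstance(t, dict) else t
    return f"({e['source']} {e['kind']} {t})"

print(describe(outer))   # (A inhibits (B activates C))
```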

TAPAS 2019 - 10th Workshop on Tools for Automatic Program Analysis

Frontmatter

PrideMM: Second Order Model Checking for Memory Consistency Models

We present PrideMM, an efficient model checker for second-order logic enabled by recent breakthroughs in quantified satisfiability solvers. We argue that second-order logic sits at a sweet spot: constrained enough to enable practical solving, yet expressive enough to cover an important class of problems not amenable to (non-quantified) satisfiability solvers. To the best of our knowledge, PrideMM is the first automated model checker for second-order logic formulae. We demonstrate the usefulness of PrideMM by applying it to problems drawn from recent work on memory specifications, which define the allowed executions of concurrent programs. For traditional memory specifications, program executions can be evaluated using a satisfiability solver or using equally powerful ad hoc techniques. However, such techniques are insufficient for handling some emerging memory specifications. We evaluate PrideMM by implementing a variety of memory specifications, including one that cannot be handled by satisfiability solvers. In this problem domain, PrideMM provides usable automation, enabling a modify-execute-evaluate pattern of development where manual proof was previously required.

Simon Cooksey, Sarah Harris, Mark Batty, Radu Grigore, Mikoláš Janota

Fkcc: The Farkas Calculator

In this paper, we present fkcc, a scripting tool for prototyping program analyses and transformations that exploit the affine form of Farkas’ lemma. Our language is general enough to prototype sophisticated termination and scheduling algorithms in a few lines. The tool is freely available and may be tried online via a web interface. We believe that fkcc is the missing link to accelerate the development of program analyses and transformations exploiting the affine form of Farkas’ lemma.

Christophe Alias
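
For reference, the affine form of Farkas' lemma underlying such tools is the following standard statement from the polyhedral scheduling and termination literature (quoted here for context, not taken from the paper):

```latex
% Affine form of Farkas' lemma: an affine function \varphi is nonnegative
% everywhere on a nonempty polyhedron {x | Ax + b >= 0} iff it is a
% nonnegative affine combination of the constraints.
\forall x.\; Ax + b \geq 0 \implies \varphi(x) \geq 0
\quad\Longleftrightarrow\quad
\exists\, \lambda_0 \geq 0,\ \lambda \geq 0 .\;
\varphi(x) \equiv \lambda_0 + \lambda^{\top} (Ax + b)
```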

Handling Heap Data Structures in Backward Symbolic Execution

Backward symbolic execution (BSE), also known as weakest precondition computation, is a useful technique for determining the validity of assertions in program code by transforming its semantics into boolean conditions for an SMT solver. Regrettably, the literature does not cover the various challenges which arise during its implementation, especially when we want to reason about heap objects using the theory of arrays and to use the SMT solver efficiently. In this paper, we present our achievements in this area. Our contribution is threefold. First, we summarize the two most popular state-of-the-art approaches used for BSE, denoting them as disjunct propagation and conjunct combination. Second, we present a novel method for modelling heap operations in BSE using the theory of arrays, optimized for incremental checking during the analysis and for handling the input heap. Third, we compare both approaches together with our heap handling implementation on a set of program examples, presenting their strengths and weaknesses. The evaluation shows that conjunct combination is the most efficient variant, exceeding a straightforward implementation of disjunct propagation by an order of magnitude.

Robert Husák, Jan Kofroň, Filip Zavoral
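
A minimal sketch of heap modelling with the theory of arrays (assuming the z3-solver Python package; this is not the paper's implementation): the heap is an array from addresses to values, a store `*p = v` becomes `Store`, and the condition produced by backward execution through `*p = 5; assert *q == 5` is discharged by the solver:

```python
from z3 import Array, Ints, IntSort, Store, Select, Implies, prove

heap = Array('heap', IntSort(), IntSort())   # heap: address -> value
p, q = Ints('p q')

# wp(*p = 5, *q == 5)  ==  Select(Store(heap, p, 5), q) == 5
wp = Select(Store(heap, p, 5), q) == 5

prove(Implies(p == q, wp))   # prints "proved": valid under aliasing p == q
```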

AuthCheck: Program-State Analysis for Access-Control Vulnerabilities

According to security rankings such as the SANS Top 25 and the OWASP Top 10, access-control vulnerabilities are still highly relevant. Even though developers use web frameworks such as Spring and Struts, which handle the entire access-control mechanism, their implementations can still be vulnerable because of misuse, errors, or implementations inconsistent with the design specification. We propose AuthCheck, a static analysis that tracks the program’s state using a finite state machine to report illegal states caused by a vulnerable implementation. We implemented AuthCheck for the Spring framework and identified four types of mistakes that developers can make when using Spring Security. With AuthCheck, we analyzed an existing open-source Spring application with inserted vulnerable code and detected the vulnerabilities.

Goran Piskachev, Tobias Petrasch, Johannes Späth, Eric Bodden
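
An illustrative state machine in this spirit (the states and events are invented; the paper's analysis targets Spring Security specifically): the program must pass through authentication and authorization before a protected operation, and any other path is flagged:

```python
# Legal transitions of the access-control state machine; any event with no
# transition from the current state is reported as an illegal state.

LEGAL = {('start', 'authenticate'): 'authenticated',
         ('authenticated', 'authorize'): 'authorized',
         ('authorized', 'access_protected'): 'authorized'}

def analyze(events):
    state = 'start'
    for event in events:
        nxt = LEGAL.get((state, event))
        if nxt is None:
            return f'illegal: {event!r} in state {state!r}'
        state = nxt
    return 'ok'

print(analyze(['authenticate', 'authorize', 'access_protected']))  # ok
print(analyze(['authenticate', 'access_protected']))
# -> illegal: 'access_protected' in state 'authenticated'
```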

Backmatter
