
About this Book

This book constitutes the thoroughly revised selected papers from the 16th International Conference on Formal Aspects of Component Software, FACS 2019, held in Amsterdam, The Netherlands, in October 2019.
The 9 full papers and 3 short papers presented in this volume, together with 2 invited papers, were carefully reviewed and selected from 27 submissions. FACS 2019 is concerned with how formal methods can be used to make component-based and service-oriented software development succeed. Formal methods have provided a foundation for component-based software by successfully addressing challenging issues such as mathematical models for components, composition and adaptation, and rigorous approaches to verification, deployment, testing, and certification.



Invited Papers


Modeling Guidelines for Component-Based Supervisory Control Synthesis

Supervisory control theory provides means to synthesize supervisors from a model of the uncontrolled plant and a model of the control requirements. Currently, control engineers lack experience with using automata for this purpose, which results in low adoption of supervisory control theory in practice. This paper presents three modeling guidelines based on experience with modeling and synthesizing supervisors of large-scale infrastructural systems. All three guidelines view the model of the plant as a collection of component models. The first guideline states that independent components should be modeled as asynchronous models. The second guideline states that physical relationships between component models can be easily expressed with extended finite automata. The third guideline states that the input-output perspective of the control hardware should be used as the abstraction level. The importance of the guidelines is demonstrated with examples from industrial cases.
Martijn Goorden, Joanna van de Mortel-Fronczak, Michel Reniers, Wan Fokkink, Jacobus Rooda
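The second guideline above can be made concrete with a minimal sketch of an extended finite automaton whose transition guards reference a shared variable; the tank-and-pump setting, all names, and the capacity parameter are hypothetical illustrations, not taken from the paper.

```python
# Minimal EFA sketch: a guard over a shared variable (the tank level)
# expresses the physical relationship between a pump and a tank.
# All names and values are hypothetical, for illustration only.
class TankPumpEFA:
    def __init__(self, capacity=10):
        self.level = 0            # shared physical variable
        self.capacity = capacity
        self.pump_on = False      # current location of the pump component

    def switch_on(self):
        # Guarded transition: the pump may only start below capacity.
        if self.level < self.capacity:
            self.pump_on = True

    def tick(self):
        # One time step of the plant: a running pump raises the level;
        # reaching capacity forces the pump off.
        if self.pump_on:
            self.level += 1
            if self.level >= self.capacity:
                self.pump_on = False


efa = TankPumpEFA(capacity=2)
efa.switch_on()
efa.tick()
efa.tick()
print(efa.level, efa.pump_on)  # pump stops once the guard is violated
```

The guard on `switch_on` is what an EFA adds over a plain finite automaton: the physical relation lives in one condition over a shared variable rather than in an explicit product of component states.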

Modelling and Analysing Software in mCRL2

Model checking is an effective way to design correct software. Making behavioural models of software, formulating correctness properties using modal formulas, and verifying these using finite state analysis techniques is a very efficient way to obtain the required insight into the software. We illustrate this on four common but tricky examples.
Jan Friso Groote, Jeroen J. A. Keiren, Bas Luttik, Erik P. de Vink, Tim A. C. Willemse

Regular Papers


A Formally Verified Model of Web Components

The trend towards ever more complex client-side web applications is unstoppable. Compared to traditional software development, client-side web development lacks a well-established component model, i.e., a method for easily and safely reusing already developed functionality. To address this issue, the web community started to adopt shadow trees as part of the Document Object Model (DOM). Shadow trees allow developers to “partition” a DOM instance into parts that should be safely separated, e.g., code modifying one part should not unintentionally affect other parts of the DOM.
While shadow trees provide the technical basis for defining web components, the DOM standard neither defines the concept of web components nor specifies the safety properties that web components should guarantee. Consequently, the standard also does not discuss how or even if the methods for modifying the DOM respect component boundaries.
In this paper, we present a formally verified model of web components and define safety properties which ensure that different web components can only interact with each other using well-defined interfaces. Moreover, our verification of the application programming interface (API) of the DOM revealed numerous invariants that implementations of the DOM API need to preserve to ensure the integrity of components.
Achim D. Brucker, Michael Herzberg

Minimizing Characterizing Sets

A characterizing set (CS) for a given finite state machine (FSM) defines a set of input sequences such that for any pair of states of the FSM, there exists an input sequence in the CS that can separate these states. There are techniques that generate test sequences with guaranteed fault detection power using CSs. The number of inputs and input sequences in a CS directly impacts the cost of the test: the higher the number of elements, the longer it takes to generate the test. Despite the direct benefits of using CSs with fewer sequences, there has been no work focused on generating minimum-sized characterizing sets. In this paper, we show that constructing a CS with fewer elements is a PSPACE-hard problem and that the corresponding decision problem is PSPACE-complete. We then introduce a heuristic to construct CSs with fewer input sequences. We evaluate the proposed algorithm using randomly generated FSMs as well as some benchmark FSMs. The results are promising, and the proposed method reduces the number of test sequences by 37.3% and decreases the total length of the tests by 34.6% on average.
Kadir Bulut, Guy Vincent Jourdan, Uraz Cengiz Türker
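The separation property that defines a characterizing set can be illustrated with a minimal sketch; the two-state Mealy machine and the helper names below are hypothetical, not taken from the paper.

```python
# Minimal sketch of the characterizing-set property for a Mealy machine.
# The machine, states, and helper names are hypothetical illustrations.
# Transitions map (state, input) -> (next_state, output).
FSM = {
    ('s0', 'a'): ('s1', 0), ('s0', 'b'): ('s0', 1),
    ('s1', 'a'): ('s0', 1), ('s1', 'b'): ('s1', 1),
}

def run(state, seq):
    """Return the output sequence produced from `state` on input sequence `seq`."""
    out = []
    for x in seq:
        state, o = FSM[(state, x)]
        out.append(o)
    return tuple(out)

def is_characterizing(states, cs):
    """True iff every pair of distinct states is separated by some sequence in cs."""
    return all(
        any(run(s, w) != run(t, w) for w in cs)
        for i, s in enumerate(states) for t in states[i + 1:]
    )

print(is_characterizing(['s0', 's1'], [('a',)]))  # 'a' yields outputs 0 vs 1 -> True
print(is_characterizing(['s0', 's1'], [('b',)]))  # 'b' yields 1 vs 1 -> False
```

Minimizing a CS then means finding the smallest such set of sequences for which `is_characterizing` holds, which the paper shows is PSPACE-hard in general.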

A Bond-Graph Metamodel: Physics-Based Interconnection of Software Components

Composability and modularity in relation to physics are useful properties in the development of cyber-physical systems that interact with their environment. The bond-graph modeling language offers these properties. When system structures conform to the bond-graph notation, all interfaces are defined as physical “power ports”, which are guaranteed to exchange power. Having a single type of interface is a key feature when aiming for modular, composable systems. Furthermore, the ability to monitor energy flows in the system through power ports allows the definition of system-wide properties based on component properties. In this paper we present a metamodel of the bond-graph language aimed at facilitating the description and deployment of software components for cyber-physical systems. This effort provides a formalized description of standardized interfaces that enable physics-conformal interconnections. We present a use case showing that the metamodel enables composability, reusability, extensibility, replaceability, and independence of control software components.
Reynaldo Cobos Méndez, Julio de Oliveira Filho, Douwe Dresscher, Jan Broenink

Multilabeled Petri Nets

We introduce multilabeled Petri nets as an inherently parallel generalization of constraint automata. Composition of multilabeled nets does not suffer from state-space explosions, which makes them an adequate intermediate representation for code generation. We also present an abstraction operator for multilabeled nets that eliminates internal transitions, which optimizes the execution of multilabeled nets.
Kasper Dokter

A Service-Oriented Approach for Decomposing and Verifying Hybrid System Models

The design of fault-free hybrid control systems, which combine discrete and continuous behavior, is a challenging task. Two factors in particular complicate their design and verification: these systems can consist of multiple interacting components, and commonly used design languages, such as MATLAB Simulink, do not directly allow for the verification of hybrid behavior. By providing hybrid contracts, which formally define the interface behavior of hybrid system components in differential dynamic logic (dL), and by providing a decomposition technique, we enable compositional verification of Simulink models with interacting components. This enables us to use the interactive theorem prover KeYmaera X to prove the correctness of hybrid control systems modeled in Simulink, which we demonstrate with an automotive industrial case study.
Timm Liebrenz, Paula Herber, Sabine Glesner

Compositional Liveness-Preserving Conformance Testing of Timed I/O Automata

I/O conformance testing theories (e.g., ioco) are concerned with formally defining when observable output behaviors of an implementation conform to those permitted by a specification. Building on this, several real-time extensions of ioco, usually called tioco, have been proposed, further taking into account permitted delays between actions. In this paper, we propose an improved version of tioco, called live timed ioco (ltioco), tackling various weaknesses of existing definitions. Here, a reasonable adaptation of quiescence (i.e., observable absence of any outputs) to real-time behaviors has to be done with care: ltioco therefore distinguishes safe outputs, which are allowed to happen, from live outputs, which are enforced to happen within a certain time period, thus inducing two different facets of quiescence. Furthermore, tioco is frequently defined on Timed I/O Labeled Transition Systems (TIOLTS), a semantic model of Timed I/O Automata (TIOA) which is infinitely branching and thus infeasible for practical testing tools. Instead, we extend the theory of zone graphs to enable ltioco testing on a finite semantic model of TIOA. Finally, we investigate compositionality of ltioco with respect to parallel composition, including a proper treatment of silent transitions.
Lars Luthmann, Hendrik Göttmann, Malte Lochau

RecordFlux: Formal Message Specification and Generation of Verifiable Binary Parsers

Various vulnerabilities have been found in message parsers of protocol implementations in the past. Even highly sensitive software components like TLS libraries are affected regularly. Resulting issues range from denial-of-service attacks to the extraction of sensitive information. The complexity of protocols and imprecise specifications in natural language are the core reasons for subtle bugs in implementations, which are hard to find. The lack of precise specifications impedes formal verification.
In this paper, we propose a model and a corresponding domain-specific language to formally specify message formats of existing real-world binary protocols. A unique feature of the model is the capability to define invariants, which specify relations and dependencies between message fields. Furthermore, the model allows defining the relation of messages between different protocol layers and thus ensures correct interpretation of payload data. We present a technique to derive verifiable parsers based on the model, generate efficient code for their implementation, and automatically prove the absence of runtime errors. Examples of parser specifications for Ethernet and TLS demonstrate the applicability of our approach.
Tobias Reiher, Alexander Senier, Jeronimo Castrillon, Thorsten Strufe

State Identification for Labeled Transition Systems with Inputs and Outputs

For Finite State Machines (FSMs) a rich testing theory has been developed to discover aspects of their behavior and ensure their correct functioning. Although this theory is widely used, e.g., to check conformance of protocol implementations, its applicability is limited by restrictions of the FSM framework: the fact that inputs and outputs alternate in an FSM, and outputs are fully determined by the previous input and state. Labeled Transition Systems with inputs and outputs (LTSs), as studied in ioco testing theory, provide a richer framework for testing component oriented systems, but lack the algorithms for test generation from FSM theory.
In this article, we propose an algorithm for the fundamental problem of state identification during testing of LTSs. Our algorithm is a direct generalization of the well-known algorithm for computing adaptive distinguishing sequences for FSMs proposed by Lee & Yannakakis. Our algorithm has to deal with so-called compatible states, states that cannot be distinguished in case of an adversarial system-under-test. Analogous to the result of Lee & Yannakakis, we prove that if an (adaptive) test exists that distinguishes all pairs of incompatible states of an LTS, our algorithm will find one. In practice, such adaptive tests typically do not exist. However, in experiments with an implementation of our algorithm on an industrial benchmark, we find that tests produced by our algorithm still distinguish more than 99% of the incompatible state pairs.
Petra van den Bos, Frits Vaandrager
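State identification as described above hinges on inputs whose observable outputs split states apart; the basic refinement step behind the Lee & Yannakakis construction can be sketched minimally as follows (the three-state machine and helper name are hypothetical, not the paper's industrial benchmark).

```python
# Minimal sketch of one refinement step in state identification:
# partition candidate states by the output they produce on an input.
# The machine and names are hypothetical illustrations.
TRANS = {  # (state, input) -> (next_state, output)
    ('q0', 'a'): ('q1', 'x'),
    ('q1', 'a'): ('q2', 'y'),
    ('q2', 'a'): ('q0', 'y'),
}

def split_by_output(states, inp):
    """Group `states` by the output observed on `inp` -- states in
    different groups are distinguished; same-group states need more inputs."""
    groups = {}
    for s in states:
        _, out = TRANS[(s, inp)]
        groups.setdefault(out, []).append(s)
    return groups

print(split_by_output(['q0', 'q1', 'q2'], 'a'))
# 'q0' is separated immediately; 'q1' and 'q2' both output 'y'
# and require further inputs (applied adaptively) to tell apart.
```

An adaptive distinguishing sequence repeats this step on the successor states of each unresolved group; the paper's contribution is generalizing this scheme from FSMs to LTSs, where some state pairs are compatible and can never be split.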

Combining State- and Event-Based Semantics to Verify Highly Available Programs

Replicated databases are attractive for managing the data of distributed applications that require high availability, low latency, and high throughput. However, these benefits entail weak consistency, which comes at a price: it becomes harder to reason about application correctness. We address this difficulty with a verification technique for highly available programs. We augment an existing sequential programming language with primitives for interacting concurrently with a highly available database and extend the state-based operational semantics of that language accordingly. To this end we make use of existing event-based database semantics.
We then present a reduction of the extended semantics to a simpler one, which is again sequential and therefore easier to handle in verification tools. Our verification tool Repliss uses this technique and demonstrates its feasibility.
Peter Zeller, Annette Bieniusa, Arnd Poetzsch-Heffter

Short Papers


Reowolf: Synchronous Multi-party Communication over the Internet

In this position paper we introduce Reowolf: an on-going project that aims to replace the decades-old application programming interface, BSD sockets, for communication on the Internet. A novel programming interface is being implemented at the systems level that is inter-operable with existing Internet applications.
Christopher A. Esterhuyse, Hans-Dieter A. Hiep

Modeling and Verifying Dynamic Architectures with FACTum Studio

With the emergence of ambient and adaptive computing, dynamic architectures have become increasingly important. Dynamic architectures describe an evolving state space of systems over time. In such architectures, components can appear or disappear, and connections between them can change over time. Due to the evolving state space of such architectures, verification is challenging. To address this problem, we developed FACTum Studio, a tool that combines model checking and interactive theorem proving to support the verification of dynamic architectures. To this end, a dynamic architecture is first specified in terms of component types and architecture configurations. Next, each component type is verified against asserted contracts using nuXmv. Then, the composition of the contracts is verified using Isabelle/HOL. In this paper, we discuss the tool’s extended features with an example of an encrypted messaging system. The tool is developed with Eclipse and actively maintained on GitHub.
Habtom Kahsay Gidey, Alexander Collins, Diego Marmsoler

Revisiting Trace Equivalences for Markov Automata

Equivalences are important for system synthesis as well as system analysis. This paper defines new variants of trace equivalence for Markov automata (MAs). We perform button-pushing experiments with a black-box model of an MA to obtain these equivalences. For every class of MA scheduler, a corresponding variant of trace equivalence is defined. We investigate the relationship among these equivalences and also prove that each variant defined in this paper is strictly coarser than the corresponding variant of trace equivalence defined originally in [12]. Next, we establish the relationship between our equivalences and bisimulation for MAs. Finally, we investigate the relationship of these equivalences with trace relations defined in the literature for some of the implied models.
Arpit Sharma

