
2021 | Book

Software Engineering and Formal Methods. SEFM 2020 Collocated Workshops

ASYDE, CIFMA, and CoSim-CPS, Amsterdam, The Netherlands, September 14–15, 2020, Revised Selected Papers


About this book

This volume constitutes the revised selected papers from the three workshops collocated with the 18th International Conference on Software Engineering and Formal Methods, SEFM 2020, held in Amsterdam, The Netherlands, in September 2020.

The 15 full papers and 8 short papers presented in this volume were carefully reviewed and selected from a total of 35 submissions. The contributions collected in this volume have been selected from the presentations at the following workshops:

ASYDE 2020: Second International Workshop on Automated and Verifiable Software System Development; CIFMA 2020: Second International Workshop on Cognition: Interdisciplinary Foundations, Models and Applications; and CoSim-CPS 2020: Fourth International Workshop on Formal Co-Simulation of Cyber-Physical Systems.

Due to the COVID-19 pandemic, the event was held virtually.

Table of Contents

Frontmatter

ASYDE 2020

Frontmatter
Model Translation from Papyrus-RT into the nuXmv Model Checker
Abstract
Papyrus-RT is an Eclipse-based modelling tool for embedded systems that uses the Model-Driven Engineering approach to generate executable C++ code from UML-RT models. UML-RT state diagrams are very similar to the finite state machines used in the nuXmv model checker (an extension of the NuSMV symbolic model checker). In this paper, we present an approach for the automated verification of UML-RT models by mechanically exporting them into equivalent nuXmv models, with positive results.
Sneha Sahu, Ruth Schorr, Inmaculada Medina-Bulo, Matthias Wagner
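To make the translation idea concrete, the sketch below serializes a simple state machine into nuXmv's SMV input syntax. The dictionary-style intermediate representation is an illustrative assumption, not the paper's actual Papyrus-RT export format.

```python
# Hypothetical sketch: serialize a simple state machine (as might be
# extracted from a UML-RT state diagram) into nuXmv's SMV input syntax.
# The (source, guard, target) triple format is an assumption for
# illustration only.

def to_smv(name, states, init, transitions):
    """transitions: list of (source, guard, target) triples."""
    lines = [f"MODULE {name}",
             "VAR",
             f"  state : {{{', '.join(states)}}};",
             "ASSIGN",
             f"  init(state) := {init};",
             "  next(state) := case"]
    for src, guard, dst in transitions:
        lines.append(f"    state = {src} & {guard} : {dst};")
    lines.append("    TRUE : state;")  # stutter when no transition fires
    lines.append("  esac;")
    return "\n".join(lines)

model = to_smv("main",
               states=["idle", "busy", "done"],
               init="idle",
               transitions=[("idle", "TRUE", "busy"),
                            ("busy", "TRUE", "done")])
print(model)
```

The emitted text can be fed directly to nuXmv together with LTL or CTL properties over `state`.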
Modeling and Verification of Temporal Constraints for Web Service Composition
Abstract
This paper aims to verify temporal constraints for Web service composition. The expected deployment of such verification when composing services strongly depends on the development of an adequate solution that guarantees a high level of service quality to the system users. Given the importance of e-commerce solutions for Algerian citizens, who have turned to them during the confinement imposed by the Covid-19 pandemic, we develop a Web service composition that studies the distribution speed of everyday consumer goods in Algeria using the Timed Colored Petri Nets formalism. Once the temporal constraints are identified and the formal model is developed, we analyze the performance by creating a monitor on which multiple simulations are performed using the CPN Tools software, allowing the collection of several time measurements, which are then evaluated using a Java framework.
Maya Souilah Benabdelhafid, Houda Boubaker, Mahmoud Boufaida
Modeling Attack-Defense Trees’ Countermeasures Using Continuous Time Markov Chains
Abstract
ADTrees (Attack-Defense Trees) are graphical security modeling tools used to logically represent attack scenarios, along with their corresponding countermeasures, in a user-friendly way. Many researchers use ADTrees to represent attack scenarios and to perform both quantitative and qualitative security assessment. Among the existing quantitative security assessment techniques, CTMCs (Continuous Time Markov Chains) have been widely adopted for ADTrees. ADTrees are usually transformed into CTMCs, where traditional stochastic quantitative analysis approaches can be applied. To that end, the correct transformation of an ADTree to a CTMC requires that each individual element of the ADTree have a correct and complete representation in the corresponding CTMC. In this paper, we focus on modeling countermeasures in ADTrees using CTMCs. The existing CTMC model does not provide a precise and complete modeling capability, in particular when cascaded countermeasures are used. Cascaded countermeasures occur when an attacker and a defender in a given ADTree recursively counter each other more than once in a given branch of the tree. We propose the notion of a tokenized CTMC to construct a new CTMC model that can precisely model and represent countermeasures in ADTrees. This new CTMC model handles cascaded-countermeasure scenarios in a more comprehensive way.
Karim Lounis, Samir Ouchani
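The kind of stochastic measure such analyses extract can be illustrated on a toy absorbing CTMC. This is a generic sketch, not the paper's tokenized-CTMC construction: for the transient-state block Q of the generator matrix, the vector t of expected times to absorption solves -Q t = 1.

```python
# Illustrative sketch: mean time to absorption of a small absorbing CTMC,
# e.g. an attack step (rate lam) followed by an exploit step (rate mu).
# The two-state chain is an invented example, not taken from the paper.

def solve(a, b):
    """Gauss-Jordan elimination for a small dense system a x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

lam, mu = 2.0, 4.0
Q = [[-lam, lam],          # transient-state block of the generator
     [0.0, -mu]]
neg_Q = [[-x for x in row] for row in Q]
t = solve(neg_Q, [1.0, 1.0])
print(t[0])  # 1/lam + 1/mu = 0.75
```

Richer ADTree gates (AND, OR, countermeasures) compose such blocks into larger generators, to which the same linear-algebra step applies.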
Automated Validation of State-Based Client-Centric Isolation with TLA+
Abstract
Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
Tim Soethout, Tijs van der Storm, Jurgen J. Vinju
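The state-based, client-centric intuition (after Crooks et al.) can be sketched outside TLA+ as well: a set of committed transactions is serializable if some total order of them explains every read from the state produced by the preceding writes. The transaction encoding below is an assumption for illustration, not the paper's specification.

```python
# Hedged sketch of the client-centric serializability check: brute-force
# search for a commit order under which every read observes the latest
# write. Feasible only for tiny histories; model checkers explore such
# state spaces far more cleverly.
from itertools import permutations

def serializable(txns, initial):
    """txns: list of (reads, writes) dicts mapping key -> value."""
    for order in permutations(txns):
        state = dict(initial)
        ok = True
        for reads, writes in order:
            if any(state.get(k) != v for k, v in reads.items()):
                ok = False
                break
            state.update(writes)
        if ok:
            return True
    return False

t1 = ({"x": 0}, {"x": 1})   # reads x=0, writes x=1
t2 = ({"x": 1}, {"y": 1})   # only explainable after t1
print(serializable([t2, t1], {"x": 0, "y": 0}))   # True

# Classic lost update: both read x=0, both write -> no order explains it.
lost_update = [({"x": 0}, {"x": 1}), ({"x": 0}, {"x": 2})]
print(serializable(lost_update, {"x": 0}))        # False
```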
Code Coverage Aware Test Generation Using Constraint Solver
Abstract
Code coverage has been used in the software testing context mostly as a metric to assess the quality of a generated test suite. More recently, code coverage analysis has been used as a white-box testing technique for test optimization. Most research activities focus on using code coverage for test prioritization and selection within automated testing strategies. Less attention has been paid in the literature to using code coverage for test generation. This paper introduces a new Code Coverage-based Test Case Generation (CCTG) concept that changes current practice by utilizing code coverage analysis in the test generation process. CCTG uses the code coverage data to calculate the impact of the input parameters for a constraint solver, which automates the generation of effective test suites. We applied this approach to a few real-world case studies. The results showed that the new test generation approach can generate effective test cases and detect new faults.
Krystof Sykora, Bestoun S. Ahmed, Miroslav Bures
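The coverage-guided selection step can be sketched as follows. The toy program, the branch labels, and the candidate pool are all invented for illustration; CCTG itself derives candidate inputs with a constraint solver, which is abstracted away here.

```python
# Illustrative sketch: greedily select inputs from a candidate pool until
# all branch ids of an instrumented toy function are covered.

def classify(x, y, hit):
    # Manually instrumented branches; `hit` records which ones fire.
    if x > 0:
        hit.add("x_pos")
    else:
        hit.add("x_nonpos")
    if y % 2 == 0:
        hit.add("y_even")
    else:
        hit.add("y_odd")
    return (x > 0, y % 2 == 0)

def coverage_of(inp):
    hit = set()
    classify(*inp, hit)
    return hit

def greedy_suite(candidates, goal):
    """Pick, at each step, the input covering the most uncovered branches."""
    suite, covered = [], set()
    while covered < goal:
        best = max(candidates, key=lambda i: len(coverage_of(i) - covered))
        gain = coverage_of(best) - covered
        if not gain:
            break  # remaining goals unreachable with this pool
        suite.append(best)
        covered |= gain
    return suite, covered

goal = {"x_pos", "x_nonpos", "y_even", "y_odd"}
suite, covered = greedy_suite([(1, 2), (1, 3), (-1, 2), (-1, 3)], goal)
print(len(suite), covered == goal)
```

Two inputs suffice here because a single input can cover one branch of each decision; a solver-backed generator would synthesize such complementary inputs directly.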
From Requirements to Verifiable Executable Models Using Rebeca
Abstract
Software systems are complicated, and the scientific and engineering methodologies for software development are relatively young. We need robust methods for handling the ever-increasing complexity of software systems that are now in every corner of our lives. In this paper we focus on asynchronous event-based reactive systems and show how we start from the requirements, move to actor-based Rebeca models, and formally verify the models for correctness. The Rebeca models include the details of the signals and messages that are passed at the network level including the timing, and can be mapped to the executable code. We show how we can use the architecture design and structured requirements to build the behavioral models, including Rebeca models, and use the state diagrams to write the properties of interest, and then use model checking to check the properties. The formally verified models can then be used to develop the executable code. The natural mappings among the models for requirements, the formal models, and the executable code improve the effectiveness and efficiency of the approach. It also helps in runtime monitoring and adaptation.
Marjan Sirjani, Luciana Provenzano, Sara Abbaspour Asadollah, Mahshid Helali Moghadam

CIFMA 2020

Frontmatter
A Pragmatic Model of Justification for Social Epistemology
Abstract
Social epistemology presents different theories about the status of shared knowledge, but only some of them retain a fruitful relation with classical epistemology. The aim of my contribution is to present a pragmatic model which is, on the one side, related to the classical concepts of “truth” and “justification”, while, on the other side, addressing a fundamentally “social” structure for the justification of knowledge. The shift from formal semantics to pragmatics is based on a notion of “material inference” embedding commitments implicit in the use of language, which favors the recognition of the social source of shared knowledge.
Raffaela Giovagnoli
Personal Identity and False Memories
Abstract
In current philosophy of mind, there are two main approaches to the question of personal identity. The first one claims that personal identity is based on our memory and, for several decades, has been known as the psychological approach to the problem. The second one has been called the animalistic approach and considers personal identity as a biological property of human beings or as a specific feature of our bodily continuity. The experiment on creating false memories in mouse brains, recently conducted at the Massachusetts Institute of Technology (MIT), seems to shed new light on the question of personal identity, given that the mouse brain is morphologically quite similar to our own. The purpose of my paper is to consider whether the above-mentioned experiment supports one of the approaches: the psychological or the animalistic. Using the conceptual instrumentarium of contemporary analytic philosophy and cognitive phenomenology, I differentiate between strong and weak false memories and argue that we cannot consider the conducted experiment to have created false memories in the strong sense. I develop a thought experiment showing what it would be like to experience an implanted (weak) false memory in the human brain. I conclude that there is not, and cannot be, an experience of a (strong) false memory.
Danil Razeev
Against the Illusory Will Hypothesis
A Reinterpretation of the Test Results from Daniel Wegner and Thalia Wheatley’s I Spy Experiment
Abstract
Since Benjamin Libet’s famous experiments in 1979, the study of the will has become a focal point in the cognitive sciences. Just like Libet, the scientists Daniel Wegner and Thalia Wheatley came to doubt that the will is causally efficacious. In their influential study I Spy from 1999, they created an experimental setup to show that agents erroneously experience their actions as caused by their thoughts. Instead, these actions are caused by unconscious neural processes; the agent’s ‘causal experience of will’ is just an illusion. Both the scientific method and the conclusion drawn from the empirical results have already been criticized by philosophers. In this paper, I will analyze the action performed in the I Spy experiment and criticize more fundamentally the assumption of a ‘causal experience of will’. I will argue that the experiment does not show that the agent’s causal experience of will is illusory, because it does not show that there is a causal experience of will. Against Wegner and Wheatley’s assumption, I will show that it is unlikely that the participants in the I Spy experiment experienced their conscious thoughts as causally efficacious for an action that they did not perform at all. It is more likely that they experienced their own bodily movement as causally efficacious for a cooperative action that they did not perform solely by themselves.
Robert Reimer
Understanding Responses of Individuals with ASD in Syllogistic and Decision-Making Tasks: A Formal Study
Abstract
Recent studies have shown that in some reasoning tasks people with Autism Spectrum Disorder perform better than typically developing people. The present note gives a brief comparison of two such tasks, namely a syllogistic task and a decision-making task, identifying the common structure as well as the differences. In the terminology of David Marr’s three levels of cognitive systems, the tasks show commonalities at the computational level in terms of the effect of contextual stimuli, though an in-depth analysis of such contexts reveals certain distinguishing features at the algorithmic level. We also make some general remarks on our approach.
Torben Braüner, Aishwarya Ghosh, Sujata Ghosh
Symbolic and Statistical Theories of Cognition: Towards Integrated Artificial Intelligence
Abstract
There are two types of approaches to Artificial Intelligence, namely Symbolic AI and Statistical AI. The symbolic and statistical paradigms of cognition may be considered to be in conflict with each other; the recent debate between Chomsky and Norvig exemplifies a fundamental tension between the two paradigms (esp. on language), which is arguably in parallel with a conflict on interpretations of quantum theory as seen between Bohr and Einstein, one side arguing for the probabilist or empiricist view and the other for the universalist or rationalist view. In the present paper we explicate and articulate the fundamental discrepancy between them, and explore how a unifying theory could be developed to integrate them, and what sort of cognitive rôles Integrated AI could play in comparison with present-day AI. We give, inter alia, a classification of Integrated AI, and argue that Integrated AI serves the purpose of humanising AI in terms of making AI more verifiable, more explainable, more causally accountable, more ethical, and thus closer to general intelligence. We especially emphasise the ethical advantage of Integrated AI. We also briefly touch upon the Turing Test for Ethical AI, and the pluralistic nature of Turing-type Tests for Integrated AI. Overall, we believe that the integrated approach to cognition gives the key to the next generation paradigm for AI and Cognitive Science in general, and that Categorical Integrated AI or Categorical Integrative AI Robotics would be arguably the most promising approach to it.
Yoshihiro Maruyama
An Interdisciplinary Model for Graphical Representation
Abstract
The paper questions whether data-driven and problem-driven models are sufficient for software to automatically produce a meaningful graphical representation of scientific findings. The paper presents descriptive and prescriptive case studies to understand the benefits and the shortcomings of existing models that aim to provide graphical representations of data sets. First, the paper considers data sets coming from the field of software metrics and shows that existing models can provide the expected outcomes for descriptive scientific studies. Second, the paper presents data sets coming from the field of human mobility and sustainable development, and shows that a more comprehensive model is needed in the case of prescriptive scientific fields requiring interdisciplinary research. Finally, an interdisciplinary problem-driven model is proposed to guide software users, and specifically scientists, in producing meaningful graphical representations of research findings. The proposal is based not only on a data-driven and/or problem-driven model but also on the different knowledge domains and scientific aims of the experts, who can provide the information needed for a higher-order structuring of the data, supporting the graphical representation output.
G. Antonio Pierro, Alexandre Bergel, Roberto Tonelli, Stéphane Ducasse
Information Retrieval from Semantic Memory: BRDL-Based Knowledge Representation and Maude-Based Computer Emulation
Abstract
This paper presents a formal model for the representation of relational information in semantic memory and for its retrieval as a reaction to triggering questions which are normally used in experimental psychology. Information is represented using the Behaviour and Reasoning Description Language (BRDL), while the engine for its retrieval is given by the real-time extension of the Maude rewrite language. Maude’s capability of specifying complex data structures as many sorted algebras and the time features of Real-Time Maude are essential in providing a means for formalising alternative human memory models. Furthermore, using Maude’s object-oriented modelling style, aspects of such alternative memory models may be implemented in separate, interchangeable modules, thus providing a way for their comparison through in silico experiments. Finally, the results of in silico experiments may be contrasted with the data produced through lab experiments and natural observations to yield, on the one hand, a calibration of the emulation engine underlying BRDL and, on the other hand, important insights into alternative theories of cognition.
Antonio Cerone, Diana Murzagaliyeva
A Multi-Agent Depth Bounded Boolean Logic
Abstract
Recent developments in the formalization of reasoning, especially in computational settings, have aimed at defining cognitive and resource bounds to express limited inferential abilities. This feature is emphasized by Depth Bounded Boolean Logics, an informational logic that models epistemic agents with inferential abilities bounded by the amount of information they can use. However, such logics do not model the ability of agents to make use of information shared by other sources. The present paper provides a first account of a Multi-Agent Depth Bounded Boolean Logic, defining agents whose limited inferential abilities can be increased through a dynamic operation of becoming informed by other data sources.
Giorgio Cignarale, Giuseppe Primiero
The Intensional Structure of Epistemic Convictions
Abstract
We discuss an axiomatic setup as an appropriate account of the intensional structure of epistemic convictions. This includes a resolution of the problem of logical omniscience as well as the individual rendering of knowledge by different persons.
Reinhard Kahle
Short-Circuiting the Definition of Mathematical Knowledge for an Artificial General Intelligence
Abstract
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement \(\phi \) if and only if that AGI would include \(\phi \) in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you know.” This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word “know” independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
Samuel Allen Alexander
Reasoning About Ignorance and Beliefs
Abstract
When building artificial agents that have to make decisions, understanding what follows from what they know or believe is mandatory, but it is also important to understand what happens when those agents ignore some facts, where ignoring a fact is interpreted as not knowing or not being aware of something. This becomes especially relevant when such agents ignore their own ignorance, since this hinders their ability to seek the information they are missing. Given this, it is useful to clarify in which circumstances ignorance is present and what might cause an agent to ignore that he/she is ignoring. This paper is an attempt to explore these questions. In the paper, the relationship between ignorance and beliefs is analysed. In particular, three doxastic effects are discussed, showing that they can be seen as a cause of ignorance. The effects are formalized in a bi-modal formal language for knowledge and belief, and it is shown how ignorance follows directly from those effects. Moreover, it is shown that negative introspection is the culprit in the passage from simply ignoring a fact to ignoring one’s own ignorance of that fact. These results could prove useful when artificial agents are designed, since modellers would be aware of which conditions are mandatory to avoid deep forms of ignorance; such artificial agents would be able to infer which information they are ignoring and could use this to seek it and fill the gaps in their knowledge/belief base.
Alessandro Aldini, Pierluigi Graziani, Mirko Tagliaferri

CoSim-CPS 2020

Frontmatter
A Case Study on Formally Validating Motion Rules for Autonomous Cars
Abstract
Car motion control is a key functional stage for providing advanced assisted or autonomous driving capabilities to vehicles. Car motion is subject to strict safety rules which are normally expressed in natural language. As such, these natural language rules are subject to potential misinterpretation during the implementation phase of the motion control stage. In this paper, we show a novel approach by which safety rules are expressed first in natural language and then in a formal language specification, which is then validated and used to generate a car motion checker. We present a case study applying the approach to real road capture data, with its associated imperfections. We also show how the approach lowers the validation effort needed to guarantee that the car motion always respects a desired set of safety rules, whereas other traditional validation methods would be much heavier to deploy and more error-prone.
Mario Henrique Cruz Torres, Jean-Pierre Giacalone, Joelle Abou Faysal
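A generated checker of this kind boils down to evaluating machine-readable rules over a recorded trace. The rule set and trace fields below are invented for illustration; the paper derives its checker from a formal rule specification rather than hand-written lambdas.

```python
# Hedged sketch: check each sample of a recorded driving trace against a
# set of named safety rules and report the violations found.

RULES = {
    "speed_limit": lambda s: s["speed"] <= s["limit"],
    "safe_gap":    lambda s: s["gap"] >= 2.0 * s["speed"],  # 2-second rule
}

def check_trace(trace):
    violations = []
    for i, sample in enumerate(trace):
        for name, rule in RULES.items():
            if not rule(sample):
                violations.append((i, name))
    return violations

trace = [
    {"speed": 10.0, "limit": 13.9, "gap": 25.0},
    {"speed": 15.0, "limit": 13.9, "gap": 35.0},   # over the limit
    {"speed": 10.0, "limit": 13.9, "gap": 12.0},   # gap too small
]
print(check_trace(trace))  # [(1, 'speed_limit'), (2, 'safe_gap')]
```

Real capture data adds noise and missing samples, which is where validating the rules formally before generating the checker pays off.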
Modelling Train Driver Behaviour in Railway Co-simulations
Abstract
The performance of a cyber-physical system (CPS) is affected by many factors; however, the impact of human performance on a CPS is often overlooked. Modelling and simulation play an important role in understanding CPSs, and co-simulation offers a way to easily incorporate human performance models into co-models of CPSs. This paper demonstrates an initial human performance model in the form of a train driver model in the railway domain. The model is linked to models of the rolling stock and movement authority using the Functional Mock-up Interface (FMI). Initial results are presented, along with a discussion of future directions for the work.
Tomas Hotzel Escardo, Ken Pierce, David Golightly, Roberto Palacin
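The coupling pattern can be sketched as a fixed-step master loop in the FMI spirit. This is a deliberate simplification: real FMI steps compiled FMUs via `fmi2DoStep`, whereas the two "units" here are plain Python callables, and the driver/train dynamics are invented.

```python
# Minimal sketch of a fixed-step co-simulation master: a driver unit
# chooses a target speed from the remaining distance, and a train unit
# relaxes its speed toward that target.

class Unit:
    def __init__(self, state, step_fn):
        self.state, self.step_fn = state, step_fn
    def do_step(self, inputs, h):
        self.state = self.step_fn(self.state, inputs, h)
        return self.state

driver = Unit(0.0,
              lambda s, inp, h: 20.0 if inp["dist"] > 100 else 0.0)
train = Unit({"v": 0.0, "dist": 500.0},
             lambda s, inp, h: {"v": s["v"] + h * (inp["target"] - s["v"]),
                                "dist": s["dist"] - h * s["v"]})

h, t = 0.1, 0.0
for _ in range(50):                      # 5 simulated seconds
    target = driver.do_step({"dist": train.state["dist"]}, h)
    train.do_step({"target": target}, h)
    t += h
print(round(train.state["v"], 2))        # approaches the 20 m/s target
```

The fixed communication step and the explicit exchange of outputs at each step are the essence of what FMI standardizes across heterogeneous tools.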
Cross-level Co-simulation and Verification of an Automatic Transmission Control on Embedded Processor
Abstract
This work proposes a method for the development of cyber-physical systems starting from a high-level representation of the control algorithm, performing a formal analysis of the algorithm, and co-simulating the algorithm with the controlled system both at high level, abstracting from the target processor, and at low level, i.e., including the emulation of the target processor. The expected advantages are a smoother and more controllable development process and greater design dependability and accuracy with respect to basic model-driven development. As a case study, an automatic transmission control has been used to show the applicability of the proposed approach.
Cinzia Bernardeschi, Andrea Domenici, Maurizio Palmieri, Sergio Saponara, Tanguy Sassolas, Arief Wicaksana, Lilia Zaourar
A Semantic-Aware, Accurate and Efficient API for (Co-)Simulation of CPS
Abstract
To understand the behavior emerging from the coordination of heterogeneous simulation units, co-simulation usually relies on either a time-triggered or an event-triggered Application Programming Interface (API). This creates bias in the resulting behavior, since a time- or event-triggered API may not be appropriate to the behavioral semantics of the model inside the simulation unit. This paper presents a new semantic-aware API to execute models. This API is a simple and straightforward extension of the Functional Mock-up Interface (FMI) API. It can be used to execute models in isolation, to debug them, and to co-simulate them. The new API is semantic-aware in the sense that it goes beyond time/event-triggered APIs to allow communication based on the behavioral semantics of internal models. The API is illustrated on a simple co-simulation use case with both cyber and physical models.
Giovanni Liboni, Julien Deantoni
An FMI-Based Initialization Plugin for INTO-CPS Maestro 2
Abstract
The accuracy of the result of a co-simulation is dependent on the correct initialization of all the simulation units. In this work, we consider co-simulation where the simulation units are described as Functional Mock-up Units (FMU). The Functional Mock-up Interface (FMI) specification specifies constraints to the initialization of variables in the scope of a single FMU. However, it does not consider the initialization of interconnected variables between instances of FMUs. Such interconnected variables place particular constraints on the initialization order of the FMUs.
The approach taken to calculate a correct initialization order is based on predicates from the FMI specification and the topological ordering of both internal connections and interconnected variables. The approach supports the initialization of co-simulation scenarios containing algebraic loops using fixed point iteration. The approach has been realized as a plugin for the open-source INTO-CPS Maestro 2 Co-simulation framework. It has been tested for various scenarios and compared to an existing Initializer that has been validated through academic and industrial application.
Simon Thrane Hansen, Casper Thule, Cláudio Gomes
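The ordering problem underlying such an initializer can be sketched with a topological sort over variable dependencies. This is a generic illustration, not Maestro's plugin code: a cycle in the dependency graph signals an algebraic loop, which the plugin resolves by fixed-point iteration instead.

```python
# Illustrative sketch: derive an initialization order for interconnected
# FMU variables by Kahn's topological sort; variables left unordered form
# an algebraic loop. The "fmuN.x" names are invented for the example.
from collections import defaultdict, deque

def init_order(deps):
    """deps: variable -> set of variables it depends on."""
    indeg = {v: len(d) for v, d in deps.items()}
    users = defaultdict(set)
    for v, d in deps.items():
        for u in d:
            users[u].add(v)
    ready = deque(sorted(v for v, n in indeg.items() if n == 0))
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for w in sorted(users[v]):
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    loop = sorted(v for v in deps if v not in order)
    return order, loop   # non-empty `loop` => algebraic loop

# fmu1's output feeds fmu2; fmu2's output feeds fmu3; no loop here.
deps = {"fmu1.y": set(),
        "fmu2.u": {"fmu1.y"}, "fmu2.y": {"fmu2.u"},
        "fmu3.u": {"fmu2.y"}}
order, loop = init_order(deps)
print(order, loop)
```

In the loop-free case the returned order is a valid sequence of `set`/`get` calls; with a non-empty `loop`, those variables would instead be iterated to a fixed point.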
Introducing Regression Tests and Upgrades to the INTO-CPS Application
Abstract
In this paper, we report on the progress made to upgrade, and to develop a stable upgrading process for, the INTO-CPS Application, an Electron.js-based desktop application providing a front-end to an INtegrated TOolchain used to develop Cyber-Physical Systems models. We added regression tests to the codebase and, for the first time, can detect loss of functionality in the application and its accompanying training tutorials using an automated process. The tests were developed on top of the Mocha, Chai and Spectron frameworks and cover all the tutorial steps performed in the desktop application (approximately 33% of the app and other tools in total). The testing process is not yet ready to be deployed in the recently developed GitHub Actions automated workflow, but this is a possibility to be considered in future developments. We expect this work to improve the stability and security of the code, thus improving the user experience.
Prasad Talasila, Armine Sanjari, Kristoffer Villadsen, Casper Thule, Peter Gorm Larsen, Hugo Daniel Macedo
Cosimulation-Based Control Synthesis
Abstract
In this paper, we present a procedure for guaranteed control synthesis for nonlinear sampled switched systems which relies on an adaptive state-space tiling procedure. Since the computational complexity of the procedure is exponential in the dimension of the system, we explore the use of cosimulation to improve computation times and the scalability of the method. We apply the procedure to a scalable case study of various dimensions, which is, to our knowledge, a significant step towards the scalability of formal control synthesis methods with respect to the state of the art.
Adrien Le Coënt, Julien Alexandre dit Sandretto, Alexandre Chapoutot
Backmatter
Metadata
Title
Software Engineering and Formal Methods. SEFM 2020 Collocated Workshops
Edited by
Loek Cleophas
Dr. Mieke Massink
Copyright Year
2021
Electronic ISBN
978-3-030-67220-1
Print ISBN
978-3-030-67219-5
DOI
https://doi.org/10.1007/978-3-030-67220-1
