2019 | Book

Computer Safety, Reliability, and Security

SAFECOMP 2019 Workshops, ASSURE, DECSoS, SASSUR, STRIVE, and WAISE, Turku, Finland, September 10, 2019, Proceedings

Editors: Alexander Romanovsky, Elena Troubitsyna, Ilir Gashi, Erwin Schoitsch, Friedemann Bitsch

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Workshops held in conjunction with SAFECOMP 2019, the 38th International Conference on Computer Safety, Reliability and Security, in September 2019 in Turku, Finland.

The 32 regular papers included in this volume were carefully reviewed and selected from 43 submissions; the book also contains two invited papers. The workshops included in this volume are:

ASSURE 2019:
7th International Workshop on Assurance Cases for Software-Intensive Systems

DECSoS 2019:
14th ERCIM/EWICS/ARTEMIS Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems

SASSUR 2019:
8th International Workshop on Next Generation of System Assurance Approaches for Safety-Critical Systems

STRIVE 2019:
Second International Workshop on Safety, securiTy, and pRivacy In automotiVe systEms

WAISE 2019:
Second International Workshop on Artificial Intelligence Safety Engineering

Table of Contents

Frontmatter
Correction to: Combining GSN and STPA for Safety Arguments

In the originally published version of this chapter, there was an error in Figure 2. This has been corrected.

Celso Hirata, Simin Nadjm-Tehrani

7th International Workshop on Assurance Cases for Software-Intensive Systems (ASSURE 2019)

Frontmatter
Combining GSN and STPA for Safety Arguments

A dependability case, assurance case, or safety case is employed to explain why all critical hazards have been eliminated or adequately mitigated in mission-critical and safety-critical systems. Goal Structuring Notation (GSN) is the most widely used graphical notation for documenting dependability cases. System Theoretic Process Analysis (STPA) is a technique, based on the System-Theoretic Accident Model and Processes (STAMP), for identifying hazardous control actions, scenarios, and causal factors. STPA is considered a rather complex technique, but there is growing interest in using it in certifications of safety-critical systems development. We investigate how STAMP and STPA can be related to the use of assurance cases. This is done in a generic way by representing the STPA steps as part of the evidence and claim documentation within GSN.

Celso Hirata, Simin Nadjm-Tehrani
A Modelling Approach for System Life Cycles Assurance

System assurance involves assuring properties of both a target system itself and the system life cycle acting on it. Assurance of the latter seems less well understood than the former, due partly to the lack of consensus on what a ‘life cycle model’ is. This paper proposes a formulation of life cycle models that aims to clarify what it means to assure that a life cycle so modelled achieves expected outcomes. The Dependent Petri Net life cycle model is a variant of coloured Petri nets with inputs and outputs that interacts with and controls the real life cycle being modelled. Tokens held at a place are data representing artefacts, together with assurance that they satisfy the conditions associated with the place. The ‘propositions as types’ notion is used to represent evidence (proofs) for assurance as data included in tokens. The intended application is a formulation of the DEOS life cycle model with assurance that it achieves open systems dependability, as standardised in IEC 62853.

Shuji Kinoshita, Yoshiki Kinoshita, Makoto Takeyama
Modular Safety Cases for Product Lines Based on Assume-Guarantee Contracts

Safety cases are recommended, and in some cases required, by a number of standards. In the product line context, unlike for single systems, safety cases are inherently complex because they must argue about the safety of a family of products that share various types of engineering assets. Safety case modularization has been proposed to reduce safety case complexity by separating concerns, modularizing tightly coupled arguments, and localizing the effects of changes to particular modules. Existing modular safety-case approaches for product lines propose a feature-based modularization, which is too coarse to modularize claims of different types at different levels of abstraction. To overcome this limitation, a novel, modular safety-case architecture is presented. The modularization is based on a contract-based specification product-line model, which jointly captures the component-based architecture of systems and the corresponding safety requirements as assume-guarantee contracts. The proposed safety-case architecture is analyzed against possible product-line changes, and it is shown to be robust with respect to both fine- and coarse-grained changes, at both the product and implementation level. The proposed modular safety case is exemplified on a simplified, but real, automotive system.

Damir Nešić, Mattias Nyberg

14th International ERCIM/EWICS/ARTEMIS Workshop on Dependable Smart Embedded Cyber-Physical Systems and Systems-of-Systems (DECSoS 2019)

Frontmatter
Comparative Evaluation of Security Fuzzing Approaches

This article compares security fuzzing approaches with respect to different characteristics, commenting on their pros and cons concerning both their potential for exposing vulnerabilities and the expected effort required to do so. These preliminary considerations, based on abstract reasoning and engineering judgement, are subsequently confronted with experimental evaluations in which three fuzzing tools with different data-generation strategies are applied to examples known to contain exploitable buffer overflows. Finally, an example inspired by a real-world application illustrates the importance of combining different fuzzing concepts when fuzzing requires both generating a plausible sequence of meaningful messages to be sent over a network to a software-based controller and exploiting a hidden vulnerability through its execution.

Loui Al Sardy, Andreas Neubaum, Francesca Saglietti, Daniel Rudrich
Assuring Compliance with Protection Profiles with ThreatGet

We present ThreatGet, a new tool for security analysis based on threat modeling. The tool is integrated into a model-based engineering platform and supports an iterative, model-based risk management process. We explain the modeling and operation of ThreatGet and how it can be used for security by design. As a specific use case, we demonstrate how ThreatGet can assess compliance with a protection profile.

Magdy El Sadany, Christoph Schmittner, Wolfgang Kastner
A Survey on the Applicability of Safety, Security and Privacy Standards in Developing Dependable Systems

Safety-critical systems are required to comply with safety standards. These systems are increasingly digitized and networked, to an extent where they also need to comply with security and privacy standards. This paper aims to provide insights into how practitioners apply standards on safety, security and privacy (Sa/Se/Pr), as well as how they employ Sa/Se/Pr analysis methodologies and software tools to meet such criteria. To this end, we conducted a questionnaire-based survey among the participants of the EU project SECREDAS and obtained 21 responses. The results of our survey indicate that safety standards are widely applied by product and service providers, driven by the requirements of clients or regulators/authorities. When it comes to security standards, practitioners face a wider range of standards, of which only a few target specific industrial sectors. Some standards linking safety and security engineering are not widely used at the moment, or practitioners are not aware of this feature. For privacy engineering, the availability and usage of standards, analysis methodologies and software tools are comparatively weaker than for safety and security, reflecting the fact that privacy engineering is an emerging concern for practitioners.

Lijun Shan, Behrooz Sangchoolie, Peter Folkesson, Jonny Vinter, Erwin Schoitsch, Claire Loiseaux
Combined Approach for Safety and Security

With the evolution of Cyber-Physical Systems, the dependences and conflicts among dependability attributes (safety, security, reliability, availability, etc.) have become increasingly complex. These attributes cannot be considered in isolation; combined approaches for safety, security and the other attributes are therefore required. In this paper, we provide a matrix-based approach, inspired by the Analytical Network Process (ANP), for combined risk assessment of safety and security. The approach allows combined risk assessment that takes dependences and conflicts among attributes into account: the assessment results for the different dependability attributes are collected in the ANP matrix. We discuss approaches such as Fault Tree Analysis (FTA), Stochastic Colored Petri Net (SCPN) analysis, Attack Tree Analysis (ATA) and Failure Mode, Vulnerability and Effects Analysis (FMVEA) for evaluating the attributes concerned and achieving the goal of a combined assessment.
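
The matrix-based combination can be pictured with a small numerical sketch. The following is a minimal illustration, not the authors' method: the attribute list, risk scores and dependency weights are invented placeholders, and the row-normalised propagation is just one plausible way to combine scores.

```python
# Illustrative sketch (not the paper's method): combining per-attribute
# risk scores through an ANP-inspired dependency matrix. All numbers are
# hypothetical placeholders.
import numpy as np

attributes = ["safety", "security", "reliability"]

# Standalone risk scores per attribute, e.g. from FTA, ATA or FMVEA,
# normalised to [0, 1].
standalone_risk = np.array([0.30, 0.55, 0.20])

# Dependency matrix D: D[i, j] is the assumed degree to which risk in
# attribute j propagates to attribute i (diagonal = self-contribution).
D = np.array([
    [1.0, 0.4, 0.2],   # safety is strongly affected by security risk
    [0.1, 1.0, 0.1],   # security depends little on the others here
    [0.2, 0.3, 1.0],
])

# Combined risk: propagate scores through the dependency matrix, then
# renormalise per row so each attribute stays in [0, 1].
combined = D @ standalone_risk
combined /= D.sum(axis=1)

for name, r in zip(attributes, combined):
    print(f"{name:11s} combined risk: {r:.2f}")
```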

Siddhartha Verma, Thomas Gruber, Christoph Schmittner, P. Puschner
Towards Integrated Quantitative Security and Safety Risk Assessment

Although multiple approaches for the combination of safety and security analysis exist, there are still some major gaps to overcome before they can be used for combined risk management. This paper presents the existing gaps, based on an overview of available methods, followed by a proposal for achieving coordinated risk management by applying a quantitative security risk assessment methodology. This methodology extends established safety and security risk analysis methods with an integrated model, denoting the relationship between adversary and victim, including the capabilities and infrastructure used. This model is used to estimate the resistance strength and threat capabilities, in order to determine attack probabilities and security risks.

Jürgen Dobaj, Christoph Schmittner, Michael Krisper, Georg Macher
Potential Use of Safety Analysis for Risk Assessments in Smart City Sensor Network Applications

Smart City applications rely strongly on sensor networks for the collection of data and their subsequent analysis. In this paper we discuss whether methods from dependability engineering could be used to identify potential risks relating to the safety and security of such applications. The demonstration case of this paper is a sensor network for air quality analysis and dynamic traffic control based on these data.

Torge Hinrichs, Bettina Buth
Increasing Safety of Neural Networks in Medical Devices

Neural networks are now widely used in industry for applications such as data analysis and pattern recognition. In the medical devices domain, neural networks are used to detect certain medical or disease indications. For example, a potentially imminent asthma insult is detected based on, e.g., breathing pattern, heart rate, and a few optional additional parameters. The patient receives a warning message and can change their behavior and/or take some medicine in order to avoid the insult, which directly increases the patient’s quality of life. Although medical devices currently use neural networks mostly to provide guidance information or to propose a treatment or change of settings, the safety and reliability of the neural network are paramount: internal errors or influences from the environment can cause wrong inferences. This paper describes the experience we gained and the methods we used to increase both the safety and the reliability of a neural network in a medical device. We use a combination of online and offline tests to detect undesired behavior: online tests are performed at regular intervals during therapy, and offline tests are performed when the device is not delivering therapy.
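
As a rough illustration of the online-test idea, the sketch below shows a plausibility check that could run at regular intervals during therapy. The thresholds, signal names and predict() interface are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch: an online plausibility test around a neural network
# in a medical device. Thresholds, signal names and the model interface
# are hypothetical, not the paper's.
from dataclasses import dataclass

@dataclass
class VitalSigns:
    breathing_rate: float  # breaths per minute
    heart_rate: float      # beats per minute

def plausible(v: VitalSigns) -> bool:
    """Range checks on the inputs: reject physiologically impossible
    values before they ever reach the network."""
    return 4 <= v.breathing_rate <= 80 and 20 <= v.heart_rate <= 250

def online_check(model, v: VitalSigns, reference_score: float) -> float:
    """One online test cycle: validate inputs, query the model and
    cross-check its output against a simple reference heuristic."""
    if not plausible(v):
        raise ValueError("implausible sensor reading, fall back to safe state")
    score = model.predict([v.breathing_rate, v.heart_rate])  # assumed API
    # If the network disagrees wildly with the reference heuristic,
    # flag it instead of silently issuing a warning to the patient.
    if abs(score - reference_score) > 0.5:
        raise RuntimeError("NN output failed online plausibility test")
    return score
```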

Uwe Becker
Smart Wristband for Voting

Nowadays, sensor networks are no longer used solely in industrial settings but are accessible to the public, allowing for broad and diverse applications. The technical advancement of cyber-physical systems (CPS) has paved the way for easy and fast development of Internet of Things (IoT) devices. In this work, a voting wristband has been developed that uses hand gestures and the measurement of the corresponding barometric air pressure for voting. Audience response systems are often used to improve participation and spark interest in a topic during a presentation. The intuitive wrist movement allows for multiple-choice voting. Textile integration of the wristband guarantees comfort and allows the design to be adapted to different events. Furthermore, an application was implemented that makes live updating and visualization of voting results possible. This illustrates the broad possibilities of wearables and approaches to IoT design with sensors in times of CPS.

Martin Pfatrisch, Linda Grefen, Hans Ehm

8th International Workshop on Next Generation of System Assurance Approaches for Safety-Critical Systems (SASSUR 2019)

Frontmatter
Automotive Cybersecurity Standards - Relation and Overview

Many connected and automated vehicles are available today, and connectivity features and information sharing are increasingly used for additional vehicle, maintenance and traffic safety features. This high degree of networking also increases the attractiveness of attacks on vehicles and the connected infrastructure by hackers with different motivations, and thus introduces new risks for vehicle cybersecurity. Highly aware of this fact, the automotive industry has made considerable efforts to design and produce safe and secure connected and automated vehicles, and has invested in the development of industry standards to tackle automotive cybersecurity issues and protect its assets. The joint working group of the standardization organizations International Organization for Standardization (ISO) and Society of Automotive Engineers (SAE) has recently established and published a committee draft of the “ISO-SAE Approved new Work Item (AWI) 21434 Road Vehicles - Cybersecurity Engineering” standard. In addition, SAE is working on a set of cybersecurity guidance documents, ISO is addressing specific automotive cybersecurity topics in additional standards, and the European Telecommunications Standards Institute (ETSI) and the International Telecommunication Union (ITU) are working on security topics for connected vehicles. Further activities include national and international regulations on automotive cybersecurity. This document reviews the available work and ongoing developments and outlines the resulting automotive cybersecurity framework. The aim of this work is to provide a position statement for the discussion of available standards, methods and recommendations for automotive cybersecurity.

Christoph Schmittner, Georg Macher
A Runtime Safety Monitoring Approach for Adaptable Autonomous Systems

Adaptable autonomous systems are advanced autonomous systems that not only interact with their environment but are aware of it and are capable of adapting their behavior and structure accordingly. Since these systems operate in unknown, dynamic and unstructured safety-critical environments, traditional safety assurance techniques are no longer sufficient. In order to guarantee safe behavior at all times and in all possible situations, such systems require methodologies that can observe the system status at runtime and ensure safety accordingly. To this end, we introduce a runtime safety monitoring approach that uses a rule-based safety monitor to observe the system for safety-critical deviations. The approach behaves like a fault-tolerance mechanism: the system continuously monitors itself and activates corrective measures in the event of safety-critical failures, thereby helping the system sustain safe behavior at runtime. We illustrate the approach with an example from the autonomous agriculture domain and discuss the case study together with initial findings.
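
A rule-based runtime monitor of the kind described can be pictured as a small rule table evaluated in every monitoring cycle. The sketch below is a minimal illustration; the rules, state fields and corrective actions are invented, since the paper's rules are domain-specific.

```python
# Minimal sketch of a rule-based runtime safety monitor (rule set and
# state fields are invented for illustration).
from typing import Callable, NamedTuple

class State(NamedTuple):
    speed: float           # m/s
    obstacle_dist: float   # m

class Rule(NamedTuple):
    name: str
    violated: Callable[[State], bool]
    corrective_action: Callable[[], None]

def stop(): print("corrective action: emergency stop")
def slow(): print("corrective action: reduce speed")

RULES = [
    Rule("min-clearance", lambda s: s.obstacle_dist < 2.0, stop),
    Rule("speed-limit",   lambda s: s.speed > 5.0,         slow),
]

def monitor_step(state: State) -> None:
    """One monitoring cycle: evaluate every rule and trigger the
    corrective measure of the first violated rule, so the monitor acts
    like a fault-tolerance mechanism beside the nominal function."""
    for rule in RULES:
        if rule.violated(state):
            print(f"safety-critical deviation detected: {rule.name}")
            rule.corrective_action()
            return

monitor_step(State(speed=6.2, obstacle_dist=1.5))  # triggers min-clearance
```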

Nikita Bhardwaj Haupt, Peter Liggesmeyer
Structured Reasoning for Socio-Technical Factors of Safety-Security Assurance

Current research presents several approaches to technical safety-security risk analysis. Indeed, many safety standards now require that security be considered. However, with greater knowledge of what makes assuring both attributes difficult in an industrial context, it becomes clear that it is not just the technical assurance that is challenging; it is the entirety of the socio-technical system that supports assurance. In this paper, the second part of the Safety-Security Assurance Framework, the Socio-Technical Model (SSAF STM), is presented as one way of reasoning about these wider issues that make co-assurance difficult.

Nikita Johnson, Tim Kelly
The SISTER Approach for Verification and Validation: A Lightweight Process for Reusable Results

The research project SISTER aims to improve the safety and autonomy of light rail trains by developing and integrating novel technologies for remote sensing and object detection, safe positioning, and broadband radio communication. To prove the safety of the SISTER solution, CENELEC-compliant Verification and Validation (V&V) is required. In the SISTER project, we tackled the challenge of defining and applying a compact V&V methodology that provides convincing safety evidence on the solution while staying within the limited resources available to the project. A relevant characteristic of the methodology is that it produces V&V results that can be reused for future industrial exploitation of the SISTER outcomes after project termination. This paper presents the V&V methodology, which is currently being applied in parallel with the project activities, together with preliminary results from its application.

Andrea Ceccarelli, Davide Basile, Andrea Bondavalli, Lorenzo Falai, Alessandro Fantechi, Sandro Ferrari, Gianluca Mandò, Nicola Nostro, Luigi Rucher

2nd International Workshop on Safety, securiTy, and pRivacy In automotiVe systEms (STRIVE 2019)

Frontmatter
Demo: CANDY CREAM

The attack performed back in 2015 by Miller and Valasek on the Jeep Cherokee proved that modern vehicles can be hacked like traditional PCs or smartphones. Vehicles are no longer purely mechanical devices but shelter so much digital technology that they resemble a network of computers. Electronic Control Units (ECUs), which regulate all the functionalities of a vehicle, are commonly interconnected through the Controller Area Network (CAN) communication protocol. CAN is not secure by design: authentication, integrity and confidentiality were not considered in the design and implementation of the protocol. This represents one of the main vulnerabilities of modern vehicles: gaining access (physical or remote) to CAN communication allows a malicious entity to inject unauthorised messages on the CAN bus. These messages may lead to unexpected and possibly very dangerous behaviour of the target vehicle. Here, we describe how we implement and perform CANDY CREAM, an attack made of two parts: CANDY, which exploits a misconfiguration exposed by an Android-based infotainment system connected to the vehicle’s CAN bus network, and CREAM, a post-exploitation script that injects customized CAN frames to alter the behaviour of the vehicle.
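
The injection primitive that CREAM relies on is the general weakness of CAN: any bus participant may emit frames with arbitrary IDs. A minimal sketch using the python-can library follows; it is not the authors' CREAM script, the arbitration ID and payload are made up, and a Linux SocketCAN interface named can0 is assumed to exist.

```python
# Generic CAN-frame injection sketch with python-can (illustrative only:
# hypothetical arbitration ID and payload, assumed SocketCAN interface).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Because CAN has no authentication, receivers accept a frame with any
# arbitration ID from any node on the bus.
frame = can.Message(
    arbitration_id=0x123,            # hypothetical ID of a target function
    data=[0x01, 0x00, 0x00, 0x00],   # hypothetical payload
    is_extended_id=False,
)
bus.send(frame)
```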

Gianpiero Costantino, Ilaria Matteucci
CarINA - Car Sharing with IdeNtity Based Access Control Re-enforced by TPM

Car sharing and car access control from mobile devices is an increasingly relevant topic. While numerous proposals have started to appear, practical deployments call for simple solutions that are easy to implement and yet secure. In this work we explore the use of TPM 2.0 functionalities along with identity-based signatures in order to derive a flexible solution for gaining access to a vehicle. While the TPM 2.0 specifications do not support identity-based primitives, we can easily bootstrap identity-based private keys for Shamir’s signature scheme from the regular RSA functionalities of TPM 2.0. In this way, key distribution becomes more secure, as it is reinforced by hardware, while the rest of the functionality can be carried out by software implementations on mobile phones and in-vehicle controllers. We test the feasibility of the approach on modern Android devices and in-vehicle controllers as well as with a recent TPM circuit from Infineon.
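
Shamir's identity-based signature scheme, referenced above, can indeed be built from plain RSA operations. The toy sketch below illustrates the mathematics with insecure demo parameters; in the paper's setting the master exponentiation would be protected by the TPM, and a full-size modulus would be used.

```python
# Toy sketch of Shamir's identity-based signature scheme from RSA
# (insecure demo primes; for illustration only).
import hashlib
import secrets

# --- Trusted-authority setup ---
p, q = 1000003, 1000033
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # master secret of the authority

def H(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

# --- Key extraction: identity-based private key for one vehicle/phone ---
identity = b"vehicle-42"
g = pow(H(identity), d, N)          # in practice derived with TPM help

# --- Signing: (t, s) such that s^e == H(identity) * t^H(t||m) mod N ---
def sign(msg: bytes):
    r = secrets.randbelow(N - 2) + 2
    t = pow(r, e, N)
    h = H(t.to_bytes(8, "big") + msg)
    return t, (g * pow(r, h, N)) % N

def verify(msg: bytes, t: int, s: int) -> bool:
    h = H(t.to_bytes(8, "big") + msg)
    return pow(s, e, N) == (H(identity) * pow(t, h, N)) % N

t, s = sign(b"unlock door")
assert verify(b"unlock door", t, s)   # anyone can verify from N, e, identity
```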

Bogdan Groza, Lucian Popa, Pal-Stefan Murvay
Combining Safety and Security in Autonomous Cars Using Blockchain Technologies

Modern cars increasingly deploy complex software systems consisting of millions of lines of code that may be subject to cyber attacks. An infamous example is the Jeep hack, which allowed an attacker to remotely control the car engine by exploiting a software bug in the infotainment system. The digitalization and connectivity of modern cars demand a rethinking of car safety, as security breaches now affect the driver’s safety. To address the new threat landscape, we develop a novel concept that simultaneously addresses both car safety and security based on the emerging blockchain technology, which we mainly exploit to ensure integrity. Previous related work exploited the blockchain for the purpose of forensics, where vehicle data is stored on an externally shared ledger accessible to authorized third parties. However, those approaches cannot ensure the integrity of information used by the vehicle’s components. In contrast, we propose a blockchain-based architecture with a shared ledger inside the car, where each ECU can act as a miner and shares its information with other ECUs. The architecture not only improves the integrity of information for forensics: some algorithms, e.g. the recognition of dangerous situations, are adaptive and can be improved using, for example, sensor data. Using our architecture, we ensure that those algorithms take only verified and correct information as input.

Lucas Davi, Denis Hatebur, Maritta Heisel, Roman Wirtz
Enhancing CAN Security by Means of Lightweight Stream-Ciphers and Protocols

The Controller Area Network (CAN) is the most widely used standard for communication inside vehicles. CAN relies on frame broadcast to exchange data payloads between the different Electronic Control Units (ECUs) that manage critical or comfort functions such as cruise control or air conditioning. CAN is distinguished by its simplicity, its compatibility with real-time applications and its low deployment cost. However, CAN’s major drawback is its lack of security support: it provides no protection against attacks such as intrusion, injection or impersonation. In this work, we propose a framework for CAN security based on Trivium and Grain, two well-known lightweight stream ciphers. We define a simple authentication and key exchange protocol for ECUs. In addition, we extend CAN with support for confidentiality and integrity, at least for critical frames.
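
The frame-protection idea can be sketched as keystream encryption plus a truncated MAC. Since Trivium and Grain are not available in Python's standard library, the sketch below substitutes a hash-based keystream as a stand-in for the cipher; the counter handling and framing are illustrative assumptions, not the paper's protocol.

```python
# Stand-in sketch: protecting a CAN payload with a stream cipher and a
# truncated MAC. A SHA-256 keystream replaces Trivium/Grain here purely
# for illustration; framing and counters are invented.
import hashlib
import hmac

KEY = bytes(16)  # session key, assumed agreed via a key-exchange protocol

def keystream(key: bytes, counter: int, n: int) -> bytes:
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[:n]

def protect(payload: bytes, counter: int) -> bytes:
    assert len(payload) <= 8                       # classic CAN data field
    ks = keystream(KEY, counter, len(payload))
    ct = bytes(a ^ b for a, b in zip(payload, ks))  # confidentiality
    tag = hmac.new(KEY, counter.to_bytes(8, "big") + ct,
                   hashlib.sha256).digest()[:4]     # integrity, truncated
    return ct + tag

def unprotect(frame: bytes, counter: int) -> bytes:
    ct, tag = frame[:-4], frame[-4:]
    expect = hmac.new(KEY, counter.to_bytes(8, "big") + ct,
                      hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("integrity check failed: frame dropped")
    ks = keystream(KEY, counter, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

assert unprotect(protect(b"\x10\x27", counter=1), counter=1) == b"\x10\x27"
```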

Aymen Boudguiga, Jerome Letailleur, Renaud Sirdey, Witold Klaudel
Analysis of Security Overhead in Broadcast V2V Communications

This paper concerns security issues for broadcast vehicle-to-vehicle (V2V) messages carrying vehicle status information (location, heading, speed, etc.). These are often consumed by safety-related applications that, e.g., augment situational awareness, issue alerts, recommend courses of action, and even trigger autonomous action. Consequently, the messages need to be both trustworthy and timely. We explore the impact of authenticity and integrity protection mechanisms on message latency using a model based on queuing theory. In conditions of high traffic density, such as in busy city centres, even the 100 ms latency requirement for first-generation V2V applications was found to be challenging. Our main objective was to compare the performance overhead of the standard, PKC-based message authenticity and integrity protection mechanism with that of an alternative scheme, TESLA, which uses symmetric-key cryptography combined with hash chains. This type of scheme has been dismissed in the past due to supposedly high latency, but we found that in high traffic density conditions it outperformed the PKC-based scheme without invoking congestion management measures. Perhaps the most significant observation from a security perspective is that denial-of-service attacks appear very easy to carry out and hard to defend against. This merits attention from the research and practitioner communities and is a topic we intend to address in the future.
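
TESLA's core mechanism is a one-way hash chain of MAC keys with delayed disclosure, which the following minimal sketch illustrates. The interval bookkeeping and message format are simplified assumptions.

```python
# Minimal sketch of the TESLA idea: keys form a one-way hash chain, and
# each key is disclosed one interval after it was used, so a cheap
# symmetric MAC can authenticate broadcast messages.
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Sender: build the chain backwards; commit to k[0] (the chain anchor).
n = 5
k = [b""] * (n + 1)
k[n] = h(b"sender seed")            # secret end of the chain
for i in range(n - 1, -1, -1):
    k[i] = h(k[i + 1])
anchor = k[0]                       # distributed authentically in advance

# Interval 2: broadcast MAC(k[2], msg) now, disclose k[2] in interval 3.
msg = b"pos=47.1,8.5 speed=13.9"
tag = hmac.new(k[2], msg, hashlib.sha256).digest()

# Receiver, one interval later, gets the disclosed key and checks that
# (a) the key hashes back to the anchor and (b) the MAC matches.
disclosed = k[2]
assert h(h(disclosed)) == anchor    # two hash steps back to the anchor
assert hmac.compare_digest(
    tag, hmac.new(disclosed, msg, hashlib.sha256).digest())
```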

Mujahid Muhammad, Paul Kearney, Adel Aneiba, Andreas Kunz
You Overtrust Your Printer

Printers are common devices whose networked use is largely unsecured, perhaps due to a deep-rooted assumption that their services are somewhat negligible and, as such, unworthy of protection. This article develops structured arguments and conducts technical experiments in support of a qualitative risk assessment exercise that ultimately undermines that assumption. Three attacks that can be interpreted as post-exploitation activity are found and discussed, forming what we term the Printjack family of attacks on printers. Some printers may suffer vulnerabilities that would transform them into exploitable zombies. Moreover, a large number of printers, at least across the EU, are found to honour unauthenticated printing requests, raising the risk of an attack in which crooks exhaust the printing facilities of an institution. There is also a remarkable risk of data breach following an attack consisting of the malicious interception of data in transit towards printers. The newborn IoT era therefore demands that printers be as secure as other devices, such as laptops, should be, also to facilitate compliance with the General Data Protection Regulation (EU Regulation 2016/679) and reduce the odds of its administrative fines.

Giampaolo Bella, Pietro Biondi

2nd International Workshop on Artificial Intelligence Safety Engineering (WAISE 2019)

Frontmatter
Three Reasons Why: Framing the Challenges of Assuring AI

Assuring the safety of systems that use Artificial Intelligence (AI), specifically Machine Learning (ML) components, is difficult because of the unique challenges that AI presents for current assurance practice. What is also missing, however, is an overall understanding of this multi-disciplinary problem space. In this paper, a model is given that frames the challenges into three categories aligned with the reasons why they occur. Armed with a common picture of where existing issues and solutions “fit in”, the aim is to help bridge cross-domain conceptual gaps and provide a clearer understanding to safety practitioners, ML experts, regulators and anyone involved in the assurance of a system with AI.

Xinwei Fang, Nikita Johnson
Improving ML Safety with Partial Specifications

Advanced autonomy features of vehicles are typically difficult or impossible to specify precisely, and this has led to the rise of machine learning (ML) from examples as an alternative implementation approach to traditional programming. Developing software without specifications sacrifices the ability to effectively verify the software, yet verification is a key component of safety assurance. In this paper, we suggest that while complete specifications may not be possible, partial specifications typically are, and these could be used with ML to strengthen safety assurance. We review the types of partial specifications that are applicable for these problems and discuss the places in the ML development workflow where they could be used to improve the safety of ML-based components.
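
One common form of partial specification is an invariance property, which can serve as a test oracle even when the correct output is unknown. The sketch below is an illustrative example; the brightness transform and model interface are assumptions, not drawn from the paper.

```python
# Sketch of a partial specification as an ML test oracle: we cannot state
# the correct output, but we can state an invariance the correct function
# must satisfy (names and transform are illustrative).
import numpy as np

def brightness(images: np.ndarray, delta: float) -> np.ndarray:
    """Global brightness shift on images with pixel values in [0, 1]."""
    return np.clip(images + delta, 0.0, 1.0)

def check_invariance_spec(model, images: np.ndarray,
                          delta: float = 0.1) -> float:
    """Partial spec: the classifier's verdict should not change under a
    small brightness shift. Returns the observed violation rate."""
    base = model.predict(images)                 # assumed predict() API
    shifted = model.predict(brightness(images, delta))
    return float(np.mean(base != shifted))
```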

Rick Salay, Krzysztof Czarnecki
An Abstraction-Refinement Approach to Formal Verification of Tree Ensembles

Recent advances in machine learning are now being considered for integration in safety-critical systems such as vehicles, medical equipment and critical infrastructure. However, organizations in these domains are currently unable to provide convincing arguments that systems integrating machine learning technologies are safe to operate in their intended environments. In this paper, we present a formal verification method for tree ensembles that leverages an abstraction-refinement approach to counteract combinatorial explosion. We implemented the method as an extension to a tool named VoTE and demonstrate its applicability by verifying robustness against perturbations in random forests and gradient boosting machines in two case studies. Our abstraction-refinement-based extension to VoTE improves performance by several orders of magnitude, scaling to tree ensembles with up to 50 trees of depth 10, trained on high-dimensional data.
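
The abstraction step in such a verification can be illustrated on a single decision tree: propagate an interval box over the inputs through the tree and collect every reachable leaf. If all reachable leaves agree on the class, the robustness property holds on the whole box; otherwise a refinement would split the box. The tree encoding below is an illustrative assumption, not VoTE's internal representation.

```python
# Sketch: interval-box abstraction over one decision tree, collecting all
# leaves reachable from the box (encoding invented for illustration).
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    feature: int = -1                  # -1 marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None      # taken when x[feature] <= threshold
    right: Optional["Node"] = None
    label: int = 0                     # class label, used at leaves

Box = List[Tuple[float, float]]        # per-feature [lo, hi] intervals

def reachable_labels(node: Node, box: Box) -> set:
    if node.feature == -1:
        return {node.label}
    lo, hi = box[node.feature]
    labels = set()
    if lo <= node.threshold:           # left branch reachable from the box
        labels |= reachable_labels(node.left, box)
    if hi > node.threshold:            # right branch reachable from the box
        labels |= reachable_labels(node.right, box)
    return labels

tree = Node(0, 0.5,
            left=Node(label=0),
            right=Node(1, 0.2, left=Node(label=0), right=Node(label=1)))
# Robustness query: does the prediction stay 0 for every x in the box?
print(reachable_labels(tree, [(0.4, 0.6), (0.0, 0.1)]))  # {0} -> robust
```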

John Törnblom, Simin Nadjm-Tehrani
RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies

This paper investigates the resilience and robustness of Deep Reinforcement Learning (DRL) policies to adversarial perturbations in the state space. We first present an approach for the disentanglement of vulnerabilities caused by representation learning of DRL agents from those that stem from the sensitivity of the DRL policies to distributional shifts in state transitions. Building on this approach, we propose two RL-based techniques for quantitative benchmarking of adversarial resilience and robustness in DRL policies against perturbations of state transitions. We demonstrate the feasibility of our proposals through experimental evaluation of resilience and robustness in DQN, A2C, and PPO2 policies trained in the Cartpole environment.

Vahid Behzadan, William Hsu
A Safety Standard Approach for Fully Autonomous Vehicles

Assuring the safety of self-driving cars and other fully autonomous vehicles presents significant challenges to traditional software safety standards both in terms of content and approach. We propose a safety standard approach for fully autonomous vehicles based on setting scope requirements for an overarching safety case. A viable approach requires feedback paths to ensure that both the safety case and the standard itself co-evolve with the technology and accumulated experience. An external assessment process must be part of this approach to ensure lessons learned are captured, as well as to ensure transparency. This approach forms the underlying basis for the UL 4600 initial draft standard.

Philip Koopman, Uma Ferrell, Frank Fratrik, Michael Wagner
Open Questions in Testing of Learned Computer Vision Functions for Automated Driving

Vision is an important sensing modality in automated driving. Deep learning-based approaches have gained popularity for different computer vision (CV) tasks such as semantic segmentation and object detection. However, the black-box nature of deep neural nets (DNNs) is a challenge for practical software verification. With this paper, we want to initiate a discussion in the academic community about research questions w.r.t. software testing of DNNs for safety-critical CV tasks. To this end, we provide an overview of related work from various domains, including software testing, machine learning and computer vision, and derive a set of open research questions to start a discussion between the fields.

Matthias Woehrle, Christoph Gladisch, Christian Heinzemann
Adaptive Deployment of Safety Monitors for Autonomous Systems

This article discusses the problem of deploying safety-critical software for an autonomous system, namely a collaborative robot operating in domestic environments. We present a deployment infrastructure that supports both humans and robots in carrying out their deployment activities. We develop means to enable humans to explicitly specify the requirements of the software to be deployed, along with the resources of the robot platform on which the software will be executed. In addition, we propose an architecture that enables robots to autonomously re-deploy their software at run-time in order to account for changing requirements imposed by their task, platform and environment. We show how the architecture enables a collaborative robot to autonomously re-deploy safety monitors for detecting in-hand slippage, which often occurs in human-robot handover tasks. By doing so, the robot autonomously maintains a certain safety level, as the functioning of the monitor depends on both selecting and deploying the correct monitoring strategy for the situation at hand.

Nico Hochgeschwender
Uncertainty Wrappers for Data-Driven Models
Increase the Transparency of AI/ML-Based Models Through Enrichment with Dependable Situation-Aware Uncertainty Estimates

In contrast to established safety-critical software components, we can neither prove nor assume that the outcomes of components containing models based on artificial intelligence (AI) or machine learning (ML) will be correct in every situation. Thus, uncertainty is an inherent part of decision-making when using the outcomes of data-driven models created by AI/ML algorithms. In order to deal with this, especially in the context of safety-related systems, we need to make uncertainty transparent via dependable statistical statements. This paper introduces both a conceptual model and the related mathematical foundation of an uncertainty wrapper solution for data-driven models. The wrapper enriches existing data-driven models, such as those provided by ML or other AI techniques, with case-individual and sound uncertainty estimates. The task of traffic sign recognition is used to illustrate the approach, which considers uncertainty not only in terms of model fit but also in terms of data quality and scope compliance.
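
Conceptually, the wrapper leaves the data-driven model untouched and attaches an uncertainty estimate derived from the three factors named above. The sketch below illustrates that structure; the concrete estimators and APIs (predict_with_error, contains) are invented placeholders, not the paper's mathematical model.

```python
# Conceptual sketch of an uncertainty wrapper: the model stays a black
# box; the wrapper combines model fit, input quality and scope compliance
# into a case-individual uncertainty estimate (all estimators invented).
from dataclasses import dataclass

@dataclass
class WrappedOutcome:
    prediction: object
    uncertainty: float    # estimated probability the prediction is wrong

class UncertaintyWrapper:
    def __init__(self, model, quality_checks, scope_model):
        self.model = model
        self.quality_checks = quality_checks   # e.g. blur/contrast detectors
        self.scope_model = scope_model         # training-scope membership

    def __call__(self, x) -> WrappedOutcome:
        pred, model_fit_err = self.model.predict_with_error(x)  # assumed API
        quality_err = max(c(x) for c in self.quality_checks)    # worst factor
        if not self.scope_model.contains(x):                    # assumed API
            return WrappedOutcome(pred, uncertainty=1.0)  # no valid guarantee
        # Combine the independent failure sources conservatively.
        u = 1 - (1 - model_fit_err) * (1 - quality_err)
        return WrappedOutcome(pred, uncertainty=u)
```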

Michael Kläs, Lena Sembach
Confidence Arguments for Evidence of Performance in Machine Learning for Highly Automated Driving Functions

Due to their ability to efficiently process unstructured and highly dimensional input data, machine learning algorithms are being applied to perception tasks for highly automated driving functions. The consequences of failures and insufficiencies in such algorithms are severe and a convincing assurance case that the algorithms meet certain safety requirements is therefore required. However, the task of demonstrating the performance of such algorithms is non-trivial, and as yet, no consensus has formed regarding an appropriate set of verification measures. This paper provides a framework for reasoning about the contribution of performance evidence to the assurance case for machine learning in an automated driving context and applies the evaluation criteria to a pedestrian recognition case study.

Simon Burton, Lydia Gauerhof, Bibhuti Bhusan Sethy, Ibrahim Habli, Richard Hawkins
Bayesian Uncertainty Quantification with Synthetic Data

Image semantic segmentation systems based on deep learning are prone to making erroneous predictions for images affected by uncertainty influence factors such as occlusions or inclement weather. Bayesian deep learning applies the Bayesian framework to deep models and allows estimating so-called epistemic and aleatoric uncertainties as part of the prediction. Such estimates can indicate the likelihood of prediction errors due to the influence factors. However, because of a lack of data, the effectiveness of Bayesian uncertainty estimation when segmenting images with varying levels of influence factors has not yet been systematically studied. In this paper, we propose using a synthetic dataset to address this gap. We conduct two sets of experiments to investigate the influence of distance, occlusion, clouds, rain, and puddles on the estimated uncertainty in the segmentation of road scenes. The experiments confirm the expected correlation between the influence factors, the estimated uncertainty, and accuracy. Contrary to expectation, we also find that the estimated aleatoric uncertainty from Bayesian deep models can be reduced with more training data. We hope that these findings will help improve methods for assuring machine-learning-based systems.
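
Monte-Carlo dropout is one common way to obtain such epistemic uncertainty estimates from a deep model: keep dropout active at test time and measure the spread over repeated stochastic forward passes. The sketch below illustrates the idea for segmentation; the paper's exact Bayesian model may differ, and stochastic_forward is an assumed interface.

```python
# Sketch of Monte-Carlo dropout for per-pixel epistemic uncertainty (one
# common Bayesian approximation; not necessarily the paper's exact model).
import numpy as np

def mc_dropout_segment(stochastic_forward, image: np.ndarray, T: int = 20):
    """stochastic_forward(image) -> per-pixel class probabilities of
    shape (H, W, C), computed with dropout still enabled (assumed API)."""
    samples = np.stack([stochastic_forward(image) for _ in range(T)])
    mean_probs = samples.mean(axis=0)             # averaged prediction
    epistemic = samples.var(axis=0).sum(axis=-1)  # per-pixel spread over passes
    prediction = mean_probs.argmax(axis=-1)       # final segmentation map
    return prediction, epistemic
```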

Buu Phan, Samin Khan, Rick Salay, Krzysztof Czarnecki
A Self-certifiable Architecture for Critical Systems Powered by Probabilistic Logic Artificial Intelligence

We present a versatile architecture for AI-powered self-adaptive self-certifiable critical systems. It aims at supporting semi-automated low-cost re-certification for self-adaptive systems after each adaptation of their behavior to a persistent change in their operational environment throughout their lifecycle.

Jacques Robin, Raul Mazo, Henrique Madeira, Raul Barbosa, Daniel Diaz, Salvador Abreu
Tackling Uncertainty in Safety Assurance for Machine Learning: Continuous Argument Engineering with Attributed Tests

There are unique kinds of uncertainty in implementations constructed by machine learning from training data. This uncertainty affects the strategy and activities for safety assurance. In this paper, we investigate this point in terms of continuous argument engineering with a granular performance evaluation over the expected operational domain. We employ an attribute testing method for evaluating an implemented model in terms of explicit (partial) specifications. We then show experimental results that demonstrate how safety arguments are affected by the uncertainty of machine learning. As an example, we show a weakness of a model that could not have been predicted beforehand. Finally, we present our tool for continuous argument engineering, which tracks the latest state of assurance.

Yutaka Matsuno, Fuyuki Ishikawa, Susumu Tokumoto
The Moral Machine: Is It Moral?

Many recent studies have proposed, discussed and investigated moral decisions in scenarios of imminent accidents involving Autonomous Vehicles (AVs). Those studies investigate people’s expectations about the best decisions AVs should make when some lives must be sacrificed to save others. Recent research found that those preferences have strong ties to the respondents’ cultural traits. The present position paper questions the importance and the real value of those discussions, and also argues about their morality. Finally, an approach based on risk-oriented decision making is discussed as an alternative way to tackle those situations framed as “moral dilemmas”, in the light of safety engineering.

A. M. Nascimento, L. F. Vismari, A. C. M. Queiroz, P. S. Cugnasca, J. B. Camargo Jr., J. R. de Almeida Jr.
Backmatter
Metadata
Title: Computer Safety, Reliability, and Security
Editors: Alexander Romanovsky, Elena Troubitsyna, Ilir Gashi, Erwin Schoitsch, Friedemann Bitsch
Copyright Year: 2019
Electronic ISBN: 978-3-030-26250-1
Print ISBN: 978-3-030-26249-5
DOI: https://doi.org/10.1007/978-3-030-26250-1
