
2021 | Book

Computer Safety, Reliability, and Security. SAFECOMP 2021 Workshops

DECSoS, MAPSOD, DepDevOps, USDAI, and WAISE, York, UK, September 7, 2021, Proceedings

Edited by: Ibrahim Habli, Mark Sujan, Simos Gerasimou, Erwin Schoitsch, Friedemann Bitsch

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Workshops held in conjunction with SAFECOMP 2021, the 40th International Conference on Computer Safety, Reliability and Security, which took place in York, UK, in September 2021.

The 26 regular papers included in this volume were carefully reviewed and selected from 34 submissions. The workshops included in this volume are:

DECSoS 2021: 16th Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems
WAISE 2021: Fourth International Workshop on Artificial Intelligence Safety Engineering
DepDevOps 2021: Second International Workshop on Dependable Development-Operation Continuum Methods for Dependable Cyber-Physical Systems
USDAI 2021: Second International Workshop on Underpinnings for Safe Distributed AI
MAPSOD 2021: First International Workshop on Multi-concern Assurance Practices in Software Design

Table of Contents

Frontmatter

16th International ERCIM/EWICS/ARTEMIS Workshop on Dependable Smart Embedded Cyber-Physical Systems and Systems-of-Systems (DECSoS 2021)

Frontmatter
Dependable Integration Concepts for Human-Centric AI-Based Systems
Abstract
The rising demand for adaptive, cloud-based and AI-based systems calls for an upgrade of the associated dependability concepts. That demands the instantiation of dependability-oriented processes and methods to cover the whole life cycle. However, a common solution is not yet in sight; that is especially evident for continuously learning AI and/or dynamic runtime-based approaches. This work focuses on engineering methods and design patterns that support the development of dependable AI-based autonomous systems. The emphasis on the human-centric aspect leverages users’ physiological, emotional, and cognitive states for the adaptation and optimisation of autonomous applications. We present the related body of knowledge of the TEACHING project and of several automotive-domain regulation activities and industrial working groups. We also consider the dependable architectural concepts and their applicability to different scenarios to ensure the dependability of evolving AI-based Cyber-Physical Systems of Systems (CPSoS) in the automotive domain. The paper sheds light on potential paths for the dependable integration of AI-based systems into the automotive domain through identified analysis methods and targets.
Georg Macher, Siranush Akarmazyan, Eric Armengaud, Davide Bacciu, Calogero Calandra, Herbert Danzinger, Patrizio Dazzi, Charalampos Davalas, Maria Carmela De Gennaro, Angela Dimitriou, Juergen Dobaj, Maid Dzambic, Lorenzo Giraudi, Sylvain Girbal, Dimitrios Michail, Roberta Peroglio, Rosaria Potenza, Farank Pourdanesh, Matthias Seidl, Christos Sardianos, Konstantinos Tserpes, Jakob Valtl, Iraklis Varlamis, Omar Veledar
Rule-Based Threat Analysis and Mitigation for the Automotive Domain
Abstract
Cybersecurity plays a prominent role in curbing the risks introduced by novel technologies. This is specifically the case in the automotive domain, where cyberattacks can impact vehicle operation and safety. Potential threats must be identified and mitigated to guarantee the flawless operation of safety-critical systems. This paper presents a novel approach to identifying security vulnerabilities in automotive architectures and automatically proposing mitigation strategies using rule-based reasoning. The rules, encoded in ontologies, make it possible to establish clear relationships in the vast combinatorial space of possible security threats and the related assets, security measures, and security requirements from the relevant standards. We evaluate our approach on a mixed-criticality platform typically used to develop Autonomous Driving (AD) features, and provide a generalized threat model that serves as a baseline for threat analysis of proprietary AD architectures.
Abdelkader Magdy Shaaban, Stefan Jaksic, Omar Veledar, Thomas Mauthner, Edin Arnautovic, Christoph Schmittner
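As a toy illustration of the idea (not the project's ontology-based tooling), rule-based matching of threats to mitigations can be sketched as follows; the rules shown are invented examples:

```python
# Toy illustration of rule-based threat mitigation matching.
# The paper encodes such rules in ontologies; this dictionary-based
# sketch only conveys the general idea and is not the authors' tooling.
RULES = [
    {"asset": "CAN bus", "threat": "message injection",
     "mitigation": "message authentication (e.g. SecOC)"},
    {"asset": "telematics unit", "threat": "remote code execution",
     "mitigation": "secure boot and signed firmware updates"},
]

def propose_mitigations(asset: str, threat: str) -> list[str]:
    """Return mitigations whose rule matches the given asset and threat."""
    return [r["mitigation"] for r in RULES
            if r["asset"] == asset and r["threat"] == threat]

print(propose_mitigations("CAN bus", "message injection"))
```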
Guideline for Architectural Safety, Security and Privacy Implementations Using Design Patterns: SECREDAS Approach
Abstract
Vehicle systems engineering faces new challenges with vehicle electrification, advanced driving systems, and connected vehicles. Modern architectural designs cope with an increasing number of functionalities integrated into complex Electric/Electronic (E/E) systems. This complexity grows further with V2X (Vehicle-to-everything) communication systems, which provide remote communication services that collect, store, and manipulate confidential data. The impact of these new advanced technological systems on Safety, Security, and Privacy (SSP) requires the implementation of new processes during their development phase. Therefore, new product development strategies need to be implemented to integrate SSP mechanisms across the entire product development lifecycle. The European H2020 ECSEL project SECREDAS proposes an innovative solution for Safety, Security and Privacy, specifically for automated systems. The project outlines the shortcomings of existing SSP approaches and proposes its own approach to implementing SSP mechanisms for emerging technologies. This approach includes a reference architecture with SSP features implemented by a set of reusable Design Patterns (DPs) along with their associated technology elements. This guideline proposes rules for developing new architectural Safety, Security, and Privacy implementations in a product under development using Design Patterns.
Nadja Marko, Joaquim Maria Castella Triginer, Christoph Striecks, Tobias Braun, Reinhard Schwarz, Stefan Marksteiner, Alexandr Vasenev, Joerg Kemmerich, Hayk Hamazaryan, Lijun Shan, Claire Loiseaux
Structured Traceability of Security and Privacy Principles for Designing Safe Automated Systems
Abstract
Creating modern safe automated systems like vehicles demands making them secure. With many diverse components addressing different needs, it is hard to trace and ensure the contributions of components to the overall security of systems. Principles, as high-level statements, can be used to reason about how components contribute to security (and privacy) needs. This would help to design systems and products by aligning security and privacy concerns. The structure proposed in this positioning paper helps to make traceable links from stakeholders to specific technologies and system components. It aims at informing holistic discussions and reasoning on security approaches with the stakeholders involved in the system development process. Ultimately, the traceable links can help to align developers, create test cases, and provide certification claims - essential activities to ensure the final system is secure and safe.
Behnam Asadi Khashooei, Alexandr Vasenev, Hasan Alper Kocademir
Synchronisation of an Automotive Multi-concern Development Process
Abstract
Standardisation has a primary role in establishing common ground and providing technical guidance on best practices. However, as the methods for Autonomous Driving Systems design, validation and assurance are still in their initial stages, and several of the standards are under development or have been recently published, an established practice for how to work with several complementary standards simultaneously is still lacking. To bridge this gap, we present a unified chart describing the processes, artefacts, and activities for three road vehicle standards addressing different concerns: ISO 26262 - functional safety, ISO 21448 - safety of the intended functionality, and ISO 21434 - cybersecurity engineering. In particular, the need to ensure alignment between the concerns is addressed with a synchronisation structure regarding content and timing.
Martin Skoglund, Fredrik Warg, Hans Hansson, Sasikumar Punnekkat
Offline Access to a Vehicle via PKI-Based Authentication
Abstract
Using modern methods to control vehicle access is becoming more common. Various approaches and ideas are emerging on how to ensure access in use cases such as car rental services, car sharing, and fleet management, where the process of assigning car access to individual users is dynamic and yet must be secure. In this paper, we show that this challenge can be resolved by a combination of PKI technology and an access management system. We implemented a vehicle key validation process on an embedded platform (ESP32) and measured the real-time parameters of this process to evaluate the user experience. Utilizing the SHA256-RSA cipher suite with a key length of 3072 bits, we measured a validation time of 46.6 ms. The results indicate that the user experience is not worsened by the entry delays arising from the limited computing power of embedded platforms, even when using key lengths that meet the 2020 NIST recommendations for systems to be deployed until 2030 and beyond.
Jakub Arm, Petr Fiedler, Ondrej Bastan
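For a rough sense of the measured step, the following sketch times one RSA-3072/SHA-256 signature verification with the Python cryptography library; the key material and message are made up, and a desktop CPU will of course be far faster than the ESP32 used in the paper:

```python
# Sketch: timing RSA-3072 / SHA-256 signature verification, loosely
# mirroring the key-validation step measured in the paper. All names
# and data here are illustrative, not from the paper's implementation.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

token = b"vehicle access token issued by the access management system"
signature = private_key.sign(token, padding.PKCS1v15(), hashes.SHA256())

start = time.perf_counter()
# verify() raises InvalidSignature if the token or signature is tampered with
public_key.verify(signature, token, padding.PKCS1v15(), hashes.SHA256())
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"verification took {elapsed_ms:.2f} ms")
```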
HEIFU - Hexa Exterior Intelligent Flying Unit
Abstract
The number of applications for which UAVs can be used is growing rapidly, either because they can perform more efficiently than traditional methods or because they are a good alternative when risks are involved. Indeed, as a result of some incidents that could have ended in disastrous accidents, the European Union is tightening regulations on the use of drones and requiring formal training as well as logged missions from anyone who wants to operate UAVs above a certain MTOW, whether for domestic or professional purposes. If the application requires BVLOS flights, the limitations become much more stringent. This article presents HEIFU, a class 3 hexacopter UAV with a wingspan of 1.5 m that can carry up to an 8 kg payload (with an MTOW of 15 kg), targeting applications that could profit greatly from fully automated missions. An onboard AI engine was installed so that the UAV can be trained to fly a pre-determined mission while detecting obstacles in real time, allowing it to accomplish its task without incidents. A sample use case of HEIFU is also presented, facilitating the temporal replication of an autonomous mission for an agricultural application.
Dário Pedro, Pedro Lousã, Álvaro Ramos, J. P. Matos-Carvalho, Fábio Azevedo, Luís Campos
Testing for IT Security: A Guided Search Pattern for Exploitable Vulnerability Classes
Abstract
This article presents a generic structured approach supporting the detection of exploitable software vulnerabilities of a given type. Its applicability is illustrated for two weakness types: buffer overflows and race conditions.
Andreas Neubaum, Loui Al Sardy, Marc Spisländer, Francesca Saglietti, Yves Biener
Formal Modelling of the Impact of Cyber Attacks on Railway Safety
Abstract
Modern railway signaling relies extensively on wireless communication technologies for efficient operation. The communication infrastructures it depends on are increasingly based on standardized protocols and are shared with other users. As a result, railway signaling has an increased attack surface and is more likely to become the target of cyber attacks that can result in loss of availability and, in the worst case, in safety incidents. While the formal modeling of safety properties has a well-established methodology in the railway domain, the consideration of security vulnerabilities and the related threats lacks a framework that would allow a formal treatment. In this paper, we develop a modeling framework for analyzing the potential of security vulnerabilities to jeopardize safety in communications-based train control for railway signaling, focusing on the recently introduced moving block system. We propose a refinement-based approach enabling a structured and rigorous analysis of the impact of security on system safety.
Ehsan Poorhadi, Elena Troubitsyna, György Dán
LoRaWAN with HSM as a Security Improvement for Agriculture Applications - Evaluation
Abstract
The future of agriculture is digital, and the move towards it has already started. Comparable to modern industrial automation control systems (IACS), today’s smart agriculture makes use of smart sensors, sensor networks, intelligent field devices, cloud-based data storage, and intelligent decision-making systems. These agriculture automation control systems (AACS) require equivalent, but adapted, security protection measures. Last year, the agriculture-related security theme was addressed in a presentation [1] at the DECSoS '20 workshop of the SAFECOMP 2020 conference, where a simple soil sensor prototype with a wireless communication system was presented. The sensor was used to demonstrate improvements to the operational security of field devices through cyber security protection techniques. As a continuation of last year's contribution, this paper presents an evaluation of those technologies in a real-life deployment of AACS. It also describes operational scenarios and experiences with the implementation of security measures for AACS, e.g. the implementation of a four-layer cyber security architecture, the signalling concept for alarms and notifications in the event of a cyber-attack, the assessment of security measures, the costs of security, and an outlook on upcoming security requirements for wireless IoT devices specified by the European Commission through the European Radio Equipment Directive (RED).
Reinhard Kloibhofer, Erwin Kristen, Afshin Ameri E.

2nd International Workshop on Dependable Development-Operation Continuum Methods for Dependable Cyber-Physical Systems (DepDevOps 2021)

Frontmatter
Towards Continuous Safety Assessment in Context of DevOps
Abstract
Promoted by internet companies, continuous delivery is increasingly appealing to industries that develop systems with safety-critical functions. Since safety-critical systems must meet regulatory requirements and require specific safety assessment processes in addition to the normal development steps, enabling continuous delivery of software in safety-critical systems requires automating the safety assessment process in the delivery pipeline. In this paper, we outline a continuous delivery pipeline for realizing continuous safety assessment in software-intensive safety-critical systems based on model-based safety assessment methods.
Marc Zeller
The Digital Twin as a Common Knowledge Base in DevOps to Support Continuous System Evolution
Abstract
There is an industry-wide push for faster and more feature-rich systems, including in the development of Cyber-Physical Systems (CPS). The need to apply agile development practices in the model-based design of CPS is therefore becoming more widespread. This is no easy feat, as CPS are inherently complex and their model-based development is less suited to agile practices. Model-based development does, however, suit the concept of the digital twin, that is, design models representing a system instance in operation. In this paper we present an approach where the digital twins of system instances serve as a common knowledge base for the entire agile development cycle of the system when performing system updates. Doing so enables interesting possibilities, such as the identification and detection of system variants, which is beneficial for the verification and validation of a system update. It also brings challenges, as the executable, physics-based digital twin is generally computationally expensive. We introduce the approach by means of a small example of a swiveling pick-and-place robotic arm, elaborate on related work, and discuss open future challenges.
Joost Mertens, Joachim Denil

1st International Workshop on Multi-concern Assurance Practices in Software Design (MAPSOD 2021)

Frontmatter
An Accountability Approach to Resolve Multi-stakeholder Conflicts
Abstract
A problem that has been emerging increasingly in software development is that the interests of stakeholders with different positions and roles can lead to system failures due to the exacerbation of conflicts among them. This research treats the degradation of software reliability due to conflicts of interest among multiple stakeholders as one of the challenges of multi-concern assurance, and describes a method to resolve stakeholder conflicts and improve software dependability. We propose an accountability map that incorporates the approach to achieving accountability outlined in the international standard IEC 62853. We identify events that require accountability, identify risks that arise from exacerbated stakeholder conflicts, and describe resolution procedures that support risk responses and their resolution. The results of this paper are valuable in that increased trust among stakeholders will enable software development to be responsive to accountability issues.
Yukiko Yanagisawa, Yasuhiko Yokote
Towards Assurance-Driven Architectural Decomposition of Software Systems
Abstract
Computer systems are so complex that they are usually designed and analyzed in terms of layers of abstraction. Complexity is also a challenge for the logical reasoning tools used to find software design flaws and implementation bugs, and abstraction is a common technique for scaling those tools to more complex systems. However, the abstractions used in the design phase of systems are in many cases different from those used for assurance. In this paper we argue that different software quality assurance techniques operate on different aspects of software systems. To facilitate assurance, and for a smooth integration of assurance tools into the Software Development Lifecycle (SDLC), we present a 4-dimensional meta-architecture that separates computational, coordination, and stateful software artifacts early on in the design stage. We enumerate some of the design and assurance challenges that can be addressed by this meta-architecture, and demonstrate it on the high-level design of a simple file system.
Ramy Shahin

2nd International Workshop on Underpinnings for Safe Distributed Artificial Intelligence (USDAI 2021)

Frontmatter
Integration of a RTT Prediction into a Multi-path Communication Gateway
Abstract
Reliable communication between the vehicle and its environment is an important prerequisite for automated driving functions that incorporate data from outside the vehicle. This paper presents one way to achieve this: a pipeline covering the entire process from data acquisition up to model inference in production. The pipeline is developed to predict the round-trip time of TCP in the fourth-generation mobile network (LTE) and comprises data preparation, feature selection, model training and evaluation, and deployment of the model. In addition to the technical background of the steps required to deploy a model on a target platform within the vehicle, a concrete implementation demonstrates how such a model enables more reliable scheduling between multiple communication paths. Finally, the work outlines how such a feature can be applied beyond the field of automated vehicles, e.g. in the domain of unmanned aerial vehicles.
Josef Schmid, Patrick Purucker, Mathias Schneider, Rick van der Zwet, Morten Larsen, Alfred Höß
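As a hedged illustration of such a pipeline, the scikit-learn sketch below strings together feature selection, model training, and evaluation on synthetic data; the feature names, regressor choice, and data are assumptions for illustration, not the authors' setup:

```python
# Sketch of an RTT-prediction pipeline in the spirit of the paper:
# data preparation, feature selection, training, and evaluation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))  # e.g. RSRP, RSRQ, SINR, cell load, ... (assumed features)
y = 50 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=2, size=1000)  # RTT in ms

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipeline = Pipeline([
    ("select", SelectKBest(f_regression, k=4)),   # keep the 4 most predictive features
    ("model", RandomForestRegressor(random_state=0)),
])
pipeline.fit(X_train, y_train)
print("R^2 on held-out data:", pipeline.score(X_test, y_test))
```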

4th International Workshop on Artificial Intelligence Safety Engineering (WAISE 2021)

Frontmatter
Improving Robustness of Deep Neural Networks for Aerial Navigation by Incorporating Input Uncertainty
Abstract
Uncertainty quantification methods are required in autonomous systems that include deep learning (DL) components to assess the confidence of their estimations. However, to successfully deploy DL components in safety-critical autonomous systems, they should also handle uncertainty at the input rather than only at the output of the DL components. Considering a probability distribution in the input enables the propagation of uncertainty through different components to provide a representative measure of the overall system uncertainty. In this position paper, we propose a method to account for uncertainty at the input of Bayesian Deep Learning control policies for Aerial Navigation. Our early experiments show that the proposed method improves the robustness of the navigation policy in Out-of-Distribution (OoD) scenarios.
Fabio Arnez, Huascar Espinoza, Ansgar Radermacher, François Terrier
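A common way to account for a probability distribution at the input is Monte Carlo propagation: sample inputs from the assumed distribution and aggregate the model's outputs. The generic sketch below illustrates that idea only; it is not the paper's Bayesian Deep Learning method, and the toy policy is invented:

```python
# Generic Monte Carlo propagation of input uncertainty through a model.
import numpy as np

def policy(x: np.ndarray) -> np.ndarray:
    """Stand-in for a navigation policy; the real one would be a (Bayesian) DNN."""
    return np.tanh(x @ np.array([0.5, -0.3]))

def propagate(mean, cov, n_samples=1000, seed=0):
    """Sample inputs from N(mean, cov) and summarize the output distribution."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    outputs = policy(samples)
    return outputs.mean(), outputs.std()  # predictive mean and spread

mu, sigma = propagate(mean=[0.2, 1.0], cov=np.diag([0.01, 0.04]))
print(f"action = {mu:.3f} +/- {sigma:.3f}")
```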
No Free Lunch: Overcoming Reward Gaming in AI Safety Gridworlds
Abstract
We present two heuristics for tackling the problem of reward gaming by self-modification in Reinforcement Learning agents. Reward gaming occurs when the agent’s reward function is mis-specified and the agent can achieve a high reward by altering or fooling, in some way, its sensors rather than by performing the desired actions. Our first heuristic tracks the rewards encountered in the environment and converts high rewards that fall outside the normal distribution into penalties. Our second heuristic relies on the existence of some validation action that an agent can take to check the reward. In this heuristic, on encountering an abnormally high reward, the agent performs a validation step before either accepting the reward as it is or converting it into a penalty. We evaluate the performance of these heuristics on variants of the tomato watering problem from the AI Safety Gridworlds suite.
Mariya Tsvarkaleva, Louise A. Dennis
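As a rough illustration of the first heuristic, the sketch below tracks observed rewards and converts statistical outliers into penalties; the z-score threshold, warm-up length, and penalty value are assumptions for illustration, not the paper's settings:

```python
# Sketch: convert abnormally high rewards into penalties, in the spirit
# of the first heuristic described above. All thresholds are illustrative.
import statistics

class RewardFilter:
    def __init__(self, z_threshold: float = 3.0, penalty: float = -1.0):
        self.history: list[float] = []
        self.z_threshold = z_threshold
        self.penalty = penalty

    def filter(self, reward: float) -> float:
        if len(self.history) >= 10:  # warm-up before judging outliers
            mu = statistics.mean(self.history)
            sd = statistics.stdev(self.history) or 1e-9
            if (reward - mu) / sd > self.z_threshold:
                return self.penalty  # treat the outlier as suspected gaming
        self.history.append(reward)
        return reward
```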
Effect of Label Noise on Robustness of Deep Neural Network Object Detectors
Abstract
Label noise is a primary point of interest for safety concerns in previous works, as it considerably affects the robustness of a machine learning system. This paper studies the sensitivity of object detection loss functions to label noise in bounding box detection tasks. Although label noise has been widely studied in the classification context, less attention has been paid to its effect on object detection. We characterize different types of label noise and concentrate on the most common type of annotation error: missing labels. We simulate missing labels by deliberately removing bounding boxes at training time and study the effect on different deep learning object detection architectures and their loss functions. Our primary focus is on comparing two particular loss functions: cross-entropy loss and focal loss. We also experiment with different focal loss hyperparameter values under varying amounts of noise in the datasets and discover that even up to 50% missing labels can be tolerated with an appropriate selection of hyperparameters. The results suggest that focal loss is more sensitive to label noise, but increasing the gamma value can boost its robustness.
Bishwo Adhikari, Jukka Peltomäki, Saeed Bakhshi Germi, Esa Rahtu, Heikki Huttunen
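Focal loss adds a modulating factor to cross-entropy, FL(p_t) = -(1 - p_t)^γ log(p_t), so that well-classified examples contribute less to the loss; at γ = 0 it reduces to plain cross-entropy. A minimal NumPy sketch of the binary case (the values are illustrative, not from the paper's experiments):

```python
# Focal loss for a binary case: FL(p_t) = -(1 - p_t)**gamma * log(p_t).
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    """p: predicted foreground probability, y: 0/1 ground-truth label."""
    p_t = np.where(y == 1, p, 1.0 - p)      # probability of the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

p = np.array([0.9, 0.6, 0.1])               # easy, medium, hard examples
y = np.array([1, 1, 1])
print("cross-entropy:", focal_loss(p, y, gamma=0.0))
print("focal (g=2):  ", focal_loss(p, y, gamma=2.0))  # down-weights easy examples
```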
Human-in-the-Loop Learning Methods Toward Safe DL-Based Autonomous Systems: A Review
Abstract
The involvement of humans during the training phase can play a crucial role in mitigating some safety issues of Deep Learning (DL)-based autonomous systems. This paper reviews the main concepts and methods for human-in-the-loop learning as a first step towards the development of a framework for human-machine teaming through safe learning and anomaly prediction. The methods come with their own set of challenges, such as the mismatch between the training data provided by the human and the test-time distribution, the cost of keeping the human in the loop during the long training phase, and the inability of the human to deal perfectly with unforeseen circumstances and to define safer policies.
Prajit T. Rajendran, Huascar Espinoza, Agnes Delaborde, Chokri Mraidha
An Integrated Approach to a Safety Argumentation for AI-Based Perception Functions in Automated Driving
Abstract
Developing a stringent safety argumentation for AI-based perception functions requires a complete methodology to systematically organize the complex interplay between specifications, data and training of AI functions, safety measures and metrics, risk analysis, safety goals, and safety requirements. The paper presents the overall approach of the German research project “KI-Absicherung” for developing a stringent safety argumentation for AI-based perception functions. It is a risk-based approach in which an assurance case is constructed through an evidence-based safety argumentation.
Michael Mock, Stephan Scholz, Frédérik Blank, Fabian Hüger, Andreas Rohatschek, Loren Schwarz, Thomas Stauner
Experimental Conformance Evaluation on UBER ATG Safety Case Framework with ANSI/UL 4600
Abstract
The safety of Self-Driving Vehicles (SDVs) is crucial for the social acceptance of self-driving technology/vehicles, and how to assure such safety is of great concern for automakers and for regulatory and standardization bodies. ANSI/UL 4600 (4600) [3], a standard for the safety of autonomous products, has an impact on the regulatory regime for self-driving technology/vehicles due to its detailed and well-defined assurance requirements on what is required for the safety of autonomous products. One of the major characteristics of the standard is its wide-scale adoption of the safety case, which has traditionally been used for the safety assurance of safety-critical systems such as railways and automobiles.
Uber ATG (now Aurora) then released its own safety case, called the Safety Case Framework (SCF) [1], for their SDVs. A question arises as to how much the SCF conforms to 4600, even though the SCF does not claim conformance with the standard. An answer to this question would indicate what type of argumentation is fit for purpose for the safety assurance of SDVs and address issues with assessing the conformance of a safety case against a standard.
In this paper we report on lessons learned from an experimental analysis of the conformance ratios of the SCF with 4600 and from a structural analysis following the argument structure of the SCF.
Kenji Taguchi, Fuyuki Ishikawa
Learning from AV Safety: Hope and Humility Shape Policy and Progress
Abstract
Producing automated vehicles (AVs) that are, and can be shown to be, safe is an ongoing challenge. This position paper draws on recent work to discuss alternative approaches to assessing AV safety, noting how AI can be a positive or negative influence on it. It offers suggestions to promote AV safety, drawing from practice and policy, and it ends with a speculation about a special new role for AI.
Marjory S. Blumenthal
Levels of Autonomy and Safety Assurance for AI-Based Clinical Decision Systems
Abstract
Levels of Autonomy are an important guide for structuring our thinking about capability, expectation, and safety in autonomous systems. Here we focus on autonomy in the context of digital healthcare, where autonomy maps out differently than in, e.g., self-driving cars. Specifically, we map levels of autonomy onto clinical decision support systems and consider how these levels relate to safety assurance. We then explore the differences in the generation of safety evidence between medical applications based on supervised learning (often used for prediction tasks such as diagnosis and monitoring) and those based on reinforcement learning (which we recently established as a way to guide medical interventions with AI). The latter systems have the potential to intervene on patients and should therefore be regarded as autonomous systems.
Paul Festor, Ibrahim Habli, Yan Jia, Anthony Gordon, A. Aldo Faisal, Matthieu Komorowski
Certification Game for the Safety Analysis of AI-Based CPS
Abstract
Current certification procedures aim at establishing trust in manufacturers of artificial intelligence/machine learning-based cyber-physical systems. The certification process usually requires the manufacturer to demonstrate excellence in following safety engineering standards and regulations throughout the system's entire engineering process. This paper addresses the need for real-world performance monitoring performed by the certifier to ensure that the operational system does not deviate from its specifications. We propose an interactive, cooperative process between the manufacturer and the certifier which aims at verifying conformance and consistency between the specifications and the operational model while preserving the manufacturer's competitive advantage.
Imane Lamrani, Ayan Banerjee, Sandeep K. S. Gupta
A New Approach to Better Consensus Building and Agreement Implementation for Trustworthy AI Systems
Abstract
We propose a system that focuses on consensus building and agreement implementation as the basis for establishing AI trustworthiness. Our approach is new, and we call it consensus building with an assurance case: it is based on applying open systems dependability engineering to the objective of achieving stakeholder accountability. The experimental validation of the proposed system was conducted on the issue of feature engineering within the context of a project on data-intensive medicine. Our findings show that the online nature of the proposed system is effective in facilitating stakeholders' participation and contribution and in logging the consensus-building process in a useful manner. Using this approach, stakeholders can achieve a logical understanding of the substance of the consensus and the process by which it was reached, thereby providing assurance that AI safety-related requirements are being met.
Yukiko Yanagisawa, Yasuhiko Yokote
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security. SAFECOMP 2021 Workshops
Edited by
Ibrahim Habli
Mark Sujan
Simos Gerasimou
Erwin Schoitsch
Friedemann Bitsch
Copyright Year
2021
Electronic ISBN
978-3-030-83906-2
Print ISBN
978-3-030-83905-5
DOI
https://doi.org/10.1007/978-3-030-83906-2