
About this Book

This book constitutes the proceedings of the Workshops held in conjunction with SAFECOMP 2020, the 39th International Conference on Computer Safety, Reliability and Security, held in Lisbon, Portugal, in September 2020.
The 26 regular papers included in this volume were carefully reviewed and selected from 45 submissions; the book also contains one invited paper. The workshops included in this volume are:

DECSoS 2020:
15th Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems.

DepDevOps 2020:
First International Workshop on Dependable Development-Operation Continuum Methods for Dependable Cyber-Physical Systems.

USDAI 2020:
First International Workshop on Underpinnings for Safe Distributed AI.

WAISE 2020:
Third International Workshop on Artificial Intelligence Safety Engineering.

The workshops were held virtually due to the COVID-19 pandemic.

Table of Contents

Frontmatter

15th International Workshop on Dependable Smart Cyber-Physical Systems and Systems-of-Systems (DECSoS 2020)

Frontmatter

Supervisory Control Theory in System Safety Analysis

Development of safety-critical systems requires a risk management strategy to identify and analyse hazards, and to apply the actions necessary to eliminate or control them, as malfunctions could be catastrophic. Fault Tree Analysis (FTA) is one of the most widely used methods for safety analysis in industry. However, standard FTA is manual, informal, and limited to static analysis of systems. In this paper, we present preliminary results from a model-based approach that addresses these limitations using Supervisory Control Theory. Taking an example from the Fault Tree Handbook, we present a systematic approach to incrementally obtain formal models from a fault tree and verify them in the tool Supremica. We present a method to calculate minimal cut sets using our approach. These compositional techniques could potentially be very beneficial in the safety analysis of highly complex safety-critical systems, where several components interact to solve different tasks.

Yuvaraj Selvaraj, Zhennan Fei, Martin Fabian
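To make the minimal-cut-set computation mentioned above concrete, here is a minimal sketch of the classic top-down (MOCUS-style) expansion of a fault tree into its minimal cut sets. It is not the paper's Supremica-based method; the gate structure and event names are invented for illustration.

```python
from itertools import product

# Hypothetical fault tree: each gate maps to ("AND"|"OR", [children]);
# leaves are basic events (names that are not gate keys).
TREE = {
    "TOP": ("OR", ["G1", "E3"]),
    "G1":  ("AND", ["E1", "G2"]),
    "G2":  ("OR", ["E2", "E3"]),
}

def cut_sets(node, tree):
    """Top-down (MOCUS-style) expansion of a gate into cut sets."""
    if node not in tree:                      # basic event
        return [frozenset([node])]
    kind, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":                          # union of the alternatives
        return [cs for sets in child_sets for cs in sets]
    # AND: every combination of one cut set per child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Keep only minimal cut sets: drop any set with a proper subset."""
    return [s for s in sets if not any(t < s for t in sets)]

print(minimize(cut_sets("TOP", TREE)))   # minimal cut sets {E1, E2} and {E3}
```

A cut set is minimal when no proper subset of it already causes the top event; the `minimize` filter enforces exactly that, so {E1, E3} is discarded because {E3} alone suffices.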

A Method to Support the Accountability of Safety Cases by Integrating Safety Analysis and Model-Based Design

In this paper, we describe a method of visualizing the behavior of systems' failures in order to improve the explanatory power of safety analysis artifacts. Increasingly complex in-vehicle systems are making traditional safety analysis artifacts more difficult for reviewers to understand. One of the requirements for improvement is to provide more understandable explanations of failure behaviors. The AIAG/VDA FMEA (Failure Mode and Effects Analysis) handbook, published in 2019, introduced the FMEA-MSR (Supplemental FMEA for Monitoring and System Response) to explicitly describe the behavior of failures as the Hybrid Failure Chain (i.e., a chain of failure mode, failure cause, monitoring, system response, and failure effects). For more precise explanations of the safety analysis artifacts, we propose a method to integrate and visualize failure behaviors in architectural design diagrams using SysML. Based on FTA (Fault Tree Analysis) and FMEA results, along with SysML diagrams (e.g., internal block diagrams), the proposed method imports FMEA and FTA data and represents them graphically as Hybrid Failure Chains within a system model to improve information cohesion in the safety analysis artifact. We found that the proposed method facilitates the discovery or recognition of flaws and omissions in the fault model.

Nobuaki Tanaka, Hisashi Yomiya, Kiyoshi Ogawa

Collecting and Classifying Security and Privacy Design Patterns for Connected Vehicles: SECREDAS Approach

In the past several years, autonomous driving has become a target for many technology players. Automated driving requires new and advanced mechanisms to provide safe functionality, and the increased communication makes automated vehicles more vulnerable to attacks. Security is already well established in some domains, such as the IT sector, and is now spilling over to the automotive domain. In order not to reinvent the wheel, existing security methods and tools can be evaluated and adapted to be applicable in other domains, such as automotive. The European H2020 ECSEL project SECREDAS follows this approach: existing methods, tools, protocols, best practices etc. are analyzed, combined and improved to be applicable in the field of connected vehicles. To provide modular and reusable designs, solutions are collected in the form of design patterns. The SECREDAS design patterns describe solution templates for security, safety and privacy issues related to automated systems. Grouping and classifying design patterns is important to facilitate the selection process, which is a challenging task; weak classification schemes can be one reason for the sparse application of security patterns, a subgroup of design patterns. This work aims to assist automotive software and systems engineers in adopting and using technologies available on the market. The SECREDAS security patterns are based on existing technologies, so-called Common Technology Elements, and describe how and where to apply them in the context of connected vehicles by referencing a generic architecture. This allows developers to easily find solutions to common problems and reduces development effort by providing concrete, trustworthy solutions. The whole approach and classification scheme is illustrated with one example security pattern.

Nadja Marko, Alexandr Vasenev, Christoph Striecks

Safety and Security Interference Analysis in the Design Stage

Safety and security engineering have traditionally been separate disciplines (e.g., with different required knowledge and skills, terminology, standards and life-cycles) and have operated in quasi-silos of knowledge and practices. However, the co-engineering of these two critical qualities of a system is being widely investigated, as it promises the removal of redundant work and the detection of trade-offs in early stages of the product development life-cycle. In this work, we enrich an existing safety-security co-analysis method in the design stage with capabilities for interference analysis. Reports on interference analyses are crucial to trigger co-engineering meetings leading to trade-off analyses and system refinements. We detail our automatic approach for this interference analysis, performed through fault trees generated from safety and security local analyses. We evaluate and discuss our approach from the perspective of two industrial case studies in the space and medical domains.

Jabier Martinez, Jean Godot, Alejandra Ruiz, Abel Balbis, Ricardo Ruiz Nolasco

Formalising the Impact of Security Attacks on IoT Safety

Modern safety-critical systems are becoming increasingly networked and interconnected. Often the communication between the system components utilises protocols similar to the standard Internet Protocol (IP). In particular, such protocols are used for communication between smart sensors and controllers. While offering advanced capabilities such as remote diagnostics and maintenance, this also makes safety-critical systems susceptible to the attacks implementable against IP-based systems. In this paper, we propose an approach to specifying a generic IP-based networked control system and formalising its security properties. We use the Event-B framework to formally analyse the impact of security attacks on the safety properties of the system.

Ehsan Poorhadi, Elena Troubitsyna, György Dán

Assurance Case Patterns for Cyber-Physical Systems with Deep Neural Networks

With the increasing use of deep neural networks (DNNs) in safety-critical cyber-physical systems (CPS), such as autonomous vehicles, providing guarantees about the safety properties of these systems becomes ever more important. Tools for reasoning about the safety of DNN-based systems have started to emerge. In this paper, we show that assurance cases can be used to argue about the safety of CPS with DNNs by proposing assurance case patterns that are amenable to the existing evidence generation tools for these systems. We use case studies of two different autonomous driving scenarios to illustrate the use of the proposed patterns for the construction of these assurance cases.

Ramneet Kaur, Radoslav Ivanov, Matthew Cleaveland, Oleg Sokolsky, Insup Lee

Safety-Critical Software Development in C++

The choice of programming language is a fundamental decision to be made when defining a safety-oriented software development process. It has significant impact on code quality and performance, but also on the achievable level of safety, the development and verification effort, and the cost of tool qualification. Traditionally, safety-critical systems have been programmed in C or Ada. In recent years, C++ has also entered the discussion. C++ enables elegant programming, but its inherent language complexity is much higher than that of C. This has implications for testability, structural coverage, performance, and code analysis. Further issues to be considered are tool chain diversity, the role of the standard library, and tool qualification for compilers, analyzers and other development tools. This article summarizes the requirements of different safety norms, illustrates development and verification challenges, and addresses tool qualification.

Daniel Kästner, Christoph Cullmann, Gernot Gebhard, Sebastian Hahn, Thomas Karos, Laurent Mauborgne, Stephan Wilhelm, Christian Ferdinand

An Instruction Filter for Time-Predictable Code Execution on Standard Processors

Dependable cyber-physical systems usually have stringent requirements on their response time, since failure to react to changes in the system state in a timely manner might lead to catastrophic consequences. It is therefore necessary to determine reliable bounds on the execution time of tasks. However, timing analysis, whether done statically using a timing model or based on measurements, struggles with the large number of possible execution paths in typical applications. The single-path code generation paradigm makes timing analysis trivial by producing programs with a single execution path. Single-path code uses predicated execution, where individual instructions are enabled or disabled based on predicates, instead of conditional control-flow branches. Most processing architectures support a limited number of predicated instructions, such as a conditional move, but single-path code benefits from fully predicated execution, where every instruction is predicated. However, few architectures support full predication, thus limiting the choice of processing platforms. We present a novel approach that adds support for fully predicated execution to existing processor cores which do not natively provide it. Single-path code is generated by restructuring regular machine code and replacing conditional control-flow branches with special instructions that control the predication of subsequent code. At runtime, an instruction filter interprets these predicate-defining instructions, computes and saves predicates, and filters regular instructions based on the predicate state, replacing inactive instructions with a substitute that has no effect (e.g., a NOP). We are implementing this single-path filter for the LEON3 and IBEX processors.

Michael Platzer, Peter Puschner
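The filtering mechanism described above can be illustrated with a tiny interpreter. The following is a hypothetical Python sketch with an invented three-instruction format; the real filter operates on LEON3/IBEX machine code streams, not on tuples like these.

```python
# Sketch of the single-path instruction-filter idea: a predicate-defining
# pseudo-instruction ("PRED") guards the following region; while the
# predicate is false, regular instructions are filtered into NOPs.
def run(program, regs):
    pred = True
    for op, *args in program:
        if op == "PRED":                 # predicate-defining instruction
            pred = bool(regs[args[0]])
            continue
        if op == "ENDP":                 # close the predicated region
            pred = True
            continue
        if not pred:                     # filter: substitute a NOP
            continue
        if op == "ADDI":
            dst, src, imm = args
            regs[dst] = regs[src] + imm
        elif op == "MOV":
            dst, src = args
            regs[dst] = regs[src]
    return regs

# if (r1) r2 = r3 + 1; in real single-path code both branch paths are
# emitted, each guarded by its own predicate, so the path is unique.
prog = [("PRED", "r1"), ("ADDI", "r2", "r3", 1), ("ENDP",)]
print(run(prog, {"r1": 0, "r2": 7, "r3": 40}))   # r2 stays 7
```

Because inactive instructions still occupy their slot (as NOPs), the instruction stream and hence the timing is the same on every run, which is what makes the execution time trivially analysable.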

ISO/SAE DIS 21434 Automotive Cybersecurity Standard - In a Nutshell

A range of connected and automated vehicles is already available, which is intensifying the usage of connectivity features and information sharing for vehicle maintenance and traffic safety features. The resulting highly connected networking amplifies the attractiveness of attacks on vehicles and connected infrastructure by hackers with different motivations. Hence, the newly introduced cybersecurity risks are attracting a range of mitigating strategies across the automotive field. The industry’s target is to design and deliver safe and secure connected and automated vehicles. Therefore, efforts are being poured into developing an industry standard capable of tackling automotive cybersecurity issues and protecting assets. The joint working group of the standardization organizations ISO and SAE has recently established and published a draft international specification of the “ISO/SAE DIS 21434 Road Vehicles - Cybersecurity Engineering” standard. This document delivers a review of the available draft. This work provides a position statement for discussion of the analysis methods and recommendations given in the standard. The aim is to provide a basis for industry experts and researchers for an initial review of the standard and consequently trigger discussions and suggestions of best practices and methods for application in the context of the standard.

Georg Macher, Christoph Schmittner, Omar Veledar, Eugen Brenner

WiCAR - Simulating Towards the Wireless Car

Advanced driving assistance systems (ADAS) pose stringent requirements on a system’s control and communications in terms of timeliness and reliability; hence, wireless communications have not been seriously considered as potential candidates for such deployments. However, recent developments in these technologies are supporting unprecedented levels of reliability and predictability. This can enable a new generation of ADAS with increased flexibility and the possibility of retrofitting older vehicles. However, to effectively test and validate these systems, there is a need for tools that can support the simulation of these complex communication infrastructures from both the control and the networking perspective. This paper introduces a co-simulation framework that enables the simulation of an ADAS application scenario on these two fronts, analyzing the relationship between different vehicle dynamics and the delay required for the system to operate safely, and exploring the performance limits of different wireless network configurations.

Harrison Kurunathan, Ricardo Severino, Ênio Filho, Eduardo Tovar

Automated Right of Way for Emergency Vehicles in C-ITS: An Analysis of Cyber-Security Risks

Cooperative Intelligent Transport Systems (C-ITS) provide comprehensive information and communication services to enable a more efficient and safe use of transport systems. Emergency vehicles can benefit from C-ITS by sending preemption requests to traffic lights or other connected road users, thus reducing their time loss when approaching an emergency. This, however, depends on secure and reliable communication between all involved parties. Potential risks involve cyber-attacks and acts of sabotage. A major issue is the security process applied to provide C-ITS vehicles with the authorisations to exercise the right of way intended for emergency vehicles. This paper presents results from the research project EVE (Efficient right of way for emergency vehicles in C-ITS): following the lifecycle and processes of the emergency vehicle and its on-board unit from installation to decommissioning, relevant use cases are subjected to an extended Failure Mode and Effects Analysis (FMEA) to assess inherent flaws that could be exploited by cyber-attacks. The results show that, while the technical provisions foreseen by the relevant standards in general provide strong security, detailed security management processes still need to be specified.

Lucie Langer, Arndt Bonitz, Christoph Schmittner, Stefan Ruehrup

Integrity Checking of Railway Interlocking Firmware

While uses of trusted computing have concentrated on the boot process, system integrity and remote attestation of systems, little has been done on the higher-level use cases - particularly in safety-related domains - where integrity failures can have devastating consequences, e.g., Stuxnet and Triton. Understanding trusted systems and exploring their operation is complicated by the need for a core root of trust in hardware, such as a TPM. This can be problematic, if not impossible, to work with in some domains, such as Rail and Medicine, where such hardware is still unfamiliar. We construct a simulation environment to quickly prototype and explore trusted systems, as well as to provide a safe means for exploring trust and integrity attacks in these vertical domains.

Ronny Bäckman, Ian Oliver, Gabriela Limonta
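As a minimal illustration of the integrity-checking idea, the following sketch models the TPM-style measure-and-extend hash chain that underlies such firmware checks. The component names and the golden-value handling are invented; the simulation environment in the paper is considerably richer.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component))."""
    return sha256(pcr + sha256(measurement))

# Measure a (hypothetical) interlocking boot chain into one register.
pcr = bytes(32)                              # PCR starts at all zeros
for component in [b"bootloader-image", b"interlocking-firmware-v1"]:
    pcr = extend(pcr, component)

GOLDEN = pcr      # recorded once on a known-good reference system
# Attestation check: recompute on the running system and compare.
assert pcr == GOLDEN, "integrity violation: firmware measurement mismatch"
```

Because each extend folds the previous value into the new one, swapping, omitting or modifying any measured component changes the final register value, which a verifier can detect remotely.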

LoRaWAN with HSM as a Security Improvement for Agriculture Applications

The digital future in agriculture started long ago, with Smart Farming and Agriculture 4.0 being synonyms that describe the change in this domain. Digitalization stands for the technology needed to realize the transformation from conventional to modern agriculture. The continuous monitoring of all environmental data and the recording of all work parameters enable data collections, which are used for precise decision making and the planning of in-time missions. To guarantee secure and genuine data, appropriate data security measures must be provided. This paper presents research work in the EU AFarCloud project. It introduces LoRaWAN, an important data communication technology for the transmission of sensor data, and presents a concept for improving data security and the protection of sensor nodes. Data and device protection are becoming increasingly important, particularly for LoRaWAN applications in agriculture. The first part presents a general assessment of the security situation in modern agriculture, data encryption methods, and the LoRaWAN data communication technology. The paper then explains the security improvement concept, which uses a Hardware Secure Module (HSM) that not only improves data security but also prevents device manipulation. A real system implementation (Security Evaluation Demonstrator, SED) helps to validate the correctness and proper functioning of the security improvement. Finally, an outlook on necessary future work describes what should be done to make digital agriculture as safe and secure as Industrial Control Systems (ICSs) are today.

Reinhard Kloibhofer, Erwin Kristen, Luca Davoli
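For a rough sense of the payload protection involved, here is a hypothetical encrypt-then-MAC sketch using AES-CTR and AES-CMAC (LoRaWAN computes its message integrity code with AES-CMAC, though its actual frame format and key handling differ, and in the paper's concept the keys would live inside the HSM rather than in application memory).

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.cmac import CMAC

def protect(payload: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    """Encrypt a sensor payload with AES-CTR, then append an AES-CMAC tag."""
    nonce = os.urandom(16)                       # fresh CTR counter block
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(payload) + enc.finalize()
    mac = CMAC(algorithms.AES(mac_key))          # AES-CMAC, as in LoRaWAN MICs
    mac.update(nonce + ct)
    return nonce + ct + mac.finalize()

# Keys invented for illustration; the HSM would generate and hold them.
enc_key, mac_key = os.urandom(16), os.urandom(16)
frame = protect(b'{"soil_moisture": 41.2}', enc_key, mac_key)
```

A receiver would recompute the CMAC over the nonce and ciphertext and reject the frame on mismatch before decrypting, so tampered sensor data never reaches the decision-making layer.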

1st International Workshop on Dependable Development-Operation Continuum Methods for Dependable Cyber-Physical Systems (DepDevOps 2020)

Frontmatter

Multilevel Runtime Security and Safety Monitoring for Cyber Physical Systems Using Model-Based Engineering

Cyber-Physical Systems (CPS) are heterogeneous in nature and are composed of numerous components and embedded subsystems that interact with each other and with the physical world. The interaction of hardware and software components at each level exposes attack surfaces that require novel methods to secure. To ensure the safety and security of high-integrity CPS, we present a multilevel runtime monitoring approach with monitors at each level of processing and integration. In the proposed multilevel monitoring framework, some monitoring properties are formally defined using Event Calculus. We then demonstrate the need for multilevel monitors for faster detection and isolation of attacks by performing data attacks and fault injection on a Simulink CPS model.

Smitha Gautham, Athira V. Jayakumar, Carl Elks
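The kind of property such monitors check can be illustrated with a small trace checker. Below is a hypothetical bounded-response monitor in Python; the event names, deadline and trace are invented, and the paper itself specifies its properties in Event Calculus and monitors at several system levels rather than over a flat event list.

```python
def check_response(trace, trigger="alarm", response="shutdown", deadline=3):
    """Bounded response: every `trigger` must be followed by `response`
    within `deadline` subsequent events. Returns indices of violations."""
    violations, pending = [], []
    for i, event in enumerate(trace):
        if event == trigger:
            pending.append(i)                   # open obligation
        elif event == response:
            pending.clear()                     # all open triggers satisfied
        violations += [j for j in pending if i - j >= deadline]
        pending = [j for j in pending if i - j < deadline]
    return violations + pending                 # unresolved at end of trace

trace = ["ok", "alarm", "ok", "ok", "ok", "shutdown"]
print(check_response(trace))                    # [1]: shutdown came too late
```

Running one such checker per integration level, as the paper proposes, localizes where in the processing chain an attack or fault first manifests.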

Towards a DevOps Approach in Cyber Physical Production Systems Using Digital Twins

Nowadays, product manufacturing must respond to mass customisation of products in order to meet global market needs. This requires an agile and dynamic production process to be competitive in the market. Consequently, the need for factory digitalisation arises with the introduction of Industry 4.0. One example of this digitalisation is the digital twin. The digital twin enhances flexibility due to its adaptability and seamless interaction between the physical system and its virtual model. Furthermore, it bridges the gap between development and operations throughout the whole product life cycle. The digital twin can therefore be an enabler for applying DevOps to cyber-physical production systems, as DevOps aims at merging Development and Operations to provide a continuous and agile process. This paper analyses the use of the digital twin to enable a DevOps approach for cyber-physical production systems (CPPS) in order to create a fully integrated and automated production process, enabling continuous improvement.

Miriam Ugarte Querejeta, Leire Etxeberria, Goiuria Sagardui

Leveraging Semi-formal Approaches for DepDevOps

While formal methods have long been praised by the dependable Cyber-Physical Systems community, continuous software engineering practices are now employing or promoting semi-formal approaches for achieving lean and agile processes. This paper discusses the use of Behaviour Driven Development, particularly Gherkin and RSpec, for DepDevOps, i.e., DevOps for dependable Cyber-Physical Systems.

Wanja Zaeske, Umut Durak

1st International Workshop on Underpinnings for Safe Distributed Artificial Intelligence (USDAI 2020)

Frontmatter

Towards Building Data Trust and Transparency in Data-Driven Business Applications

In view of deriving business value from their (product) data, organisations need to adopt the right method and technology to analyse these data to infer new insights and business intelligence. This is feasible only with a certain guarantee on the completeness, trustworthiness, consistency and accuracy of the data. Thus, building trust in acquired (product) data and its analytics is pivotal if we are to realise its full benefits. To this end, we explore different technologies for building data trust, such as Blockchain, traditional distributed databases and trusted third party platforms, in combination with security algorithms. In this paper, we present a Blockchain-based solution for building data trust, based on which we designed a system prototype as a proof-of-concept.

Annanda Rath, Wim Codenie, Anna Hristoskova
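The core mechanism behind such Blockchain-based data trust can be reduced to a hash-chained ledger: each block commits to its predecessor, so any later tampering with recorded data is detectable. A minimal, hypothetical sketch follows; the prototype described in the paper is of course far more elaborate.

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """Create a block whose hash covers its data and its predecessor."""
    block = {"ts": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check each back-link."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        ok = (block["prev"] == prev["hash"] and
              block["hash"] == hashlib.sha256(
                  json.dumps(body, sort_keys=True).encode()).hexdigest())
        if not ok:
            return False
    return True

genesis = make_block({"sensor": "init"}, "0" * 64)
chain = [genesis, make_block({"sensor": 21.5}, genesis["hash"])]
print(verify(chain))        # True; editing any block breaks the chain
```

The trust argument in the paper rests on distributing such a ledger across parties so that no single participant can silently rewrite history.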

Distributed AI for Special-Purpose Vehicles

In this paper, we elaborate on two issues that are crucial to consider when exploiting data across a fleet of industrial assets deployed in the field: 1) reliable storage and efficient communication of large quantities of data in the absence of continuous connectivity, and 2) the traditional centralized data analytics model which is challenged by the inherently distributed context when considering a fleet of distributed assets. We illustrate how advanced machine learning techniques can run locally at the edge, in the context of two industry-relevant use cases related to special-purpose vehicles: data compression and vehicle overload detection. These techniques exploit real-world usage data captured in the field using the I-HUMS platform provided by our industrial partner ILIAS solutions Inc.

Kevin Van Vaerenbergh, Henrique Cabral, Pierre Dagnely, Tom Tourwé

Cynefin Framework, DevOps and Secure IoT

Understanding the Nature of IoT Systems and Exploring Where in the DevOps Cycle Easy Gains Can Be Made to Increase Their Security

In the relatively new domain of the Internet of Things (IoT), startups and small companies thrive and stride in bringing new products to the market. Many of them experience problems and fail to profit from their IoT innovation. A lot of those problems are security related. In IoT development, security issues are often overlooked or underestimated. This article explores, from a holistic viewpoint, how security problems in IoT systems can be prevented or mitigated with minimal effort. Concepts examined are: the Cynefin framework, Business DevOps, and the role of constraints and requirements in the design phase.

Franklin Selgert

Creating It from SCRATCh: A Practical Approach for Enhancing the Security of IoT-Systems in a DevOps-Enabled Software Development Environment

DevOps describes a method to reorganize the way different disciplines in software engineering work together to speed up software delivery. However, the introduction of DevOps methods to organisations is a complex task. A successful introduction results in a set of structured process descriptions. Despite the structure, this process leaves margin for error: security issues in particular are addressed in individual stages, without consideration of their interdependence. Furthermore, applying DevOps methods to distributed entities, such as the Internet of Things (IoT), is difficult, as the architecture is tailor-made for desktop and cloud resources. In this work, an overview of tooling employed in the stages of DevOps processes is introduced, and gaps in terms of security or applicability to the IoT are derived. Based on these gaps, solutions that are being developed in the course of the research project SCRATCh are presented and discussed in terms of their benefit to DevOps environments.

Simon D. Duque Anton, Daniel Fraunholz, Daniel Krohmer, Daniel Reti, Hans D. Schotten, Franklin Selgert, Marcell Marosvölgyi, Morten Larsen, Krishna Sudhakar, Tobias Koch, Till Witt, Cédric Bassem

3rd International Workshop on Artificial Intelligence Safety Engineering (WAISE 2020)

Frontmatter

Revisiting Neuron Coverage and Its Application to Test Generation

The use of neural networks in perception pipelines of autonomous systems such as autonomous driving is indispensable due to their outstanding performance. At the same time, their complexity poses a challenge with respect to safety. An important question in this regard is how to substantiate test sufficiency for such a function. One approach from the software testing literature is that of coverage metrics. Similar notions of coverage, called neuron coverage, have been proposed for deep neural networks and try to assess to what extent test inputs activate neurons in a network. Still, the correspondence between high neuron coverage and safety-related network qualities remains elusive. Potentially, a high coverage could imply sufficiency of test data. In this paper, we argue that the coverage metrics discussed in the current literature do not satisfy these high expectations and present a line of experiments from the field of computer vision to support this claim.

Stephanie Abrecht, Maram Akila, Sujan Sai Gannamaneni, Konrad Groh, Christian Heinzemann, Sebastian Houben, Matthias Woehrle
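As a reference point for the metric under scrutiny, here is one common definition of neuron coverage (in the spirit of DeepXplore): the fraction of neurons whose scaled activation exceeds a threshold on at least one test input. The layer shapes, random activations and threshold below are invented for illustration.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.25):
    """Fraction of neurons activated above `threshold` on at least one
    test input. `activations`: list of (n_samples, n_neurons) arrays,
    one per layer, min-max scaled per layer to [0, 1]."""
    covered = total = 0
    for layer in activations:
        lo, hi = layer.min(), layer.max()
        scaled = (layer - lo) / (hi - lo + 1e-12)
        covered += int((scaled.max(axis=0) > threshold).sum())
        total += layer.shape[1]
    return covered / total

rng = np.random.default_rng(0)
acts = [rng.standard_normal((100, 64)), rng.standard_normal((100, 32))]
print(f"coverage: {neuron_coverage(acts):.2%}")
```

That even random activations easily saturate such a metric hints at the paper's point: high neuron coverage by itself says little about the safety-relevant adequacy of the test data.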

A Principal Component Analysis Approach for Embedding Local Symmetries into Deep Learning Algorithms

Building robust-by-design Machine Learning algorithms is key for critical tasks such as safety or military applications. Building on the ideas developed in the context of invariant Support Vector Machines, this paper introduces a convenient methodology for embedding local Lie group symmetries into Deep Learning algorithms by performing a Principal Component Analysis on the corresponding Tangent Covariance Matrix. The projection of the input data onto the principal directions leads to a new data representation which allows singling out the components conveying the semantic information useful to the considered algorithmic task while reducing the dimension of the input manifold. Besides, our numerical testing emphasizes that, although less efficient than using Group-Convolutional Neural Networks, as it deals only with local symmetries, our approach does improve accuracy and robustness without introducing significant computational overhead. Performance improvements of up to 5% were obtained for low-capacity algorithms, making this approach of particular interest for the engineering of safe embedded Artificial Intelligence systems.

Pierre-Yves Lagrave
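The tangent-covariance construction can be sketched in a few lines: approximate each input's tangent vector under a small group action (here, rotation) by finite differences, form the covariance of these tangents, and project the data onto the leading eigenvectors. This is a hypothetical numpy/scipy sketch; the paper's exact pipeline, group actions and choice of retained directions may differ.

```python
import numpy as np
from scipy.ndimage import rotate

def tangent_vectors(images, angle=1.0):
    """Finite-difference tangents t(x) = (g_eps(x) - x) / eps under a
    small rotation g_eps (angle in degrees, eps in radians)."""
    ts = [(rotate(x, angle, reshape=False) - x) / np.deg2rad(angle)
          for x in images]
    return np.stack([t.ravel() for t in ts])

def principal_directions(images, k=10):
    T = tangent_vectors(images)
    C = T.T @ T / len(T)                   # tangent covariance matrix
    w, V = np.linalg.eigh(C)               # eigenvalues ascending
    return V[:, np.argsort(w)[::-1][:k]]   # top-k principal directions

# Re-represent inputs in the tangent-PCA basis before training.
X = np.random.default_rng(0).random((50, 8, 8))
V = principal_directions(X, k=10)
X_proj = np.stack([x.ravel() for x in X]) @ V    # (50, 10) representation
```

The dimensionality reduction is a side benefit: the projection keeps the directions along which the local symmetry moves the data, which is what lets the downstream network treat symmetric variants consistently.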

A Framework for Building Uncertainty Wrappers for AI/ML-Based Data-Driven Components

More and more software-intensive systems include components that are data-driven in the sense that they use models based on artificial intelligence (AI) or machine learning (ML). Since the outcomes of such models cannot be assumed to always be correct, related uncertainties must be understood and taken into account when decisions are made using these outcomes. This applies, in particular, if such decisions affect the safety of the system. To date, however, hardly any AI-/ML-based model provides dependable estimates of the uncertainty remaining in its outcomes. In order to address this limitation, we present a framework for encapsulating existing models applied in data-driven components with an uncertainty wrapper in order to enrich the model outcome with a situation-aware and dependable uncertainty statement. The presented framework is founded on existing work on the concept and mathematical foundation of uncertainty wrappers. The application of the framework is illustrated using pedestrian detection as an example, which is a particularly safety-critical feature in the context of autonomous driving. The Brier score and its components are used to investigate how the key aspects of the framework (scoping, clustering, calibration, and confidence limits) can influence the quality of uncertainty estimates.

Michael Kläs, Lisa Jöckel
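Since the paper evaluates uncertainty estimates via the Brier score and its components, a small sketch of the binned (Murphy) decomposition into reliability, resolution and uncertainty may help. The binning makes the decomposition exact only up to within-bin variance, and the toy probabilities and labels below are invented.

```python
import numpy as np

def brier_decomposition(p, y, bins=10):
    """Murphy decomposition: Brier = reliability - resolution + uncertainty,
    for binary outcomes y and predicted probabilities p."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    base = y.mean()
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    rel = res = 0.0
    for b in range(bins):
        m = idx == b
        if m.any():
            w = m.mean()                       # fraction of samples in bin
            rel += w * (p[m].mean() - y[m].mean()) ** 2
            res += w * (y[m].mean() - base) ** 2
    return {"brier": float(np.mean((p - y) ** 2)),
            "reliability": rel, "resolution": res,
            "uncertainty": base * (1 - base)}

print(brier_decomposition([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

The reliability term is what a well-calibrated uncertainty wrapper should drive toward zero, while resolution reflects how informative the situation-aware estimates are.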

Rule-Based Safety Evidence for Neural Networks

Neural networks have many applications in safety- and mission-critical systems. As industrial standards in various safety-critical domains require developers of critical systems to provide safety assurance, tools and techniques must be developed that enable the effective creation of safety evidence for AI systems. In this position paper, we propose the use of rules extracted from neural networks as artefacts for safety evidence. We discuss the rationale behind the use of rules and illustrate it using the MNIST dataset.

Tewodros A. Beyene, Amit Sahu

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

Deep learning methods are widely regarded as indispensable when it comes to designing perception pipelines for autonomous agents such as robots, drones or automated vehicles. The main reason, however, that deep learning is not yet used for autonomous agents at large scale is safety concerns. Deep learning approaches typically exhibit a black-box behavior, which makes it hard to evaluate them with respect to safety-critical aspects. While there has been some work on safety in deep learning, most papers focus on high-level safety concerns. In this work, we seek to dive into the safety concerns of deep learning methods at a deeply technical level. Additionally, we present extensive discussions of possible mitigation methods and give an outlook on the mitigation methods still missing in order to facilitate an argumentation for the safety of a deep learning method.

Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht

Positive Trust Balance for Self-driving Car Deployment

The crucial decision about when self-driving cars are ready to deploy is likely to be made with insufficient lagging metric data to provide high confidence in an acceptable safety outcome. A Positive Trust Balance approach can help with making a responsible deployment decision despite this uncertainty. With this approach, a reasonable initial expectation of safety is based on a combination of a practicable amount of testing, engineering rigor, safety culture, and a strong commitment to use post-deployment operational feedback to further reduce uncertainty. This can enable faster deployment than would be required by more traditional safety approaches by reducing the confidence necessary at time of deployment in exchange for a more stringent requirement for Safety Performance Indicator (SPI) field feedback in the context of a strong safety culture.

Philip Koopman, Michael Wagner

Integration of Formal Safety Models on System Level Using the Example of Responsibility Sensitive Safety and CARLA Driving Simulator

Automated Driving (AD) is about to transform our daily life. However, on the way towards mass deployment, some challenges have to be resolved, among which the safety assurance problem is a key issue. Therefore, Intel/Mobileye proposed a formal, mathematical model called Responsibility Sensitive Safety (RSS) to digitize reasonable boundaries on the behavior of other road users by establishing clear, mathematically proven rules. While the concept of RSS and a first reference implementation are already known to the community, a remaining question is the integration of RSS into a complete AD system. In this paper, we address this gap and describe the integration of RSS into the CARLA driving simulator as a practical example.

Bernd Gassmann, Frederik Pasch, Fabian Oboril, Kay-Ulrich Scholl
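At the heart of RSS are closed-form safety envelopes. For example, the minimum safe longitudinal gap assumes the rear vehicle may accelerate at most a_max during its response time rho and then brakes with at least b_min, while the front vehicle brakes with at most b_max. The sketch below uses illustrative parameter values; in an integration such as the one described, these would come from the system configuration.

```python
def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5,
                             a_max=3.5, b_min=4.0, b_max=8.0):
    """RSS minimum safe longitudinal gap [m], same-direction following.
    v_* in m/s; rho: response time [s]; a_max: max acceleration during
    response; b_min / b_max: min rear / max front braking [m/s^2]."""
    v_resp = v_rear + rho * a_max          # rear speed after response time
    d = (v_rear * rho + 0.5 * a_max * rho ** 2
         + v_resp ** 2 / (2 * b_min)       # rear worst-case stopping
         - v_front ** 2 / (2 * b_max))     # front best-case stopping
    return max(0.0, d)

print(f"{rss_min_longitudinal_gap(20.0, 15.0):.1f} m")  # about 55.5 m
```

Whenever the actual gap falls below this bound, RSS mandates a proper response (braking within the stated limits), which is exactly the hook an AD stack or a simulator integration has to implement.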

A Safety Case Pattern for Systems with Machine Learning Components

In order to support the argumentation of the safety assurance of a system under development, several standards from the domain of safety-critical systems recommend the construction of a safety case. This activity is guided by the objectives to be met, recommended or required by the standards along the safety lifecycle. Ongoing attempts to use Machine Learning (ML) for safety-critical functionality have revealed certain deficits. For instance, the widely recognized standard for the functional safety of automotive systems, ISO 26262, which can be used as a basis to construct a safety case, does not reason about ML. To this end, the goal of this work is to provide a pattern for arguing about the correct implementation of safety requirements in system components based on ML. The pattern is integrated within an overall encompassing approach for safety case generation for automotive systems, and its applicability is showcased on a pedestrian avoidance system.

Ernest Wozniak, Carmen Cârlan, Esra Acar-Celik, Henrik J. Putzer

Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications

Deep neural networks (DNNs) are widely considered as a key technology for perception in high and full driving automation. However, their safety assessment remains challenging, as they exhibit specific insufficiencies: black-box nature, simple performance issues, incorrect internal logic, and instability. These are not sufficiently considered in existing standards on safety argumentation. In this paper, we systematically establish and break down safety requirements to argue the sufficient absence of risk arising from such insufficiencies. We furthermore argue why diverse evidence is highly relevant for a safety argument involving DNNs, and classify available sources of evidence. Together, this yields a generic approach and template to thoroughly respect DNN specifics within a safety argumentation structure. Its applicability is shown by providing examples of methods and measures following an example use case based on pedestrian detection.

Gesina Schwalbe, Bernhard Knie, Timo Sämann, Timo Dobberphul, Lydia Gauerhof, Shervin Raafatnia, Vittorio Rocco

An Assurance Case Pattern for the Interpretability of Machine Learning in Safety-Critical Systems

Machine Learning (ML) has the potential to become widespread in safety-critical applications. It is therefore important that we have sufficient confidence in the safe behaviour of the ML-based functionality. One key consideration is whether the ML being used is interpretable. In this paper, we present an argument pattern, i.e. reusable structure, that can be used for justifying the sufficient interpretability of ML within a wider assurance case. The pattern can be used to assess whether the right interpretability method and format are used in the right context (time, setting and audience). This argument structure provides a basis for developing and assessing focused requirements for the interpretability of ML in safety-critical domains.

Francis Rhys Ward, Ibrahim Habli

A Structured Argument for Assuring Safety of the Intended Functionality (SOTIF)

Current safety standards for automated driving recommend the development of a safety case. This case aims to justify and critically evaluate, by means of an explicit argument and evidence, how the safety claims concerning the intended functionality of an automated driving feature are supported. However, little guidance exists on how such an argument could be developed. In this paper, the MISRA consortium proposes a state machine on which an argument concerning the safety of the intended functionality could be structured. By systematically covering the activation status of the automated driving feature within and outside the operational design domain, this state machine helps in exploring the conditions, and asserting the corresponding safety claims, under which hazardous events could be caused by the intended functionality. MISRA uses a Traffic Jam Drive feature to illustrate the application of this approach.

John Birch, David Blackburn, John Botham, Ibrahim Habli, David Higham, Helen Monkhouse, Gareth Price, Norina Ratiu, Roger Rivett
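The state-machine idea proposed above lends itself to a compact encoding over the feature's activation status and ODD membership, with safety claims asserted per state and per transition. The states and transitions below are invented for illustration and are not MISRA's actual model.

```python
from enum import Enum, auto

class Mode(Enum):
    OFF_OUTSIDE_ODD = auto()
    OFF_INSIDE_ODD = auto()
    ACTIVE_INSIDE_ODD = auto()
    ACTIVE_LEAVING_ODD = auto()   # handover / minimal-risk manoeuvre due

def step(mode, in_odd, activation_requested):
    """One transition of the illustrative SOTIF state machine; each
    state/transition would carry its own safety claims in the case."""
    if mode is Mode.ACTIVE_INSIDE_ODD:
        return mode if in_odd else Mode.ACTIVE_LEAVING_ODD
    if mode is Mode.ACTIVE_LEAVING_ODD:
        return Mode.OFF_OUTSIDE_ODD          # after handover completes
    if in_odd and activation_requested:
        return Mode.ACTIVE_INSIDE_ODD
    return Mode.OFF_INSIDE_ODD if in_odd else Mode.OFF_OUTSIDE_ODD

mode = Mode.OFF_OUTSIDE_ODD
for in_odd, req in [(True, False), (True, True), (False, False)]:
    mode = step(mode, in_odd, req)
    print(mode.name)
# OFF_INSIDE_ODD, ACTIVE_INSIDE_ODD, ACTIVE_LEAVING_ODD
```

Enumerating the states this way is what lets the argument systematically cover activation inside and outside the operational design domain, rather than relying on an ad hoc list of scenarios.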

Backmatter
