
2014 | Book

Computer Safety, Reliability, and Security

33rd International Conference, SAFECOMP 2014, Florence, Italy, September 10-12, 2014. Proceedings

Edited by: Andrea Bondavalli, Felicita Di Giandomenico

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 33rd International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2014, held in Florence, Italy, in September 2014. The 20 revised full papers presented together with 3 practical experience reports were carefully reviewed and selected from 85 submissions. The papers are organized in topical sections on fault injection techniques, verification and validation techniques, automotive systems, coverage models and mitigation techniques, assurance cases and arguments, system analysis, security and trust, notations/languages for safety-related aspects, and safety and security.

Table of Contents

Frontmatter

Fault Injection Techniques

A Simulated Fault Injection Framework for Time-Triggered Safety-Critical Embedded Systems
Abstract
This paper presents a testing and simulated fault injection framework for time-triggered safety-critical embedded systems. Our approach facilitates the validation of fault-tolerance mechanisms by performing non-intrusive simulated fault injection (SFI) on models of the system at different stages of the development, from the platform-independent model (PIM) to the platform-specific model (PSM). SFI enables exercising the intended fault-tolerance mechanisms by injecting faults into a simulated model of a system. The main benefit of this work is that it enables early detection of design flaws in fault-tolerant systems, which reduces the possibility of late discovery of design pitfalls that might require an expensive redesign of the system. We examine the feasibility of the proposed approach in a case study, where SFI is used to assess the fault-tolerance mechanisms designed in a simplified railway signaling system.
Iban Ayestaran, Carlos F. Nicolas, Jon Perez, Asier Larrucea, Peter Puschner
Rapid Fault-Space Exploration by Evolutionary Pruning
Abstract
Recent studies suggest that future microprocessors need low-cost fault-tolerance solutions for reliable operation. Several competing software-implemented error-detection methods have been shown to increase the overall resiliency when applied to critical spots in the system. Fault injection (FI) is a common approach to assess a system’s vulnerability to hardware faults. In an FI campaign comprising multiple runs of an application benchmark, each run simulates the impact of a fault in a specific hardware location at a specific point in time. Unfortunately, exhaustive FI campaigns covering all possible fault locations are infeasible even for small target applications. Commonly used sampling techniques, while sufficient to measure overall resilience improvements, lack the level of detail and accuracy needed for the identification of critical spots, such as important variables or program phases. Many faults are sampled out, leaving the developer without any information on the application parts they would have targeted.
We present a methodology and tool implementation that reduces experimentation effort in an application-specific manner, allows users to freely trade the number of FI runs for result accuracy, and provides information on all possible fault locations. After training a set of Pareto-optimal heuristics, the experimenting user can specify a maximum number of FI experiments. A detailed evaluation with a set of benchmarks running on the eCos embedded OS, including MiBench’s automotive benchmark category, emphasizes the applicability and effectiveness of our approach: for example, when the user chooses to run only 1.5% of all FI experiments, the average result accuracy is still 99.84%.
Horst Schirmeier, Christoph Borchert, Olaf Spinczyk

Verification and Validation Techniques

Safety Validation of Sense and Avoid Algorithms Using Simulation and Evolutionary Search
Abstract
We present a safety validation approach for Sense and Avoid (SAA) algorithms aboard Unmanned Aerial Vehicles (UAVs). We build multi-agent simulations to provide a test arena for UAVs with various SAA algorithms, in order to explore potential conflict situations. The simulation is configured by a series of parameters, which define a huge input space. Evolutionary search is used to explore the input space and to guide the simulation towards challenging situations, thus accelerating the process of finding dangerous faults of SAA algorithms and supporting the safety validation process. We applied our approach to the recently published Selective Velocity Obstacles (SVO) algorithm. In our first experiment, we used both random and evolutionary search to find mid-air collisions where UAVs have perfect sensing ability. We found evolutionary search can find some faults (here, interesting problems with SVO) that random search takes a long time to find. Our second experiment added sensor noise to the model. Random search found similar problems as it did in experiment one, but the evolutionary search found some interesting new problems. The two experiments show that the proposed approach has potential for safety validation of SAA algorithms.
Xueyi Zou, Rob Alexander, John McDermid
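As a toy illustration of the search component described in this abstract (not the authors' simulator), the sketch below evolves two hypothetical conflict-scenario parameters toward a minimal miss distance. The stand-in geometry, parameter ranges, and all names are invented for illustration; the paper's multi-agent simulation is far richer.

```python
import random

def miss_distance(params):
    # Toy stand-in for a conflict simulation: params encode a hypothetical
    # intruder's start offset and heading; smaller result = closer approach.
    offset, heading = params
    return abs(offset * heading - 1.0) / (1.0 + heading ** 2) ** 0.5

def evolve(pop_size=20, generations=50, seed=0):
    """Evolutionary search steering scenario parameters toward danger:
    fitness is the (to-be-minimized) miss distance between aircraft."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=miss_distance)          # most dangerous scenarios first
        parents = pop[: pop_size // 2]       # elitist selection
        children = [(p[0] + rng.gauss(0, 0.3), p[1] + rng.gauss(0, 0.3))
                    for p in parents]        # Gaussian mutation
        pop = parents + children
    return min(pop, key=miss_distance)

best = evolve()
```

Random search would sample the parameter space uniformly instead; the selection pressure here is what accelerates the discovery of near-collision scenarios.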
Debugging with Timed Automata Mutations
Abstract
Model-based Debugging is an application of Model-based Diagnosis techniques to debugging computer systems. Its basic principle is to compare a model, i.e., a description of the correct behaviour of a system, to the observed behaviour of the system. In this paper we show how this technique can be applied in the context of model-based mutation testing (MBMT) with timed automata. In MBMT we automatically generate a set of test sequences out of a test model. In contrast to general model-based testing, the test cases of MBMT cover a pre-defined set of faults that have been injected into the model (model mutation). Our automatic debugging process is purely black-box. If a test run fails, our tool reports a diagnosis as a set of model mutations. These mutations provide possible explanations why the test case has failed. For reproducing the failure, we also generate a set of minimal test cases leading to the implementation fault. The technique is implemented for Uppaal’s timed automata models and is based on a language inclusion check via bounded model checking. It adds debugging capability to our existing test-case generators. A car-alarm system serves as illustrating case study.
Bernhard K. Aichernig, Klaus Hörmaier, Florian Lorber

Automotive Systems

Systematic Derivation of Functional Safety Requirements for Automotive Systems
Abstract
The released ISO 26262 standard for automotive systems requires breaking down safety goals from the hazard analysis and risk assessment into functional safety requirements in the functional safety concept. It has to be justified that the defined functional safety requirements are suitable to achieve the stated safety goals. In this paper, we present a systematic, structured and model-based method to define functional safety requirements using a given set of safety goals. The rationale for safety goal achievement, the relevant attributes of the functional safety requirements, and their relationships are represented by a UML notation extended with stereotypes. The UML model enables a rigorous validation of several constraints expressed in OCL. We illustrate our method using an example electronic steering column lock system.
Kristian Beckers, Isabelle Côté, Thomas Frese, Denis Hatebur, Maritta Heisel
Making Implicit Safety Requirements Explicit
An AUTOSAR Safety Case
Abstract
Safety standards impose stringent requirements on embedded systems used in safety-critical applications such as automotive, railways, and aerospace. In the automotive domain, the AUTOSAR software architecture provides some mechanisms to fulfill the ISO 26262 requirements. The verification of these mechanisms is a challenging problem, and it is not always clear in which context the safety requirements are supposed to be met.
In this paper, we report on a case study developed in the SafeCer project, where we combined contract-based design and model-based testing. A contract-based approach has been used to formalize the safety requirements to detect communication failures. The formal specification shows under which assumptions the AUTOSAR protection mechanism fulfills these requirements. A model-based testing approach has been used to test the software implementing such protection mechanism. The model used for testing has been model checked against the contract specification ensuring that the system-level safety requirements are met.
Thomas Arts, Michele Dorigatti, Stefano Tonetta
Securing Vehicle Diagnostics in Repair Shops
Abstract
Diagnostics over IP (DoIP) is a new ISO standard for transmitting diagnostics messages, such as ISO 14229 Unified Diagnostic Services (UDS), over IP-based networks. The standard specifies the communication architecture needed for diagnostics communication and defines an application layer protocol for exchanging management and diagnostics messages between DoIP-enabled devices. However, DoIP relies on the insecure network protocols used in today’s Internet and no additional security was added in the standard to tackle this. Thus, to prevent malicious manipulations of vehicle diagnostics sessions in repair shops, appropriate security mechanisms need to be in place.
In this paper, we analyse possible approaches to find the most suitable security architecture for diagnostics communication in repair shop networks. First, an evaluation of possible approaches is conducted. These are then analysed with respect to a set of security requirements and implementation challenges. Finally, we present the approach that best meets the requirements for a secure diagnostics architecture in repair shops.
Pierre Kleberger, Tomas Olovsson

Coverage Models and Mitigation Techniques

Analysis of Persistence of Relevance in Systems with Imperfect Fault Coverage
Abstract
This paper introduces the concept of persistence and analyzes its influence on the reliability of systems with imperfect fault coverage (IFC). A component is persistent if it is always relevant in the system unless the system has failed, and a system is persistent if all of its components are persistent. In traditional imperfect fault coverage models, simply called coverage models (CMs), the coverage (including identification and isolation) is typically limited to the faulty components regardless of their relevance. The general assumption of system coherence cannot guarantee that a component will always be relevant in the system. Rather, an initially relevant component could become irrelevant later due to the failures of other components, an issue unfortunately missing from traditional CMs. For systems with IFC, it is important to cover the non-persistent components whenever they become irrelevant, so as to prevent future uncovered faults in them that may lead to system failure. A new coverage model incorporating the timely coverage of the irrelevant components (in addition to the faulty components) is proposed, which opens up a cost-effective approach to improving system reliability without increasing redundancy.
Jianwen Xiang, Fumio Machida, Kumiko Tadano, Yoshiharu Maeno
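The stakes of imperfect fault coverage can be illustrated with the classic duplex result (a textbook formula from standard coverage-model literature, not taken from this paper): redundancy pays off only when coverage is good enough.

```python
def duplex_reliability(r, c):
    """Reliability of a 1-out-of-2 parallel system with imperfect fault
    coverage: a component fault is detected and isolated with probability
    c, while an uncovered fault (probability 1 - c) crashes the whole
    system despite the surviving redundant component."""
    return r * r + 2 * r * (1 - r) * c

# Redundancy beats a single component only when coverage exceeds 0.5:
# r^2 + 2*r*(1-r)*c >= r  <=>  c >= 0.5  (for 0 < r < 1).
```

With perfect coverage (c = 1) the duplex reaches the familiar 1-(1-r)^2; with c below 0.5 it is actually worse than a simplex system, which is why covering components, including ones that have become irrelevant, matters so much.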
Exploiting Narrow Data-Width to Mask Soft Errors in Register Files
Abstract
The dependability of computing systems, threatened by soft errors, has become a growing design concern in safety-critical systems. Since Register Files (RFs) are accessed very frequently and errors occurring in them propagate quickly to other components, RFs are a major factor affecting system reliability. Current protection techniques usually incur a significant power penalty and performance degradation. This paper proposes a lightweight, software-implemented method for mitigating soft errors in RFs. Based on the observation that many register values have narrow data widths, leaving a large fraction of register bits unused, masking operations are inserted to clear possible errors in these bits, reducing the window of vulnerability of RFs. To improve effectiveness, the effect of each masking range is calculated, and a covered-masks analysis removes unnecessary masks without sacrificing error coverage. Under a user-defined overhead constraint, the most cost-effective masking operations are automatically selected. Experimental results from several benchmarks indicate that program reliability improves by 16.8% on average with only 3.3% performance overhead.
Jianjun Xu, Qingping Tan, Zeming Shao, Hong Ning
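The masking idea can be sketched in a few lines. This is an illustrative Python model of the effect of the inserted masking (AND) operations, not the authors' tool; the bit positions and the 8-bit "live width" are invented for the example.

```python
def mask_upper_bits(reg_value, live_width, reg_width=32):
    """Clear all bits above the live data width, discarding any soft-error
    bit flips that landed in the unused upper portion of the register."""
    mask = (1 << live_width) - 1
    return reg_value & mask

def flip_bit(reg_value, bit):
    # Model a single-event upset flipping one register bit.
    return reg_value ^ (1 << bit)

# A value known to be 8 bits wide lives in a 32-bit register;
# a soft error flips bit 20, i.e., a bit the program never reads as data.
clean = 0x2A
corrupted = flip_bit(clean, 20)
recovered = mask_upper_bits(corrupted, live_width=8)
```

The mask removes errors only in the unused bits; a flip inside the live 8 bits would survive, which is why the technique reduces, rather than eliminates, the window of vulnerability.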

Assurance Cases and Arguments

Towards a Clearer Understanding of Context and Its Role in Assurance Argument Confidence
Abstract
The Goal Structuring Notation (GSN) is a popular graphical notation for recording safety arguments. One of GSN’s key innovations is a context element that links short phrases used in the argument to detail available elsewhere. However, definitions of the context element admit multiple interpretations and conflict with guidance for building assured safety arguments. If readers do not share an understanding of the meaning of context that makes context’s impact on the main safety claim clear, confidence in safety might be misplaced. In this paper, we analyse the definitions and usage of GSN context elements, identify contradictions and vagueness, propose a more precise definition, and make updated recommendations for assured safety argument structure.
Patrick John Graydon
Assurance Cases for Block-Configurable Software
Abstract
One means of supporting software evolution is to adopt an architecture where the function of the software is defined through reconfiguring the flow of execution and parameters of pre-existing components. For such software it is desirable to maximise the reuse of assurance assets, and minimise re-verification effort in the presence of change. In this paper we describe how a modular assurance case can be established based upon formal analysis of the necessary preconditions of the component. Our approach supports the reuse of arguments and evidence established for components, including the results of the formal analysis.
Richard Hawkins, Alvaro Miyazawa, Ana Cavalcanti, Tim Kelly, John Rowlands
Generation of Safety Case Argument-Fragments from Safety Contracts
Abstract
Composable safety certification envisions reuse of safety case argument-fragments together with safety-relevant components in order to reduce the cost and time needed to achieve certification. The argument-fragments could cover safety aspects relevant for different contexts in which the component can be used. Creating argument-fragments for out-of-context components is time-consuming, and currently no satisfactory approach exists to facilitate their automatic generation. In this paper we propose an approach based on (semi-)automatic generation of argument-fragments from assumption/guarantee safety contracts. We use the contracts to capture the safety claims related to the component, including supporting evidence. We provide an overview of the argument-fragment architecture and rules for automatic generation, including their application in an illustrative example. The proposed approach enables safety engineers to focus on increasing the confidence in the knowledge about the system, rather than documenting a safety case.
Irfan Sljivo, Barbara Gallina, Jan Carlson, Hans Hansson

System Analysis

Estimating Worst Case Failure Dependency with Partial Knowledge of the Difficulty Function
Abstract
For systems using software diversity, well-established theories show that the expected probability of failure on demand (pfd) for two diverse program versions failing together will generally differ from what it would be if they failed independently. This is explained in terms of a “difficulty function” that varies between demands on the system. This theory gives insight, but no specific prediction unless we have some means to quantify the difficulty function. This paper presents a theory leading to a worst case measure of “average failure dependency” between diverse software, given only partial knowledge of the difficulty function. It also discusses the possibility of estimating the model parameters, with one approach based on an empirical analysis of previous systems implemented as logic networks, to support pre-development estimates of expected gain from diversity. The approach is illustrated using a realistic safety system example.
Peter Bishop, Lorenzo Strigini
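The effect of a varying difficulty function can be shown numerically with the standard Eckhardt-Lee-style model that this line of work builds on (the demand distribution below is invented for illustration): the expected joint pfd exceeds the independence prediction whenever difficulty varies across demands.

```python
def joint_pfd(difficulty, demand_probs):
    """Expected probability that two independently developed versions fail
    on the same randomly drawn demand: in the difficulty-function model,
    both fail on demand x with probability p(x)^2, so the average is
    E[p^2] over the demand distribution."""
    return sum(q * p * p for p, q in zip(difficulty, demand_probs))

def independent_pfd(difficulty, demand_probs):
    # What the joint pfd would be if version failures were independent
    # events: the square of a single version's average pfd, (E[p])^2.
    avg = sum(q * p for p, q in zip(difficulty, demand_probs))
    return avg * avg

# Three demand classes: two easy, one rare but hard demand.
p = [0.001, 0.001, 0.5]      # difficulty function p(x)
q = [0.495, 0.495, 0.01]     # demand distribution
```

Since E[p^2] >= (E[p])^2 by Jensen's inequality, with equality only for a constant difficulty function, common-mode failure is systematically underestimated by the independence assumption; the paper's contribution is bounding this gap when p(x) is only partially known.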
Proving the Absence of Stack Overflows
Abstract
In safety-critical embedded systems, the stack is typically the only dynamically allocated memory area. However, the maximal stack usage must be statically known: at configuration time, developers have to reserve enough stack space for each task. Stack overflow errors are often hard to find but can cause the system to crash or behave erroneously. All current safety standards, e.g., ISO 26262, require upper bounds on storage space; due to its dynamic behavior, the stack is an especially critical storage area.
Typically neither testing and measuring nor static source code analysis can provide safe bounds on the worst-case stack usage. A safe upper bound can be computed by whole-program static analysis at the executable code level. When an Abstract Interpretation based static analyzer is used, it can be formally proven that the maximal stack usage will never be underestimated. The challenge for binary-code level analyzers is to minimize the necessary amount of user interactions, e.g., for function pointer calls. To minimize user interaction, the analysis has to be precise, and the annotation mechanism has to be flexible and easy-to-use. The analyzer configuration has to be done once for each software project; afterwards the analysis can be run automatically, supporting continuous verification.
In this article we describe the principles of Abstract Interpretation based stack analysis. We present an annotation language addressing all properties of typical automotive and avionics software and report on practical experience.
Daniel Kästner, Christian Ferdinand
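The core of a static stack bound can be sketched as a traversal of a call graph with known per-function frame sizes. This is a deliberate simplification with invented names: real binary-level analyzers of the kind described also handle interrupts, compiler-generated spills, and function pointers, and recursion is exactly where user annotations become necessary.

```python
def worst_case_stack(call_graph, frame_sizes, entry):
    """Worst-case stack usage from `entry` over a static, acyclic call
    graph: each function contributes its frame plus the deepest callee."""
    def depth(fn, visiting):
        if fn in visiting:
            # A cycle means recursion: no static bound without an
            # annotation on the maximum recursion depth.
            raise ValueError("recursion involving " + fn)
        callees = call_graph.get(fn, [])
        deepest = max((depth(c, visiting | {fn}) for c in callees), default=0)
        return frame_sizes[fn] + deepest
    return depth(entry, frozenset())

# Hypothetical call graph and frame sizes (bytes).
graph = {"main": ["init", "loop"], "loop": ["step"],
         "step": ["log"], "init": [], "log": []}
frames = {"main": 64, "init": 32, "loop": 48, "step": 80, "log": 16}
```

Here the worst path is main -> loop -> step -> log (64 + 48 + 80 + 16 bytes); an abstract-interpretation-based analyzer proves such a bound sound at the executable level, where the actual frame sizes are visible.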

Security and Trust

Trust-Based Intrusion Tolerant Routing in Wireless Sensor Networks
Abstract
Wireless Sensor Networks (WSNs) are being increasingly adopted in several fields because of their advantages over classic sensor networks. However, nodes in a WSN must cooperate, and this exposes them to several security threats. Trust-based systems constitute an established solution for ensuring the security of distributed systems. In this work, a trust-based approach is discussed to make WSNs tolerant of attacks targeting their routing layer. We show how such attacks are tolerated with low overhead in comparison to unprotected systems. Preliminary experimental results are presented that confirm the validity of the proposed approach.
Francesco Buccafurri, Luigi Coppolino, Salvatore D’Antonio, Alessia Garofalo, Gianluca Lax, Antonino Nocera, Luigi Romano
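A minimal sketch of the trust-based routing idea follows; the update rule, threshold, and node names are illustrative assumptions, not the paper's protocol.

```python
def update_trust(trust, node, success, alpha=0.1):
    """Exponential moving average of observed forwarding behaviour:
    successful forwards push a neighbour's trust toward 1, drops toward 0."""
    observation = 1.0 if success else 0.0
    trust[node] = (1 - alpha) * trust[node] + alpha * observation
    return trust[node]

def pick_next_hop(trust, candidates, threshold=0.5):
    # Route around nodes whose trust has fallen below the threshold,
    # e.g. a blackhole node that silently drops every packet.
    trusted = [n for n in candidates if trust[n] >= threshold]
    return max(trusted, key=lambda n: trust[n]) if trusted else None

trust = {"A": 0.8, "B": 0.8}
for _ in range(20):                    # B behaves like a blackhole
    update_trust(trust, "B", success=False)
```

After repeated drops, B falls below the routing threshold and traffic flows through A instead; this is the sense in which routing-layer attacks are tolerated rather than merely detected.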
A Petri Net Pattern-Oriented Approach for the Design of Physical Protection Systems
Abstract
The design of complex Physical Protection Systems (PPSs) still raises some challenges despite the high number of technologies for smart surveillance. One reason is the lack of effective methodologies able to support the PPS designer in evaluating the effectiveness of the system under varying design choices. Indeed, an estimation of system vulnerability should be performed in the early phases of PPS design. This paper introduces a model-based methodology for the quantitative estimation of the vulnerability of a PPS. The proposed methodology defines a compositional approach which takes advantage of predefined patterns for the creation of vulnerability models. In particular, the paper proposes some Petri Net patterns able to capture the behavioural aspects of several assets and actors involved in attacking/defending scenarios.
Francesco Flammini, Ugo Gentile, Stefano Marrone, Roberto Nardone, Valeria Vittorini
On Two Models of Noninterference: Rushby and Greve, Wilding, and Vanfleet
Abstract
We formally compare two industrially relevant and popular models of noninterference, namely, the model defined by Rushby and the one defined by Greve, Wilding, and Vanfleet (GWV). We create a mapping between the objects and relations of the two models. We prove a number of theorems showing under which assumptions a system identified as “secure” in one model is also identified as “secure” in the other model. Using two examples, we illustrate and discuss some of these assumptions. Our main conclusion is that the GWV model is more discriminating than the Rushby model. All systems satisfying GWV’s Separation also satisfy Rushby’s noninterference. The other direction only holds if we additionally assume that GWV systems are such that every partition is assigned at most one memory segment. All of our proofs have been checked using the Isabelle/HOL proof assistant.
Adrian Garcia Ramirez, Julien Schmaltz, Freek Verbeek, Bruno Langenstein, Holger Blasum

Notations/Languages for Safety-Related Aspects

Specifying Safety Monitors for Autonomous Systems Using Model-Checking
Abstract
Autonomous systems operating in the vicinity of humans are critical in that they can potentially harm those humans. As the complexity of autonomous system software makes the zero-fault objective hardly attainable, we adopt a fault-tolerance approach. We consider a separate safety channel, called a monitor, that is able to partially observe the system and to trigger safety-ensuring actuations. A systematic process for specifying a safety monitor is presented. Hazards are formally modeled, based on a risk analysis of the monitored system. A model checker is used to synthesize monitor behavior rules that ensure the safety of the monitored system. Potentially excessive limitation of system functionality due to the presence of the safety monitor is addressed through the notion of permissiveness. Tools have been developed to assist the process.
Mathilde Machin, Fanny Dufossé, Jean-Paul Blanquart, Jérémie Guiochet, David Powell, Hélène Waeselynck
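At runtime, a monitor of the kind described reduces to guarded intervention rules over a partially observed state. The sketch below is a hypothetical hand-written example with invented observables and thresholds; the paper's point is that such rules are synthesized by a model checker from formal hazard models, with permissiveness keeping them from over-restricting the system.

```python
def monitor_step(state, rules):
    """One step of a safety monitor: fire every rule whose trigger
    predicate holds on the observed state, returning the set of
    safety-ensuring actuations to apply."""
    return {action for trigger, action in rules if trigger(state)}

# Hypothetical rules for a mobile robot operating near humans.
rules = [
    (lambda s: s["speed"] > 1.0 and s["human_distance"] < 2.0, "brake"),
    (lambda s: s["arm_extended"] and s["moving"], "retract_arm"),
]
```

The safety question is whether these guards trigger early enough to prevent every modeled hazard; the permissiveness question is whether they stay silent in every state that is actually safe.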
Automatically Generated Safety Mechanisms from Semi-Formal Software Safety Requirements
Abstract
Today’s automobiles incorporate a great number of functions realized by software, and an increasing number of safety-critical functions follow this trend. For the development of such functions, ISO 26262 demands a number of additional steps compared to common software engineering activities. We address some of these demands with means to semi-formally express software safety requirements, tools to automatically implement these requirements, and artifacts and traceability information that can be used for safety case documentation. Through a hierarchical classification of safety mechanisms, a semi-formal specification language for requirements, a generation engine, and a case study on a production-model automotive system, we demonstrate: first, how expert knowledge of the functional safety domain can be captured; second, how the tedious and error-prone task of manually implementing safety mechanisms can be automated; and third, how this serves as a basis for formal safety argumentation.
Raphael Fonte Boa Trindade, Lukas Bulwahn, Christoph Ainhauser
Querying Safety Cases
Abstract
Querying a safety case to show how the various stakeholders’ concerns about system safety are addressed has been put forth as one of the benefits of argument-based assurance (in a recent study by the Health Foundation, UK, which reviewed the use of safety cases in safety-critical industries). However, neither the literature nor current practice offer much guidance on querying mechanisms appropriate for, or available within, a safety case paradigm. This paper presents a preliminary approach that uses a formal basis for querying safety cases, specifically Goal Structuring Notation (GSN) argument structures. Our approach semantically enriches GSN arguments with domain-specific metadata that the query language leverages, along with its inherent structure, to produce views. We have implemented the approach in our toolset AdvoCATE, and illustrate it by application to a fragment of the safety argument for an Unmanned Aircraft System (UAS) being developed at NASA Ames. We also discuss the potential practical utility of our query mechanism within the context of the existing framework for UAS safety assurance.
Ewen Denney, Dwight Naylor, Ganesh Pai

Safety and Security

Security Application of Failure Mode and Effect Analysis (FMEA)
Abstract
Increasingly complex systems lead to an interweaving of security, safety, availability and reliability concerns. Most dependability analysis techniques do not include security aspects. In order to include security, a holistic risk model for systems is needed. In our novel approach, the basic failure cause, failure mode and failure effect model known from FMEA is used as a template for a vulnerability cause-effect chain, and an FMEA analysis technique extended with security is presented. This represents a unified model for safety and security cause-effect analysis. As an example the technique is then applied to a distributed industrial measurement system.
Christoph Schmittner, Thomas Gruber, Peter Puschner, Erwin Schoitsch
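The unified cause-mode-effect model can be sketched as a single row schema shared by safety and security entries. Field names, scoring ranges, and the example rows are illustrative assumptions; only the risk priority number (RPN) formula is the conventional FMEA one.

```python
from dataclasses import dataclass

@dataclass
class FailureModeRow:
    cause: str           # failure cause, or vulnerability cause
    mode: str            # failure mode, or exploitable vulnerability
    effect: str          # effect on the system
    severity: int        # 1 (negligible) .. 10 (catastrophic)
    likelihood: int      # 1 (remote) .. 10 (frequent)
    detectability: int   # 1 (certain detection) .. 10 (undetectable)
    origin: str          # "safety" (random fault) or "security" (malicious)

    def rpn(self):
        # Conventional FMEA risk priority number, used here to rank
        # safety and security entries on one common scale.
        return self.severity * self.likelihood * self.detectability

rows = [
    FailureModeRow("sensor aging", "stuck-at value",
                   "wrong measurement", 7, 4, 3, "safety"),
    FailureModeRow("unauthenticated bus", "spoofed frame",
                   "wrong measurement", 7, 5, 6, "security"),
]
worst = max(rows, key=FailureModeRow.rpn)
```

Note how the same system-level effect can stem from a random fault or from a malicious action; ranking both kinds of rows together is precisely what the extended FMEA enables.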
Safety and Security Interactions Modeling Using the BDMP Formalism: Case Study of a Pipeline
Abstract
The digitalization of industrial control systems (ICS) raises several security threats that can endanger the safety of the critical infrastructures supervised by such systems. This paper presents an analysis method that enables the identification and ranking of risks leading to a safety issue, regardless of the origin of those risks: accidental or due to malevolence. This method relies on a modeling formalism called BDMP (Boolean logic Driven Markov Processes) that was initially created for safety studies, and then adapted to security. The use of the method is first illustrated on a simple case to show how it can be used to make decisions in a situation where security requirements are in conflict with safety requirements. Then it is applied to a realistic industrial system: a pipeline and its instrumentation and control system in order to highlight possible interactions between safety and security.
Siwar Kriaa, Marc Bouissou, Frederic Colin, Yoran Halgand, Ludovic Pietre-Cambacedes
A Pragmatic Approach towards Safe and Secure Medical Device Integration
Abstract
Compared to other safety-related domains, harmonization efforts regarding electronic device integration and interaction – in terms of real-time and remote control – are pursued less ambitiously in the medical device sector. There are several reasons for this restrained progress. Traditionally, medical devices are either small and self-sufficient or they are highly integrated parts of a complete setup provided by a single vendor. Especially when equipping critical-care facilities such as intensive care units, producer and brand diversity is low, whereas competitive pressure is high. The result is proprietary communication and control solutions for connecting interacting devices. On the other hand, any modern hospital relies heavily on highly available critical medical devices that provide and share data, but also on the confidentiality and integrity of vitally important information.
This paper discusses a pragmatic risk-based approach to handling the integration problems of the current situation described above, while focusing on future development challenges. The background of this paper is a project to define a safe and secure integration interface that is open to integration partners and enables them to monitor and remote-control a critical medical device.
Christoph Woskowski
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security
Edited by
Andrea Bondavalli
Felicita Di Giandomenico
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-10506-2
Print ISBN
978-3-319-10505-5
DOI
https://doi.org/10.1007/978-3-319-10506-2
