
2009 | Book

Computer Safety, Reliability, and Security

28th International Conference, SAFECOMP 2009, Hamburg, Germany, September 15-18, 2009. Proceedings

Edited by: Bettina Buth, Gerd Rabe, Till Seyfarth

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

Computer-based systems have become omnipresent commodities within our environment. While for a large variety of these systems, such as transportation systems, nuclear or chemical plants, or medical systems, their relation to safety is obvious, we often do not reflect that others, such as elevator controls or mobile phones, are just as directly related to risks of harm to persons or property. At least we are not aware of the risk in our daily use of them. Safecomp as a community and a conference series has accompanied this development for 30 years, up to Safecomp 2009, which was the 28th of the series. During this time the topics and methods, as well as the community, have undergone changes. These changes reflect the requirements of the above-mentioned ubiquitous presence of safety-related systems. Safecomp has always encouraged, and will continue to encourage, academia and industry to share and exchange their ideas and experiences. After 30 years, we as the organizers of Safecomp 2009 found it imperative to take stock: which methods have found their way into the application areas, and which new approaches need to be checked for their practical applicability. As different application domains developed their own approaches over the previous decades, we tried to attract people with different backgrounds to this conference. Although the years 2008 and 2009 were not easy with regard to the overall global economic situation, we succeeded in this goal.

Table of Contents

Frontmatter

Invited Talks

A Domain-Specific Framework for Automated Construction and Verification of Railway Control Systems
(Extended Abstract)
Abstract
The development of modern railway and tramway control systems represents a considerable challenge to both systems and software engineers: the goal of increasing traffic throughput while at the same time increasing the availability and reliability of railway operations leads to a demand for more elaborate safety mechanisms, in order to keep the risk at the same low level that has been established for European railways to date. The challenge is further increased by the demand for shorter time-to-market periods and higher competition among suppliers in the railway domain, both factors resulting in a demand for a higher degree of automation in the development, verification, validation, and test phases of projects, without impairing the thoroughness of safety-related quality measures and certification activities. Motivated by these considerations, this presentation describes an approach for automated construction and verification of railway control systems.
Anne E. Haxthausen

Medical Systems

Model-Based Development of Medical Devices
Abstract
Model-based development can offer many advantages compared to other techniques. This paper demonstrates how models are used to develop safe systems in a medical devices company. The approach described uses a combination of model-driven analysis, model-driven design, model-driven test, and model-driven safety analysis. Different approaches have been developed and followed in the past. The approach presented here has been developed in an evolutionary manner, combining approaches described in the literature. It has turned out to be well suited to the medical device domain and is considered a best-practice approach. As such, it is part of the development process that must be followed when developing new medical devices. The development process has to be defined in writing and is checked by TÜV and FDA auditors on a yearly basis. It is considered to be well above average and thus may be adopted by other companies developing safety-relevant devices. During the audit process it is verified that the documentation of the process is as expected and that actual development is performed according to the defined process. This assures companies adopting the approach that it is validated by daily practice and that its use requires only modest overhead.
Uwe Becker
Why Are People’s Decisions Sometimes Worse with Computer Support?
Abstract
In many applications of computerised decision support, a recognised source of undesired outcomes is operators’ apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like “over-reliance” betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently “over-reliant” behaviour). We review relevant literature in the area of “automation bias” and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms, with reference to errors of omission when using “alerting systems”, with the help of examples of novel counterintuitive findings we obtained from a case study in a health care application, as well as other examples from the literature.
Eugenio Alberdi, Lorenzo Strigini, Andrey A. Povyakalo, Peter Ayton

Industrial Experience

Safety-Related Application Conditions – A Balance between Safety Relevance and Handicaps for Applications
Abstract
Railway standards prescribe the use of Safety-related Application Conditions (SACs). SACs are demands to be observed when using a safety-related system or sub-system. The use of SACs can, however, easily be associated with difficulties. SACs of sub-systems can imply considerable effort for their fulfillment at system level. Furthermore, SACs at sub-system level may become very obstructive for the user of the sub-system if safe application at system level is strongly restricted. Additionally, a large number of SACs may be very difficult to manage. In this way, SACs may obstruct the introduction of a system or sub-system into the field. Particular hazards can arise from SACs that are formulated ambiguously, so that the originally intended safety-related measures are not taken at all. This paper presents the objectives and benefits of SACs and depicts the difficulties and challenges associated with their use. It explains not only what the content of a SAC should be, but also the quality criteria and the conditions for SAC creation and fulfillment. The SAC management process introduced at Thales Rail Signalling Solutions GmbH is outlined. This process is intended both to support the quality of SACs and to reduce the effort for SAC creation, fulfillment, and evidence.
Friedemann Bitsch, Ulrich Feucht, Huw Gough
Probability of Failure on Demand – The Why and the How
Abstract
In this paper, we study the probability of failure on demand (PFD) and its connection with the probability of failure per hour (PFH) and failure rates of equipment, using very simple models. We describe the philosophies behind the PFD and the tolerable hazard rate (THR). A comparison shows how the philosophies are connected and which connections between PFH and PFD are implied. Depending on additional parameters, there can be deviations between safety integrity levels derived on the basis of the PFD and the PFH. Problems that can arise when working with the PFD are discussed. We describe how PFD and PFH in IEC 61508 are connected with the THR defined in the standard EN 50129.
We discuss arguments that show why care is needed when using the PFD. Moreover, we present reasoning as to why a probability of failure on demand might be misleading.
Jens Braband, Rüdiger vom Hövel, Hendrik Schäbe
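The PFD/PFH relationship the abstract refers to can be illustrated with the standard IEC 61508 low-demand approximation; the sketch below is a generic textbook illustration, not the models from the paper, and the parameter values are hypothetical:

```python
# Average probability of failure on demand for a single channel under the
# standard IEC 61508 low-demand approximation PFD_avg ~ lambda_DU * T / 2,
# where lambda_DU is the dangerous undetected failure rate (per hour) and
# T is the proof-test interval (hours). Illustrative only.

def pfd_avg(lambda_du: float, test_interval_h: float) -> float:
    """Approximate average PFD of a single (1oo1) channel."""
    return lambda_du * test_interval_h / 2.0

def sil_band(pfd: float) -> int:
    """Map a PFD_avg value to its SIL band (IEC 61508, low-demand mode)."""
    for upper, sil in [(1e-4, 4), (1e-3, 3), (1e-2, 2), (1e-1, 1)]:
        if pfd < upper:
            return sil
    return 0  # PFD too high for any SIL

# Example: lambda_DU = 1e-6 per hour, yearly proof test (8760 h)
pfd = pfd_avg(1e-6, 8760)
print(f"PFD_avg = {pfd:.5f}, SIL band = {sil_band(pfd)}")  # PFD_avg = 0.00438, SIL band = 2
```

As the paper argues, such a direct translation needs care: the approximation assumes low demand rates and perfect proof tests, assumptions that do not hold in every application.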
Establishing the Correlation between Complexity and a Reliability Metric for Software Digital I&C-Systems
Abstract
Faults introduced in design or during implementation might be prevented by design validation and by evaluation during implementation. There are numerous methods available for validating and evaluating software. Expert judgment is a widely used approach to identify problematic areas in design or to target challenges related to implementation. ISTec and IFE cooperate on a project on automated complexity measurements of software for digital instrumentation and control (I&C) systems. Metrics measured from the function blocks and logic diagrams specifying I&C systems are used as input to a Bayesian Belief Net describing the correlation between these inputs and a complexity metric. By applying expert judgment in the algorithms for the automatic complexity evaluation, expert judgment is effectively applied to entire software systems. The results from this approach can be used to identify parts of the software which, from a complexity viewpoint, are eligible for closer inspection. In this paper we describe the approach in detail, as well as plans for testing it.
John Eidar Simensen, Christian Gerst, Bjørn Axel Gran, Josef Märtz, Horst Miedl

Security Risk Analysis

Exploring Network Security in PROFIsafe
Abstract
Safety-critical systems are used to reduce the probability of failures that could cause danger to persons, equipment, or the environment. The increasing level of vertical and horizontal integration increases the security risks in automation. Since the risk of security attacks can no longer be treated as negligible, there is a need to investigate possible security attacks on safety-critical communication.
In this paper we show that it is possible to attack PROFIsafe and change the safety-related process data without any of the safety measures in the protocol detecting the attack. As a countermeasure, the concept of security modules in combination with PROFIsafe will reduce the risk of security attacks, in line with the defense-in-depth security concept.
Johan Åkerberg, Mats Björkman
Modelling Critical Infrastructures in Presence of Lack of Data with Simulated Annealing – Like Algorithms
Abstract
We propose a method to analyze the interdependencies of technological networks and infrastructures when few data are available or data are missing. We suggest a simple inclusive index for interdependencies and note that, even after introducing broad simplifications, it is not possible to provide enough information to any analysis framework. Hence we resort to a Simulated Annealing–like algorithm (SAFE) to calculate the most probable cascading failure scenarios following a given unfavourable event in the network, consistently with the previously known data. SAFE gives an exact definition of the otherwise vague notion of criticality and identifies the “critical” links/nodes. Moreover, a uniform probability distribution is used to approximate the unknown or missing data, in order to cope with the recent finding that critical infrastructures such as the power system exhibit the self-organized criticality phenomenon. A toy example based on a real topology is given; SAFE proves to be a reasonably fast, accurate, and computationally simple evaluation tool in the presence of more than 50% missing data.
Vincenzo Fioriti, Silvia Ruzzante, Elisa Castorini, A. Di Pietro, Alberto Tofani
Environment Characterization and System Modeling Approach for the Quantitative Evaluation of Security
Abstract
This article proposes a new approach for the quantitative evaluation of information system security. Our approach focuses on system vulnerabilities caused by design and implementation errors and studies how the system environment, in the presence of such vulnerabilities, may endanger the system. The two main contributions of this paper are: 1) the identification of the environmental factors which influence the system's security state; 2) the development of a Stochastic Activity Network model taking into account the system and these environmental factors. The measures resulting from our modeling are aimed at helping system designers assess the risks of vulnerability exploitation.
Geraldine Vache

Safety Guidelines

Experiences with the Certification of a Generic Functional Safety Management Structure According to IEC 61508
Abstract
This article summarizes the experience gained while supporting ABB Business Units (BUs) in achieving functional safety certification according to IEC 61508 for their safety-related products. Being part of a large global organization, ABB BUs enjoy a certain freedom in how they implement their product development processes, for both hardware and software. Often these processes are inherited from the long-standing and successful development traditions of companies that were later incorporated into ABB. Consequently, when faced with the increased demand for IEC 61508-compliant products, the BUs find themselves implementing IEC 61508 and adapting their development processes from scratch for each new product. As a result, there are many different ways throughout the organization of implementing similar artifacts with the same scope (e.g., templates, lifecycles, reports). Since the BUs have recognized that this is clearly inefficient, for reasons of redundancy, repetition, and ultimately cost, we have undertaken the task of creating a generic process to be used as a framework for developing safety-compliant products according to IEC 61508 that can be reused for different products across BUs. The requirements for this framework are that it must be easier to use than the original standard; self-contained (i.e., no need to look up information in the original standard); flexible (i.e., applicable to different kinds of products across different BUs); certifiable by any major certification body; coupled with ABB's stage-gate business decision model; and, most importantly, attractive to BUs so that it can be widely adopted throughout the organization. To satisfy these requirements we have developed a method and a set of components, which we call the “Safety Add-on”, to create and manage functional safety design and development activities according to IEC 61508.
The Functional Safety Management module of the Safety Add-on has been certified by TÜV Rheinland and is being successfully used by several BUs across ABB.
Carlos G. Bilich, Zaijun Hu
Analysing Dependability Case Arguments Using Quality Models
Abstract
The Goal Structuring Notation (GSN) [1] facilitates a clear presentation of the argument structure in dependability cases for dependable systems. However, assessment of an argument structure with respect to the validity, sufficiency, and consistency of the argumentation and the provided evidence still depends strongly on individual, tacit expert knowledge. We propose a two-phase analysis method for argument structures:
Firstly, syntactic completeness, consistency, and proper instantiation of argument patterns are examined using a UML profile for GSN and OCL constraints. For the second phase, we propose two-dimensional quality models to assist the expert in explicitly judging the conclusiveness of the argumentation. A quality model explicitly represents the impact of facts on design activities and on software system properties relevant to dependability. The impact value aggregates state-of-the-art knowledge and standards' recommendations. Missing, negative, or conflicting impact indicates an impaired argument, either by revealing a gap in the line of argument or by exposing incompatibilities or opposing principles between decisions or techniques in the process. We show first steps towards integrating the analysis into model-based, tool-supported development.
Michaela Huhn, Axel Zechner
Experience with Establishment of Reusable and Certifiable Safety Lifecycle Model within ABB
Abstract
One basic requirement for a functional safety development project is to establish a SIL-compliant safety lifecycle model. For a company like ABB, with a large family of safety-related products and a great number of development projects, it would be very time-consuming and costly for each safety development project to develop its own safety lifecycle model. One approach to managing the corresponding costs and effort is to create a common lifecycle model that fulfills the SIL requirements and can be reused by safety-related projects. In this paper we present such a common safety lifecycle model, its structure and components, and our experience of how to establish and apply it in safety-related product development projects. The paper analyzes the design constraints for the development of a common safety lifecycle model, such as complexity, flexibility, simplicity, conformity, and safety integrity, and shows how these constraints drive the design of the safety lifecycle model to be developed. Our design concept, design considerations, development strategy, and experience in establishing such a common safety lifecycle model are also discussed.
Zaijun Hu, Carlos G. Bilich

Automotive

Automotive IT-Security as a Challenge: Basic Attacks from the Black Box Perspective on the Example of Privacy Threats
Abstract
Since automotive IT is becoming more and more powerful, the IT-security in this domain is an evolving area of research. In this paper we focus on the relevance of the black box perspective in the context of threat analyses for automotive IT systems and discuss typical starting points and implications of respective attacks. We put a special focus on potential privacy issues, which we expect to be of increasing relevance in future automotive systems. To motivate appropriate provision for privacy protection in future cars we discuss potential scenarios of privacy violations. To underline the relevance even today, we further present a novel attack on a recent gateway ECU enabling an attacker to sniff arbitrary internal communication even beyond subnetwork borders.
Tobias Hoppe, Stefan Kiltz, Jana Dittmann
Safety Requirements for a Cooperative Traffic Management System: The Human Interface Perspective
Abstract
Traffic management systems are complex networks integrating sensors, actuators, communication on different levels, and humans as an active part, consisting of road-side infrastructure coupled with advanced driver assistance systems and on-board data collection facilities.
COOPERS has the objective of co-operative traffic management, implementing intelligent services that interface vehicles, drivers, road infrastructure, and highway operators. These services have different levels of criticality and safety impact, and involve different types of smart systems and wireless communications. In the initial phase of the COOPERS project, a RAMSS (Reliability, Availability, Maintainability, Safety, Security) analysis was carried out on road traffic scenarios, services, and communications. The analysis showed that the HMI (Human Machine Interface) is one of the major threats to reliability.
After a short overview of COOPERS and the RAMSS analysis, this paper describes the risks of the HMI and human factors in the specific situation of a driver, and gives concrete recommendations for the OBU (On-Board Unit) user interface.
Thomas Gruber, Egbert Althammer, Erwin Schoitsch

Aerospace

The COMPASS Approach: Correctness, Modelling and Performability of Aerospace Systems
Abstract
We report on a model-based approach to system-software co-engineering which is tailored to the specific characteristics of critical on-board systems for the aerospace domain. The approach is supported by a System-Level Integrated Modeling (SLIM) Language by which engineers are provided with convenient ways to describe nominal hardware and software operation, (probabilistic) faults and their propagation, error recovery, and degraded modes of operation.
Correctness properties, safety guarantees, and performance and dependability requirements are given using property patterns which act as parameterized “templates” to the engineers and thus offer a comprehensible and easy-to-use framework for requirement specification. Instantiated properties are checked on the SLIM specification using state-of-the-art formal analysis techniques such as bounded SAT-based and symbolic model checking, and probabilistic variants thereof. The precise nature of these techniques together with the formal SLIM semantics yield a trustworthy modeling and analysis framework for system and software engineers supporting, among others, automated derivation of dynamic (i.e., randomly timed) fault trees, FMEA tables, assessment of FDIR, and automated derivation of observability requirements.
Marco Bozzano, Alessandro Cimatti, Joost-Pieter Katoen, Viet Yen Nguyen, Thomas Noll, Marco Roveri
Formal Verification of a Microkernel Used in Dependable Software Systems
Abstract
In recent years, deductive program verification has improved to a degree that makes it feasible for real-world programs. Following this observation, the main goal of the BMBF-supported Verisoft XT project is (a) the creation of methods and tools which allow the pervasive formal verification of integrated computer systems, and (b) the prototypical realization of four concrete, industrial application tasks.
In this paper, we report on the Verisoft XT subproject Avionics, where formal verification is being applied to a commercial embedded operating system. The goal is to use deductive techniques to verify functional correctness of the PikeOS system, which is a microkernel-based partitioning hypervisor.
We present our approach to verifying the microkernel’s system calls, using a system call for changing the priority of threads as an example. In particular, (a) we give an overview of the tool chain and the verification methodology, (b) we explain the hardware model and how assembly semantics is specified so that functions whose implementation contain assembly can be verified, and (c) we describe the verification of the system call itself.
Christoph Baumann, Bernhard Beckert, Holger Blasum, Thorsten Bormer
Issues in Tool Qualification for Safety-Critical Hardware: What Formal Approaches Can and Cannot Do
Abstract
Technology has improved to the point that system designers have the ability to trade off implementing complex functions in either hardware or software. However, clear distinctions exist in the design tools. This paper examines what is unique to hardware design, the areas where formal methods can be applied to advantage in hardware design, and how errors can exist in the hardware even if formal methods are used to prove the design correct.
Brian Butka, Janusz Zalewski, Andrew J. Kornecki

Verification, Validation, Test

Probabilistic Failure Propagation and Transformation Analysis
Abstract
A key concern in safety engineering is understanding the overall emergent failure behaviour of a system, i.e., behaviour exhibited by the system that is outside its specification of acceptable behaviour. A system can exhibit failure behaviour in many ways, including that from failures of individual or a small number of components. It is important for safety engineers to understand how system failure behaviour relates to failures exhibited by individual components. In this paper, we propose a safety analysis technique, failure propagation and transformation analysis (FPTA), which automatically and quantitatively analyses failures based on a model of failure logic. The technique integrates previous work on automated failure analysis with probabilistic model checking supported by the PRISM tool. We demonstrate the technique and tool on a small, yet realistic safety-related application.
Xiaocheng Ge, Richard F. Paige, John A. McDermid
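The core idea of quantitative failure logic can be sketched in a few lines; the toy below combines component failure probabilities through AND/OR gates under an independence assumption. The component names and probabilities are hypothetical, and the paper's actual FPTA technique uses the PRISM probabilistic model checker rather than closed-form arithmetic like this:

```python
# Toy failure-logic evaluation: component failure probabilities propagate
# through AND/OR gates, assuming independent failures. A sketch of the
# general idea only, not the FPTA/PRISM tooling described in the paper.

from math import prod

def p_and(*ps):
    """All inputs must fail (independent events)."""
    return prod(ps)

def p_or(*ps):
    """At least one input fails (independent events)."""
    return 1 - prod(1 - p for p in ps)

# Hypothetical chain: a redundant sensor pair, a controller, an actuator.
p_sensor, p_controller, p_actuator = 1e-3, 1e-4, 5e-4

# System fails if both sensors fail, or the controller or actuator fails.
p_system = p_or(p_and(p_sensor, p_sensor), p_controller, p_actuator)
print(f"{p_system:.3e}")  # roughly 6.0e-04, dominated by the actuator
```

A model checker adds what this sketch lacks: state, ordering of failure events, and transformation of one failure mode into another along the propagation path.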
Towards Model-Based Automatic Testing of Attack Scenarios
Abstract
Model-based testing techniques play a vital role in producing quality software. However, compared to the testing of functional requirements, these techniques are much less prevalent in testing software security. This paper presents a model-based approach to the automatic testing of attack scenarios. An attack testing framework is proposed to model attack scenarios and test the system against them. The techniques adopted in the framework are applicable in general to systems whose potential attack scenarios can be modeled in a formalism based on extended abstract state machines. The attack events, i.e., attack test vectors chosen from attacks occurring in the real world, are converted into test-driver-specific events ready to be tested against the attack signatures. The proposed framework is implemented and evaluated using the most common attack scenarios. The framework is useful for testing software with respect to potential attacks, which can significantly reduce the risk of security vulnerabilities.
M. Zulkernine, M. F. Raihan, M. G. Uddin
CRIOP: A Human Factors Verification and Validation Methodology That Works in an Industrial Setting
Abstract
We evaluated CRIOP, a Human Factors (HF) based methodology, to capture the Norwegian Oil & Gas (O&G) industry's opinion of CRIOP, identify how it is used, and suggest potential improvements. CRIOP has been a preferred method in the Norwegian O&G industry and is used for Verification and Validation (V&V) of a control centre's ability to safely and effectively handle all modes of operation. CRIOP consists of an introduction part, a checklist part, and a scenario-based part. We based our study on interviews with 21 persons, an online survey with 23 respondents, and firsthand experience in workshops. The results showed that CRIOP is an effective control centre design V&V tool. Highlighted issues were timing, stakeholders, planning and preparation of the analysis, adapting the checklists, and the workshop facilitators' competence. We conclude that CRIOP is an effective V&V tool, applied and appreciated by the Norwegian O&G industry.
Andreas Lumbe Aas, Stig Ole Johnsen, Torbjørn Skramstad

Fault Tolerance

Reliability Analysis for the Advanced Electric Power Grid: From Cyber Control and Communication to Physical Manifestations of Failure
Abstract
The advanced electric power grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a network of embedded systems deployed for their cyber control. The objective of this paper is to qualitatively and quantitatively analyze the reliability of this cyber-physical system. The original contribution of the approach lies in the scope of failures analyzed, which crosses the cyber-physical boundary by investigating physical manifestations of failures in cyber control. As an example of power electronics deployed to enhance and control the operation of the grid, we study Flexible AC Transmission System (FACTS) devices, which are used to alter the flow of power on specific transmission lines. Through prudent fault injection, we enumerate the failure modes of FACTS devices, as triggered by their embedded software, and evaluate their effect on the reliability of the device and the reliability of the power grid on which they are deployed. The IEEE118 bus system is used as our case study, where the physical infrastructure is supplemented with seven FACTS devices to prevent the occurrence of four previously documented potential cascading failures.
Ayman Z. Faza, Sahra Sedigh, Bruce M. McMillin
Increasing the Reliability of High Redundancy Actuators by Using Elements in Series and Parallel
Abstract
A high redundancy actuator (HRA) is composed of a high number of actuation elements, increasing both the travel and the force above the capability of an individual element. This provides inherent fault tolerance: if one of the elements fails, the capabilities of the actuator may be reduced, but it does not become dysfunctional. This paper analyses the likelihood of reductions in capabilities. The actuator is considered as a multi-state system, and the approach for k-out-of-n:G systems can be extended to cover the case of the HRA. The result is a probability distribution that quantifies the capability of the HRA. By comparing the distribution for different configurations, it is possible to identify the optimal configuration of an HRA for a given situation.
Thomas Steffen, Frank Schiller, Michael Blum, Roger Dixon
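The k-out-of-n:G view described above reduces, in the simplest independent-and-identical case, to a binomial distribution over the number of functional elements. The sketch below uses hypothetical element counts and reliabilities, not the configurations analysed in the paper:

```python
# Capability distribution of a high redundancy actuator sketched as a
# k-out-of-n:G model: with n identical, independent elements each working
# with probability p, the probability that at least k elements work is a
# binomial tail sum. Figures are illustrative only.

from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n elements functional)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 10, 0.99                  # 10 elements, each 99% reliable
full   = p_at_least(10, n, p)    # full capability: all elements work
degr80 = p_at_least(8, n, p)     # at least 80% of nominal capability
print(f"full: {full:.4f}, >=80%: {degr80:.6f}")
```

Comparing such tail probabilities across candidate series/parallel arrangements is what lets the optimal HRA configuration be identified for a given capability requirement.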
AN-Encoding Compiler: Building Safety-Critical Systems with Commodity Hardware
Abstract
We expect commodity hardware to be used in future safety-critical applications. However, commodity hardware is also expected to become less reliable and more susceptible to soft errors because of decreasing feature sizes and reduced supply voltages. Thus, software-implemented approaches to dealing with unreliable hardware will be needed. To simplify the handling of value failures, we provide failure virtualization in the sense that we transform arbitrary value failures caused by erroneous execution into fail-stop failures, which are easier to handle. To this end, we use the arithmetic AN-code, because it provides very good error detection capabilities. Arithmetic codes are suitable for protecting commodity hardware because their guarantees are independent of the executing hardware. This paper presents the encoding compiler EC-AN, which applies AN-encoding to arbitrary programs. To our knowledge, this is the first complete AN-encoding implemented in software; former encoding compilers either encode only small parts of applications or trade off safety to enable complete AN-encoding.
Christof Fetzer, Ute Schiffel, Martin Süßkraut
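The principle behind an AN-code is compact: every value n is represented as n * A, so any valid code word is divisible by A and most corruptions break that divisibility. The sketch below is a hand-written illustration of this principle (the constant A is an assumed example value); the EC-AN compiler automates such encoding for whole programs:

```python
# Minimal AN-code sketch: a value n is stored as n * A. A corrupted code
# word is almost certainly no longer divisible by A, so a silent value
# failure becomes a detectable (fail-stop) failure. Additions can be
# performed directly on encoded values, since a*A + b*A = (a + b)*A.

A = 58659  # assumed example constant; real AN-codes pick A for detection strength

def encode(n: int) -> int:
    return n * A

def decode(code: int) -> int:
    if code % A != 0:
        raise RuntimeError("value failure detected -> fail-stop")
    return code // A

x, y = encode(7), encode(35)
s = x + y                  # arithmetic on encoded values
assert decode(s) == 42

corrupted = s ^ (1 << 3)   # simulate a soft-error bit flip
try:
    decode(corrupted)
except RuntimeError:
    print("corruption detected")
```

Multiplication and other operations need extra correction steps (a*A times b*A yields a*b*A*A), which is part of what makes a complete encoding compiler non-trivial.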

Dependability

Component-Based Abstraction in Fault Tree Analysis
Abstract
To handle the complexity of safety-critical embedded systems, it is not appropriate to develop functionality and consider safety in separate tasks, or to consider software only as a black box in safety analyses. Rather, safety aspects have to be integrated as tightly as possible into the system and software development process and its models. But existing safety analyses and models do not fit well with software development tasks such as architectural design and do not take advantage of their strengths. To solve this problem, this paper extends fault tree analysis by hierarchical component-based abstraction, enabling fault tree analysis to be integrated into a component-oriented model-based design approach and to handle the complexity of software architectural design.
Dominik Domis, Mario Trapp
A Foundation for Requirements Analysis of Dependable Software
Abstract
We present patterns for expressing dependability requirements, such as confidentiality, integrity, availability, and reliability. The paper considers random faults as well as certain attacks and therefore supports a combined safety and security engineering. The patterns, attached to functional requirements, are part of a pattern system that can be used to identify missing requirements.
Denis Hatebur, Maritta Heisel
Establishing a Framework for Dynamic Risk Management in ‘Intelligent’ Aero-Engine Control
Abstract
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are typically derived through safety analyses and can be implemented through necessary design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example unexpected hazardous operating contexts arising from failures or degradation. For highly complex problems it may be infeasible to determine, prior to deployment, the precise desired behavioural bounds of a function that addresses or minimises risk in hazardous operating cases. This paper presents an overview of the safety challenges associated with such problems and how they might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers operating within complex and highly dynamic problem domains such as gas-turbine aero-engine control. Safety assurance arguments enabled by the framework, which are necessary for certification, are also outlined.
Zeshan Kurd, Tim Kelly, John McDermid, Radu Calinescu, Marta Kwiatkowska
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security
Edited by
Bettina Buth
Gerd Rabe
Till Seyfarth
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04468-7
Print ISBN
978-3-642-04467-0
DOI
https://doi.org/10.1007/978-3-642-04468-7