
2014 | Book

Computer Safety, Reliability, and Security

SAFECOMP 2014 Workshops: ASCoMS, DECSoS, DEVVARTS, ISSE, ReSA4CI, SASSUR. Florence, Italy, September 8-9, 2014. Proceedings

Edited by: Andrea Bondavalli, Andrea Ceccarelli, Frank Ortmeier

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of 6 workshops co-located with SAFECOMP 2014, the 33rd International Conference on Computer Safety, Reliability, and Security, held in Florence, Italy, in September 2014. The 32 revised full and 10 short papers presented were carefully reviewed and selected from 58 submissions. They are complemented by introductions to each of the six workshops: Architecting Safety in Collaborative Mobile Systems, ASCoMS'14; ERCIM/EWICS/ARTEMIS Workshop on Dependable Embedded and Cyberphysical Systems and Systems-of-Systems, DECSoS'14; DEvelopment, Verification and VAlidation of cRiTical Systems, DEVVARTS'14; Integration of Safety and Security Engineering, ISSE'14; Reliability and Security Aspects for Critical Infrastructure Protection, ReSA4CI'14; Next Generation of System Assurance Approaches for Safety-Critical Systems, SASSUR'14.

Table of contents

Frontmatter

Architecting Safety in Collaborative Mobile Systems (ASCoMS’14)

3rd Workshop on Architecting Safety in Collaborative Mobile Systems (ASCoMS)

This volume contains the papers presented at ASCoMS 2014, the 3rd Workshop on Architecting Safety in Collaborative Mobile Systems, held on September 8, 2014 in Florence as part of SAFECOMP 2014. As in the two previous years, the workshop was co-located with SAFECOMP.

Renato Librino, Martin Törngren
Intelligent Transport Systems - The Role of a Safety Loop for Holistic Safety Management

An ITS represents a Cyber-Physical System (CPS), which will involve information exchange at the operational level as well as potential explicit collaboration between separate entities (systems of systems). Specific emphasis is required to manage the complexity and safety of such future CPS. In this paper we focus on model-based approaches for analyzing and managing safety throughout the lifecycle of ITS. We argue that: (1) run-time risk assessment will be necessary for efficient ITS; (2) an information centric approach will be instrumental for future ITS to support all aspects of safety management – a “safety loop”; (3) a formal basis is required to deal with the large amounts of information present in an ITS. We elaborate these arguments and discuss what is required to support their realization.

Kenneth Östberg, Martin Törngren, Fredrik Asplund, Magnus Bengtsson
Safety Verification of Multiple Autonomous Systems by Formal Approach

We have studied verification of a line tracing robot using model checking. In this paper, we extend the model to multiple autonomous systems, and describe the advantages and difficulties of applying model checking. The targeted line tracing robot usually has only one or two sensors to detect a line painted on a white background, and it traces the line according to the values read by the sensors. The line is easy to trace if it is a simple straight line; however, lines sometimes become complicated because of random sequences of corners. Such robots are often used in robot competitions for university students in Japan, where driving time, accuracy and robustness are evaluated. The robot is usually designed as a stand-alone system. Here, we extend such line tracing robots to multiple autonomous robots by adding communication functions and proximity sensors. We consider multiple crossing lines, where robots might hit each other. Although the introduced model is simple, it is powerful enough to provide a structure in which we can discuss safety and robustness using model checking. Our proposed method can also handle time constraints on robot controls.

Kozo Okano, Toshifusa Sekizawa
Checking Verification Compliance of Technical Safety Requirements on the AUTOSAR Platform Using Annotated Semi-formal Executable Models

Implementing AUTOSAR-based embedded systems that adhere to ISO 26262 is not trivial. High-level safety goals have to be refined into functional safety requirements and technical HW and SW safety requirements. SW safety requirements are allocated to the application as well as to the underlying AUTOSAR platform. Finding the relevant safety requirements on the AUTOSAR basic software is a challenge: AUTOSAR specifications provide incomplete lists of requirements which might be relevant. In this paper we address this challenge by providing tool support to automatically extract relevant functional requirements for given safety scenarios. A conservative estimate shows that the safety-relevant part of the overall requirements can be as small as 30%, which reduces the necessary rigorous testing effort. An electronic parking brake example is presented as a proof of concept.

Martin Skoglund, Hans Svensson, Henrik Eriksson, Thomas Arts, Rolf Johansson, Alex Gerdes
Evaluation of Safety Rules in a Safety Kernel-Based Architecture

Kernel-based architectures have been proposed as a possible solution to build safe cooperative systems with improved performance. These systems adjust their operation mode at run-time, depending on the actual quality of sensor data used in control loops and on the execution timeliness of relevant control functions. Sets of safety rules, defined at design-time, express the conditions concerning data quality and timeliness that need to be satisfied for the system to operate safely in each operation mode.

In this paper we propose a solution for practically expressing these safety rules at design-time, and for evaluating them at run-time. This evaluation is done using periodically collected information about safety-related variables. For expressing the rules we adopt the XML language. The run-time solution is based on a safety rules evaluation engine, which was designed for efficiency and scalability. We describe the architecture of the engine, the solution for structuring data in memory and the rule evaluation algorithm. A simple sensor-based control system is considered to exemplify how the safety rules are expressed.

Eric Vial, António Casimiro
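
For readers unfamiliar with this kind of rule engine, the following minimal sketch illustrates the general idea described in the abstract above: design-time safety rules expressed in XML are evaluated at run-time against periodically collected safety-related variables to determine which operation modes are currently safe. The rule schema, variable names and thresholds are hypothetical and are not taken from the paper; the authors' engine is designed for efficiency and scalability and differs in structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical rule schema, variable names and thresholds -- not the schema
# used by the safety kernel described in the paper.
RULES_XML = """
<rules>
  <rule mode="autonomous">
    <condition variable="position_error_m"   operator="le" value="0.5"/>
    <condition variable="sensor_data_age_ms" operator="le" value="100"/>
  </rule>
  <rule mode="degraded">
    <condition variable="sensor_data_age_ms" operator="le" value="500"/>
  </rule>
</rules>
"""

OPS = {"le": lambda a, b: a <= b, "ge": lambda a, b: a >= b, "eq": lambda a, b: a == b}

def allowed_modes(rules_xml, variables):
    """Return the operation modes whose design-time safety rules all hold for
    the periodically collected safety-related variables."""
    modes = []
    for rule in ET.fromstring(rules_xml).findall("rule"):
        conditions = rule.findall("condition")
        if all(OPS[c.get("operator")](variables[c.get("variable")],
                                      float(c.get("value")))
               for c in conditions):
            modes.append(rule.get("mode"))
    return modes

# Sample readings: a large position error rules out the autonomous mode.
print(allowed_modes(RULES_XML, {"position_error_m": 0.8, "sensor_data_age_ms": 90}))
# -> ['degraded']
```
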
Driving with Confidence: Local Dynamic Maps That Provide LoS for the Gulliver Test-Bed

The design of automated driving systems aims at reducing human error and increasing fuel efficiency by letting vehicles map their surroundings and drive autonomously. One of the challenges on the road is that at any time the environment can stop meeting the system’s operational conditions (and then resume meeting the requirements at some later point in time). Thus, as vehicles map their surroundings, they should also provide information that can help them determine whether the operational conditions are met, given the confidence they have in the mapped information.

We design and implement key services of Local Dynamic Maps (LDMs) that are based on on-board and remote sensory information. The LDM provides the position of all nearby noticeable objects along with the LDM’s confidence about these positions. The design also includes an extension that allows the vehicular system to agree on the lowest common ability to meet the operational conditions.

We evaluate the performance of a key component in our pilot implementation together with a set of test cases that validate the proposed design. Our current findings show that the presented ideas can accelerate the deployment of automated driving systems.

Christian Berger, Oscar Morales, Thomas Petig, Elad Michael Schiller
Sensor- and Environment Dependent Performance Adaptation for Maintaining Safety Requirements

Driving assistance and automated driving depend to a large extent on correct perception of the environment. Because automated driving functions have to be proven safe under all operational conditions, worst-case assumptions concerning the sensors and the environment have to be made. In this paper, we propose a scheme that allows weaker assumptions to be taken. It is based on a continuous assessment of the quality of sensor data, a model of the interaction between the control process and the environment, and the possibility to adapt performance. We present an example of a car autonomously driving a simple course and adapting its speed according to the environment and the confidence in the perceived sensor data. We derive a set of simple safety rules used to adjust performance, which in the given example affects the cruising speed.

Tino Brade, Georg Jäger, Sebastian Zug, Jörg Kaiser
Collaborative Development of Safety-Critical Automotive Systems: Exchange, Views and Metrics

Automotive system development involves a large set of organizations and disciplines. In particular, vehicle manufacturers rely on a large set of suppliers to provide components and systems. To successfully develop and integrate these components, stakeholders exchange requirement specifications that define the component properties in detail. Because of the complexity of a typical automotive system, requirement specifications are error prone, and negotiating a correct result is time consuming. In addition, most systems have safety implications and require rigorous means to achieve and argue safety. Recent autonomous and semi-autonomous systems are particularly complex and critical.

The Synligare project addresses these challenges by providing model-based technologies to assist collaborative development of safety critical systems. The project is working along three lines as explained below.

Model Exchange: Being able to exchange models rather than documents to convey engineering information improves efficiency and precision in collaboration between stakeholders. Version and variant information is an important aspect of securing the validity of information.

Views: Understanding system solutions and analysis results is difficult as more and more aspects need to be considered. Appropriate views, based on formalized system representations, make engineering information more accessible.

Metrics: Development status and system properties can sometimes be represented and tracked by means of metrics. Such automatically and continuously provided measures make development effort more predictable and indirectly help ensure safety.

This paper describes the aspects of exchange, views and metrics identified in the Synligare project, and illustrates with examples how they can be applied in practical system development.

Johan Ekberg, Urban Ingelsson, Henrik Lönn, Magnus Skoog, Jan Söderberg
Towards Energy Efficient, High-Speed Communication in WSNs

Traditionally, protocols in wireless sensor networks focus on low-power operation with low data-rates. In addition, a small set of protocols provides high-throughput communication. With sensor networks developing into general purpose networks, we argue that protocols need to provide both: low data-rates at high energy-efficiency and, additionally, a high-throughput mode. This is essential, for example, to quickly collect large amounts of raw data from a sensor.

This paper presents a set of practical extensions to the low-power, low-delay routing protocol ORW. We introduce the capability to handle multiple, concurrent bulk transfers in dynamic application scenarios. Overall, our extensions allow ORW to reach an almost 500% increase in throughput with less than a 25% increase in power consumption during a bulk transfer. Thus, we show that instead of developing a new protocol from scratch, we can carefully enhance an existing, energy-efficient protocol with high-throughput extensions. Both the energy-efficient low data-rate mode and the high-throughput extensions transparently co-exist inside a single protocol.

Attila Nagy, Olaf Landsiedel
Comparing Adaptive TDMA against a Clock Synchronization Approach

Teams of cooperating robots are becoming more popular, fostered by hardware platforms as well as coordination technologies and techniques that are becoming widely available. Wireless communication is one such technology, and it directly impacts the quality provided by the cooperative applications running atop it. In cases where the team's robots transmit periodically, it has been shown that synchronizing their transmissions so that they occur out of phase is beneficial, e.g. in a TDMA manner. However, persistent periodic interfering traffic may increase collisions with the team traffic and thus degrade the channel quality. The Adaptive TDMA self-synchronized protocol was proposed to improve the resilience to this type of interference. In this paper we assess the effectiveness of Adaptive TDMA in comparison with a traditional TDMA implementation based on clock synchronization. The results of practical experiments show a reduction in packet losses when the interfering traffic is short, typically a single packet.

Luis Almeida, Frederico Santos, Luis Oliveira

ERCIM/EWICS/ARTEMIS Workshop on Dependable Embedded and Cyberphysical Systems and Systems-of-Systems (DECSoS’14)

Introduction: ERCIM/EWICS/ARTEMIS Workshop on Dependable Embedded and Cyberphysical Systems and Systems-of-Systems (DECSoS’14) at SAFECOMP 2014
A European Approach to Critical Systems Engineering

This workshop at SAFECOMP has followed its own tradition since 2006. In the past, it focussed on the conventional type of “embedded systems”, covering all dependability aspects (in the meaning of IFIP WG 10.4, defined by Avizienis, Laprie, Kopetz, Voges and others). To put more emphasis on the relationship to physics, mechatronics and the notion of interaction with an unpredictable environment, the terminology changed to “cyber-physical systems” (CPS). Collaboration and co-operation of these systems with each other and with humans, and the interplay of safety and security, are leading to new challenges in verification, validation and certification/qualification.

Erwin Schoitsch, Amund Skavhaug
True Error or False Alarm? Refining Astrée’s Abstract Interpretation Results by Embedded Tester’s Automatic Model-Based Testing

A failure of safety-critical software may cause high costs or even endanger human beings. Contemporary safety standards require identifying potential functional and non-functional hazards and demonstrating that the software does not violate the relevant safety goals. Typically, model-based testing is used to ensure functional program properties, while non-functional properties such as the occurrence of runtime errors are addressed by abstract interpretation-based static analysis. Hence the verification process is split into two distinct parts – currently without any synergy between them being exploited. In this article we present an approach to couple model-based testing with static analysis, based on a tool coupling between Astrée and BTC EmbeddedTester®. Astrée reports all potential runtime errors in C programs. This makes it possible to prove the absence of runtime errors, but typically users have to deal with false alarms, i.e. spurious notifications about potential runtime errors. Investigating alarms to find out whether they are true errors which have to be fixed, or whether they are false alarms, can cause significant effort. The key idea of this work is to apply model-based testing to automatically find test cases for alarms reported by the static analyzer. When a test case reproducing the error is found, the alarm has been proven to be a true error; when no error is found with full test coverage, it has been proven to be a false alarm. This can significantly reduce the alarm analysis effort and reduces the level of expertise needed to perform code-level software verification. We describe the underlying concept and report on experimental results and future work.

Sayali Salvi, Daniel Kästner, Tom Bienmüller, Christian Ferdinand
Proving Compliance of Implementation Models to Safety Specifications

Current safety standards like ISO 26262 require a continuous safety argumentation, starting from the initial hazard and risk assessment down to the implementation of hardware and software. To enable re-use of components and ease the handling of changes in the system, modular safety cases are addressed by many research projects. Current approaches focus on hierarchical safety specifications describing the relevant fault propagation behavior. Nevertheless, it needs to be ensured that the final implementation meets the safety specification. Currently, this is mostly a manual and error-prone process of matching fault trees or test results against the specification. In this paper, we present an automated approach based on fault injection and model checking for proving the compliance of an implementation to a safety specification. In our multi-aspect analysis (safety and functional aspects), we rely on the popular specification mechanism of safety contracts and on implementations modeled in Matlab/Stateflow.

Markus Oertel, Omar Kacimi, Eckard Böde
MTBF Inconsistency Analysis on Inferred Product Breakdown Structures

This article describes our current work on the combination of an ontology-based knowledge representation and formal analysis procedures. We use formalized system engineering knowledge and partial architectural information (induced by a set of requirements) to formalize natural language requirements and to identify inconsistencies based on this formalization. Our analysis combines requirements specified by patterns and an ontology-based product breakdown structure. As an example, we identify inconsistencies between Mean Time Between Failure (MTBF) specifications of systems and their subsystems.

Christian Ellen, Martin Böschen, Thomas Peikenkamp
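
To make the kind of inconsistency discussed above concrete: for a purely series composition of independent subsystems with constant failure rates, the system failure rate is the sum of the subsystem failure rates, so a specified system MTBF cannot exceed the bound implied by the subsystem MTBFs. The sketch below, using hypothetical figures, checks a specification against that bound; the ontology-based analysis in the paper is considerably more general.

```python
def implied_series_mtbf(subsystem_mtbfs_h):
    """MTBF implied for a series composition of independent subsystems with
    constant failure rates: lambda_sys = sum(lambda_i), MTBF = 1 / lambda."""
    return 1.0 / sum(1.0 / m for m in subsystem_mtbfs_h)

def check_mtbf_consistency(specified_system_mtbf_h, subsystem_mtbfs_h):
    """A specified system MTBF above the implied bound is inconsistent."""
    implied = implied_series_mtbf(subsystem_mtbfs_h)
    return implied, specified_system_mtbf_h <= implied

# Hypothetical figures: a system specified at 50,000 h, built from two
# series subsystems specified at 120,000 h and 90,000 h.
implied, consistent = check_mtbf_consistency(50_000, [120_000, 90_000])
print(f"implied system MTBF: {implied:.0f} h, specification consistent: {consistent}")
# -> implied system MTBF: 51429 h, specification consistent: True
```
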
Critical Systems Verification in MetaMORP(h)OSY

Multi Agent Systems (MAS) methodologies are emerging as a new approach for modeling and developing complex distributed systems. When complex constraints have to be verified on critical systems, Model Driven Engineering (MDE) methodologies allow for the design and implementation of systems that are correct by construction. Usually verification is enforced by formal analysis. This paper presents the MetaMORP(h)OSY (Meta-modeling of Mas Object-based with Real-time specification in Project Of complex SYstems) methodology and framework. They provide a means for building MAS models used to verify properties (and requirements) of critical systems following an MDE approach. In particular, this work describes the model transformation algorithms used in MetaMORP(h)OSY to verify real-time and timed reachability requirements.

Rocco Aversa, Beniamino Di Martino, Francesco Moscato
Report on the Railway Use-Case of the Crystal Project: Objectives and Progress

This paper aims at describing the contribution of the technological brick Safety Architect to the CRYSTAL project. The goal of the CRYSTAL project is to provide a platform of interoperability between tools supporting all the steps constituting the lifecycle of a product.

Based on a railway use-case, the goal is to provide support for carrying out safety analyses with the All4tec tool Safety Architect (in particular, automating the filling of safety documents). The first steps have consisted of the automatic generation of FMEA. The automatic management of the Hazard Log through DOORS is currently in development, and the next steps will deal with the change management facilities.

Alexandre Ginisty, Frédérique Vallée, Elie Soubiran, Vidal-delmas Tchapet-Nya
Contract-Based Analysis for Verification of Communication-Based Train Control (CBTC) System

In this paper we apply contract theory to the analysis of the door control functionality in a metro train. The system under development is specified and modeled by rail domain experts. Contract theory is used to formalize some safety requirements that can then be automatically analyzed by our tool suite, Formal Specs Verifier (FSV). The work produced with and on FSV represents a good starting point for matching industrial needs in the field of system analysis and testing, and for defining new analysis methods that indicate how to efficiently reduce the effort of exhaustive testing.

Marco Carloni, Orlando Ferrante, Alberto Ferrari, Gianpaolo Massaroli, Antonio Orazzo, Ida Petrone, Luigi Velardi
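
As a loose illustration of what an assume/guarantee contract for door control might express (this is not the FSV formalism used by the authors, and all signal names are hypothetical), the sketch below checks a contract of the form "whenever the doors are commanded open, the train is stopped and aligned with the platform" against a sample execution trace.

```python
def satisfies(trace, contract):
    """A trace satisfies an assume/guarantee contract if the guarantee holds
    on every step where the assumption holds."""
    assume, guarantee = contract
    return all(guarantee(step) for step in trace if assume(step))

# Hypothetical door-control contract: whenever the doors are commanded open,
# the train must be stopped and aligned with the platform.
door_contract = (
    lambda s: s["door_open_cmd"],                        # assumption
    lambda s: s["speed_kmh"] == 0 and s["at_platform"],  # guarantee
)

trace = [
    {"door_open_cmd": False, "speed_kmh": 40, "at_platform": False},
    {"door_open_cmd": True,  "speed_kmh": 0,  "at_platform": True},
]
print(satisfies(trace, door_contract))  # True for this sample trace
```
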
An Interoperable Testing Environment for ERTMS/ETCS Control Systems

Verification of the functional requirements of critical control systems requires a demanding testing activity regulated by international standards. As testing often accounts for more than fifty percent of the total development cost, supporting the verification processes with automated solutions is a key factor for achieving lower effort and costs and reducing time to market. The ultimate goal of the ongoing work described here is the development of an interoperable testing environment supporting the system-level testing of railway ERTMS/ETCS control systems. The testing environment will provide a standardized interface to enable integration testing between sub-systems developed by different companies/suppliers. We present the first outcomes obtained within the ARTEMIS project CRYSTAL, which tackles the challenge of establishing and pushing forward an Interoperability Specification (IOS) as an open European standard for the development of safety-critical embedded systems.

Gregorio Barberio, Beniamino Di Martino, Nicola Mazzocca, Luigi Velardi, Aniello Amato, Renato De Guglielmo, Ugo Gentile, Stefano Marrone, Roberto Nardone, Adriano Peron, Valeria Vittorini
Modelling Resilient Systems-of-Systems in Event-B

Ensuring resilience – the ability to remain dependable in a dynamic environment – constitutes a major challenge for engineering systems-of-systems (SoS). In this paper, we take a mission-centric view on the behaviour of SoS and demonstrate how to formally reason about their dependability. We use Event-B as our modelling framework and demonstrate how to formally specify and verify generic system-wide dependability properties as well as the dynamic behaviour of SoS. The proposed approach is exemplified by a case study – a flight formation system. As a result, we argue that Event-B offers a scalable approach to formal modelling of SoS and facilitates the engineering of resilient SoS.

Linas Laibinis, Inna Pereverzeva, Elena Troubitsyna
Towards Assured Dynamic Configuration of Safety-Critical Embedded Systems

Assuring system quality is an inherent part of developing safety-critical embedded systems. Currently, the continuous increase in system complexity, in particular that of software, makes this development challenging. As a result, more and more software faults remain unidentified at design time, so that changes and maintenance need to be performed at an increased rate. Unfortunately, today’s safety-critical systems are not designed to be upgraded or maintained in a seamless way, so the overhead of performing changes may be considerable, especially when such changes require re-verifying and re-validating the whole system.

In this paper, we present an approach to performing software changes in the operation and maintenance phase of the system lifecycle. Changes are performed dynamically, by replacing parts of the software (i.e., software components) with functionally equivalent out-of-the-box instances. In order to prevent changes from impacting system integrity, we provide support for modeling and analyzing the system. The main outcome is that specific kinds of changes can be handled without adding any development costs.

Nermin Kajtazovic, Christopher Preschern, Andrea Höller, Christian Kreiner
Towards Trust Assurance and Certification in Cyber-Physical Systems

We are currently witnessing a third industrial revolution, driven by ever more interconnected distributed systems of systems, running under the umbrella term of cyber-physical systems (CPS). In the context of this paradigm, different types of computer-based systems from different application domains collaborate with each other in order to render higher-level services that could not be rendered by single systems alone. However, the tremendous potential of CPS is inhibited by significant engineering challenges with respect to the systems’ safety and security. Traditional methodologies are not applicable to CPS without further ado, and new solutions are therefore required. In this paper, we present potential solution ideas that are currently being investigated by the European EMC² research project.

Daniel Schneider, Eric Armengaud, Erwin Schoitsch

DEvelopment, Verification and VAlidation of cRiTical Systems (DEVVARTS’14)

Introduction to the Safecomp 2014 Workshop: DEvelopment, Verification and VAlidation of cRiTical Systems (DEVVARTS ’14)

The DEVVARTS ’14 workshop focuses on novel methods for the development, verification and validation (V&V) and certification of critical systems, where the necessary effort for V&V frequently exceeds the core development time when traditional methods are used. The “soft” IT industry is rapidly turning to system integration based on the reuse of hardware and software components, but for safety-related applications this shift is still evolving, primarily due to the lack of composable V&V and certification. All this poses serious difficulties to companies, which on the one hand are constrained to meet predefined quality goals and on the other hand are required to deliver systems at acceptable cost and time to market. Large companies mainly follow a brute-force approach of focused, large-volume investment in tooling and in-house training, but even high-tech SMEs are highly vulnerable to the new challenges. The definition of methods, strategies and tools assuring adequate and simultaneously productive V&V is one of the most challenging goals. It is hard to establish a proper tradeoff between the quality achievable with a particular technique (in terms of RAMS attributes) and the costs required for achieving it. The situation is even worse in the case of integrating existing SW in a safety-critical system to be certified, since assessing products which encompass COTS software is a challenge, although modern standards consider this possibility. An additional concern is the use of recently adopted methods for SW development such as MDD, since the certification of systems using software developed with such support is at the limit of the applicability of the existing standards, and only the most recent ones are aligned with these ‘modern’ methods.

Francesco Brancati, Nuno Laranjeiro, Ábel Hegedüs
Verification of Fault-Tolerant System Architectures Using Model Checking

Model checking is a formal method that has proven useful for verifying e.g. the logic designs of safety systems used in nuclear plants. However, redundant subsystems are implemented in nuclear plants in order to achieve a certain level of fault tolerance. A formal system-level analysis that takes into account both the detailed logic design of the systems and the potential failures of the hardware equipment is a difficult challenge. In this work, we have created a new methodology for modelling hardware failures, and used it to enable the verification of the fault tolerance of the plant using model checking. We have used an example probabilistic risk assessment (PRA) model of a fictional nuclear power plant as a reference and created a corresponding model checking model that covers several safety systems of the plant. Using the plant-level model we verified several safety properties of the nuclear plant. We also analysed the fault tolerance of the plant with regard to these properties, and used abstraction techniques to manage the large plant-level model. Our work is a step towards being able to exhaustively verify properties on a single model that covers the entire plant. The developed methodology closely follows the notations of PRA analysis, and serves as a basis for further integration between the two approaches.

Jussi Lahtinen
Verification of a Real-Time Safety-Critical Protocol Using a Modelling Language with Formal Data and Behaviour Semantics

Formal methods have an important role in ensuring the correctness of safety-critical systems. However, their application in industry is always cumbersome: the lack of experts and the complexity of formal languages prevent the efficient application of formal verification techniques. In this paper we take a step in the direction of making formal modelling simpler by introducing a framework which helps designers construct formal models efficiently. Our formal modelling framework supports the development of traditional transition systems enriched with complex data types with type checking and type inference services, time-dependent behaviour, and timing parameters with relations. In addition, we introduce a toolchain to provide formal verification. Finally, we demonstrate the usefulness of our approach in an industrial case study.

Tamás Tóth, András Vörös
Visualization of Model-Implemented Fault Injection Experiments

MODIFI is a fault injection tool targeting software developed as Simulink models. In this paper, we describe three techniques for visualizing fault injection results obtained using the MODIFI tool. The first technique shows the progress of a fault injection campaign, and the outcome of individual experiments, using a 3D visualization of the fault injection campaign. The second technique, referred to as sensitivity profiling, identifies parts of a model that are sensitive for a specific fault model. The third technique shows how error propagates in a Simulink model. The sensitivity profiling and error propagation techniques are based on intuitive coloring of Simulink blocks. The three visualization techniques are demonstrated using a Brake-by-Wire system.

Daniel Skarin, Jonny Vinter, Rickard Svenningsson
Cost-Effective Testing for Critical Off-the-Shelf Services

Defining cost-effective verification and validation tools is one of the biggest research challenges of the area. Such tools speed up and reduce the cost of the assessment of Off-The-Shelf (OTS) software components that must undergo proper certification or approval processes to be used in critical scenarios. Previously, we introduced the design of a framework for testing critical OTS applications and services that improves reusability, thus aiming to reduce testing time and costs. In this paper we present an implementation of the framework that allows applying, in a cost-effective fashion, functional testing, robustness testing and penetration testing to web services. We present details of the implementation and describe the procedure for using the framework to conduct testing campaigns on web services. Finally, the framework's usability and utility are demonstrated in a case study.

Fabio Duchi, Nuno Antunes, Andrea Ceccarelli, Giuseppe Vella, Francesco Rossi, Andrea Bondavalli
On Security Countermeasures Ranking through Threat Analysis

Security analysis and design are key activities for the protection of critical systems and infrastructures. Traditional approaches consist of first applying a qualitative threat assessment that identifies the attack points. The results are then used as input for the security design, such that appropriate countermeasures are selected. In this paper we propose a novel approach for the selection and ranking of security controlling strategies, driven by a quantitative threat analysis based on attack graphs. It consists of two main steps: i) a threat analysis, performed to evaluate attack points and paths, identifying those that are feasible, and to rank attack costs from the perspective of an attacker; ii) an evaluation of controlling strategies, in which the appropriate monitoring rules are derived and countermeasures are selected based on the provided values and ranks. The exploitation of such a threat analysis allows comparing different controlling strategies and selecting the one that best fits the given set of functional and security requirements. To exemplify our approach, we adopt part of an electrical power system, the Customer Energy Management System (CEMS), as the reference scenario to which the steps of threat analysis and security strategies are applied.

Nicola Nostro, Ilaria Matteucci, Andrea Ceccarelli, Felicita Di Giandomenico, Fabio Martinelli, Andrea Bondavalli
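
A minimal sketch of the attack-graph idea underlying step i) above: attack steps are edges weighted by attacker cost, and the cheapest path from an entry point to a goal indicates how feasible the corresponding attack is. The graph, node names and costs below are invented for illustration and do not reproduce the CEMS analysis in the paper.

```python
import heapq

def cheapest_attack_cost(graph, entry, goal):
    """Dijkstra over an attack graph whose edge weights model attacker cost;
    returns the cost of the cheapest attack path from entry to goal."""
    best = {entry: 0.0}
    queue = [(0.0, entry)]
    done = set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            return cost
        for nxt, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt))
    return float("inf")  # goal unreachable: attack not feasible

# Invented CEMS-like attack graph: node names and costs are illustrative only.
attack_graph = {
    "internet":           [("gateway", 3.0), ("maintenance_laptop", 5.0)],
    "gateway":            [("cems_frontend", 4.0)],
    "maintenance_laptop": [("cems_frontend", 1.0)],
    "cems_frontend":      [("meter_control", 6.0)],
}
print(cheapest_attack_cost(attack_graph, "internet", "meter_control"))  # 12.0
```

Countermeasures can then be compared, for instance, by how much removing or hardening an edge raises this cheapest attack cost.
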
Enabling Cross-Domain Reuse of Tool Qualification Certification Artefacts

The development and verification of safety-critical systems increasingly relies on the use of tools which automate/replace/supplement complex verification and/or development tasks. The safety of such systems risks being compromised if the tools fail. To mitigate this risk, safety standards (e.g. DO-178C/DO-330, IEC 61508) define prescriptive tool qualification processes. Compliance with these processes can be required for (re-)certification purposes. To enable reuse and thus reduce the time and cost related to certification, cross-domain tool manufacturers need to understand what varies and what remains in common when transitioning from one domain to another. To ease reuse, in this paper we focus on verification tools and model a cross-domain tool qualification process line. Finally, we discuss how reusable cross-domain process-based arguments can be obtained.

Barbara Gallina, Shaghayegh Kashiyarandi, Karlheinz Zugsbratl, Arjan Geven

Integration of Safety and Security Engineering (ISSE’14)

1st International Workshop on the Integration of Safety and Security Engineering (ISSE ’14)

The growing complexity of critical systems is creating new challenges for safety and security engineering practices: it is now expected that delivered products implement more and more complex features while respecting strict requirements on safety and security. For such systems, an ever-increasing portion of the design effort is therefore spent on safety and security assessment and verification. Applying safety verification without considering security properties is no longer possible, since safety decisions have an impact on system security properties and vice versa.

Laurent Rioux, John Favaro
From Safety Models to Security Models: Preliminary Lessons Learnt

We aim at developing common models and tools to assess both the safety and the security of avionics platforms, so we studied the adaptation of models devised for safety assessment in order to analyse security. In this paper, we describe a security modelling and analysis approach based on the AltaRica language and associated tools, and we illustrate the approach with an avionics case study. We report lessons learnt about the convergence and divergence points between security and safety with respect to modelling and analysis techniques.

Pierre Bieber, Julien Brunel
FMVEA for Safety and Security Analysis of Intelligent and Cooperative Vehicles

Safety and security are two important aspects in the analysis of cyber-physical systems (CPSs). In this short paper, we apply a new safety and security analysis method to intelligent and cooperative vehicles, in order to examine attack possibilities and failure scenarios. The method is based on the FMEA technique for safety analysis, with extensions to cover information security. We examine the feasibility and efficiency of the method, and determine the next steps for developing the combined analysis method.

Christoph Schmittner, Zhendong Ma, Paul Smith
Uniform Approach of Risk Communication in Distributed IT Environments Combining Safety and Security Aspects

The trend of composing real-time systems with standard IT known from conventional office domains results in heterogeneous technical environments; modern industrial process automation networks are an example. This is a challenging task because of the potential impacts of security incidents on system safety. For example, robot control units could be manipulated by malicious code. The term “risk communication” is introduced to describe alarm communication in human-machine interaction scenarios. User-adapted risk communication between humans and industrial automation systems, including home robotics, can prevent hazards and/or threats to the overall system safety and security. Current safety and security risk communication standards are compared to examine their adequacy for our uniform approach. This paper focuses on alarm system standards in the industrial process automation domain and on intrusion detection systems from the conventional desktop IT domain. A uniform model-based approach for risk communication in distributed IT environments is introduced.

Jana Fruth, Edgar Nett

Reliability and Security Aspects for Critical Infrastructure Protection (ReSA4CI’14)

Introduction to the Safecomp 2014 Workshop: Reliability and Security Aspects for Critical Infrastructure Protection (ReSA4CI 2014)

The ReSA4CI workshop aims at providing a forum for researchers and engineers in academia and industry to foster an exchange of research results, experiences, and products in the area of reliable, dependable, and secure computing for critical systems protection, from both a theoretical and a practical perspective. Its ultimate goal is to envision new trends and ideas about aspects of designing, implementing, and evaluating reliable and secure solutions for the next generation of critical infrastructures. Critical infrastructures present several challenges in the fields of distributed systems, dependability, and security methods and approaches, crucial for improving the trustworthiness of ICT facilities. The workshop aims at presenting the advancement of the state of the art in these fields and spreading their adoption in several scenarios involving the main infrastructures of modern society.

Silvia Bonomi, Ilaria Matteucci
Modeling and Evaluation of Maintenance Procedures for Gas Distribution Networks with Time-Dependent Parameters

Gas networks comprise a special class of infrastructure, with relevant implications for the safety and availability of universal services. In this context, the ongoing deregulation of network operation gives relevance to modeling and evaluation techniques supporting the predictability of dependability metrics. We propose a modeling approach that represents maintenance procedures as a multi-phased system, with parameters depending on the physical and geographical characteristics of the network, working hours, and the evolution of loads over the day. The overall model is cast into a non-Markovian variant of stochastic Petri nets, which allows concurrent execution of multiple generally distributed transitions but maintains a complexity independent of network size and topology. The solution is achieved through an interleaved execution of fluid-dynamic analysis of the network and analytic solution of the stochastic model of the procedure. It provides availability measures for individual sections of the network as well as global quality of service parameters.

Laura Carnevali, Marco Paolieri, Fabio Tarani, Enrico Vicario, Kumiko Tadano
Quantification of the Impact of Cyber Attack in Critical Infrastructures

In this paper we report on a recent study of the impact of cyber-attacks on the resilience of complex industrial systems. We describe our approach to building a hybrid model consisting of both the system under study and an Adversary, and we demonstrate its use on a complex case study - a reference power transmission network (NORDIC 32), enhanced with a detailed model of the computer and communication system used for monitoring, protection and control. We studied the resilience of the modelled system under different scenarios: i) a base-line scenario in which the modelled system operates in the presence of accidental failures without cyber-attacks; ii) scenarios in which cyber-attacks can occur. We discuss the usefulness of our findings and outline directions for further work.

Oleksandr Netkachov, Peter Popov, Kizito Salako
Probabilistic Inference in the Physical Simulation of Interdependent Critical Infrastructure Systems

One of the main tasks that can be performed with a Bayesian Network (BN) is the probabilistic inference of unobserved values given evidence. Recently, a framework for the physical simulation of critical infrastructures was introduced, accounting for interdependencies and uncertainty; this framework includes modeling the interconnected components of a critical infrastructure network as a BN. In this paper we address the problem of the triangulation of the resulting BN, which is the first step in many exact inference algorithms.

Paolo Franchin, Luigi Laura
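
For context, triangulation is usually preceded by moralization of the BN's DAG, and a common heuristic is greedy min-fill elimination. The sketch below shows both steps on a small invented interdependency fragment; it illustrates the standard textbook procedure, not the specific triangulation strategy studied in the paper.

```python
from itertools import combinations

def moral_graph(parents):
    """Moralize a BN: connect each node to its parents and 'marry' co-parents."""
    adj = {v: set() for v in parents}
    for child, ps in parents.items():
        for p in ps:
            adj[child].add(p)
            adj[p].add(child)
        for a, b in combinations(ps, 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def triangulate_min_fill(adj):
    """Greedy min-fill: repeatedly eliminate the node whose elimination adds
    the fewest fill-in edges; return the fill-in edges that were added."""
    work = {v: set(ns) for v, ns in adj.items()}
    fill_edges = []
    while work:
        def fill_cost(v):
            return sum(1 for a, b in combinations(work[v], 2) if b not in work[a])
        v = min(work, key=fill_cost)
        for a, b in combinations(work[v], 2):
            if b not in work[a]:
                work[a].add(b)
                work[b].add(a)
                fill_edges.append((a, b))
        for n in work[v]:
            work[n].discard(v)
        del work[v]
    return fill_edges

# Invented interdependency fragment (a power grid feeding a water network).
bn_parents = {"gen": [], "grid": ["gen"], "scada": ["gen"],
              "pump": ["grid"], "valve": ["scada"], "pressure": ["pump", "valve"]}
print(triangulate_min_fill(moral_graph(bn_parents)))  # two fill-in edges added
```
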
Energy-Based Detection of Multi-layer Flooding Attacks on Wireless Sensor Network

Ensuring cyber security in a Wireless Sensor Network (WSN) is a challenging task, since nodes are devices with very limited resources. Existing Intrusion Detection System (IDS) solutions either ensure protection from attacks at one specific OSI layer, or they ensure multi-layer protection but at higher computational cost. In this work we propose a new solution which aims at detecting attacks at different OSI layers while minimizing the number of features required to perform intrusion detection activities on a WSN node. We consider a multi-layer flooding attack performed at the routing and application layers; our experimental tests show that a high correlation exists between the features of these attacks available at the corresponding layers and energy consumption. This allows energy consumption to be used as the only feature to detect both attacks, even though they are performed at different OSI layers.

Cesario Di Sarno, Alessia Garofalo
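
The detection principle described above can be illustrated with a very small sketch: learn a baseline of per-window node energy consumption under normal traffic and flag windows that deviate by more than a few standard deviations. The numbers and threshold below are hypothetical; the paper's detector and feature extraction are more elaborate.

```python
import statistics

def train_energy_detector(normal_window_energy_mj, threshold_sigmas=3.0):
    """Learn a baseline of per-window node energy consumption under normal
    traffic and return a detector that flags windows deviating from it."""
    mean = statistics.mean(normal_window_energy_mj)
    stdev = statistics.pstdev(normal_window_energy_mj)
    def is_anomalous(window_energy_mj):
        return abs(window_energy_mj - mean) > threshold_sigmas * stdev
    return is_anomalous

# Hypothetical per-second energy samples (mJ) from a WSN node under normal load.
detect = train_energy_detector([2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.2])
print(detect(2.2))  # normal traffic                      -> False
print(detect(4.8))  # routing/application-layer flooding  -> True
```
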
Towards a Non-intrusive Recognition of Anomalous System Behavior in Data Centers

In this paper we propose a monitoring system for a data center that is able to infer when the data center is entering an anomalous behavior by analyzing the power consumption of each server and the data center network traffic. The monitoring system is non-intrusive in the sense that there is no need to install software on the data center servers. The monitoring architecture embeds two Elman Recurrent Neural Networks (RNNs) to predict the power consumed by each data center component starting from the data center network traffic, and vice versa. Results obtained over six months of experiments within a data center show that the architecture is able to distinguish anomalous system behaviors from normal ones by analyzing the error between the actual values of power consumption and network traffic and the values inferred by the two RNNs.

Roberto Baldoni, Adriano Cerocchi, Claudio Ciccotelli, Alessandro Donno, Federico Lombardi, Luca Montanari
Toward Resilience Assessment in Business Process Architectures

This paper investigates options to assess the resilience of business process architectures, thereby connecting the two hitherto unconnected areas of Business Process Management and Information System Resilience. The overarching goal is to provide for robust and reliable business process execution even under adverse and unexpected situations. Specifically, this paper focuses on one particular resilience indicator as a basis for assessment, namely time, because timeliness and the time behavior of activities in business processes directly mirror the effects and impacts of a changing environment on the business process. We develop an approach based on process mining that analyzes the event logs generated during the execution of processes and extracts probability distributions of a process’s time behavior to model the effects of occurred events. A case study substantiates the applicability of the approach.

Richard M. Zahoransky, Thomas Koslowski, Rafael Accorsi
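
A minimal sketch of the time-behavior extraction step described above, assuming an event log with explicit start and end timestamps per activity instance (the log format and process are invented for illustration; the authors' approach additionally fits probability distributions and relates them to resilience indicators):

```python
from collections import defaultdict
from datetime import datetime
import statistics

def activity_time_profile(event_log):
    """Derive a per-activity duration profile (mean and standard deviation in
    seconds) from completed activity instances in an event log."""
    durations = defaultdict(list)
    for case_id, activity, start_iso, end_iso in event_log:
        start = datetime.fromisoformat(start_iso)
        end = datetime.fromisoformat(end_iso)
        durations[activity].append((end - start).total_seconds())
    return {activity: (statistics.mean(values),
                       statistics.pstdev(values) if len(values) > 1 else 0.0)
            for activity, values in durations.items()}

# Invented excerpt of an order-handling event log: (case, activity, start, end).
log = [
    ("c1", "check_order", "2014-09-08T09:00:00", "2014-09-08T09:05:00"),
    ("c2", "check_order", "2014-09-08T09:10:00", "2014-09-08T09:22:00"),
    ("c1", "ship_order",  "2014-09-08T10:00:00", "2014-09-08T10:40:00"),
]
print(activity_time_profile(log))
# -> {'check_order': (510.0, 210.0), 'ship_order': (2400.0, 0.0)}
```
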

Next Generation of System Assurance Approaches for Safety-Critical Systems (SASSUR’14)

Introduction to SASSUR 2014

The interest in and need for new safety assurance and certification approaches is undoubtedly increasing. First of all, critical systems are becoming more pervasive every day. They are used for a wide range of daily activities related to transportation, healthcare, or energy consumption, and for increasingly novel applications; fully implantable artificial hearts and unmanned aerial vehicles are just two examples. Society increasingly depends on these systems and on their safe operation. At the same time, safety assurance and certification are becoming increasingly complex. This is a result of, for instance, the evolution of regulatory practice, the increase in the size and complexity of the systems, the need for holistic assessment of cyber-physical systems, and the application of new technologies enabling features such as autonomous, cooperative, or self-adaptive system behaviour. In addition, the application of new technologies in safety-critical systems potentially introduces new vulnerabilities that are not yet known but could affect safety integrity.

Alejandra Ruiz, Tim Kelly, Jose Luis de la Vara
Assuring Avionics – Updating the Approach for the 21st Century

This position paper outlines a number of challenges currently faced by the aerospace community in addressing system, software, and hardware safety. These challenges include increasing complexity, lagging regulatory guidance, a divergent set of design assurance guidelines, and ever advancing technology. To address these challenges, four recommendations are offered: consolidation of design assurance, increased resiliency in product design, a move to less prescriptive standards in favor of a goal-based approach, and the imposition of personnel qualification.

Tom Ferrell, Uma Ferrell
Rethinking of Strategy for Safety Argument Development

A ‘strategy’ in Goal Structuring Notation (GSN) aims to help safety-case developers and reviewers understand the inferences in a hierarchy of safety claims. However, the identification and elaboration of ‘strategies’ in argument development is not always straightforward in practice. In this paper, we revisit the role of strategies in the development of safety cases and examine the application of strategies in some existing argument structures. Four main sources of information are identified as the basis of strategy formulation. A list of generic strategy types for argument decomposition and refinement is analysed in order to facilitate the safety case development and review processes for assuring system safety.

Linling Sun, Nuno Silva, Tim Kelly
Towards a Cross-Domain Software Safety Assurance Process for Embedded Systems

In this work, we outline a cross-domain assurance process for safety-relevant software in embedded systems. This process aims to be applicable in various application domains and in conjunction with any development methodology. With this approach, we plan to reduce the growing effort for safety assessment in embedded systems by reusing safety analysis techniques and tools for product development in different domains.

Marc Zeller, Kai Höfig, Martin Rothfelder
A Software Safety Verification Method Based on System-Theoretic Process Analysis

Modern safety-critical systems are increasingly reliant on software. Software safety is an important aspect of developing safety-critical systems, and it must be considered in the context of the system into which the software will be embedded. STPA (System-Theoretic Process Analysis) is a modern safety analysis approach which aims to identify potential hazardous causes in complex safety-critical systems at the system level. To assure that these hazardous causes of unsafe software behaviour cannot happen, safety verification involves demonstrating that the software fulfills its safety requirements and will not result in a hazardous state. We propose a method for verifying software safety requirements which are derived at the system level, to provide evidence that the hazardous causes cannot occur (or that the associated risk is reduced to an acceptably low level). We applied the method to a cruise control prototype to show the feasibility of the proposed method.

Asim Abdulkhaleq, Stefan Wagner
Quantifying Uncertainty in Safety Cases Using Evidential Reasoning

Dealing with uncertainty is an important and difficult aspect of the analysis and assessment of complex systems. A real-time, large-scale, complex critical system involves many uncertainties, and assigning probabilities to represent these uncertainties is itself a complex task. Currently, the certainty with which safety requirements are satisfied, and the consideration of other confidence factors, often remain implicit in the assessment process. Many publications in the past have detailed the structure and content of safety cases and Goal Structuring Notation (GSN); this paper does not intend to repeat them. Instead, this paper outlines a novel solution to accommodate uncertainty in safety case development and assessment using the Evidential Reasoning approach, a mathematical technique for reasoning about uncertainty and evidence. The proposed solution is a bottom-up approach that first performs low-level evidence assessments that make any uncertainty explicit, and then automatically propagates this confidence up to the higher-level claims. The solution would enable safety assessors and managers to accurately summarise their judgement and make doubt or ignorance explicit.

Sunil Nair, Neil Walkinshaw, Tim Kelly
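
Evidential Reasoning builds on Dempster-Shafer theory. As a rough illustration of how low-level evidence assessments with explicit ignorance can be combined, the sketch below applies Dempster's rule of combination to two hypothetical assessments of a single safety claim; the ER-based propagation up a GSN structure described in the paper is more involved.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination: masses are dicts mapping frozenset
    hypotheses to belief mass; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        common = a & b
        if common:
            combined[common] = combined.get(common, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict between evidence sources")
    return {h: mass / (1.0 - conflict) for h, mass in combined.items()}

SAT, UNSAT = frozenset({"satisfied"}), frozenset({"not_satisfied"})
THETA = SAT | UNSAT  # explicit ignorance: the claim may be either

# Two hypothetical low-level evidence assessments for one safety claim.
review  = {SAT: 0.6, THETA: 0.4}               # code review: mostly supportive
testing = {SAT: 0.7, UNSAT: 0.1, THETA: 0.2}   # testing: supportive, some doubt
print(combine(review, testing))
# belief in 'satisfied' rises to ~0.87, ignorance shrinks to ~0.09
```
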
Metamodel Comparison and Model Comparison for Safety Assurance

In safety-critical domains, conceptual models are created in the form of metamodels using different concepts from possibly overlapping domains. Comparison between those conceptual models can facilitate the reuse of models from one domain to another. This paper describes the mappings detected when comparing metamodels and models used for safety assurance. We use a small use case to discuss the mappings between metamodels and models, and the relations between model elements expressed in mappings. Finally, an illustrative case study is used to demonstrate our approach.

Yaping Luo, Luc Engelen, Mark van den Brand
Does Visualization Speed Up the Safety Analysis Process?

The goal of this paper is to present our experience in utilizing the power of the information visualization (InfoVis) field to accelerate the safety analysis process of Component Fault Trees (CFT) in embedded systems. For this, we designed and implemented an interactive visual tool called ESSAVis, which takes the CFT model as input and then calculates the required safety information (e.g., the information on minimal cut sets and their probabilities) that is needed to measure the safety criticality of the underlying system. ESSAVis uses this information to visualize the CFT model and allows users to interact with the produced visualization in order to extract the relevant information in a visual form. We compared ESSAVis with ESSaRel, a tool that models the CFT and represents the analysis results in textual form. We conducted a controlled user evaluation study where we invited 25 participants from different backgrounds, including 6 safety experts, to perform a set of tasks to analyze the safety aspects of a given system in both tools. We compared the results in terms of accuracy, efficiency, and level of user acceptance. The results of our study show a high acceptance ratio and higher accuracy with better performance for ESSAVis compared to the text-based tool ESSaRel. Based on the study results, we conclude that visual-based tools really help in analyzing the CFT model more accurately and efficiently. Moreover, the study opens the door to thoughts about how the power of visualization can be utilized in such domains to accelerate the safety assurance process in embedded systems.

Ragaad AlTarawneh, Max Steiner, Davide Taibi, Shah Rukh Humayoun, Peter Liggesmeyer
Agile Change Impact Analysis of Safety Critical Software

Change Impact Analysis (CIA) is an important task for all who develop and maintain safety-critical software. Many of the safety standards used in the development and use of systems with a certified safety integrity level (SIL) require changes to such systems to be initiated by a CIA. The resulting CIA report will identify planned changes that may threaten the existing safety level. The challenge with CIA is that there are no practical guidelines on how to conduct and report such an analysis. This has led to a practice where most changes lead to extensive up-front analysis that may be costly and delay the change process itself. In this paper we propose a new strategy for CIA, based on the principles of agile software development and the SafeScrum approach, to establish a more efficient in-process impact analysis. We discuss several benefits of this approach, such as resource savings, shorter time to initiate the change process, and better prioritization and management of the change process, among others.

Tor Stålhane, Geir Kjetil Hanssen, Thor Myklebust, Børge Haugset
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security
Edited by
Andrea Bondavalli
Andrea Ceccarelli
Frank Ortmeier
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-10557-4
Print ISBN
978-3-319-10556-7
DOI
https://doi.org/10.1007/978-3-319-10557-4