2011 | Book

Computer Safety, Reliability, and Security

30th International Conference, SAFECOMP 2011, Naples, Italy, September 19-22, 2011. Proceedings

Edited by: Francesco Flammini, Sandro Bologna, Valeria Vittorini

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 30th International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2011, held in Naples, Italy, in September 2011. The 34 full papers presented were carefully reviewed and selected from 100 submissions. The papers are organized in topical sections on RAM evaluation, complex systems dependability, formal verification, risk and hazard analysis, cybersecurity, and optimization methods.

Table of contents

Frontmatter

Session 1: RAM Evaluation 1

The Effect of Correlated Failure Rates on Reliability of Continuous Time 1-Out-of-2 Software

In this paper we study the effects on system reliability of the correlation over input space partitions between the rates of failure of two-channel fault-tolerant control software. We use a continuous-time semi-Markov model to describe the behavior of the system. We demonstrate via simulation that the variation of the failure rates of the channels over the partitions of the input space can affect system reliability very significantly. With a plausible range of model parameters we observed that the mean time to system failure may vary by more than an order of magnitude: positive correlation between the channel rates makes the system less reliable, while negative correlation between the channel rates implies that the system is more reliable than assuming constant failure rates for the channels. Our observations seem to make a case for more detailed reliability measurements than are typically undertaken in practice.

Peter Popov, Gabriele Manno
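
A minimal Monte Carlo sketch of this effect (not the paper's semi-Markov model; all partition probabilities, rates and sojourn times below are hypothetical): two channels have identical marginal failure-rate profiles, but the rates are paired either positively or negatively across the partitions visited by the operational profile.

    import random

    # Hypothetical input-space partitions, visited with equal probability, with a
    # mean sojourn of 100 hours in each. Both channels use the same set of rates
    # (failures/hour); only the pairing across partitions differs.
    rates_A  = [1e-4, 5e-4, 2e-3]
    pair_pos = [1e-4, 5e-4, 2e-3]   # channel B high/low in the same partitions as A
    pair_neg = [2e-3, 5e-4, 1e-4]   # channel B anti-aligned with A

    def mttf_1oo2(rates_a, rates_b, runs=5000, mean_sojourn=100.0):
        """MTTF of a 1-out-of-2 system: it fails once both channels have failed.
        The profile dwells in one partition at a time; within a sojourn each
        surviving channel fails after an exponential delay with that partition's rate."""
        total = 0.0
        for _ in range(runs):
            t, fail_a, fail_b = 0.0, None, None
            while fail_a is None or fail_b is None:
                i = random.randrange(len(rates_a))
                stay = random.expovariate(1.0 / mean_sojourn)
                if fail_a is None:
                    d = random.expovariate(rates_a[i])
                    if d < stay:
                        fail_a = t + d
                if fail_b is None:
                    d = random.expovariate(rates_b[i])
                    if d < stay:
                        fail_b = t + d
                t += stay
            total += max(fail_a, fail_b)
        return total / runs

    print("MTTF, positively correlated rates:", mttf_1oo2(rates_A, pair_pos))
    print("MTTF, negatively correlated rates:", mttf_1oo2(rates_A, pair_neg))

With fixed marginal rates, the positive pairing tends to shorten the time until the second channel also fails, while the negative pairing lengthens it, which is the qualitative behaviour reported above.
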
Model-Driven Availability Evaluation of Railway Control Systems

Maintenance of real-world systems is a complex task involving several actors, procedures and technologies. Proper approaches are needed in order to evaluate the impact of different maintenance policies considering cost/benefit factors. To that aim, maintenance models may be used within availability, performability or safety models, the latter developed using formal languages according to the requirements of international standards. In this paper, a model-driven approach is described for the development of formal maintenance and reliability models for the availability evaluation of repairable systems. The approach facilitates the use of formal models which would otherwise be difficult to manage, and provides the basis for automated model construction. Starting from an extension of the MARTE-DAM profile for dependability analysis to maintenance aspects, an automated process based on model-to-model transformations is described. The process is applied to generate a Repairable Fault Trees model from the MARTE-DAM specification of the Radio Block Centre, a modern railway controller.

Simona Bernardi, Francesco Flammini, Stefano Marrone, José Merseguer, Camilla Papa, Valeria Vittorini

Session 2: Complex Systems Dependability 1

Vertical Safety Interfaces – Improving the Efficiency of Modular Certification

Modular certification is a technique for transferring the modularity of an embedded system’s architecture to the traditionally monolithic craft of safety engineering. Particularly when applying integrated architectures like AUTOSAR or IMA, modular certification allows the construction of modular safety cases, which ensures the flexible handling of platforms and applications. However, the task of integrating these safety cases is still a manual and expensive endeavor, lowering the intended flexibility of an integrated architecture. We propose a tool-supported semi-automatic integration method that preserves the architecture’s flexibility and helps to lower the integration costs. Our method is based on a language capable of specifying the conditions for a valid integration of a platform and of an application using a contract-based approach to model safety case interfaces. This paper presents the language in detail.

Bastian Zimmer, Susanne Bürklen, Michael Knoop, Jens Höfflinger, Mario Trapp
DALculus – Theory and Tool for Development Assurance Level Allocation

The Development Assurance Level (DAL) indicates the level of rigor of the development of a software or hardware function of an aircraft. We propose a theory formalizing the DAL allocation rules found in the ARP4754a recommended practices. A tool implementing this theory was developed in order to assist the safety specialists when checking or optimizing a DAL allocation.

Pierre Bieber, Rémi Delmas, Christel Seguin
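
A rough illustration of the kind of rule being formalized: the ARP4754A allocation options are often paraphrased as requiring, for each functional failure set of a failure condition allocated level X, either one member at level X (option 1) or at least two independent members no more than one level below X (option 2), with no member more than two levels below X. The sketch below encodes only this paraphrase for a single failure set; member independence is assumed, and the paper's theory is considerably richer.

    # DAL levels ordered from most to least stringent.
    LEVELS = ["A", "B", "C", "D", "E"]

    def gap(dal, top):
        """Levels by which a member's DAL sits below the top-level DAL
        (0 if it is at or above the top level)."""
        return max(0, LEVELS.index(dal) - LEVELS.index(top))

    def allocation_ok(top_dal, member_dals):
        """Check one functional failure set against the paraphrased options."""
        gaps = [gap(d, top_dal) for d in member_dals]
        if any(g > 2 for g in gaps):          # no member more than two levels below
            return False
        option1 = any(g == 0 for g in gaps)               # one member keeps the top-level DAL
        option2 = sum(1 for g in gaps if g <= 1) >= 2     # two members at most one level below
        return option1 or option2

    # A catastrophic failure condition (top-level DAL A) with two independent functions:
    print(allocation_ok("A", ["B", "B"]))   # True  (option 2)
    print(allocation_ok("A", ["B", "C"]))   # False (neither option holds)
    print(allocation_ok("A", ["A", "C"]))   # True  (option 1)
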
Towards Cross-Domains Model-Based Safety Process, Methods and Tools for Critical Embedded Systems: The CESAR Approach

The CESAR project aims at elaborating a Reference Technology Platform usable across several application domains (Aeronautics, Automotive, Industrial Automation, Railway and Space) for the cost-effective development and validation of safety-related embedded systems. Safety and, more generally, dependability are therefore major topics addressed by the project. This paper focuses on the work performed on safety requirements and approaches to be supported by a common Reference Technology Platform. We analyse and compare industrial practice, applicable standards and the state of the art so as to identify which safety views should be supported and how. We focus in particular on the major axes investigated by the project: formal model-based techniques for requirements engineering and component-based engineering. Preliminary realisations and case studies confirm the interest of the approach and provide refined requirements for the final version of the platform.

Jean-Paul Blanquart, Eric Armengaud, Philippe Baufreton, Quentin Bourrouilh, Gerhard Griessnig, Martin Krammer, Odile Laurent, Joseph Machrouh, Thomas Peikenkamp, Cecile Schindler, Tormod Wien

Session 3: Formal Verification 1

From Probabilistic Counterexamples via Causality to Fault Trees

In recent years, several approaches to generate probabilistic counterexamples have been proposed. The interpretation of stochastic counterexamples, however, continues to be problematic since they have to be represented as sets of paths, and the number of paths in this set may be very large. Fault trees (FTs) are a well-established industrial technique to represent causalities for possible system hazards resulting from system or system component failures. In this paper we suggest a method to automatically derive FTs from counterexamples, including a mapping of the probability information onto the FT. We extend the structural equation approach by Pearl and Halpern, which is based on Lewis counterfactuals, so that it serves as a justification for the causality that our proposed FT derivation rules imply. We demonstrate the usefulness of our approach by applying it to an industrial case study.

Matthias Kuntz, Florian Leitner-Fischer, Stefan Leue
Rigorous Evidence of Freedom from Concurrency Faults in Industrial Control Software

In the power generation industry, digital control systems may play an important role in plant safety. Thus, these systems are the object of rigorous analyses and safety assessments. In particular, the quality, correctness and dependability of control system software need to be justified. This paper reports on the development of a tool-based methodology to address the demonstration of freedom from intrinsic software faults related to concurrency and synchronization, and its practical application to an industrial control software case study. We describe the underlying theoretical foundations, the main mechanisms involved in the tools, and the main results and lessons learned from this work. An important conclusion of the paper is that the verification techniques and tools used scale efficiently and accurately to industrial control system software, which is a major requirement for real-life safety assessments.

Richard Bonichon, Géraud Canet, Loïc Correnson, Eric Goubault, Emmanuel Haucourt, Michel Hirschowitz, Sébastien Labbé, Samuel Mimram

Session 4: Risk and Hazard Analysis

Evolutionary Risk Analysis: Expert Judgement

New systems and functionalities are continuously deployed in complex domains such as Air Traffic Management (ATM). Unfortunately, methodologies provide limited support for dealing with changes and assessing their impacts on critical features (e.g. safety, security, etc.). This paper is concerned with how change requirements affect security properties. A change requirement is a specification of changes that are to be implemented in a system. The paper reports on our experience in supporting an evolutionary risk analysis in order to assess change requirements and their impacts on security properties. In particular, this paper discusses how changes to structured risk analysis models are perceived by domain experts, by presenting insights from a risk assessment exercise that uses the CORAS model-driven risk analysis in an ATM case study. It discusses how structured models supporting risk analysis help domain experts to analyse and assess the impact of changes on critical system features.

Massimo Felici, Valentino Meduri, Bjørnar Solhaug, Alessandra Tedeschi
Computer-Aided PHA, FTA and FMEA for Automotive Embedded Systems

The shift of the automotive industry towards powertrain electrification introduces new automotive sensors, actuators and functions that lead to an increasing complexity of automotive embedded systems. The safety-criticality of these systems demands the application of analysis techniques such as PHA (Preliminary Hazard Analysis), FTA (Fault Tree Analysis) and FMEA (Failure Modes and Effects Analysis) in the development process. The early application of PHA makes it possible to identify and classify hazards and to define top-level safety requirements. Building on this, the application of FTA and FMEA supports the verification of a system architecture defining an embedded system together with connected sensors and controlled actuators. This work presents a modeling framework with automated analysis and synthesis capabilities that supports a safety engineering workflow using the domain-specific language EAST-ADL. The contribution of this work is (1) the definition of properties that indicate the correct application of the workflow using the language. The properties and a model integrating the work products of the workflow are used for the automated detection of errors (property checker) and the automated suggestion and application of corrective measures (model corrector). Furthermore, (2) fault trees and an FMEA table can be automatically synthesized from the same model. The applicability of this computer-aided and tightly integrated approach is evaluated using the case study of a hybrid electric vehicle development.

Roland Mader, Eric Armengaud, Andrea Leitner, Christian Kreiner, Quentin Bourrouilh, Gerhard Grießnig, Christian Steger, Reinhold Weiß

Session 5: Cybersecurity

A Statistical Anomaly-Based Algorithm for On-line Fault Detection in Complex Software Critical Systems

The next generation of software systems in Large-scale Complex Critical Infrastructures (LCCIs) requires efficient runtime management and reconfiguration strategies, and the ability to take decisions on the basis of current and past behavior of the system. In this paper we propose an anomaly-based approach for on-line fault detection, which is able to (i) cope with highly variable and non-stationary environments and (ii) work without any initial training phase. The novel algorithm is based on Statistical Predictor and Safety Margin (SPS), which was initially developed to estimate the uncertainty in time synchronization mechanisms.

The SPS anomaly detection algorithm has been evaluated on a case study from the Air Traffic Management (ATM) domain. Results have been compared with those of an algorithm that adopts static thresholds in the same scenarios [5]. Experimental results show the limitations of static thresholds in highly variable scenarios, and the ability of SPS to fulfill the expectations.

Antonio Bovenzi, Francesco Brancati, Stefano Russo, Andrea Bondavalli
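
The contrast with static thresholds can be sketched as follows; SPS itself builds its bound from a statistical predictor plus a safety margin, so the toy detector below (rolling mean plus a multiple of the rolling standard deviation, with made-up data and parameters) only illustrates the general idea of a self-adjusting threshold that needs no training phase.

    from collections import deque
    from statistics import mean, stdev

    def static_detector(samples, threshold):
        """Flag every sample exceeding a fixed threshold."""
        return [i for i, x in enumerate(samples) if x > threshold]

    def adaptive_detector(samples, window=30, k=3.0):
        """Flag samples deviating from a rolling estimate of normal behaviour;
        the window is filled on-line as samples arrive (no training phase)."""
        history, anomalies = deque(maxlen=window), []
        for i, x in enumerate(samples):
            if len(history) >= 5:
                mu, sigma = mean(history), stdev(history)
                if abs(x - mu) > k * max(sigma, 1e-9):
                    anomalies.append(i)
            history.append(x)
        return anomalies

    # Made-up monitoring signal whose baseline drifts upwards over time:
    signal = [10 + 0.2 * t for t in range(200)]
    signal[150] += 25                                # an actual anomaly
    print("static threshold:", len(static_detector(signal, 40)), "alarms")  # whole drifted tail
    print("adaptive:", adaptive_detector(signal))                           # roughly just the spike
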
Security Analysis of Smart Grid Data Collection Technologies

In the last few years we have been witnessing a dramatic increase in cyber-attacks targeted against Critical Infrastructures. Attacks against Critical Infrastructures are especially dangerous because they are tailored to disrupt assets which are essential to the functioning of society as a whole. Examples of Critical Infrastructure sectors include transportation, communication, and utilities. Among these, power grids are possibly the most critical, due to the strong dependency of virtually all Critical Infrastructures on the power infrastructure. We have conducted a security analysis of two key technologies which enable data collection in Power Grids, namely synchrophasor devices and Phasor Data Concentrators. We emphasize that the study has been conducted on a commercial synchrophasor produced by a major vendor, and on a widely used open source product for the Phasor Data Concentrator application. We describe the experimental setup, present the main results, and comment on the findings of our research.

Luigi Coppolino, Salvatore D’Antonio, Ivano Alessandro Elia, Luigi Romano

Session 6: RAM Evaluation 2

Modeling Aircraft Operational Reliability

The success of an aircraft mission is subject to the fulfillment of some operational requirements before and during each flight. As these requirements depend essentially on the aircraft system components and the mission profile, the effects of failures can be very significant if they are not anticipated. Hence, one should be able to assess the aircraft operational reliability with regard to its missions in order to be able to cope with failures. This paper addresses aircraft operational reliability modeling to support maintenance planning during mission accomplishment. We develop a modeling approach to represent the aircraft system operational state, taking into account the mission profile as well as the maintenance facilities available at the flight stop locations involved in the mission. It is illustrated on an aircraft subsystem using the Stochastic Activity Networks (SAN) formalism.

Kossi Tiassou, Karama Kanoun, Mohamed Kaâniche, Christel Seguin, Chris Papadopoulos
An Integrated Approach for Availability and QoS Evaluation in Railway Systems

Prediction of service availability in railway systems requires increasing attention from designers and operators in order to satisfy the acceptable service quality levels offered to passengers. For this reason it is necessary to reach high availability standards, relying on highly dependable system components or identifying effective operational strategies aimed at mitigating failure effects. To this purpose, an innovative architecture is proposed in this paper to simulate railway operation in order to conduct different kinds of analysis. This architecture encompasses a set of components considering, in an integrated way, several system features. Finally, an application to a first case study demonstrates the impact of different recovery strategies on quality of service and service availability. The complexity of a railway system requires a heterogeneous working group composed of experts in the transport and computer science areas, with the support of industry.

Antonino Mazzeo, Nicola Mazzocca, Roberto Nardone, Luca D’Acierno, Bruno Montella, Vincenzo Punzo, Egidio Quaglietta, Immacolata Lamberti, Pietro Marmo

Session 7: Case Studies

Using a Software Safety Argument Pattern Catalogue: Two Case Studies

Software safety cases encourage developers to carry out only those safety activities that actually reduce risk. In practice this is not always achieved. To help remedy this, the SSEI at the University of York has developed a set of software safety argument patterns. This paper reports on using the patterns in two real-world case studies, evaluating the patterns' use against criteria that include flexibility, ability to reveal assurance deficits and ability to focus the case on software contributions to hazards. The case studies demonstrated that the safety patterns can be applied to a range of system types regardless of the stage or type of development process, that they help limit safety case activities to those that are significant for achieving safety, and that they help developers find assurance deficits in their safety case arguments. The case study reports discuss the difficulties of applying the patterns, particularly in the case of users who are unfamiliar with the approach, and the authors recognise in response the need for better instructional material. But the results show that, as part of the development of best practice in safety, the patterns promise significant benefits to industrial safety case creators.

Richard Hawkins, Kester Clegg, Rob Alexander, Tim Kelly
Integration of a System for Critical Infrastructure Protection with the OSSIM SIEM Platform: A dam case study

In recent years the monitoring and control devices in charge of supervising the critical processes of Critical Infrastructures have been victims of cyber attacks. To face such threats, organizations providing critical services are increasingly focusing on protecting their network infrastructures. Security Information and Event Management (SIEM) frameworks support network protection by performing centralized correlation of network asset reports. In this work we propose an extension of a commercial SIEM framework, namely OSSIM by AlienVault, to perform the analysis of the reports (events) generated by the monitoring, control and security devices of the dam infrastructure. Our objective is to obtain evidence of misuses and malicious activities occurring at the dam monitoring and control system, since they can result in issuing hazardous commands to control devices. We present examples of misuses and malicious activities and procedures to extend OSSIM for analyzing new event types.

Luigi Coppolino, Salvatore D’Antonio, Valerio Formicola, Luigi Romano
A Case Study on State-Based Robustness Testing of an Operating System for the Avionic Domain

This paper investigates the impact of state on robustness testing, by enhancing the traditional approach with the inclusion of the OS state in test case definition. We evaluate the relevance of OS state and the effects of the proposed strategy through an experimental campaign on the file system of a Linux-based OS, to be adopted by Finmeccanica for safety-critical systems in the avionic domain. Results show that the OS state plays an important role in testing those corner cases not covered by traditional robustness testing.

Domenico Cotroneo, Domenico Di Leo, Roberto Natella, Roberto Pietrantuono

Session 8: Formal Verification 2

Formal Methods for the Certification of Autonomous Unmanned Aircraft Systems

In this paper we assess the feasibility of using formal methods, and model checking in particular, for the certification of Unmanned Aircraft Systems (UAS) within civil airspace. We begin by modelling a basic UAS control system in Promela, and verify it against a selected subset of the CAA's Rules of the Air using the Spin model checker. Next we build a more advanced UAS control system using the autonomous agent language Gwendolen, and verify it against the small subset of the Rules of the Air using the agent model checker AJPF. We introduce more advanced autonomy into the UAS agent and show that this too can be verified. Finally we compare and contrast the various approaches, discuss the paths towards full certification, and present directions for future research.

Matt Webster, Michael Fisher, Neil Cameron, Mike Jump
Verifying Functional Behaviors of Automotive Products in EAST-ADL2 Using UPPAAL-PORT

We study the use of formal modeling and verification techniques at an early stage in the development of safety-critical automotive products which are originally described in the domain-specific architectural language EAST-ADL2. This architectural language focuses only on the structural definition of functional blocks. However, the behavior inside each functional block is not specified, and that limits formal modeling and analysis of system behaviors as well as efficient verification of safety properties. In this paper, we tackle this problem by proposing a modeling approach which formally captures the behavioral execution inside each functional block and the interactions between blocks, and helps to improve the formal modeling and verification capability of EAST-ADL2: the behavior of each elementary function of EAST-ADL2 is specified in UPPAAL Timed Automata. The formal syntax and semantics are defined in order to specify the behavior models inside EAST-ADL2 and their interactions. A composition of the functional behaviors is considered as a network of Timed Automata, which enables us to verify the behavior of the entire system using the UPPAAL model checker. The method is demonstrated by verifying the safety of a brake-by-wire system design.

Eun-Young Kang, Pierre-Yves Schobbens, Paul Pettersson

Poster Session

Establishing Confidence in the Usage of Software Tools in Context of ISO 26262

The development of safety-critical electric/electronic (E/E) automotive systems is carried out with an increasing number of software tools. Hence it is very important that software tool malfunctions do not have an impact on the final product. This paper proposes a systematic methodology to establish confidence in the usage of software tools. The approach has been developed on the basis of an industrial development project and is compliant with the framework required by the standard ISO 26262. The methodology is based on a multi-layered analysis that systematically identifies the risk of tool-introduced errors and error detection failures and allows for the derivation of the tool confidence level (TCL). The benefit of this methodology is to identify and reuse already existing verification measures in the development process for establishing confidence in the usage of software tools. Furthermore, the approach allows new verification measures to be introduced to optimize the overall development process.

Joachim Hillebrand, Peter Reichenpfader, Irenka Mandic, Hannes Siegl, Christian Peer
Fault-Based Generation of Test Cases from UML-Models – Approach and Some Experiences

In principle, automated test case generation, both from source code and from models, is a fairly evolved technology which is on the way to common use in industrial testing and quality assessment of safety-related, software-intensive systems. However, common coverage measures such as branch or MC/DC coverage for source code, and state or transition coverage for state-based models, provide only very limited information about the covered (implementation) faults. Fault-based test case generation tries to improve this situation by explicitly targeting the detection of faults. This paper describes an approach combining fault- and model-based testing which has been realized in the European project MOGENTES, using UML state machines for representing requirements, and discusses the results of its application to a use case from the automotive domain.

Rupert Schlick, Wolfgang Herzner, Elisabeth Jöbstl
ISO/IEC 15504-10: Motivations for Another Safety Standard

This paper presents the new standard ISO/IEC 15504 Part 10 Safety Extension. It has been developed to extend the well-known ISO/IEC 15504 standard for process assessment and improvement in order to make consistent judgments regarding process capability or improvement priorities for safety-related systems development. In order to avoid misunderstandings and to dispel reluctance and concerns related to such a new standard, its contents, purpose and intended use are explained in this paper. Moreover, a comparison between ISO/IEC 15504 Part 10 and other existing safety standards for software is provided, as well as a discussion on possible integrations and the consequent benefits of its usage.

Giuseppe Lami, Fabrizio Fabbrini, Mario Fusani
Automatic Synthesis of SRN Models from System Operation Templates for Availability Analysis

In order to cost-effectively verify whether system designs of information systems satisfy availability requirements, it is reasonable to utilize a model-based availability assessment of the system design containing administrative operation procedures and a system configuration, because it does not require installing and testing in a real environment. However, since model-based availability assessments typically require special expertise in mathematical modeling, it would be difficult for a practical system designer to build a correct availability model to assess his/her system design. Although there have been several methods to automatically synthesize the availability model from widely-used design description languages, the synthesized models do not capture the impacts of the operations in operation procedures on availability. To address this issue, this paper proposes a method to automatically synthesize an availability model in the form of a stochastic reward net (SRN) from Systems Modeling Language (SysML) diagrams that specify operation procedures and system configurations. Modeling all the features of individual operations is impractical because the amount of information required in the SysML diagrams input by system designers becomes larger as the number of features increases. To design the availability models with the smallest possible number of features, we classify typical availability-related features of operations into operation templates. The feasibility of the proposed method is studied by a case study based on a real system of a local government. We succeeded in synthesizing the availability models from the SysML diagrams based on an operation procedure and system configuration of the real system, and in analyzing the synthesized availability models with an existing model analysis tool.

Kumiko Tadano, Jiangwen Xiang, Masahiro Kawato, Yoshiharu Maeno
A Collaborative Event Processing System for Protection of Critical Infrastructures from Cyber Attacks

We describe an Internet-based collaborative environment that protects geographically dispersed organizations of a critical infrastructure (e.g., financial institutions, telco providers) from coordinated cyber attacks. A specific instance of a collaborative environment for detecting malicious inter-domain port scans is introduced. This instance uses the open source Complex Event Processing (CEP) engine Esper to correlate massive amounts of network traffic data exhibiting the evidence of those scans. The paper presents two inter-domain SYN port scan detection algorithms we designed, implemented in Esper, and deployed on the collaborative environment; namely, Rank-based SYN (R-SYN) and Line Fitting. The paper shows the usefulness of the collaboration in terms of detection accuracy. Finally, it shows how Line Fitting can both achieve a higher detection accuracy with a smaller number of participants than R-SYN, and exhibit better detection latencies than R-SYN in the presence of low link bandwidths (i.e., less than 3 Mbit/s) connecting the organizations to Esper.

Leonardo Aniello, Giuseppe Antonio Di Luna, Giorgia Lodi, Roberto Baldoni
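
Independently of the two algorithms, the raw signal they aggregate across domains can be illustrated by a toy single-site heuristic: a source that touches many distinct destination ports while completing few TCP handshakes is a likely scanner. The event format and thresholds below are made up; R-SYN and Line Fitting are considerably more elaborate.

    from collections import defaultdict

    def suspicious_sources(events, min_ports=10, max_completion=0.2):
        """events: (src, dst_port, handshake_completed) tuples seen in one time window.
        Flag sources that probed many distinct ports but completed few handshakes."""
        ports, completed, total = defaultdict(set), defaultdict(int), defaultdict(int)
        for src, dst_port, done in events:
            ports[src].add(dst_port)
            total[src] += 1
            completed[src] += int(done)
        return [src for src in ports
                if len(ports[src]) >= min_ports
                and completed[src] / total[src] <= max_completion]

    # One scanner probing ports 1..40, plus a normal client reusing port 443:
    events  = [("10.0.0.9", p, False) for p in range(1, 41)]
    events += [("192.168.1.5", 443, True)] * 20
    print(suspicious_sources(events))   # ['10.0.0.9']
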
A Fault-Tolerant, Dynamically Scheduled Pipeline Structure for Chip Multiprocessors

This paper presents a dynamically scheduled pipeline structure for chip multiprocessors (CMPs). This technique exploits the redundancy already present in Simultaneous Multithreading (SMT), superscalar chip multiprocessors to provide low-overhead, broad coverage of faults at the cost of some performance degradation. This pipeline structure operates in two modes: 1) high-performance and 2) highly-reliable. In high-performance mode, each core works as a real SMT, superscalar processor. The main contributions of the highly-reliable mode are: 1) to enhance the reliability of the system without adding extra redundancy strictly for fault tolerance, 2) to detect both transient and permanent faults, and 3) to recover from existing faults. The experimental results show that the diagnosis mechanism quickly and accurately diagnoses faults. The fault detection latency of this technique is equal to the pipeline length of the processor, while it provides high fault detection coverage. Moreover, the reliable processor can function quite capably in the presence of both transient and permanent faults, despite not using redundancy beyond what is already available in a modern microprocessor. Also, in the highly-reliable mode, the static and dynamic power consumption are reduced by 25% and 36%, respectively.

Hananeh Aliee, Hamid Reza Zarandi
FloGuard: Cost-Aware Systemwide Intrusion Defense via Online Forensics and On-Demand IDS Deployment

Detecting intrusions early enough can be a challenging and expensive endeavor. While intrusion detection techniques exist for many types of vulnerabilities, deploying them all to catch the small number of vulnerability exploitations that might actually exist for a given system is not cost-effective. In this paper, we present FloGuard, an on-line intrusion forensics and on-demand detector selection framework that provides systems with the ability to deploy the right detectors dynamically in a cost-effective manner when the system is threatened by an exploit. FloGuard relies on often easy-to-detect symptoms of attacks, e.g., participation in a botnet, and works backwards by iteratively deploying off-the-shelf detectors closer to the initial attack vector. The experiments using the EggDrop bot and systems with real vulnerabilities show that FloGuard can efficiently localize the attack origins even for unknown vulnerabilities, and can judiciously choose appropriate detectors to prevent them from being exploited in the future.

Saman Aliari Zonouz, Kaustubh R. Joshi, William H. Sanders
Reducing Complexity of Data Flow Testing in the Verification of an IEC-62304 Flexible Workflow System

In the development of software applications, the workflow abstraction gives primary relevance to the way in which a process can be accomplished through a sequence of connected steps. This largely conditions analysis, implementation architecture, and verification. In particular, testing activities are naturally oriented towards a data flow approach, which effectively exercises dependencies among steps. In several application scenarios, the workflow model cannot completely determine the sequencing of actions and must rather leave room for variability. While easily encompassed in both the analysis and implementation stages, this constitutes a major hurdle for the testing stage, due to the explosion in the number of allowed execution orders and paths.

We address the problem by reporting on the verification of the control software of a Computer Assisted Surgery system. In this case, the workflow abstraction captures the constraints of a medical protocol, and variability in the order of steps reflects dynamic adaptation of the course of actions to the specific characteristics of each patient. This largely increases the testing effort needed to fulfill the prescriptions of the IEC-62304 certification standard. To cope with the problem, we show how data flow analysis can be used to identify an appropriate set of constraints that can be exploited in the verification stage, so as to reduce the test suite while preserving coverage.

Federico Cruciani, Enrico Vicario
Improvement of Processes and Methods in Testing Activities for Safety-Critical Embedded Systems

In order to sustain competitiveness in the transport domain, especially in automotive, aerospace and rail, it is extremely important to control and optimize the entire development process of complex safety-critical embedded systems. In this context, the ARTEMIS EU project CESAR (Cost-Efficient methods and processes for SAfety Relevant embedded systems) aims to boost the cost efficiency of embedded systems development, safety and certification processes by an order of magnitude. We want to achieve the above target in the railway domain with particular emphasis on the Verification and Validation (V&V) process, whose activities, due to their complexity, require a significant amount of economic resources. Starting from an industrial use case (the On-Board Unit of the European Rail Traffic Management System Level 1, ERTMS L1), we provide a methodology that overcomes some weaknesses in testing processes. It supports requirements analysis and automatic test case generation, avoiding a computational explosion.

Giuseppe Bonifacio, Pietro Marmo, Antonio Orazzo, Ida Petrone, Luigi Velardi, Alessio Venticinque

Session 9: Formal Verification 3

On the Adoption of Model Checking in Safety-Related Software Industry

In the last fifteen years, model checking has been applied successfully in the design and verification of many safety-related software systems. However, it is not yet routinely adopted in the industry of safety-critical systems. In this paper we introduce the model checking technique and its relation to safety; we then survey the areas of research most relevant to the current and potential industrial application of this technique, exploring the current trends that, in our opinion, will lead to a wider adoption of model checking in the coming years.

Alessandro Fantechi, Stefania Gnesi
Equivalence Checking between Function Block Diagrams and C Programs Using HW-CBMC

Controllers in safety-critical systems such as nuclear power plants often use Function Block Diagrams (FBDs) to design embedded software. The design programs are translated into programming languages such as C in order to compile them into machine code for the particular target hardware. It is necessary to verify equivalence between the design and the implementation, because the implemented program should have the same behavior as the design. This paper introduces a technique for verifying equivalence between a design written in FBDs and its implementation written in the C language using HW-CBMC. To demonstrate the effectiveness of our proposal, as a case study, we used one of 18 shutdown logics in a prototype of the Advanced Power Reactor (APR-1400) Reactor Protection System (RPS) in Korea. Our approach is effective for checking equivalence between FBDs and ANSI-C programs, provided that the automatically generated Verilog program is translated into an appropriate input for HW-CBMC.

Dong-Ah Lee, Junbeom Yoo, Jang-Soo Lee
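
Stated abstractly, and leaving aside the specifics of the HW-CBMC front ends, the obligation behind such an equivalence check is that for every input sequence x_1, ..., x_k (up to an unwinding bound k) the scan-cycle outputs of the FBD design, taken via its Verilog translation, and of the C implementation coincide; the checker searches for a counterexample, i.e. for satisfiability of

    \exists\, x_1,\dots,x_k .\;\; \mathit{out}_{\mathrm{FBD}}(x_1,\dots,x_k) \neq \mathit{out}_{\mathrm{C}}(x_1,\dots,x_k),

and equivalence (up to the bound) is reported when this formula is unsatisfiable.
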
A Framework for Simulation and Symbolic State Space Analysis of Non-Markovian Models

Formal methods supporting the development of safety-critical systems require tools that can be integrated within composed environments. Sirio is a framework for the simulation and analysis of various timed extensions of Petri nets, supporting correctness verification and quantitative evaluation of timed concurrent systems. As a characterizing trait, Sirio is expressly designed to support reuse and to facilitate extensions such as the definition of new reward measures, new variants of the analysis, and new models with different semantics. We describe here the functional responsibilities and the software architecture of the framework.

Laura Carnevali, Lorenzo Ridi, Enrico Vicario

Session 10: Optimization Methods

Model-Based Multi-objective Safety Optimization

It is well-known that in many safety-critical applications safety goals are antagonistic to other design goals or even antagonistic to each other. This is a big challenge for the system designers, who have to find the best compromises between different goals.

In this paper, we show how model-based safety analysis can be combined with multi-objective optimization to balance a safety-critical system with respect to different goals. In general, the presented approach may be combined with almost any type of (quantitative) safety analysis technique. For additional goal functions, both analytic and black-box functions are possible; derivative information about the functions is not necessary. As an example, we use our quantitative model-based safety analysis in combination with analytical functions describing various other design goals. The result of the approach is a set of best compromises of possible system variants.

Technically, the approach relies on genetic algorithms for the optimization. To improve efficiency and scalability to complex systems, elaborate estimation models based on artificial neural networks are used, which speed up convergence. The whole approach is illustrated and evaluated on a real-world case study from the railroad domain.

Matthias Güdemann, Frank Ortmeier
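
The "set of best compromises" is the Pareto-optimal subset of the evaluated system variants. A minimal sketch of that final selection step (independent of the genetic algorithm and of the neural-network estimation models; objective values are hypothetical and all objectives are minimized):

    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly
        better in at least one (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(variants):
        """Keep only the non-dominated variants: the 'best compromises'."""
        return [v for v in variants
                if not any(dominates(w["obj"], v["obj"]) for w in variants if w is not v)]

    # Hypothetical variants scored as (hazard probability, cost, latency):
    variants = [
        {"name": "V1", "obj": (1e-7, 120.0, 30.0)},
        {"name": "V2", "obj": (5e-8, 150.0, 30.0)},
        {"name": "V3", "obj": (1e-7, 130.0, 35.0)},   # dominated by V1
    ]
    print([v["name"] for v in pareto_front(variants)])   # ['V1', 'V2']
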
Tradeoff Exploration between Reliability, Power Consumption, and Execution Time

We propose an off-line scheduling heuristic which, from a given software application graph and a given multiprocessor architecture (homogeneous and fully connected), produces a static multiprocessor schedule that optimizes three criteria: its length (crucial for real-time systems), its reliability (crucial for dependable systems), and its power consumption (crucial for autonomous systems). Our tricriteria scheduling heuristic, TSH, uses the active replication of the operations and the data-dependencies to increase the reliability, and uses dynamic voltage and frequency scaling to lower the power consumption.

Ismail Assayad, Alain Girault, Hamoudi Kalla
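
As a rough indication of how the three criteria interact (textbook forms, not necessarily the exact models used by TSH): with a constant failure rate \lambda_p on processor p and execution time t_p, a task actively replicated on a set P of processors fails only if every replica fails, while dynamic power and execution time scale with the operating voltage V and frequency f:

    R_{\mathrm{task}} \;=\; 1 - \prod_{p \in P}\bigl(1 - e^{-\lambda_p\, t_p}\bigr),
    \qquad P_{\mathrm{dyn}} \;\propto\; C\,V^{2} f,
    \qquad t_p \;\propto\; \tfrac{1}{f}.

Adding replicas raises reliability but also power and potentially schedule length; lowering f (and V with it) saves power but stretches t_p, which lengthens the schedule and increases the exposure term \lambda_p t_p. This is the three-way tradeoff the heuristic explores.
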

Session 11: Complex Systems Dependability 2

Criticality-Driven Component Integration in Complex Software Systems

Complex software systems are commonly developed by integrating multiple, occasionally Off-The-Shelf (OTS), components. This process results in a more modular design and reduces development costs; however, it raises new dependability challenges in the case of safety-critical systems. Testing activities conducted during the development of the individual components might not be enough to ensure a proper safety level after the integration. The failures of the components and their impact on overall system safety have to be assessed in critical scenarios. This paper proposes a method to support component integration in complex software systems. The method uses (i) the knowledge of the architectural dependencies among the system components, and (ii) the results of failure-mode emulation experiments, to assess both error propagation phenomena within the system and the criticality of the components in the system architecture. This information is valuable for designing effective error-mitigation means and, when needed, for selecting the most suitable OTS item if multiple equivalent options are available. The method is applied to a real-world Air Traffic Control system, developed in the context of an academic-industrial collaboration.

Antonio Pecchia, Roberto Pietrantuono, Stefano Russo
On the Use of Semantic Technologies to Model and Control Security, Privacy and Dependability in Complex Systems

In this paper a semantic approach is presented to model and control Security, Privacy and Dependability (SPD) in a complex interconnected environment composed of heterogeneous embedded systems.

Usually, only the individual properties are considered locally to obtain the desired functionalities, and this could result in sub-optimal solutions. With the use of modern semantic technologies (like OWL or reasoning engines) it is possible to model not only the individual parameters but also the relations between the different (and dynamically changing) parts of the system, thus providing enriched knowledge and more useful information that could feed control algorithms.

The model presented in this paper is based on the results obtained during the first phase of the pSHIELD project (conceived and led by Finmeccanica) and is focused on a concrete application coming from a critical scenario in the railway environment: the monitoring of freight trains transporting hazardous material.

Andrea Fiaschetti, Francesco Lavorato, Vincenzo Suraci, Andi Palo, Andrea Taglialatela, Andrea Morgagni, Renato Baldelli, Francesco Flammini
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security
Edited by
Francesco Flammini
Sandro Bologna
Valeria Vittorini
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-24270-0
Print ISBN
978-3-642-24269-4
DOI
https://doi.org/10.1007/978-3-642-24270-0