
About this Book

This book constitutes the refereed proceedings of the 34th International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2015, held in Delft, The Netherlands, in September 2015. The 32 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 104 submissions. The papers are organized in topical sections on flight systems, automotive embedded systems, automotive software, error detection, medical safety cases, medical systems, architectures and testing, safety cases, security attacks, cyber security and integration, and programming and compiling.

Table of Contents

Frontmatter

Invited Talks

Frontmatter

Medical Devices, Electronic Health Records and Assuring Patient Safety: Future Challenges?

The patient safety movement was triggered by publications showing that modern health care is more unsafe than road travel and that more patients are killed annually by avoidable adverse events than by breast cancer [1]. As a result, an urgent need to improve patient safety has dominated international health care systems over the last decade. Some examples of safety issues that healthcare actively tries to address are: reducing the incidence of hospital-acquired infections, avoiding errors with patient identification (wrong patient, wrong procedure, wrong side), errors with drug prescription and administration (wrong drug, wrong dose, wrong route), recognizing deteriorating patients earlier to allow timely life-saving treatment, developing systems for rapid appropriate treatment for stroke and myocardial infarction, and improving care for frail elderly patients with multiple diseases using many drugs. Addressing these issues has proven more difficult than anticipated and actual progress in patient safety has been frustratingly slow [2]. Root cause analysis of serious adverse events invariably points to problems with communication and orientation as the most important contributing factors [3]. Given that for centuries doctors used their - often illegible - handwriting to take notes and prescribe drugs, it is understandable that the advent of electronic health records (EHR) created huge anticipation for safer and improved workflows, as well as better connectivity between care givers - both within the hospital and between the hospital and general practitioners, nursing homes, rehabilitation facilities and pharmacies. By signing the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 and incentivizing EHR adoption, the Obama administration made implementation of electronic health records an integral part of improving the efficiency and safety of health care in the United States.

Cor J. Kalkman

Cyber (In-)security of Industrial Control Systems: A Societal Challenge

Our society and its citizens increasingly depend on the undisturbed functioning of critical infrastructures (CI), their products and services. Many CI services, as well as other organizations, use Industrial Control Systems (ICS) to monitor and control their mission-critical processes. It is therefore crucial that the functioning of ICS is well protected, inter alia against cyber threats. The cyber threat areas for ICS comprise the lack of proper governance as well as cyber security aspects related to organizational, system and network management, technology and technical issues. Moreover, newer functionality entering organizations is often controlled by embedded ICS which hide themselves from those responsible for cyber security. The immature cyber security posture of ICS and their connectivity with public networks pose a major risk to society. This article explores the threats, provides some examples of cyber incidents with ICS, and discusses the ICS security challenges to our societies.

Eric Luiijf

Flight Systems

Frontmatter

Modeling Guidelines and Usage Analysis Towards Applying HiP-HOPS Method to Airborne Electrical Systems

The aircraft development process requires safety assessment to ensure continued airworthiness by guaranteeing that hazards related to aircraft functions are properly addressed. Safety analyses require increasingly more reliable and efficient solutions, particularly for complex and highly integrated aircraft systems. Fault Tree Analysis (FTA) is a safety technique broadly applied in the aerospace industry. The generation of fault trees can be facilitated by using the HiP-HOPS method proposed by Dr. Yiannis Papadopoulos. HiP-HOPS supports semi-automatic generation of fault trees based on a system architectural model and annotations regarding system failure modes. In this paper, we investigate the usage of the HiP-HOPS method in airborne electrical systems. We propose modeling guidelines to help engineers and analysts build system models more suitable for the application of HiP-HOPS. We apply both HiP-HOPS and the guidelines in a case study and evaluate the applicability of HiP-HOPS using criteria such as acceptability, suitability and practicality.

Carolina D. Villela, Humberto H. Sano, Juliana M. Bezerra

The Formal Derivation of Mode Logic for Autonomous Satellite Flight Formation

Satellite formation flying is an example of an autonomous distributed system that relies on complex coordinated mode transitions to accomplish its mission. While the technology promises significant economic and scientific benefits, it also poses a major verification challenge since testing the system on the ground is impossible. In this paper, we experiment with formal modelling and proof-based verification to derive the mode logic for autonomous flight formation. We rely on refinement in Event-B and proof-based verification to create a detailed specification of the autonomic actions implementing the coordinated mode transitions. By decomposing the system-level model, we derive the interfaces of the satellites and guarantee that their communication supports correct mode transitions despite the unreliability of the communication channel. We argue that the formal systems approach advocated in this paper constitutes a solid basis for designing complex autonomic systems.

Anton Tarasyuk, Inna Pereverzeva, Elena Troubitsyna, Timo Latvala

Automotive Embedded Systems

Frontmatter

Simulation of Automotive Security Threat Warnings to Analyze Driver Interpretations and Emotional Transitions

With the evolution of cars into complex electronic-mechanical systems and the related increasing relevance of electronic manipulation and malicious attacks on automotive IT, security warnings are also becoming more important. This paper presents findings regarding the potential effect of IT security warnings in vehicles. Different warning approaches were designed and analyzed in driving simulator tests, based upon three representative IT security threats and three variations of the information quantity and recommended action. The potential effect of these warnings was measured using three scenarios, including simulated consequences, e.g. a sudden swerve of the vehicle. We analyzed the implications for drivers' reactions, task performance, thoughts and emotions in order to derive the stress level. We found a positive effect of giving recommendations, owing to the lack of security awareness in automotive IT, accompanied by a wide variety of interpretations of the warnings' causes. Especially without a given recommendation, a higher rate of ignored warnings was observed, leading to accidents.

Robert Altschaffel, Tobias Hoppe, Sven Kuhlmann, Jana Dittmann

Improving Dependability of Vision-Based Advanced Driver Assistance Systems Using Navigation Data and Checkpoint Recognition

Advanced Driver Assistance Systems (ADAS), like adaptive cruise control, collision avoidance, and, ultimately, autonomous driving are increasingly evolving into safety-critical systems. These ADAS frequently rely on the proper function of Computer-Vision Systems (CVS), which is hard to assess in a timely manner due to their sensitivity to varying illumination conditions (e.g. weather conditions, sun brightness). On the other hand, self-awareness information is available in the vehicle, such as maps and localization data (e.g. GPS).

This paper studies how the combination of diverse environmental information can improve the overall vision-based ADAS reliability. To this end, we present the concept of a Computer-Vision Monitor (CVM) that identifies predefined landmarks in the vehicle's surroundings, based on digital maps and localization data, and that checks whether the CVS correctly identifies said landmarks. We formalize and assess the reliability improvement of our solution by means of a fault-tree analysis.

Ayhan Mehmed, Sasikumar Punnekkat, Wilfried Steiner, Giacomo Spampinato, Martin Lettner
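
To illustrate the checkpoint idea sketched in this abstract, the following minimal Python sketch cross-checks landmarks reported by the vision system against landmarks the digital map expects near the current position. All names and thresholds (map_db, match_radius_m, min_hit_ratio) are illustrative assumptions, not taken from the paper.

```python
import math

def expected_landmarks(map_db, position, horizon_m=50.0):
    """Return landmarks the digital map places within the sensing horizon."""
    return [lm for lm in map_db
            if math.dist(lm["xy"], position) <= horizon_m]

def cvs_plausible(cvs_detections, map_db, position,
                  match_radius_m=5.0, min_hit_ratio=0.8):
    """Cross-check CVS output against map-derived checkpoints.

    Returns True if enough expected landmarks were re-identified,
    i.e. the vision system currently appears trustworthy."""
    expected = expected_landmarks(map_db, position)
    if not expected:
        return True  # nothing to check against at this position
    hits = 0
    for lm in expected:
        if any(det["kind"] == lm["kind"]
               and math.dist(det["xy"], lm["xy"]) <= match_radius_m
               for det in cvs_detections):
            hits += 1
    return hits / len(expected) >= min_hit_ratio

# Illustrative use: a landmark the map expects is missed by the CVS.
map_db = [{"kind": "speed_sign", "xy": (10.0, 2.0)},
          {"kind": "gantry", "xy": (30.0, -1.0)}]
detections = [{"kind": "gantry", "xy": (29.0, -0.5)}]
print(cvs_plausible(detections, map_db, position=(0.0, 0.0)))  # False -> degrade ADAS
```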

Safely Using the AUTOSAR End-to-End Protection Library

The AUTOSAR End-to-End library is used to protect data. On the producer side a counter and checksum are added, such that on the consumer side it can be detected whether there was a communication failure. For optimal bus utilisation, it is a common solution that a producer publishes data that is read by many consumers. If the data also needs to be protected, this results in an End-to-Many-Ends solution.

In this paper, we analyse the impact of an End-to-Many-Ends solution on the safety guarantees of the AUTOSAR End-to-End Protection, focusing in particular on the problem that arises when the consumers read the messages with a periodicity that differs from the producer's. It turns out that this common situation severely reduces the safety guarantees these standard components offer. We analyze these reductions for different architectures.

Thomas Arts, Stefano Tonetta
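
The counter-plus-checksum mechanism described in this abstract can be sketched as follows. This is a simplified illustration, not the actual AUTOSAR E2E profile: the CRC polynomial, counter width and frame layout are assumptions made for the example.

```python
def crc8(data: bytes, poly: int = 0x1D, init: int = 0xFF) -> int:
    """Toy CRC-8; the real E2E profiles fix their own polynomial and layout."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def protect(payload: bytes, counter: int) -> bytes:
    """Producer side: prepend a 4-bit alive counter and a checksum."""
    body = bytes([counter & 0x0F]) + payload
    return bytes([crc8(body)]) + body

def check(frame: bytes, last_counter: int):
    """Consumer side: verify the checksum and the counter continuity.

    A repeated counter reveals that the consumer re-read stale data, which is
    what happens when its sampling period differs from the producer's."""
    crc, counter = frame[0], frame[1]
    if crc != crc8(frame[1:]):
        return False, last_counter   # corrupted frame
    if counter == last_counter:
        return False, counter        # stale / repeated message
    return True, counter

frame = protect(b"\x12\x34", counter=3)
print(check(frame, last_counter=2))  # (True, 3): fresh, intact data
print(check(frame, last_counter=3))  # (False, 3): consumer sampled the same frame twice
```

The second call shows the situation the paper analyses: a consumer whose read period differs from the producer's either re-reads or skips counter values, which the receiver-side check must interpret.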

A Structured Validation and Verification Method for Automotive Systems Considering the OEM/Supplier Interface

The released ISO 26262 standard for automotive systems requires several validation and verification activities. These validation and verification activities have to be planned and performed jointly by the OEMs and the suppliers. In this paper, we present a systematic, structured and model-based method to plan the required validation and verification activities and to collect the results. The planning and the documentation of performed activities are represented in a UML notation extended with stereotypes. The UML model supports the creation of the artifacts required by ISO 26262, and enables document generation and rigorous checking of several constraints expressed in OCL. We illustrate our method using the example of an electronic steering column lock system.

Kristian Beckers, Isabelle Côté, Thomas Frese, Denis Hatebur, Maritta Heisel

Automotive Software

Frontmatter

Model-Based Analysis for Safety Critical Software

Safety-relevant software developed within the automotive domain is subject to the safety standard ISO 26262. In particular, a supplier must show that implemented safety mechanisms sufficiently address relevant failure modes. This involves complex and costly testing procedures.

We introduce an early analysis approach for safety mechanisms implemented in safety-relevant software by combining model checking and model-based testing. Model checking is applied to verify the correctness of an abstract model of the system under test. The verified model is then used to automatically generate tests for the verification of the implemented Safety Elements. The approach has been evaluated in an industrial case study addressing analogue-to-digital converters that are part of the motor control within a hybrid electric vehicle. The results suggest that our approach allows the creation of high-quality test suites. In addition, the test model helps to reduce misunderstandings due to imprecise specification of safety mechanisms.

Stefan Gulan, Jens Harnisch, Sven Johr, Roberto Kretschmer, Stefan Rieger, Rafael Zalman

Integrated Safety Analysis Using Systems-Theoretic Process Analysis and Software Model Checking

Safety-critical systems are becoming increasingly complex and reliant on software. This increase in complexity and software renders ensuring the safety of such systems increasingly difficult. Formal verification approaches can be used to prove the correctness of software; however, even perfectly correct software could lead to an accident. The difficulty lies in defining appropriate safety requirements. STPA (Systems-Theoretic Process Analysis) is a modern safety analysis approach which aims to identify potential hazardous causes in complex systems. Model checking is an efficient technique to verify software against its requirements. In this paper, we propose an approach that integrates safety analysis and verification activities to demonstrate how a systematic combination of these approaches can help safety and software engineers derive the software safety requirements and verify them to recognize software risks. We illustrate the proposed approach with the example of an adaptive cruise control system.

Asim Abdulkhaleq, Stefan Wagner

Back-to-Back Fault Injection Testing in Model-Based Development

Today, embedded systems across industrial domains (e.g., avionics, automotive) are representative of software-intensive systems with increasing reliance on software and growing complexity. It has become critically important to verify software in a time-, resource- and cost-effective manner. Furthermore, industrial domains are striving to comply with the requirements of relevant safety standards. This paper proposes a novel workflow, along with tool support, to evaluate the robustness of software in a model-based development environment, assuming different abstraction levels of representing software. We then show the effectiveness of our technique on a brake-by-wire application by performing back-to-back fault injection testing between two different abstraction levels, using MODIFI for the Simulink model and GOOFI-2 for the generated code running on the target microcontroller. Our proposed method and tool support facilitate not only verifying software during early phases of the development lifecycle but also fulfilling the back-to-back testing requirements of ISO 26262 [1] when using model-based development.

Peter Folkesson, Fatemeh Ayatolahi, Behrooz Sangchoolie, Jonny Vinter, Mafijul Islam, Johan Karlsson
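
A back-to-back fault injection test compares the behaviour of two representations of the same function under identical injected faults. The sketch below uses a made-up brake-torque function and a simple stuck-at fault model as stand-ins for the Simulink model and the generated code; it is not the MODIFI/GOOFI-2 tooling.

```python
def brake_torque_model(pedal: float) -> float:
    """Abstraction level 1: stand-in for the behaviour of the Simulink model."""
    return max(0.0, min(1.0, pedal)) * 300.0

def brake_torque_code(pedal: float) -> float:
    """Abstraction level 2: stand-in for the generated code on the target."""
    clamped = max(0.0, min(1.0, pedal))
    return clamped * 300.0

def inject_stuck_at(func, stuck_value):
    """Simple fault model: the input signal is stuck at a fixed value."""
    return lambda _pedal: func(stuck_value)

def back_to_back(stimuli, faults, tolerance=1e-6):
    """Run both levels under the same faults and report any divergence."""
    mismatches = []
    for fault in faults:
        faulty_model = inject_stuck_at(brake_torque_model, fault)
        faulty_code = inject_stuck_at(brake_torque_code, fault)
        for s in stimuli:
            if abs(faulty_model(s) - faulty_code(s)) > tolerance:
                mismatches.append((fault, s))
    return mismatches

print(back_to_back(stimuli=[0.0, 0.5, 1.0], faults=[0.0, 0.3, 2.0]))  # [] -> levels agree
```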

Error Detection

Frontmatter

Understanding the Effects of Data Corruption on Application Behavior Based on Data Characteristics

In this paper, the results of an experimental study on the error sensitivities of application data are presented. We develop a portable software-implemented fault injection (SWIFI) tool that, on top of performing single-bit-flip fault injections and capturing their effects on application behavior, is also data-level aware and tracks the corrupted application data to report their high-level characteristics (usage type, size, user, memory space location). After extensive testing of NPB-serial (7.8M fault injections), we are able to characterize the sensitivities of data based on their high-level characteristics. Moreover, we conclude that application data are error sensitive in parts; depending on their type, they have distinct and wide less-sensitive bit ranges either at the MSBs or LSBs. Among other uses, such insight could drive the development of sensitivity-aware protection mechanisms for application data.

Georgios Stefanakis, Vijay Nagarajan, Marcelo Cintra
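
The core single-bit-flip experiment can be illustrated in a few lines: flip one bit of a datum, classify the outcome, and record the data characteristics alongside it. The outcome classes and the tolerance below are illustrative simplifications of what such a SWIFI campaign records.

```python
import struct

def flip_bit_in_double(value: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 double and return the corrupted value."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

def classify(original: float, corrupted: float, tolerance: float = 1e-9) -> str:
    """Very coarse outcome classes: benign, silent data corruption (SDC), crash-like."""
    if corrupted != corrupted or corrupted in (float("inf"), float("-inf")):
        return "crash-like (NaN/Inf)"
    return "benign" if abs(corrupted - original) <= tolerance else "SDC"

# Tiny campaign over selected bit positions of one datum, recording its
# high-level characteristics together with the observed outcome.
datum = 3.14159
for bit in (0, 20, 52, 63):                # LSBs, mantissa, exponent, sign
    corrupted = flip_bit_in_double(datum, bit)
    print({"type": "double", "size_bits": 64, "bit": bit,
           "outcome": classify(datum, corrupted)})
```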

A Multi-layer Anomaly Detector for Dynamic Service-Based Systems

Revealing anomalies to support error detection in complex systems is a promising approach when traditional detection mechanisms (e.g., based on event logs, probes and heartbeats) are considered inadequate or not applicable. The detection capability for such complex systems can be enhanced by observing different layers to obtain richer information describing the system status. Relying on an algorithm for statistical anomaly detection, in this paper we present the definition and implementation of an anomaly detector able to monitor data acquired from multiple layers, namely the Operating System and the Application Server, of a remote physical or virtual node. As a case study, the monitoring system is applied to a node of the Secure! crisis management service-based system. Results show the monitor's performance, the intrusiveness of the probes, and ultimately the improved detection capability achieved by observing data from the different layers.

Andrea Ceccarelli, Tommaso Zoppi, Massimiliano Itria, Andrea Bondavalli

Medical Safety Cases

Frontmatter

Safety Case Driven Development for Medical Devices

Medical devices are safety-critical systems that must comply with standards during their development process because of their intrinsic potential to produce harm. Despite the existing trend of increasing complexity of medical hardware and software components, very little has been done to apply more mature safety practices already present in other industrial domains. This paper proposes a methodology to enhance state-of-the-art Model-Based System Engineering (MBSE) practices from the safety perspective, encouraging the use of safety cases and providing guidance on how to show the corresponding traceability for the development artifacts. We illustrate our methodology and its usage in the context of an industrial Automated External Defibrillator (AED). We suggest that the medical device industry could learn from other domains and adapt its development to take into account hazards and risks along the way, providing more sophisticated justification of, for example, the impact of design decisions.

Alejandra Ruiz, Paulo Barbosa, Yang Medeiros, Huascar Espinoza

Towards an International Security Case Framework for Networked Medical Devices

Medical devices (MDs) are becoming increasingly networked. Given that safety is the most significant factor within the MD industry, and given the radical shift in MD design to enable networking, it would make sense for strong security requirements associated with networking to be put in place to protect such devices from becoming increasingly vulnerable to security risks. However, this is not the case, and networked MDs may be at risk. In an attempt to reduce this risk to the MD industry there are a number of upcoming regulatory changes, which will affect the development of networked MDs, how they are regulated and how they are managed in operation. Consequently, an industry-wide issue exists, as there is currently no standardised way to assist organisations in satisfying such security-related requirements. This paper describes ongoing research towards the development of an innovative framework to improve the overall security practices adopted during MD development, in operation and through to retirement.

Anita Finnegan, Fergal McCaffery

Medical Systems

Frontmatter

Systems-Theoretic Safety Assessment of Robotic Telesurgical Systems

Robotic surgical systems are among the most complex medical cyber-physical systems on the market. Despite significant improvements in the design of those systems through the years, there have been ongoing occurrences of safety incidents that negatively impact patients during procedures. This paper presents an approach for systems-theoretic safety assessment of robotic telesurgical systems using software-implemented fault injection. We used a systems-theoretic hazard analysis technique (STPA) to identify the potential safety hazard scenarios and their contributing causes in RAVEN II, an open-source telerobotic surgical platform. We integrated the robot control software with a software-implemented fault injection engine that measures the resilience of the system to the identified hazard scenarios by automatically inserting faults into different parts of the software. Representative hazard scenarios from real robotic surgery incidents reported to the U.S. Food and Drug Administration (FDA) MAUDE database were used to demonstrate the feasibility of the proposed approach for safety-based design of robotic telesurgical systems.

Homa Alemzadeh, Daniel Chen, Andrew Lewis, Zbigniew Kalbarczyk, Jaishankar Raman, Nancy Leveson, Ravishankar Iyer

Towards Assurance for Plug & Play Medical Systems

Traditional safety-critical systems are designed and integrated by a systems integrator. The system integrator can assess the safety of the completed system before it is deployed. In medicine, there is a desire to transition from the traditional approach to a new model wherein a user can combine various devices post hoc to create a new composite system that addresses a specific clinical scenario. Ensuring the safety of these systems is challenging: safety is a property that arises from the interaction of system components, and it is not possible to assess overall system safety by assessing a single component in isolation. It is unlikely that end-users will have the engineering expertise or resources to perform safety assessments each time they create a new composite system. In this paper we describe a platform-oriented approach to providing assurance for plug & play medical systems as well as an associated assurance argument pattern.

Andrew L. King, Lu Feng, Sam Procter, Sanjian Chen, Oleg Sokolsky, John Hatcliff, Insup Lee

Risk Classification of Data Transfer in Medical Systems

Nowadays, the hospital IT network is increasingly used to transport data between medical devices and information systems. The increase in network integration and the importance of the transported data result in a high dependency on the IT network in the clinical setting. Until now, risk classification methods have focused on two individual components of a medical system: medical devices and medical software. In this paper, we present a tool to classify patient safety risks of data transfer in medical systems by indicating the dependency on the IT network. The new method shifts the focus from separate components to the intended use of the entire system. It supports communication about risks and enables us to link risk analysis techniques and safety measures to the classification. The tool can be used in the design phase and is the start of a risk management process to ensure the safe use of a medical system.

Dagmar Rosenbrand, Rob de Weerd, Lex Bothe, Jan Jaap Baalbergen

Requirement Engineering for Functional Alarm System for Interoperable Medical Devices

This paper addresses the problem of high-assurance operation for medical cyber-physical systems built from interoperable medical devices. Such systems are different from most cyber-physical systems due to their “plug-and-play” nature: they are assembled as needed at a patient's bedside according to a specification that captures the clinical scenario and required device types. We need to ensure that such a system is assembled correctly and operates according to its specification. In this regard, we aim to develop an alarm system that signals interoperability failures. We study how plug-and-play interoperable medical devices and systems can fail by means of a hazard analysis that identifies hazardous situations unique to interoperable systems. The requirements for the alarm system are formulated as the need to detect these hazardous situations. We instantiate the alarm requirement generation process through a case study involving an interoperable medical device setup for airway-laser surgery.

Krishna K. Venkatasubramanian, Eugene Y. Vasserman, Vasiliki Sfyrla, Oleg Sokolsky, Insup Lee

Architectures and Testing

Frontmatter

The Safety Requirements Decomposition Pattern

Safety requirement specifications usually have heterogeneous structures, most likely shaped by the experience of the engineers involved in the specification process. Consequently, it becomes difficult to ensure that recommendations given in standards are considered, e.g., to provide evidence that the requirements are complete and consistent with other development artifacts. To address this challenge, we present in this paper the Safety Requirements Decomposition Pattern, which aims at supporting the decomposition of safety requirements such that they are traceable to architecture and failure propagation models. The effectiveness of the approach has been observed in its application in different domains, such as automotive, avionics, and medical devices. In this paper, we present its usage in the context of an industrial Automated External Defibrillator system.

Pablo Oliveira Antonino, Mario Trapp, Paulo Barbosa, Edmar C. Gurjão, Jeferson Rosário

Automatic Architecture Hardening Using Safety Patterns

Safety-critical systems or applications must satisfy safety requirements ensuring that catastrophic consequences of combined component failures are avoided or kept below an acceptable probability threshold. Therefore, designers must define a hardened architecture (or implementation) of each application, which fulfills the required level of safety by integrating redundancy and safety mechanisms. We propose a methodology which, given the nominal functional architecture, uses constraint solving to automatically select a subset of system components to update and appropriate safety patterns to apply in order to meet the safety requirements. The proposed ideas are illustrated on an avionics flight controller case study.

Kevin Delmas, Rémi Delmas, Claire Pagetti

Modeling the Impact of Testing on Diverse Programs

This paper presents a model of diverse programs that assumes there is a common set of potential software faults that are more or less likely to exist in a specific program version. Testing is modeled as a specific ordering of the removal of faults from each program version. Different models of testing are examined, where common and diverse test strategies are used for the diverse program versions. Under certain assumptions, the theory suggests that a common test strategy could leave the proportion of common faults unchanged, while diverse test strategies are likely to reduce the proportion of common faults. A review of the available empirical evidence gives some support to the assumptions made in the fault-based model. We also consider how the proportion of common faults can be related to the expected reliability improvement.

Peter Bishop
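
The fault-based model lends itself to a small Monte Carlo illustration: two versions draw faults from a common pool of potential faults, testing removes faults in a given order, and we compare the remaining proportion of common faults under a shared versus a diverse test ordering. The distributions and budgets below are arbitrary assumptions, not the paper's parameters.

```python
import random

def make_versions(n_faults=200, p_fault=0.3, seed=1):
    """Two program versions drawing faults from a common pool of potential faults."""
    rng = random.Random(seed)
    a = {f for f in range(n_faults) if rng.random() < p_fault}
    b = {f for f in range(n_faults) if rng.random() < p_fault}
    return a, b

def common_fraction(a, b):
    """Proportion of the remaining faults that are common to both versions."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def test_and_remove(version, ordering, budget):
    """Testing modelled as removing the first `budget` detected faults in `ordering`."""
    removed = [f for f in ordering if f in version][:budget]
    return version - set(removed)

a, b = make_versions()
rng = random.Random(42)
shared_order = list(range(200)); rng.shuffle(shared_order)    # one test strategy
diverse_order = list(range(200)); rng.shuffle(diverse_order)  # a different strategy

budget = 40
print("before testing:      ", round(common_fraction(a, b), 3))
print("common test strategy:", round(common_fraction(
    test_and_remove(a, shared_order, budget),
    test_and_remove(b, shared_order, budget)), 3))
print("diverse strategies:  ", round(common_fraction(
    test_and_remove(a, shared_order, budget),
    test_and_remove(b, diverse_order, budget)), 3))
```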

Safety Cases

Frontmatter

A Model for Safety Case Confidence Assessment

Building a safety case is a common approach to making expert judgement about the safety of a system explicit. The issue of confidence in such argumentation is still an open research field. Providing a quantitative estimation of confidence is an interesting approach to managing the complexity of arguments. This paper explores the main current approaches and proposes a new model for quantitative confidence estimation based on Belief Theory for its definition, and on Bayesian Belief Networks for its propagation in safety case networks.

Jérémie Guiochet, Quynh Anh Do Hoang, Mohamed Kaaniche

Towards a Formal Basis for Modular Safety Cases

Safety assurance using argument-based safety cases is an accepted best practice in many safety-critical sectors. Goal Structuring Notation (GSN), which is widely used for presenting safety arguments graphically, provides a notion of modular arguments to support the goal of incremental certification. Despite the efforts at standardization, GSN remains an informal notation, and the GSN standard contains appreciable ambiguity, especially concerning modular extensions. This, in turn, presents challenges when developing tools and methods to intelligently manipulate modular GSN arguments. This paper develops the elements of a theory of modular safety cases, leveraging our previous work on formalizing GSN arguments. Using example argument structures we highlight some ambiguities arising through the existing guidance, present the intuition underlying the theory, clarify syntax, and address modular arguments, contracts, well-formedness and well-scopedness of modules. Based on this theory, we have a preliminary implementation of modular arguments in our toolset, AdvoCATE.

Ewen Denney, Ganesh Pai

Security Attacks

Frontmatter

Quantifying Risks to Data Assets Using Formal Metrics in Embedded System Design

This paper addresses quantifying security risks associated with data assets within design models of embedded systems. Attack and system behaviours are modelled as time-dependent stochastic processes. The presence of the time dimension allows accounting for dynamic aspects of potential attacks and of the system: the probability of a successful attack changes as time progresses, and the system possesses different data assets as its execution unfolds. These models are used to quantify two important attributes of security: confidentiality and integrity. In particular, likelihood/consequence-based measures of confidentiality and integrity losses are proposed to characterise security risks to data assets. In our method, we consider attack and system behaviours as two separate models that are later elegantly combined for security analysis. This promotes knowledge reuse and avoids adding extra complexity to the system design process. We demonstrate the effectiveness of the proposed method and metrics on smart metering devices.

Maria Vasilevskaya, Simin Nadjm-Tehrani
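
As a rough illustration of a likelihood/consequence-based measure over time-windowed data assets, the sketch below combines an assumed exponential attack success time with per-asset lifetime windows and consequence values. The exponential model and all numbers are placeholders; the paper's stochastic models are considerably richer.

```python
import math

def p_success_by(t, rate):
    """P(attack has succeeded by time t), assuming an exponential success time.
    The exponential model and the rate are illustrative placeholders."""
    return 1.0 - math.exp(-rate * t)

def expected_confidentiality_loss(assets, attack_rate):
    """Likelihood/consequence-style measure over time-windowed data assets:
    an asset is exposed if the attack succeeds while the asset is present."""
    loss = 0.0
    for asset in assets:
        p_exposed = (p_success_by(asset["t_end"], attack_rate)
                     - p_success_by(asset["t_start"], attack_rate))
        loss += p_exposed * asset["consequence"]
    return loss

# Illustrative smart-metering assets; names, time windows (hours) and
# consequence values are invented for the example.
assets = [
    {"name": "metering key",    "t_start": 0.0, "t_end": 24.0, "consequence": 10.0},
    {"name": "hourly readings", "t_start": 1.0, "t_end": 2.0,  "consequence": 2.0},
]
print(round(expected_confidentiality_loss(assets, attack_rate=0.01), 4))
```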

ISA2R: Improving Software Attack and Analysis Resilience via Compiler-Level Software Diversity

The current IT landscape is characterized by software monoculture: all installations of one program version are identical. This leads to a huge return on investment for attackers, who can develop a single attack once to compromise millions of hosts worldwide. Software diversity has been proposed as an alternative to software monoculture. In this paper we present a collection of diversification transformations called ISA2R, developed for the Low Level Virtual Machine (LLVM). By diversifying the properties crucial to successful exploitation of a vulnerability, we render exploits that work on one installation of a software product ineffective against others. Through this we enable developers to add protective measures automatically during compilation. In contrast to similar existing tools, ISA2R provides protection against a wider range of attacks and is applicable to all programming languages supported by LLVM.

Rafael Fedler, Sebastian Banescu, Alexander Pretschner

Cyber Security and Integration

Frontmatter

Barriers to the Use of Intrusion Detection Systems in Safety-Critical Applications

Intrusion detection systems (IDS) provide valuable tools to monitor for, and mitigate, the impact of cyber-attacks. However, this paper identifies a range of theoretical and practical concerns when these software systems are integrated into safety-critical applications. Whitelist approaches enumerate the processes that can legitimately exploit system resources; any other access request is interpreted as indicating the presence of malware. Whitelist approaches cannot easily be integrated into safety-related systems, where the use of legacy applications and the Intellectual Property (IP) barriers associated with the extensive use of sub-contracting make it difficult to enumerate the resource requirements of all valid processes. These concerns can lead to a high number of false positives. In contrast, blacklist intrusion detection systems characterize the behavior of known malware. In order to be effective, blacklist IDS must be updated at regular intervals as new forms of attack are identified. This raises enormous concerns in safety-critical environments, where extensive validation and verification requirements demand that software updates be rigorously tested. In other words, there is a concern that the IDS update might itself introduce bugs into a safety-related system. Isolation between an IDS and a safety-related application minimizes this threat; for instance, information diodes limit interference by ensuring that an IDS has read-only access to a safety-related network. Further problems arise in determining what to do when an IDS identifies a possible attack, given that false positives can increase risks to the public during an emergency shutdown.

Chris W. Johnson
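
The whitelist concern discussed above can be made concrete with a tiny sketch: legitimate (process, resource) pairs are enumerated, and anything else raises an alert. The process names and whitelist entries are invented for illustration.

```python
# Whitelist IDS sketch: legitimate (process, resource) pairs are enumerated;
# any other access request is flagged as a potential indicator of malware.
WHITELIST = {
    "plc_logger":  {"read:/dev/ttyS0", "write:/var/log/plc.log"},
    "hmi_display": {"read:/var/log/plc.log"},
}

def check_access(process: str, request: str) -> str:
    allowed = WHITELIST.get(process, set())
    return "allow" if request in allowed else "alert"

# A legacy or sub-contracted component whose needs were never enumerated
# triggers exactly the false positives the text warns about.
print(check_access("plc_logger", "read:/dev/ttyS0"))    # allow
print(check_access("legacy_scan", "read:/dev/ttyS0"))   # alert (possibly a false positive)
```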

Stochastic Modeling of Safety and Security of the e-Motor, an ASIL-D Device

This paper offers a stochastic model and a combined analysis of safety and security of the e-Motor, an ASIL D (ISO 26262) compliant device designed for use with AUTOSAR CAN bus.

The paper argues that, in the absence of credible data on the likelihood and payload of cyber attacks on newly developed devices, a sensible approach is to separate the concerns: (i) the payloads that may affect the device's safety can be identified using standard hazard analysis techniques; (ii) the difficulty with the parameterization of a stochastic model can be alleviated by applying sensitivity analysis over a plausible range of model parameter values.

Peter T. Popov

Organisational, Political and Technical Barriers to the Integration of Safety and Cyber-Security Incident Reporting Systems

Many companies must report cyber-incidents to regulatory organisations, including the US Securities and Exchange Commission and the European Network and Information Security Agency. Unfortunately, these security reporting systems have not been integrated with safety reporting schemes. This leads to confusion and inconsistency when, for example, a cyber-attack undermines the safe operation of critical infrastructures. The following pages explain this lack of integration. One reason is a clash of reporting cultures: safety-related schemes are intended to communicate lessons as widely as possible to avoid any recurrence of previous accidents, whereas disclosing the details of a security incident might motivate further attacks. There are political differences between the organisations that conventionally gather data on cyber-security incidents, national telecoms regulators, and those that have responsibility for the safety of application processes, including transportation and energy regulators. At a more technical level, the counterfactual arguments that identify root causes in safety-related accidents cannot easily be used to reason about the malicious causes of future security incidents. Preventing the cause of a previous attack provides little assurance that a motivated adversary will not succeed with another potential vector. The closing sections argue that we must address these political, organisational and technical barriers to integration, given the growing threat that cyber-attacks pose for a host of complex, safety-critical applications.

Chris W. Johnson

A Comprehensive Safety, Security, and Serviceability Assessment Method

Dependability is a superordinate concept regrouping different system attributes, such as reliability, safety, security, and availability, and non-functional requirements for modern embedded systems. These different attributes, however, might lead to different targets. Furthermore, non-unified methods for managing these different attributes might lead to inconsistencies that are only identified in late development phases. The aim of this paper is to present a combined approach for system dependability analysis to be applied in early development phases. This approach regroups state-of-the-art methods for safety, security, and reliability analysis, thus enabling the identification of consistent dependability targets across the three attributes. This, in turn, is a prerequisite for consistent dependability engineering along the development lifecycle. In the second part of the paper, experiences with this combined dependability system analysis method are discussed based on an automotive application.

Georg Macher, Andrea Höller, Harald Sporer, Eric Armengaud, Christian Kreiner

Programming and Compiling

Frontmatter

Source-Code-to-Object-Code Traceability Analysis for Avionics Software: Don’t Trust Your Compiler

One objective of structural coverage analysis according to RTCA DO-178C for avionic software of development assurance level A (DAL-A) is either to identify object code that was not exercised during testing, or to provide evidence that all code has been tested in an adequate way. Therefore, comprehensive tracing information from source code to object code is required, which is typically derived using a manual source-code-to-object-code (STO) traceability analysis. This paper presents a set of techniques to perform automatic STO traceability analysis using abstract interpretation, which we have implemented in a tool suite called Rtt-Sto. At its core, the tool tries to prove that the control flow graphs of the object code and the source code are isomorphic. Further analyses, such as memory allocation analysis and store analysis, are then performed on top. Our approach has been applied during low-level verification for DAL-A avionics software, where the effort for STO analysis was significantly reduced due to a high degree of automation. Importantly, the associated analysis process was accepted by the responsible certification authorities.

Jörg Brauer, Markus Dahlweid, Tobias Pankrath, Jan Peleska

Automated Generation of Buffer Overflow Quick Fixes Using Symbolic Execution and SMT

In many C programs, debugging requires significant effort and can consume a lot of time. Even if the bug's cause is known, detecting such a bug and manually generating a bug-fix patch is a tedious task. In this paper, we present a novel approach for automatically generating bug fixes for buffer overflows using symbolic execution, code patch patterns, quick fix locations, user input saturation and Satisfiability Modulo Theories (SMT). The generated patches are syntactically correct, can be semi-automatically inserted into code and do not need additional human refinement. We evaluated our approach on 58 C open source programs contained in the Juliet test suite and measured an overhead of 0.59 %. The approach is generalizable and can be applied with other bug checkers that we developed.

Paul Muntean, Vasantha Kommanapalli, Andreas Ibing, Claudia Eckert
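
A rough sketch of the input-saturation idea: given a reported overflow location, the buffer's size and the length variable, a clamping statement is inserted before the offending write. The patch pattern and example below are simplified placeholders, not the paper's SMT-driven fix generation.

```python
# Quick-fix sketch: given a reported overflow location and the buffer's declared
# size, insert a saturation (clamping) statement before the offending write.
# The pattern is a simplified stand-in for the paper's code patch patterns.
SATURATION_PATTERN = "if ({length} > {bound}) {length} = {bound};  /* quick fix */"

def apply_quick_fix(source_lines, fix_line_no, length_var, buffer_size):
    patch = SATURATION_PATTERN.format(length=length_var, bound=buffer_size)
    fixed = source_lines[:]
    fixed.insert(fix_line_no - 1, patch)   # place the clamp just before the write
    return fixed

c_code = [
    "char dst[16];",
    "memcpy(dst, user_input, n);",   # overflows if n > 16, as a bug checker might report
]
for line in apply_quick_fix(c_code, fix_line_no=2, length_var="n", buffer_size=16):
    print(line)
```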

A Software-Based Error Detection Technique for Monitoring the Program Execution of RTUs in SCADA

A Supervisory Control and Data Acquisition (SCADA) system is an Industrial Control System (ICS) which controls large-scale industrial processes spanning several sites over long distances and consists of a number of Remote Terminal Units (RTUs) and a Master Terminal Unit (MTU). RTUs collect data from sensors and control actuators situated at remote sites and send data to the MTU through a network. Since RTUs operate in a harsh industrial environment, fault tolerance is a key requirement, particularly for safety-critical industrial processes. Studies show that a significant number of transient faults caused by such harsh environments result in control flow errors in the RTUs' processors. In this paper, a software-based error detection technique is proposed to detect control flow errors in several RTUs. For the experimental evaluation, 30,000 faults were injected into the network of RTUs; the average performance and memory overheads are about 33.20 % and 36.79 %, respectively, and the technique detected more than 96.32 % of the injected faults.

Navid Rajabpour, Yasser Sedaghat
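
The abstract does not spell out the detection mechanism, so the sketch below shows one common software-based scheme for control-flow error detection, signature monitoring, purely as an illustration rather than the paper's exact technique: each basic block updates a runtime signature that is compared against the value expected at that block.

```python
# Signature-monitoring sketch: each basic block updates a runtime signature,
# which is compared against the compile-time value expected at that block.
EXPECTED = {"read_sensors": 0x1, "filter": 0x3, "send_to_mtu": 0x7}

class CfcMonitor:
    def __init__(self):
        self.signature = 0x0

    def enter(self, block: str):
        self.signature = ((self.signature << 1) | 0x1) & 0xFF   # toy signature update
        if self.signature != EXPECTED[block]:
            raise RuntimeError(f"control-flow error detected at {block}")

def rtu_cycle(monitor, skip_filter=False):
    monitor.enter("read_sensors")
    if not skip_filter:                 # a transient fault could skip this block
        monitor.enter("filter")
    monitor.enter("send_to_mtu")

rtu_cycle(CfcMonitor())                 # runs cleanly
try:
    rtu_cycle(CfcMonitor(), skip_filter=True)
except RuntimeError as err:
    print(err)                          # the illegal jump over 'filter' is detected
```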

Real-World Types and Their Application

Software systems sense and affect real-world objects and processes in order to realize important real-world systems. For these systems to function correctly, such software should obey constraints inherited from the real world. Typically, neither important characteristics of real-world entities nor the relationships between such entities and their machine-world representations are specified explicitly in code, and important opportunities for detecting errors due to mismatches are lost. To address this problem, we introduce real-world types to document in software both the relevant characteristics of real-world entities and the relationships between real-world entities and machine-level representations. These constructs support the specification and automated static detection of such mismatches in programs written in ordinary languages. We present a prototype implementation of our approach for Java and case studies in which previously unrecognized real-world type errors in several real systems were detected.

Jian Xiang, John Knight, Kevin Sullivan
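
The paper's prototype targets Java; the Python sketch below conveys the underlying idea with hypothetical wrapper types that attach real-world units to machine-world values, so that a mismatch (e.g. feet passed where meters are required) is caught by a type checker or at runtime.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Meters:
    value: float

@dataclass(frozen=True)
class Feet:
    value: float

def feet_to_meters(alt: Feet) -> Meters:
    return Meters(alt.value * 0.3048)

def set_target_altitude(alt: Meters) -> None:
    """Machine-world representation documented to carry a real-world unit."""
    if not isinstance(alt, Meters):
        raise TypeError("target altitude must be given in meters")
    print(f"target altitude: {alt.value} m")

set_target_altitude(feet_to_meters(Feet(10000)))   # ok: explicit unit conversion
# set_target_altitude(Feet(10000))  # rejected by a static type checker / raises at runtime
```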

Backmatter
