
2008 | Book

Computer Safety, Reliability, and Security

27th International Conference, SAFECOMP 2008, Newcastle upon Tyne, UK, September 22-25, 2008. Proceedings

Edited by: Michael D. Harrison, Mark-Alexander Sujan

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 27th International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2008, held in Newcastle upon Tyne, UK, in September 2008. The 32 revised full papers presented together with 3 keynote papers and a panel session were carefully reviewed and selected from 115 submissions. The papers are organized in topical sections on software dependability, resilience, fault tolerance, security, safety cases, formal methods, dependability modelling, and security and dependability.

Table of Contents

Frontmatter

Keynote Papers

Critical Information Infrastructures: Should Models Represent Structures or Functions?

The common approaches to modeling and analyzing complex socio-technical systems, of which critical information infrastructures are one example, assume that such systems can be completely specified. The methods emphasize how systems are composed or structured and how component failures propagate. Since socio-technical systems are always underspecified, they cannot be analyzed in the same way. The alternative is to focus on their functions, and on how the variability of functions can combine to create non-linear effects. An example of this is the Functional Resonance Analysis Method (FRAM).

Erik Hollnagel
Security and Interoperability for MANETs and a Fixed Core

The problem of ensuring security and interoperability for mobile ad hoc networks that connect to a valuable fixed core infrastructure is discussed. The problem is broken down into three interdependent research areas of Security versus Risk; Identity Management; Verification, Validation and Certification. The research issues for each area are discussed in detail.

Colin O’Halloran, Andy Bates
Technology, Society and Risk

There remains a healthy debate among those working in the functional safety field over issues that appear to be fundamental to the discipline. Coming from an industry that is a relative newcomer to this discipline, I look to the more established industries for a lead. Not only are they still debating key issues, but the approaches they take do not always transfer easily to a mass-market product developed within very tight business constraints. Key issues under debate include:

What is meant by risk, what is acceptable risk and who does the accepting?

How do we justify that an acceptable risk has been, or will be, achieved?

What role does the development process play?

What is meant by the concept of a Safety Integrity Level?

In this talk I will air some views on these questions, based on my experience of developing automotive systems and authoring industry-sector guidelines and standards, in the hope that this will provoke informed discussion.

Roger Rivett
Panel: Complexity and Resilience

The complexity of modern-day information systems creates large dependability challenges. As described in the ReSIST (Resilience for Survivability in IST) working programme [1], “current state-of-knowledge and state-of-the-art reasonably enable the construction and operation of critical systems, be they safety-critical (e.g., avionics, railway signalling, nuclear control) or availability-critical (e.g., back-end servers for transaction processing). However, the situation drastically worsens when considering large, networked, evolving, systems either fixed or mobile, with demanding requirements driven by their domain of application, i.e., ubiquitous systems. There is statistical evidence that these emerging systems suffer from a significant drop in dependability and security in comparison with the former systems. There is thus a dependability and security gap opening in front of us that, if not filled, will endanger the very basis and advent of information systems.”

Aad van Moorsel

Software Dependability

The Effectiveness of T-Way Test Data Generation

This paper reports the results of a study comparing the effectiveness of automatically generated tests constructed using random and t-way combinatorial techniques on safety-related industrial code, using mutation adequacy criteria. A reference point is provided by hand-generated test vectors constructed during development to establish minimum acceptance criteria. The study shows that 2-way testing is not adequate, as measured by mutant kill rate, compared with a hand-generated test set of similar size, but that higher-factor t-way test sets can perform at least as well. To reduce the computational overhead of testing large numbers of vectors over large numbers of mutants, a staged optimising approach to applying t-way tests is proposed and evaluated, which shows improvements in execution time and final test set size.

Michael Ellims, Darrel Ince, Marian Petre
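As an illustration of the combinatorial technique the paper evaluates, a minimal 2-way (pairwise) generator can be sketched in a few lines. This greedy construction is a standard textbook approach, not the paper's tool, and exhaustively scores every candidate vector, so it only scales to small parameter spaces:

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily build a 2-way covering test set.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of complete test vectors (dicts) such that every pair
    of values from any two distinct parameters appears in some test.
    """
    names = list(params)
    # All (param, value) pairs that a 2-way test set must cover.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    tests = []
    while uncovered:
        # Pick the full vector that covers the most uncovered pairs.
        best, best_gain = None, -1
        for values in product(*(params[n] for n in names)):
            vec = dict(zip(names, values))
            gain = sum(
                1 for (a, va), (b, vb) in uncovered
                if vec[a] == va and vec[b] == vb
            )
            if gain > best_gain:
                best, best_gain = vec, gain
        tests.append(best)
        uncovered = {
            pair for pair in uncovered
            if not (best[pair[0][0]] == pair[0][1]
                    and best[pair[1][0]] == pair[1][1])
        }
    return tests
```

For three binary parameters this needs far fewer vectors than the full product of eight, which is the economy the paper measures against mutant kill rate.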
Towards Agile Engineering of High-Integrity Systems

We describe the results of a pilot study on the application of an agile process to building a high-integrity software system. The challenges in applying an agile process in this domain are outlined, and potential solutions for dealing with issues of communication, scalability, and system complexity are proposed. We report on the safety process, argumentation generated to support the process, and the technology and tools used to strengthen the agile process in terms of support for verification and validation.

Richard F. Paige, Ramon Charalambous, Xiaocheng Ge, Phillip J. Brooke
SafeSpection – A Systematic Customization Approach for Software Hazard Identification

Software is an integral part of many technical systems and is responsible for the realization of the safety-critical features contained therein. Consequently, software has to be carefully considered in safety analysis efforts to ensure that it does not cause any system hazards. Safety engineering approaches borrowed from systems engineering, like Failure Mode and Effect Analysis, Fault Tree Analysis, or Hazard and Operability Studies, have been applied to software-intensive systems. However, to be successful, they need to be tailored to the characteristics of software and the concrete application context. Furthermore, due to the manual and expert-dependent nature of these techniques, the results are often not repeatable and address mainly syntactic issues. This paper presents the concepts of a customization framework to support the definition and implementation of project-specific software hazard identification approaches. The key concepts of the approach, namely generic guide-phrases and tailoring concepts for creating objective, project-specific support for detecting safety weaknesses of software-intensive systems, are introduced.

Christian Denger, Mario Trapp, Peter Liggesmeyer
Integrating Safety Analyses and Component-Based Design

In recent years, awareness of how software impacts safety has increased rapidly. Instead of regarding software as a black box, more and more standards demand safety analyses of software architectures and software design. Due to the complexity of software-intensive embedded systems, safety analyses easily become very complex, time consuming, and error prone. To overcome these problems, safety analyses have to be integrated into the complete development process as tightly as possible. This paper introduces an approach to integrating safety analyses into a component-oriented, model-based software engineering approach. The reasons for this are twofold: First, component- and model-based development have already been proven in practical use to handle complexity and reduce effort. Second, they easily support the integration of functional and non-functional properties into design, which can be used to integrate safety analyses.

Dominik Domis, Mario Trapp
Modelling Support for Design of Safety-Critical Automotive Embedded Systems

This paper describes and demonstrates an approach that promises to bridge the gap between model-based systems engineering and the safety process of automotive embedded systems. The basis for this is the integration of safety analysis techniques, a method for developing and managing Safety Cases, and a systematic approach to model-based engineering – the EAST-ADL2 architecture description language. Three areas are highlighted: (1) System model development on different levels of abstraction. This enables fulfilling many requirements on software development as specified by ISO-CD-26262; (2) Safety Case development in close connection to the system model; (3) Analysis of mal-functional behaviour that may cause hazards, by modelling of errors and error propagation in a (complex and hierarchical) system model.

DeJiu Chen, Rolf Johansson, Henrik Lönn, Yiannis Papadopoulos, Anders Sandberg, Fredrik Törner, Martin Törngren

Resilience

Resilience in the Aviation System

This paper presents an overview of the main characteristics of the civil aviation domain and their relation to concepts coming from the resilience engineering approach. Our objective is first to outline the structural properties of the aviation domain (i.e. regulations, standards, relationships among the various actors, system dynamics), and then to present some example processes that bear an effect on system resilience. We will in particular reason about training and the role of automation, to discuss how and to what extent they impact system resilience. We contend that, in a complex system like aviation, resilience engineering is not a matter of simple technical upgrades; rather, it is about facing contradictory tensions and dynamic system changes. This paper contains a pilot's first-hand reflections, so it aims to stimulate discussion on some issues that are still open rather than providing solutions.

Antonio Chialastri, Simone Pozzi
Resilience Markers for Safer Systems and Organisations

If computer systems are to be designed to foster resilient performance it is important to be able to identify contributors to resilience. The emerging practice of Resilience Engineering has identified that people are still a primary source of resilience, and that the design of distributed systems should provide ways of helping people and organisations to cope with complexity. Although resilience has been identified as a desired property, researchers and practitioners do not have a clear understanding of what manifestations of resilience look like. This paper discusses some examples of strategies that people can adopt that improve the resilience of a system. Critically, analysis reveals that the generation of these strategies is only possible if the system facilitates them. As an example, this paper discusses practices, such as reflection, that are known to encourage resilient behavior in people. Reflection allows systems to better prepare for oncoming demands. We show that contributors to the practice of reflection manifest themselves at different levels of abstraction: from individual strategies to practices in, for example, control room environments. The analysis of interaction at these levels enables resilient properties of a system to be ‘seen’, so that systems can be designed to explicitly support them. We then present an analysis of resilience at an organisational level within the nuclear domain. This highlights some of the challenges facing the Resilience Engineering approach and the need for using a collective language to articulate knowledge of resilient practices across domains.

Jonathan Back, Dominic Furniss, Michael Hildebrandt, Ann Blandford
Modeling and Analyzing Disaster Recovery Plans as Business Processes

The importance of business continuity and disaster recovery (BC/DR) plans has grown considerably in recent years, and maintaining them has become a well-established practice for achieving organizational resilience. There are several applicable standards, like BS 25999-1:2006, as well as sets of guidelines and best practices in this field. BC/DR plans are typically text documents, and exercising is still the main measure used to verify them. Contrary to common practice, we suggest modeling BC/DR plans as business processes using the ARIS methodology and models, which have proven successful in Enterprise Resource Planning system projects. This provides a uniform representation of BC/DR plans that can be applied across a whole distributed organization, strengthens the efficiency of traditional manual analysis techniques like walk-throughs, helps to achieve completeness and consistency, and makes computer simulation of BC/DR processes possible. Timing and dynamic behavior, resource utilization, and completeness properties have also been defined; they can be analyzed with computer support based on the proposed ARIS model of a BC/DR plan.

Andrzej Zalewski, Piotr Sztandera, Marcin Ludzia, Marek Zalewski

Fault Tolerance

Analysis of Nested CRC with Additional Net Data in Communication

Cyclic Redundancy Check (CRC) is an established coding method to ensure a low probability of undetected errors in data transmission. CRC is widely used in industrial fieldbus systems, where communication is often executed through different layers. Some layers have their own CRC and add their own specific data to the net data that is meant to be sent. Up to now, this nesting has not been included in the safety proofs of systems. Hence, additional effort is made to achieve a required degree of safety that was probably already at hand but could not be proven. The paper presents an approach to incorporating the nesting into the calculation of the residual error probability, based on methods of coding theory. This approach helps to reduce the number of worst-case assumptions in the overall safety proof and, ultimately, to reduce the necessary online effort such as the number of parity bits.

Tina Mattes, Frank Schiller, Annemarie Mörwald, Thomas Honold
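The layered framing the abstract describes can be made concrete with a small sketch. The bitwise CRC below is the standard non-reflected shift-register algorithm with zero initial value; the two-layer frame, the header byte, and the polynomial choices are illustrative assumptions, not the paper's protocol:

```python
def crc(data: bytes, poly: int, width: int) -> int:
    """Bitwise CRC (MSB-first, no reflection, zero init) over a byte string.

    poly is the generator polynomial without its leading x^width term,
    e.g. poly=0x07, width=8 for CRC-8 (x^8 + x^2 + x + 1).
    """
    reg = 0
    top = 1 << (width - 1)
    mask = (1 << width) - 1
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            feedback = ((reg & top) != 0) ^ bit
            reg = (reg << 1) & mask
            if feedback:
                reg ^= poly
    return reg

def nested_frame(net_data: bytes) -> bytes:
    """Two nested layers: the inner layer appends a CRC-8 over the net
    data; the outer layer prepends its own layer-specific byte (assumed
    here) and appends a CRC-16 over the whole inner frame."""
    inner = net_data + bytes([crc(net_data, 0x07, 8)])
    header = b"\x10"                      # hypothetical layer-specific data
    body = header + inner
    c16 = crc(body, 0x1021, 16)           # CRC-16-CCITT polynomial
    return body + c16.to_bytes(2, "big")
```

The outer CRC is thus computed over net data that already contains parity bits, which is exactly the nesting whose effect on residual error probability the paper analyses.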
Symbolic Reliability Analysis of Self-healing Networked Embedded Systems

In recent years, several network online algorithms have been studied that exhibit self-x properties such as self-healing or self-adaptation. These properties are used to improve system characteristics such as fault tolerance, reliability, or load balancing.

In this paper, a symbolic reliability analysis of self-healing networked embedded systems that rely on self-reconfiguration and self-routing is presented. The proposed analysis technique respects resource constraints such as the maximum computational load or the maximum memory size, and calculates the achievable reliability of a given system. This analytical approach considers the topology of the system, the properties of the resources, and the executed applications. Moreover, it is independent of the online algorithms used to implement the self-healing properties, but determines the achievable upper bound for the system's reliability. Since the analysis is not tailored to a specific online algorithm, it enables a reasoned choice of algorithm by allowing different self-healing strategies to be rated. Experimental results show the effectiveness of the introduced technique even for large networked embedded systems.

Michael Glaß, Martin Lukasiewycz, Felix Reimann, Christian Haubelt, Jürgen Teich
Investigation and Reduction of Fault Sensitivity in the FlexRay Communication Controller Registers

It is now widely believed that the FlexRay communication protocol will become the de-facto standard for distributed safety-critical automotive systems. In this paper, the fault sensitivity of the FlexRay communication controller registers is investigated using transient single bit-flip fault injection. To do this, a FlexRay bus network composed of four nodes was modeled. A total of 135,600 transient single bit-flip faults were injected into all 408 accessible single-bit and multiple-bit registers of the communication controller in one node. The results showed that among all 408 accessible registers, 30 registers were immediately affected by the injected faults. The results also showed that 26.2% of injected faults caused at least one error. Based on the fault injection results, TMR and Hamming-code techniques were applied to the most sensitive parts of the FlexRay protocol. These techniques reduced the proportion of faults affecting the registers from 26.2% to 10.3%, with only 13% hardware overhead.

Yasser Sedaghat, Seyed Ghassem Miremadi
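The two countermeasures the abstract names can be illustrated with standard textbook constructions; the sketch below is not taken from the paper's FlexRay implementation. Bitwise TMR voting masks a transient single bit flip in one register replica, and a Hamming(7,4) code corrects any single-bit error in a 7-bit codeword:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three register replicas. A single bit
    flip in any one replica is masked, since at every bit position at
    least two replicas still agree."""
    return (a & b) | (a & c) | (b & c)

def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword
    (bit order p1 p2 d1 p3 d2 d3 d4, positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]     # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]     # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]     # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(code: int) -> int:
    """Correct up to one flipped bit and return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:                 # syndrome = 1-based position of the error
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

TMR triples the register area while Hamming(7,4) adds 3 parity bits per 4 data bits, which is why the paper applies such protection only to the most sensitive registers to stay within a 13% hardware overhead.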

Security

Secure Interaction Models for the HealthAgents System

Distributed decision support systems designed for healthcare use can benefit from services and information available across a decentralised environment. The sophisticated nature of collaboration among the involved partners, who contribute services or sensitive data in this paradigm, demands careful attention from the beginning of designing such systems. Apart from the traditional need for secure data transmission across clinical centres, a more important issue arises from the need for consensus on access to system-wide resources by separately managed user groups from each centre. A primary concern is the determination of the interactive tasks that should be made available to authorised users, and further the clinical resources that can be populated into interactions in compliance with users' clinical roles and policies. To this end, explicit interaction modelling is put forward, along with contextual constraints within interactions that together enforce secure access, with interaction participation governed by system-wide policies and local resource access governed by node-wide policies. Clinical security requirements are comprehensively analysed prior to the design and building of our security model. The application of the approach results in a Multi-Agent System driven by secure interaction models. This is illustrated using a prototype of the HealthAgents system.

Liang Xiao, Paul Lewis, Srinandan Dasmahapatra
Security Challenges in Adaptive e-Health Processes

E-health scenarios demand system-based support of process-oriented information systems. As most of the processes in this domain have to be flexibly adapted to meet exceptional or unforeseen situations, flexible process-oriented information systems (POIS) are needed which support ad-hoc deviations at the process instance level. However, e-health scenarios are also very sensitive with regard to privacy issues. Therefore, an adequate access rights management is essential as well. The paper addresses challenges which occur when flexible POIS and adequate rights management have to be put together.

Michael Predeschly, Peter Dadam, Hilmar Acker
An Efficient e-Commerce Fair Exchange Protocol That Encourages Customer and Merchant to Be Honest

A new e-commerce fair exchange protocol is presented in this paper. The protocol is for exchanging a payment for a digital product (such as computer software) between a customer (C) and a merchant (M). It makes use of a Trusted Third Party (TTP), but its use is kept to a minimum, arising only when disputes occur. In this respect it is an optimistic fair exchange protocol. The protocol originates a new idea: if the parties are willing to exchange, they are encouraged to be honest. The protocol has the following features: (1) it comprises four messages to be exchanged between C and M in the exchange phase; (2) it guarantees strong fairness for both C and M, so that by the end of executing the protocol both C and M have each other's items or neither has anything; (3) it allows both parties to check the correctness of the other party's item before sending their own; (4) it resolves disputes automatically online with the help of the TTP; and (5) it is efficient in that it requires a low number of modular exponentiations (the most expensive operation) compared to other protocols in the literature.

Abdullah Alaraj, Malcolm Munro
Creating a Secure Infrastructure for Wireless Diagnostics and Software Updates in Vehicles

A set of guidelines for creating a secure infrastructure for wireless diagnostics and software updates in vehicles is presented. The guidelines are derived from a risk assessment of a wireless infrastructure. From the outcome of the risk assessment, a set of security requirements to counter the identified security risks was developed. The security requirements can be viewed as guidelines to support a secure implementation of the wireless infrastructure. Moreover, we discuss the importance of defining security policies.

Dennis K. Nilsson, Ulf E. Larson, Erland Jonsson
Finding Corrupted Computers Using Imperfect Intrusion Prevention System Event Data

With the increase of attacks on the Internet, a primary concern for organizations is how to protect their network. The objectives of a security team are 1) to prevent external attackers from launching successful attacks against organization computers, which could otherwise become compromised, and 2) to ensure that organization computers are not vulnerable (e.g., that they are fully patched), so that in either case the organization's computers do not start launching attacks themselves. The security team can monitor and block malicious activity by using devices such as intrusion prevention systems. However, in large organizations, such monitoring devices can record a high number of events. The contributions of this paper are 1) to introduce a method that ranks potentially corrupted computers based on imperfect intrusion prevention system event data, and 2) to evaluate the method on empirical data collected at a large organization of about 40,000 computers. The evaluation is based on a security expert's judgment of which computers were indeed corrupted. On the one hand, we studied how many computers classified as of high concern or of concern were indeed corrupted (i.e., true positives). On the other hand, we analyzed how many computers classified as of lower concern were in fact corrupted (i.e., false negatives).

Danielle Chrun, Michel Cukier, Gerry Sneeringer
Security Threats to Automotive CAN Networks – Practical Examples and Selected Short-Term Countermeasures

The IT security of automotive systems is an evolving area of research. To analyse the current situation we performed several practical tests on recent automotive technology, focusing on automotive systems based on CAN bus technology. With respect to the results of these tests, in this paper we discuss selected countermeasures to address the basic weaknesses exploited in our tests and also give a short outlook to requirements, potential and restrictions of future, holistic approaches.

Tobias Hoppe, Stefan Kiltz, Jana Dittmann

Safety Cases

Constructing a Safety Case for Automatically Generated Code from Formal Program Verification Information

Formal methods can in principle provide the highest levels of assurance of code safety by providing formal proofs as explicit evidence for the assurance claims. However, the proofs are often complex and difficult to relate to the code, in particular if it has been generated automatically. They may also be based on assumptions and reasoning principles that are not justified. This causes concerns about the trustworthiness of the proofs and thus the assurance claims. Here we present an approach to systematically construct safety cases from information collected during a formal verification of the code, in particular from the construction of the logical annotations necessary for a formal, Hoare-style safety certification. Our approach combines a generic argument that is instantiated with respect to the certified safety property (i.e., safety claims) with a detailed, program-specific argument that can be derived systematically because its structure directly follows the course the annotation construction takes through the code. The resulting safety cases make explicit the formal and informal reasoning principles, and reveal the top-level assumptions and external dependencies that must be taken into account. However, the evidence still comes from the formal safety proofs. Our approach is independent of the given safety property and program, and consequently also independent of the underlying code generator. Here, we illustrate it for the AutoFilter system developed at NASA Ames.

Nurlida Basir, Ewen Denney, Bernd Fischer
Applying Safety Goals to a New Intensive Care Workstation System

In hospitals today, there is a trend towards the integration of different devices. Clinical workflow demands are growing for the integration of formerly independent devices such as ventilator systems and patient monitoring systems. On the one hand, this optimizes workflow and reduces training costs. On the other hand, testing complexity and the effort required to ensure safety increase. This in turn gives rise to new challenges in the design of such systems. System designers must change their mindset, because they are now designing a set of distributed systems instead of a single system that is only connected to a central monitoring system. In addition, the complexity of such workstation systems is much higher than that of individual devices. This paper presents a case study on an intensive care workstation. To cope with this complexity, different use cases have been devised and a set of safety goals has been defined for each use case. The influence of the environment on the use cases is highlighted, and some measures to ensure data integrity within the workstation system are shown.

Uwe Becker
Safety Assurance Strategies for Autonomous Vehicles

Assuring the safety of autonomous vehicles requires that the vehicle control system can perceive the situation in the environment and react to the actions of other entities. One approach to vehicle safety assurance is based on the assumption that hazardous sequences of events should be identified during hazard analysis, and that some means of hazard avoidance and mitigation, such as barriers, should then be designed and implemented. Another approach is to design a system that is able to dynamically examine the risk associated with possible actions and then select the safest action to carry out. Dynamic risk assessment requires maintaining situation awareness and predicting possible future situations. We analyse how these two approaches can be applied to autonomous vehicles and what strategies can be used for safety argumentation.

Andrzej Wardziński
Expert Assessment of Arguments: A Method and Its Experimental Evaluation

Argument structures are commonly used to develop and present cases for safety, security and other properties. Such argument structures tend to grow excessively. To deal with this problem, appropriate methods of their assessment are required. Two objectives are of particular interest: (1) systematic and explicit assessment of the compelling power of an argument, and (2) communication of the result of such an assessment to relevant recipients. The paper gives details of a new method which deals with both problems. We explain how to issue assessments and how they can be aggregated depending on the types of inference used in arguments. The method is fully implemented in a software tool. Its application is illustrated by examples. The paper also includes the results of experiments carried out to validate and calibrate the method.

Lukasz Cyra, Janusz Górski

Formal Methods

Formal Verification by Reverse Synthesis

In this paper we describe a novel yet practical approach to the formal verification of implementations. Our approach splits verification into two major parts. The first part verifies an implementation against a low-level specification written using source-code annotations. The second extracts a high-level specification from the implementation together with the low-level specification, and proves that it implies the original system specification from which the system was built. Semantics-preserving refactorings are applied to the implementation in both parts to reduce the complexity of the verification. Much of the approach is automated. It reduces the verification burden by distributing it over separate tools and techniques, and it addresses both functional correctness and high-level properties at separate levels. As an illustration, we give a detailed example by verifying an optimized implementation of the Advanced Encryption Standard (AES) against its official specification.

Xiang Yin, John C. Knight, Elisabeth A. Nguyen, Westley Weimer
Deriving Safety Software Requirements from an AltaRica System Model

This paper presents a methodology for deriving software functional requirements from the Preliminary System Safety Assessment (PSSA) of helicopter turboshaft engines. The proposed process starts by extracting functional failure paths from system failure propagation models, using AltaRica models and tools. The paper then shows how to analyse these paths to generate minimal combinations of functional software requirements. The approach is applied to part of the control system of a helicopter turboshaft engine.

Sophie Humbert, Christel Seguin, Charles Castel, Jean-Marc Bosc
Model-Based Implementation of Real-Time Systems

A method is presented for modeling, verification and automatic programming of PLC controllers. The method offers a formal model of requirements, the means for defining and verifying safe behavior, and a technique for generating program code. The modeling language is UML state machine, which provides a widely accepted means for writing a specification at a suitable high level of abstraction. Such an abstract specification can be validated by the user, verified by means of a model-checker and translated automatically into a program code, which preserves the correctness and safety of the specification. The program code is written in one of the standardized IEC 61131 languages.

Krzysztof Sacha
Early Prototyping of Wireless Sensor Network Algorithms in PVS

We describe an approach that uses the evaluation mechanism of the specification and verification system PVS to support formal design exploration of WSN algorithms at the early stages of their development. The specification of the algorithm is expressed with an extensible set of programming primitives, and properties of interest are evaluated with ad hoc network simulators automatically generated from the formal specification. In particular, we build on the PVSio package as the core base for the network simulator. According to requirements, properties of interest can be simulated at different levels of abstraction. We illustrate our approach by specifying and simulating a standard routing algorithm for wireless sensor networks.

Cinzia Bernardeschi, Paolo Masci, Holger Pfeifer

Dependability Modelling

Analyzing Fault Susceptibility of ABS Microcontroller

In real-time safety-critical systems, it is important to predict the impact of faults on their operation. For this purpose we have developed a test bed based on software implemented fault injection (SWIFI). Faults are simulated by disturbing the states of registers and memory cells. Analyzing reactive and embedded systems with SWIFI tools is a new challenge related to the simulation of an external environment for the system, designing test scenarios and result qualification. The paper presents our original approach to these problems verified for an ABS microcontroller. We show fault susceptibility of the ABS microcontroller and outline software techniques to increase fault robustness.

Dawid Trawczynski, Janusz Sosnowski, Piotr Gawkowski
A Formal Approach for User Interaction Reconfiguration of Safety Critical Interactive Systems

The paper proposes a formal description technique and a supporting tool that provide a means to handle both static and dynamic aspects of input and output device configurations and reconfigurations. More precisely, in addition to the notation, the paper proposes an architecture for managing failures of input and output devices by means of reconfiguration of the input/output device configuration and interaction techniques. Such reconfiguration aims at allowing operators to continue interacting with the interactive system even though part of the hardware side of the user interface is failing. These types of problems arise in domains such as command and control systems, where the operator is confronted with several display units. The contribution presented in the paper thus addresses usability issues (improving the ways in which operators can reach their goals while interacting with the system) by increasing the reliability of the system through diverse configurations for both input and output devices.

David Navarre, Philippe Palanque, Sandra Basnyat
The Wrong Question to the Right People. A Critical View of Severity Classification Methods in ATM Experimental Projects

The knowledge of operational experts plays a fundamental role in performing safety assessments in safety-critical organizations. The complexity and socio-technical nature of such systems produce hazardous situations which require a thorough understanding of concrete operational scenarios and cannot be anticipated by simply analyzing single failures of specific functions. This paper addresses some limitations of state-of-the-art safety assessment techniques, with special reference to the use of severity classes associated with specific outcomes (e.g. accident, incident, no safety effect). Such classes tend to assume a linear link between single hazards considered in isolation and specified consequences for safety, thus neglecting the intrinsic complexity of the systems under analysis and reducing the opportunities for an effective involvement of operational experts. An alternative approach is proposed to overcome these limitations, by allowing operational people to prioritize the severity of hazards observed in concrete operational scenarios and by involving them in the definition of possible means of mitigation.

Alberto Pasquini, Simone Pozzi, Luca Save

Security and Dependability

A Context-Aware Mandatory Access Control Model for Multilevel Security Environments

Mandatory access control models have traditionally been employed as a robust security mechanism in multilevel security environments like military domains. In traditional mandatory models, the security classes associated with entities are context-insensitive. However, context-sensitivity of security classes may be required in some environments. Moreover, as computing technology becomes more pervasive, flexible access control mechanisms are needed. Unlike in traditional approaches to access control, such access decisions depend on the combination of the required user credentials and the context of the system. Incorporating context-awareness into mandatory access control models results in a model appropriate for handling such context-aware policies and the context-sensitive class association often needed in multilevel security environments. In this paper, we introduce a context-aware mandatory access control model (CAMAC) capable of dynamically adapting access control policies to the context and of handling context-sensitive class association, in addition to preserving confidentiality and integrity. One of the most significant characteristics of the model is its high expressiveness, which allows various mandatory access control models such as Bell-LaPadula, Biba, Dion, and Chinese Wall to be expressed within it.

Jafar Haadi Jafarian, Morteza Amini, Rasool Jalili
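The flavour of context-sensitive class association can be sketched in a few lines: a Bell-LaPadula-style "no read up" check in which the user's effective clearance is downgraded by context. The lattice, the downgrade-when-off-site policy, and all function names below are assumptions for illustration, not the CAMAC model itself.

```python
# Linearly ordered security levels (a simple lattice for illustration).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def effective_clearance(user_level, context):
    """Context-sensitive class association: clearance may be downgraded
    when the user is outside a secure site (assumed example policy)."""
    level = LEVELS[user_level]
    if context.get("location") != "secure-site":
        level = min(level, LEVELS["confidential"])
    return level

def may_read(user_level, context, object_level):
    # Bell-LaPadula simple security property: no read up.
    return effective_clearance(user_level, context) >= LEVELS[object_level]

may_read("secret", {"location": "secure-site"}, "secret")   # permitted
may_read("secret", {"location": "remote"}, "secret")        # denied: downgraded
```

The same skeleton extends naturally to an integrity (Biba-style) "no read down" check by reversing the comparison.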
Formal Security Analysis of Electronic Software Distribution Systems

Software distribution to target devices like factory controllers, medical instruments, vehicles or airplanes is increasingly performed electronically over insecure networks. Such software often implements vital functionality, and so the software distribution process can be highly critical, both from the safety and the security perspective. In this paper, we introduce a novel software distribution system architecture with a generic core component, such that the overall software transport from the supplier to the target device is an interaction of several instances of this core component communicating over insecure networks. The main advantage of this architecture is reduction of development and certification costs. The second contribution of this paper describes the validation and verification of the proposed system. We use a mix of formal methods, more precisely the AVISPA tool, and the Common Criteria (CC) methodology, to achieve high confidence in the security of the software distribution system at moderate costs.

Monika Maidl, David von Oheimb, Peter Hartmann, Richard Robinson
The Advanced Electric Power Grid: Complexity Reduction Techniques for Reliability Modeling

The power grid is a large system, and analyzing its reliability is computationally intensive, rendering conventional methods ineffective. This paper proposes techniques for reducing the complexity of representations of the grid, resulting in a mathematically tractable problem to which our previously developed reliability analysis techniques can be applied. The IEEE 118-bus system is analyzed as an example, incorporating cascading failure scenarios reported in the literature.

Ayman Z. Faza, Sahra Sedigh, Bruce M. McMillin
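Complexity-reduction approaches of this kind typically build on the classic series/parallel reliability reductions, which collapse subnetworks into single equivalent components. A minimal sketch of those two reductions (the paper's actual techniques for the grid and for cascading failures are more involved):

```python
from math import prod

def series(*component_reliabilities):
    """A series block works only if every component works."""
    return prod(component_reliabilities)

def parallel(*component_reliabilities):
    """A redundant (parallel) block fails only if every component fails."""
    return 1 - prod(1 - r for r in component_reliabilities)

# Example: two redundant transmission lines feeding one transformer.
subsystem = series(parallel(0.95, 0.95), 0.99)   # ≈ 0.9875
```

Applying such reductions repeatedly shrinks the network graph until the remaining structure is small enough for exact analysis.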
Automating the Processes of Selecting an Appropriate Scheduling Algorithm and Configuring the Scheduler Implementation for Time-Triggered Embedded Systems

Predictable system behaviour is a necessary (but not sufficient) condition when creating safety-critical and safety-related embedded systems. At the heart of such systems there is usually a form of scheduler: the use of time-triggered schedulers is of particular concern in this paper. It has been demonstrated in previous studies that the problem of determining the task parameters for such a scheduler is NP-hard. We have previously described an algorithm (“TTSA1”) which is intended to address this problem. This paper describes an extended version of this algorithm (“TTSA2”) which employs task segmentation to increase schedulability. We show that the TTSA2 algorithm is highly efficient when compared with alternative “branch and bound” search schemes.

Ayman K. Gendy, Michael J. Pont
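The core feasibility question behind such time-triggered schedulers can be illustrated by simulating one hyperperiod of a cyclic executive and checking that the tasks released in each tick fit within the tick length. This generic check is an assumption for illustration; it is not the TTSA1/TTSA2 algorithm, whose task-parameter search the paper describes.

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def tick_overruns(tasks, tick):
    """Count ticks whose released workload exceeds the tick length.

    tasks: list of (offset, period, wcet) tuples, all in the same
    time unit as `tick`; releases are assumed to fall on tick boundaries.
    """
    hyperperiod = reduce(lcm, (period for _, period, _ in tasks))
    overruns = 0
    for t in range(0, hyperperiod, tick):
        load = sum(wcet for offset, period, wcet in tasks
                   if t >= offset and (t - offset) % period == 0)
        if load > tick:
            overruns += 1
    return overruns

tasks = [(0, 10, 2), (5, 10, 3), (0, 20, 2)]   # (offset, period, wcet)
tick_overruns(tasks, 5)                         # 0: this parameter choice fits
```

Finding offsets (and, in TTSA2, task segment boundaries) that drive this count to zero is exactly the NP-hard search problem the abstract refers to.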
Backmatter
Metadata
Title
Computer Safety, Reliability, and Security
Edited by
Michael D. Harrison
Mark-Alexander Sujan
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-87698-4
Print ISBN
978-3-540-87697-7
DOI
https://doi.org/10.1007/978-3-540-87698-4
