About this Book

This book constitutes the refereed proceedings of the 15th International Conference on Software Engineering and Formal Methods, SEFM 2017, held in Trento, Italy, in September 2017.
The 22 full papers and 6 short papers presented were carefully reviewed and selected from 102 submissions. The papers cover a wide range of topics in the following research areas: new frontiers in software architecture; software verification and testing; software development methods; application and technology transfer; security and safety; and design principles.

Table of Contents

Frontmatter

Information Flow Tracking for Linux Handling Concurrent System Calls and Shared Memory

Abstract
Information flow control can be used at the operating system level to enforce restrictions on the diffusion of security-sensitive data. In Linux, information flow trackers are often implemented as Linux Security Modules (LSMs). They can fail to monitor some indirect flows when flows occur concurrently and affect the same containers of information. Furthermore, they are not able to monitor flows caused by file mappings in memory or by memory shared between processes. We first present two attacks that evade state-of-the-art LSM-based trackers. We then describe an approach, formally proved in Coq [12], that performs information flow tracking while coping with concurrency and in-memory flows. We demonstrate its implementability and usefulness in Rfblare, a race-condition-free version of the flow tracking done by KBlare [4].
Laurent Georget, Mathieu Jaume, Guillaume Piolle, Frédéric Tronel, Valérie Viet Triem Tong
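
To make the floating-label mechanism concrete, here is a minimal Python sketch of how such a tracker propagates labels on read/write system calls. All names are invented for illustration; this is not the Rfblare or KBlare code, and it deliberately omits the concurrency handling that is the paper's contribution.

    # A minimal, hypothetical model of floating-label information flow
    # tracking (in the spirit of KBlare/Rfblare); not the actual LSM code.

    class Container:
        """A file or process carrying a floating label: a set of taint tags."""
        def __init__(self, name, tags=()):
            self.name = name
            self.tags = set(tags)

    def do_read(process, file):
        # Information flows file -> process: the process label absorbs
        # every tag attached to the file.
        process.tags |= file.tags

    def do_write(process, file):
        # Information flows process -> file.
        file.tags |= process.tags

    secret = Container("secret.txt", tags={"SECRET"})
    p = Container("proc-42")
    public = Container("public.txt")

    do_read(p, secret)     # p is now tainted with SECRET
    do_write(p, public)    # the tracker sees SECRET reach public.txt
    assert "SECRET" in public.tags

The attacks described in the abstract arise precisely because, in a real kernel, the label lookup and label update inside do_read/do_write are not atomic with respect to other system calls or direct memory accesses.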

Focused Certification of an Industrial Compilation and Static Verification Toolchain

Abstract
SPARK 2014 is a subset of the Ada 2012 programming language that is supported by the GNAT compilation toolchain and multiple open source static analysis and verification tools. These tools can be used to verify that a SPARK 2014 program does not raise language-defined run-time exceptions and that it complies with formal specifications expressed as subprogram contracts. The results of analyses at source code level are valid for the final executable only if it can be shown that compilation/verification tools comply with a common deterministic programming language semantics.
In this paper, we present: (a) a mechanized formal semantics for a large subset of SPARK 2014, (b) an architecture for creating certified/certifying analysis and verification tools for SPARK, and (c) tools and mechanized proofs that instantiate that architecture to demonstrate that SPARK-relevant Ada run-time checks inserted by the GNAT compiler are correct; this includes mechanized proofs of correctness for abstract interpretation-based static analyses that are used to certify correctness of GNAT run-time check optimizations.
A by-product of this work is a substantial amount of open source infrastructure that others in academia and industry can use to develop mechanized semantics, and mechanically verified correctness proofs for analyzers/verifiers for realistic programming languages.
Zhi Zhang, Robby, John Hatcliff, Yannick Moy, Pierre Courtieu

A Complete Generative Label Model for Lattice-Based Access Control Models

Abstract
Lattice-based access control models (LBAC), initiated by the Bell-LaPadula (BLP) and Biba models and consolidated by Denning, have played a vital role in building secure systems via Information Flow Control (IFC). IFC systems typically label data and track labels, while allowing users to exercise appropriate access privileges. This is defined through a finite set of security classes over a lattice. Recently, IFC has also been playing a crucial role in formally establishing the security of operating systems and programs. Towards such a goal, researchers often use assertions to keep track of the flow of information from one subject/object to another object/subject. Specifying and realizing these assertions would benefit greatly if the underlying labels of objects/subjects could be interpreted in terms of the access permissions/rights of subjects/objects, as well as the subjects/objects that have influenced them; this would lead to the automatic generation of proof obligations/assertions. Thus, if one can arrive at a label model for LBAC that satisfies properties like (i) intuitive and expressive labels, (ii) completeness w.r.t. Denning's lattice model, and (iii) efficient computations on labels, then building and certifying secure systems using LBAC will benefit greatly.
In this paper, we arrive at such a semantic generative model (one that tracks the readers/writers of objects/subjects) for Denning's lattice model, and establish a strong correspondence between syntactic label policies and semantically labelled policies. This correspondence leads to the derivation of the recently proposed Readers-Writers Flow Model (RWFM). Note that RWFM [11] also deals with declassification rules, which are not discussed here as they are not relevant to this work. The relationship further establishes that the RWFM label model provides an application-independent concrete generative label model that is sound and complete w.r.t. Denning's model. We define the semantics of information flow in this label model, and argue that reading and writing induce possibly different pre-orders on the set of subjects. Hence, the subject relations become explicit, making it possible to derive relations from the labels. We further define a notion of information dominance on subjects and show that a notion of principal hierarchy can be naturally defined that is consistent with the IFC model; this arguably overcomes the adverse impact on the flow policy that is often experienced with the classical approach of defining the hierarchy orthogonally. This enables us to realize Role-Based Access Control (RBAC) structurally and enforce information flow security. Further, we demonstrate how the underlying label model succinctly subsumes various lattice-based access control models such as BLP, Biba, RBAC, and the Chinese wall model.
N. V. Narendra Kumar, R. K. Shyamasundar
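
As a rough illustration of a readers/writers label model (our own simplification, not the authors' formal RWFM definitions), the sketch below encodes a label as a pair of reader and writer sets: a flow is permitted when it only shrinks the set of admissible readers and grows the set of influencing writers, and the join of two labels intersects readers and unions writers.

    # A simplified readers/writers label in the spirit of RWFM;
    # hypothetical code, not the authors' formal definition.

    class Label:
        def __init__(self, readers, writers):
            self.readers = frozenset(readers)   # who may read this data
            self.writers = frozenset(writers)   # who has influenced it

        def can_flow_to(self, other):
            # Flow may only shrink the reader set and grow the writer set.
            return (other.readers <= self.readers
                    and self.writers <= other.writers)

        def join(self, other):
            # Least upper bound: intersect readers, union writers.
            return Label(self.readers & other.readers,
                         self.writers | other.writers)

    doc = Label(readers={"alice", "bob"}, writers={"alice"})
    log = Label(readers={"alice"}, writers={"alice", "daemon"})
    assert doc.can_flow_to(log)          # fewer readers, more writers: OK
    assert not log.can_flow_to(doc)      # would widen readership: denied
    merged = doc.join(log)               # label for data derived from both
    assert merged.readers == {"alice"}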

From Model Checking to a Temporal Proof for Partial Models

Abstract
Three-valued model checking has been proposed to support verification when some portions of the model are unspecified. Given a formal property, the model checker returns true if the property is satisfied; false, together with a violating behavior, if it is not; and maybe, together with a possibly violating behavior, if it is possibly satisfied, i.e., if its satisfaction may depend on how the unspecified parts are refined. Model checking, however, does not explain the reasons why a property holds, or possibly holds. Theorem proving, instead, can do so by providing a formal proof that explains why a property holds, or possibly holds, in a system. Integration of theorem proving with model checking has only been studied for classical two-valued logic, hence for fully specified models. This paper proposes a unified approach that enriches three-valued model checking with theorem proving to generate proofs which explain why true and maybe results are returned.
Anna Bernasconi, Claudio Menghi, Paola Spoletini, Lenore D. Zuck, Carlo Ghezzi
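
The three-valued setting evaluates formulas in Kleene logic over {true, maybe, false}. The following Python sketch shows only these truth tables (an illustration we added; the paper's contribution, generating proofs for true and maybe verdicts, sits on top of such an evaluation).

    # Kleene three-valued logic: the lattice false < maybe < true.
    # A minimal sketch; the paper's proof generation is not reproduced.

    T, M, F = 1.0, 0.5, 0.0   # encode the three truth values as numbers

    def k_not(a):      return 1.0 - a
    def k_and(a, b):   return min(a, b)
    def k_or(a, b):    return max(a, b)

    # 'p holds' is unknown in an unspecified part of the model:
    p, q = M, T
    assert k_and(p, q) == M   # satisfaction may depend on refinement
    assert k_or(p, q) == T    # already settled regardless of refinement
    assert k_not(p) == M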

Modeling and Reasoning on Requirements Evolution with Constrained Goal Models

Abstract
We are interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionality or performance enhancements to cope with a growing user population (changing requirements). Alternatively, vendors of a system may want to minimize the costs of implementing requirements changes (evolution requirements). We propose to use Constrained Goal Models (CGMs) to represent the requirements of a system, and to capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or maximize customer value. We then exploit reasoning techniques to derive optimal new specifications for an evolving software system. CGMs offer an expressive language for modelling goals that comes with scalable solvers which handle hybrid constraint and optimization problems using a combination of Satisfiability Modulo Theories (SMT) and Optimization Modulo Theories (OMT) techniques. We evaluate our proposal by modeling and reasoning with a goal model for a standard exemplar used in Requirements Engineering.
Chi Mai Nguyen, Roberto Sebastiani, Paolo Giorgini, John Mylopoulos
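
To give a flavor of how OMT can solve such goal models, here is a toy sketch using the Z3 Python bindings (pip install z3-solver). The goal names, refinement, and costs are invented for illustration; the paper's CGM language and reasoner are considerably richer.

    # A toy constrained goal model solved with OMT via Z3's Optimize
    # engine. Goals and costs are hypothetical, not from the paper.
    from z3 import Optimize, Bool, Implies, Or, If, sat

    opt = Optimize()
    root   = Bool("ScheduleMeeting")
    byMail = Bool("CollectTimetablesByEmail")
    byApp  = Bool("CollectTimetablesByApp")

    opt.add(root)                                 # mandatory requirement
    opt.add(Implies(root, Or(byMail, byApp)))     # OR-refinement

    # Evolution requirement: choose the refinement that minimizes
    # implementation cost.
    cost = If(byMail, 5, 0) + If(byApp, 2, 0)
    opt.minimize(cost)

    if opt.check() == sat:
        m = opt.model()
        print("byApp =", m[byApp], "byMail =", m[byMail])  # expect byApp only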

Participatory Verification of Railway Infrastructure by Representing Regulations in RailCNL

Abstract
Designs of railway infrastructure (tracks, signalling and control systems, etc.) need to comply with comprehensive sets of regulations describing safety requirements, engineering conventions, and design heuristics. We have previously worked on automating the verification of railway designs against such regulations, and integrated a verification tool based on Datalog reasoning into the CAD tools of railway engineers. This was used in a pilot project at Norconsult AS (formerly Anacon AS). In order to allow railway engineers with limited logic programming experience to participate in the verification process, in this work we introduce a controlled natural language, RailCNL, which is designed as a middle ground between informal regulations and Datalog code. Phrases in RailCNL correspond closely to those in the regulation texts, and can be translated automatically into the input language of the verifier. We demonstrate a prototype system which, upon detecting regulation violations, traces back from errors in the design through the CNL to the marked-up original text, allowing domain experts to examine the correctness of each translation step and better identify sources of errors. We also describe our design methodology, based on CNL best practices and previous experience with creating verification front-end languages.
Bjørnar Luteberget, John J. Camilleri, Christian Johansen, Gerardo Schneider

An In-Depth Investigation of Interval Temporal Logic Model Checking with Regular Expressions

Abstract
In recent years, the model checking (MC) problem for interval temporal logic (ITL) has received increasing attention as a viable alternative to the traditional (point-based) temporal logic MC, which can be recovered as a special case. Most results have been obtained by imposing suitable restrictions on interval labeling. In this paper, we overcome such limitations by using regular expressions to define the behavior of proposition letters over intervals in terms of the component states. We first prove that MC for Halpern and Shoham's ITL (HS), extended with regular expressions, is decidable. Then, we show that formulas of a large class of HS fragments, namely, all fragments featuring (a subset of) the HS modalities for Allen's relations meets, met-by, starts, and started-by, can be model checked in polynomial working space (MC for all these fragments turns out to be PSPACE-complete).
Laura Bozzelli, Alberto Molinari, Angelo Montanari, Adriano Peron
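
A small sketch of regex-based interval labeling, under our own simplified reading of the setup (states and expressions are invented): map each state to a letter, and let a proposition hold on an interval [i, j] exactly when the word spelled by the states in that interval matches the proposition's regular expression.

    import re

    # States of a finite structure, each mapped to a letter.
    trace = "abbbc"   # the word spelled by states s0..s4

    # Interval labeling via regular expressions: proposition p holds on
    # [i, j] iff the subword trace[i..j] matches p's expression.
    props = {"p": re.compile(r"ab+"), "q": re.compile(r"b*c")}

    def holds(prop, i, j):
        return props[prop].fullmatch(trace[i:j + 1]) is not None

    assert holds("p", 0, 3)        # "abbb" matches ab+
    assert not holds("p", 0, 4)    # "abbbc" does not
    assert holds("q", 1, 4)        # "bbbc" matches b*c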

PART_PW: From Partial Analysis Results to a Proof Witness

Abstract
Today, verification tools not only output yes or no, but also provide correctness arguments or counterexamples. While counterexamples help to fix bugs, correctness arguments are used to increase the trust in program correctness, e.g., in Proof-Carrying Code (PCC). Correctness arguments are well-studied for single analyses, but not for the case where a set of analyses together verifies a program, each analysis checking only a particular part. Such a set of partial, complementary analyses is often used when a single analysis would fail or be inefficient on some program parts.
We propose PART_PW, a technique that allows us to automatically construct a proof witness (correctness argument) from the analysis results obtained by a set of partial, complementary analyses. The constructed proof witnesses are proven to be valid correctness arguments, and in our experiments we use them seamlessly and efficiently in existing PCC approaches.
Marie-Christine Jakobs

Specification and Automated Verification of Dynamic Dataflow Networks

Abstract
Dataflow programming has received much recent attention within the signal processing domain as an efficient paradigm for exploiting parallelism. In dataflow programming, systems are modelled as a static network of actors connected through asynchronous order-preserving channels. In this paper we present an approach to contract-based specification and automated verification of dynamic dataflow networks. The verification technique is based on encoding the dataflow networks and contracts in the guarded command language Boogie.
Jonatan Wiik, Pontus Boström

Specification Clones: An Empirical Study of the Structure of Event-B Specifications

Abstract
In this paper we present an empirical study of formal specifications written in the Event-B language. Our study is exploratory, since it is the first study of its kind, and we formulate metrics for Event-B specifications which quantify the diversity of such specifications in practice. We pay particular attention to refinement, as this is one of the most notable features of Event-B. However, Event-B is less well-equipped with other standardised modularisation constructs, and we investigate the impact of this by detecting and analysing specification clones at different levels. We describe the algorithm we use to identify clones at the machine, context and event level, and present results from an analysis of a large corpus of Event-B specifications. Our study contributes to furthering research into the area of metrics and modularisation in Event-B.
Marie Farrell, Rosemary Monahan, James F. Power
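
As an illustration of event-level clone detection (a hypothetical simplification of ours, not the authors' algorithm), the sketch below canonicalizes identifiers in event bodies and groups events by the hash of their normalized text.

    import hashlib
    from collections import defaultdict

    # Hypothetical event-level clone detector: two events are clones if
    # their normalized bodies (identifiers canonicalized) hash equally.
    KEYWORDS = {"when", "then", "end"}

    def normalize(event_body):
        mapping, out = {}, []
        for t in event_body.split():
            if t.isidentifier() and t not in KEYWORDS:
                mapping.setdefault(t, f"v{len(mapping)}")  # rename names
                out.append(mapping[t])
            else:
                out.append(t)
        return " ".join(out)

    def clone_groups(events):
        groups = defaultdict(list)
        for name, body in events.items():
            digest = hashlib.sha1(normalize(body).encode()).hexdigest()
            groups[digest].append(name)
        return [g for g in groups.values() if len(g) > 1]

    events = {
        "inc_x": "when x < n then x := x + 1 end",
        "inc_y": "when y < m then y := y + 1 end",   # clone of inc_x
        "reset": "when x = n then x := 0 end",
    }
    print(clone_groups(events))   # [['inc_x', 'inc_y']]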

User Studies of Principled Model Finder Output

Abstract
Model-finders such as SAT-solvers are attractive for producing concrete models, either as sample instances or as counterexamples when properties fail. However, the generated model is arbitrary. To address this, several research efforts have proposed principled forms of output from model-finders. These include minimal and maximal models, unsat cores, and proof-based provenance of facts.
While these methods enjoy elegant mathematical foundations, they have not been subjected to rigorous evaluation with users to assess their utility. This paper presents user studies of these three forms of output, conducted with advanced students. We find that most of the output forms fail to be effective, and in some cases even actively mislead users. To make such studies feasible to run frequently and at scale, we also show how to pose them on the crowdsourcing site Mechanical Turk.
Natasha Danas, Tim Nelson, Lane Harrison, Shriram Krishnamurthi, Daniel J. Dougherty

Using Shared Memory Abstractions to Design Eager Sequentializations for Weak Memory Models

Abstract
Sequentialization translates concurrent programs into equivalent nondeterministic sequential programs so that the different concurrent schedules no longer need to be handled explicitly. However, existing sequentializations assume sequential consistency, which modern hardware architectures no longer guarantee. Here we describe a new approach to embedding weak memory models within eager sequentializations. Our approach is based on separating intra-thread computations from inter-thread communications by means of a shared memory abstraction (SMA). We give details of SMA implementations for the SC, TSO, and PSO memory models that are based on the idea of individual memory unwindings. We use our approach to implement a new, efficient BMC-based bug-finding tool for multi-threaded C programs under SC, TSO, or PSO based on these SMAs, and show experimentally that it is competitive with existing tools.
Ermenegildo Tomasco, Truc Lam Nguyen, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
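
The SMA idea can be illustrated with a toy TSO model: each thread writes into a private FIFO store buffer, reads consult the thread's own buffer first, and buffered stores reach shared memory only when flushed. This hypothetical Python sketch mimics the interface, not the paper's memory-unwinding encoding; it reproduces the classic store-buffering litmus test.

    from collections import deque

    # A hypothetical shared memory abstraction (SMA) for TSO.
    class TSOMemory:
        def __init__(self):
            self.mem = {}
            self.buf = {}                  # thread id -> deque of (var, val)

        def write(self, tid, var, val):
            self.buf.setdefault(tid, deque()).append((var, val))

        def read(self, tid, var):
            for v, val in reversed(self.buf.get(tid, ())):
                if v == var:
                    return val             # own buffered store wins
            return self.mem.get(var, 0)

        def flush_one(self, tid):
            var, val = self.buf[tid].popleft()
            self.mem[var] = val            # store becomes globally visible

    # Store-buffering litmus test: both reads can see 0 under TSO,
    # an outcome that is impossible under sequential consistency.
    m = TSOMemory()
    m.write(1, "x", 1); m.write(2, "y", 1)
    r1 = m.read(1, "y")                    # 0: thread 2's store not flushed
    r2 = m.read(2, "x")                    # 0: thread 1's store not flushed
    assert (r1, r2) == (0, 0)

    m.flush_one(1); m.flush_one(2)         # buffers drain to memory
    assert m.read(1, "y") == 1 and m.read(2, "x") == 1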

On Run-Time Enforcement of Authorization Constraints in Security-Sensitive Workflows

Abstract
In previous work, we showed how to use an SMT-based model checker to synthesize run-time enforcement mechanisms for business processes augmented with access control policies and authorization constraints, such as Separation of Duties. The synthesized enforcement mechanisms are able to guarantee both termination and compliance with security requirements, i.e., they solve the run-time version of the Workflow Satisfiability Problem (WSP). However, no systematic approach to specifying the various constraints considered in the WSP literature has been provided. In this paper, we first propose a classification of these constraints and then show how to encode them in the declarative input language of the SMT-based model checker used for synthesis. This shows the flexibility of the SMT approach for solving the run-time version of the WSP in the presence of different authorization constraints.
Daniel Ricardo dos Santos, Silvio Ranise
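
To suggest what such a declarative encoding can look like, here is a toy separation/binding-of-duties instance expressed with the Z3 Python bindings. The policy, tasks, and users are invented; the paper targets an SMT-based model checker with a richer input language.

    # A toy encoding of authorization constraints in SMT; hypothetical
    # policy, not the paper's encoding.
    from z3 import Int, Solver, Or, sat

    u = {t: Int(f"user_{t}") for t in ("t1", "t2", "t3")}
    s = Solver()

    # Authorization policy: which users (1, 2, 3) may execute each task.
    s.add(Or(u["t1"] == 1, u["t1"] == 2))
    s.add(Or(u["t2"] == 1, u["t2"] == 3))
    s.add(u["t3"] == 3)

    # Separation of Duties: t1 and t2 need different users;
    # Binding of Duties: t2 and t3 need the same user.
    s.add(u["t1"] != u["t2"])
    s.add(u["t2"] == u["t3"])

    if s.check() == sat:                   # a satisfying plan exists
        m = s.model()
        print({t: m[v] for t, v in u.items()})   # e.g. t1->1, t2->3, t3->3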

Trace Partitioning and Local Monitoring for Asynchronous Components

Abstract
We propose an instrumentation technique for monitoring asynchronous component systems that departs from the traditional runtime verification set-up assuming a single execution trace. The technique generates partitioned traces that better reflect the interleaved execution of the asynchronous components under scrutiny, and lends itself well to local monitoring. We provide argumentation for the qualitative benefits of our approach, demonstrate its implementability for actor-based systems, and justify claims related to the applicability and efficiency gains via an empirical evaluation over a third party component-based system.
Duncan Paul Attard, Adrian Francalanza
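
A minimal sketch of the partitioning step (the event format and names are our own assumptions): split the single interleaved stream into per-actor local traces, each of which a local monitor can consume independently.

    from collections import defaultdict

    # One interleaved global trace of (actor, action, payload) events.
    global_trace = [
        ("A", "spawn", "B"),
        ("B", "recv", "init"),
        ("A", "send", "req"),
        ("B", "send", "resp"),
        ("A", "recv", "resp"),
    ]

    def partition(trace):
        local = defaultdict(list)
        for actor, action, payload in trace:
            local[actor].append((action, payload))   # preserve local order
        return local

    for actor, events in partition(global_trace).items():
        # Each local monitor now sees only its own actor's events.
        print(actor, events)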

Compositional Verification of Interlocking Systems for Large Stations

Abstract
Railway interlocking systems are responsible for granting exclusive access to a route, that is, a sequence of track elements, through a station or a network. Formally verifying that an implementation satisfies the basic safety rules regarding exclusive access to routes is still a challenge for networks of large size, due to the exponential computation time and resources needed.
Some recent attempts to address this challenge adopt a compositional approach, targeted to track layouts that are easily decomposable into sub-networks such that a route is almost fully contained in a sub-network: in this way granting the access to a route is essentially a decision local to the sub-network, and the interfaces with the rest of the network easily abstract away less interesting details related to the external world.
Following up on previous work, where we defined a compositional verification method that considered routes overlapping between sub-networks in interlocking systems governing a multi-station line, we attack the verification of large networks, typically those in the main stations of major cities, where routes are highly intertwined and can hardly be separated into sub-networks that are independent to some degree. In this regard, we study how the division of a complex network into sub-networks, using stub elements to abstract all the routes that are shared between sub-networks, may still guarantee compositional verification of safety properties.
Alessandro Fantechi, Anne E. Haxthausen, Hugo D. Macedo

Formalizing Timing Diagram Requirements in Discrete Duration Calculus

Abstract
Several temporal logics have been proposed to formalise timing diagram requirements over hardware and embedded controllers. However, the succinctness and visual structure of a timing diagram are not adequately captured by their formulae [6]. The interval temporal logic QDDC is a highly succinct and visual notation for specifying patterns of behaviours [15]. In this paper, we propose a practically useful notation called SeCeNL, which enhances the quantifier- and negation-free fragment of QDDC with features of nominals and limited liveness. We show that for SeCeNL, the satisfiability and model checking problems have elementary complexity, as compared to the non-elementary complexity for the full logic QDDC. Next we show that timing diagrams can be formalized in SeCeNL naturally, compositionally and more succinctly than in PSL-Sugar and MTL. We give a linear time translation from timing diagrams to SeCeNL. As our second main result, we propose a linear time translation of SeCeNL into QDDC. This allows QDDC tools such as DCVALID [15, 16] and DCSynth [17] to be used for checking consistency of timing diagram requirements as well as for automatic synthesis of property monitors and controllers. We give an example of a minepump controller to illustrate our tools.
Raj Mohan Matteplackel, Paritosh K. Pandya, Amol Wakankar

On Approximate Diagnosability of Metric Systems

Abstract
The increasing complexity of today's engineered systems requires great attention to safety hazards and to the occurrence of faults, which must be readily detected so that the nominal behavior of the system can possibly be restored. The notion of diagnosability plays a key role in this regard, since it corresponds to the possibility of detecting within a finite delay whether a fault, or in general a hazardous situation, did occur. In this paper the notion of approximate diagnosability is introduced and characterized for the general class of metric systems, which are typically used in the research community on hybrid systems to study complex heterogeneous processes in cyber-physical systems. The proposed notion of approximate diagnosability captures the possibility of detecting faults on the basis of measurements corrupted by errors, which are always introduced by non-ideal sensors in a real environment. A characterization of approximate diagnosability in a set-membership framework is provided, and the computational complexity of the proposed algorithms is analyzed. Then, relations are established between the approximate diagnosability of a given metric system and the approximate diagnosability of a system that approximately simulates the given one. Application of the proposed results to the study of approximate diagnosability of nonlinear systems, with infinitely many states and inputs, is finally discussed.
Giordano Pola, Elena De Santis, Maria Domenica Di Benedetto

A Hazard Analysis Method for Systematic Identification of Safety Requirements for User Interface Software in Medical Devices

Abstract
Formal methods technologies have the potential to verify the usability and safety of user interface (UI) software design in medical devices, enabling significant reductions in use errors and the consequential safety incidents with such devices. This, however, depends on comprehensive and verifiable safety requirements that allow these techniques to detect and prevent flaws in UI software that can induce use errors. This paper presents a hazard analysis method that extends Leveson's System Theoretic Process Analysis (STPA) with a comprehensive set of causal factor categories, so as to provide developers with clear guidelines for systematically identifying use-related hazards associated with medical devices, their causes embedded in UI software design, and safety requirements for mitigating such hazards. The method is evaluated with a case study on the Gantry-2 radiation therapy system, which demonstrates that (1) compared to standard STPA, our method allowed us to identify more UI software design issues likely to cause use-related hazards; and (2) the identified UI software design issues facilitated the definition of precise, verifiable safety requirements for UI software, which could be readily formalized in verification tools such as the Prototype Verification System (PVS).
Paolo Masci, Yi Zhang, Paul Jones, José C. Campos

Modular Verification of Information Flow Security in Component-Based Systems

Abstract
We propose a novel method for the verification of information flow security in component-based systems. The method is (a) modular w.r.t. services and components, i.e., overall security is proved to follow from the security of the individual services provided by the components, and (b) modular w.r.t. attackers, i.e., verified security properties can be re-used to demonstrate security w.r.t. different kinds of attacks.
In a first step, user-provided security specifications for individual services are verified using program analysis techniques. In a second step, first-order formulas are generated which express that component non-interference follows from the service-level properties; in a third step, further formulas express that global system security follows from component non-interference. These first-order proof obligations are discharged with a first-order theorem prover. The overall approach is independent of the programming language used to implement the components. We provide a soundness proof for our method and highlight its advantages, especially in the context of evolving systems.
As a proof of concept and to demonstrate the usability of our method, we present a case study, where we verify the security of a system implemented in Java against two types of attackers. We apply the program verification system KeY and the program analysis tool Joana for analyzing individual services; modularity of our approach allows us to use them in parallel.
Simon Greiner, Martin Mohr, Bernhard Beckert

IJIT: An API for Boolean Program Analysis with Just-in-Time Translation

Abstract
Exploration algorithms for explicit-state transition systems are a core back-end technology in program verification. They can be applied to programs by generating the transition system on the fly, avoiding an expensive up-front translation. An on-the-fly strategy, however, requires significant modifications to the algorithm's implementation, so that states are stored directly as valuations of program variables. Performed manually on a per-algorithm basis, such modifications are laborious and error-prone.
In this paper we present the Ijit Application Programming Interface (API), which allows users to automatically transform a given transition system exploration algorithm to one that operates on Boolean programs. The API converts system states temporarily to program states just in time for expansion via image computations, forward or backward. Using our API, we have effortlessly extended various non-trivial (e.g. infinite-state) model checking algorithms to operate on multi-threaded Boolean programs. We demonstrate the ease of use of the API, and present a case study on the impact of the just-in-time translation on these algorithms.
Peizun Liu, Thomas Wahl

Specification and Semantic Analysis of Embedded Systems Requirements: From Description Logic to Temporal Logic

Abstract
Due to the increasing complexity of embedded systems, early detection of software/hardware errors has become desirable. In this context, effective yet flexible specification methods that support rigorous analysis of embedded systems requirements are needed. Current specification methods, such as patterns and boilerplates, normally lack meta-models for extensibility and flexibility. In contrast, formal specification languages, like temporal logic, Z, etc., enable rigorous analysis; however, they are usually too mathematical and difficult to comprehend for average software engineers. In this paper, we propose a specification representation of requirements which considers thematic roles and domain knowledge, enabling deep semantic analysis. The specification is complemented by our constrained natural language specification framework, ReSA, which acts as the interface to the representation. The representation that we propose is encoded in description logic, which is a decidable and computationally tractable ontology language. By employing the ontology reasoner HermiT, we check for consistency and completeness of requirements. Moreover, we propose an automatic transformation of the ontology-based specifications into Timed Computation Tree Logic formulas, to be used further in model checking embedded systems.
Nesredin Mahmud, Cristina Seceleanu, Oscar Ljungkrantz

Computing Conditional Probabilities: Implementation and Evaluation

Abstract
Conditional probabilities and expectations are an important concept in the quantitative analysis of stochastic systems, e.g., to analyze the impact and cost of error handling mechanisms in rare failure scenarios, or for a utility analysis assuming an exceptional shortage of resources. This paper reports on the main features of an implementation of computation schemes for conditional probabilities in discrete-time Markov chains and Markov decision processes within the probabilistic model checker Prism, together with a comparative experimental evaluation. Our implementation has full support for computing conditional probabilities where both the objective and the condition are given as linear temporal logic formulas, as well as specialized algorithms for reachability and other simple types of path properties. In the case of Markov chains, we provide implementations for three alternative methods (quotient, scale and reset). We support Prism's explicit and (semi-)symbolic engines. Besides comparative studies exploring the three dimensions (methods, engines, general vs. special handling), we compare the performance of our implementation with the probabilistic model checker Storm, which provides facilities for conditional probabilities of reachability properties.
Steffen Märcker, Christel Baier, Joachim Klein, Sascha Klüppelholz
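
The quotient view of conditional probabilities can be demonstrated on a toy chain: P(objective | condition) is the probability mass of paths satisfying both, rescaled by the mass satisfying the condition. This naive Python sketch is ours; Prism and Storm compute such values symbolically and at scale.

    # Conditional reachability in a tiny, invented DTMC.
    # Transition function: state -> list of (successor, probability).
    P = {
        "s0": [("s1", 0.5), ("s2", 0.5)],
        "s1": [("goal", 0.8), ("fail", 0.2)],
        "s2": [("goal", 0.1), ("fail", 0.9)],
        "goal": [], "fail": [],                 # absorbing states
    }

    def prob_reach(state, target, avoid=frozenset()):
        if state in avoid:  return 0.0
        if state == target: return 1.0
        return sum(p * prob_reach(s, target, avoid) for s, p in P[state])

    # P(reach goal | never visit s2) = P(goal, avoiding s2) / P(avoid s2).
    # Every path is eventually absorbed, so the paths avoiding s2 are
    # exactly those reaching goal or fail while avoiding s2.
    num = prob_reach("s0", "goal", avoid={"s2"})
    den = num + prob_reach("s0", "fail", avoid={"s2"})
    print(num / den)   # ~0.8: conditioning rescales the surviving paths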

Validating the Meta-Theory of Programming Languages (Short Paper)

Abstract
We report on work in progress on building an environment for the validation of the meta-theory of programming language artifacts, for example the correctness of compiler translations; the basic idea is to couple property-based testing with binders-aware functional programming as the meta-language for specification and testing. Treating binding signatures and related notions, such as the generation of new names, α-equivalence and capture-avoiding substitution, correctly and effectively is crucial in the verification and validation of programming language (meta)theory. We use Haskell as our meta-language, since it offers various libraries for both random and exhaustive generation of tests, as well as for binders. We validate our approach on benchmarks of mutations presented in the literature and on some examples of code "in the wild". In the former case, not only did we very quickly (re)discover all the planted bugs, but we achieved this with very little configuration effort in comparison to the competition. In the latter case, we located several simple bugs that had survived for years in publicly available (academic) code. We believe that our approach adds to the increasing evidence of the usefulness of property-based testing for the semantic engineering of programming languages, as an alternative or a precursor to full verification.
Guglielmo Fachini, Alberto Momigliano
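
To convey the style of property-based validation (transposed from the paper's Haskell setting into Python with the hypothesis library, and using a binder-free toy artifact of our own), the sketch below checks that a constant-folding pass preserves expression semantics; a planted mutation in fold would typically be found within a few hundred random cases.

    # Property-based testing of a toy meta-theory claim with Hypothesis
    # (pip install hypothesis); the artifact is invented, not the paper's.
    from hypothesis import given, strategies as st

    # Expressions: an int leaf, or ('+', e1, e2) / ('*', e1, e2).
    expr = st.deferred(lambda: st.integers(-5, 5) | st.tuples(
        st.sampled_from(["+", "*"]), expr, expr))

    def ev(e):
        if isinstance(e, int): return e
        op, a, b = e
        return ev(a) + ev(b) if op == "+" else ev(a) * ev(b)

    def fold(e):
        # Constant folding: collapse operations on literal operands.
        if isinstance(e, int): return e
        op, a, b = e
        a, b = fold(a), fold(b)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "+" else a * b
        return (op, a, b)

    @given(expr)
    def test_fold_preserves_semantics(e):
        assert ev(fold(e)) == ev(e)

    test_fold_preserves_semantics()   # runs many randomly generated cases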

Towards Inverse Uncertainty Quantification in Software Development (Short Paper)

Abstract
With the purpose of delivering more robust systems, this paper revisits the problem of Inverse Uncertainty Quantification, which concerns the discrepancy between the data measured at runtime (while the system executes) and the formal specification (i.e., a mathematical model) of the system under consideration, and the calibration of the values of unknown parameters in the model. We advocate an approach that quantifies and mitigates system uncertainty during the development cycle by combining Bayesian reasoning and online model-based testing.
Matteo Camilli, Angelo Gargantini, Patrizia Scandurra, Carlo Bellettini

Interpolation-Based Learning as a Mean to Speed-Up Bounded Model Checking (Short Paper)

Abstract
In this paper (a short paper accepted in the new ideas and work-in-progress section of SEFM 2017), we introduce a technique to improve the efficiency of SAT calls in Bounded Model Checking (BMC) problems. The proposed technique is based on exploiting interpolation-based invariants as redundant constraints for BMC.
Previous research addressed the issue using over-approximated state sets generated by BDD-based traversals. Whereas a BDD engine could be considered an external tool, interpolants are directly related to BMC problems, as they come from SAT-generated refutation proofs, so their potential as a form of SAT-based learning is higher. Our work aims at understanding whether and how interpolants can speed up BMC checks, as they represent constraints on forward and backward reachable states at given unrolling boundaries.
As this work is preliminary, we do not address a tight integration between interpolant generation and exploitation. We thus clearly distinguish an interpolant generation (learning) phase from a subsequent interpolant exploitation phase in a BMC run. We experimentally evaluate costs, benefits, and invariant selection options on a set of publicly available model checking problems.
Gianpiero Cabodi, Paolo Camurati, Marco Palena, Paolo Pasini, Danilo Vendraminetto

Towards Automated Deployment of Self-adaptive Applications on Hybrid Clouds (Short Paper)

Abstract
Cloud computing promises high dynamism, flexibility, and elasticity of applications at lower infrastructure costs. However, resource management, portability, and interoperability remain a challenge for cloud application users, since the major cloud application providers have not converged on a standard interface, and the supporting deployment tools are highly heterogeneous. Besides, by their very nature, cloud applications raise serious traceability, security and privacy issues. This position paper describes a research thread on an extensible Domain Specific Language (DSL), a platform for automated deployment, and a generic architecture of an ops application manager for self-adaptive distributed applications on hybrid cloud infrastructures. The idea is to overcome the aforementioned limitations by empowering cloud applications with self-configuration, self-healing, and self-protection capabilities. Such autonomous governance can be achieved by letting cloud users define their policies concerning security, data protection, dependability and functional compliance behavior using the proposed DSL. Real-world trials in different application domains are discussed.
Lom Messan Hillah, Rodrigo Assad, Antonia Bertolino, Marcio Delamaro, Fabio De Rosa, Vinicius Garcia, Francesca Lonetti, Ariele-Paolo Maesano, Libero Maesano, Eda Marchetti, Breno Miranda, Auri Vincenzi, Juliano Iyoda

A Diagnosis Framework for Critical Systems Verification (Short Paper)

Abstract
For critical systems design, verification tasks play a crucial role. If abnormalities are detected, a diagnostic process must be started to find and understand the root causes before corrective actions are applied. Detection and diagnosis are notions that overlap in common speech. Detection basically means identifying something as unusual; diagnosis means investigating its root cause. The meaning of diagnosis is also fuzzy, because diagnosis is either an activity (an investigation) or an output result (the nature or type of a problem). This paper proposes an organizational framework for structuring diagnoses around three principles: propositional data (including detections) are the inputs of the diagnostic system; activities are made of methods and techniques; and associations specialize the relationships between the two preceding categories.
Vincent Leildé, Vincent Ribaud, Ciprian Teodorov, Philippe Dhaussy

Design of Embedded Systems with Complex Task Dependencies and Shared Resource Interference (Short Paper)

Abstract
Languages for embedded systems ensure predictable timing behavior by specifying constraints based on either data streaming or reactive control models of computation. Moreover, various toolsets facilitate the incremental integration of application functionalities and the system design by evolutionary refinement and model-based code generation. Modern embedded systems involve various sources of interference in shared resources (e.g. multicores) and advanced real-time constraints, such as mixed-criticality levels. A sufficiently expressive modeling approach for complex dependency patterns between real-time tasks is needed along with a formal analysis of models for runtime resource managers with timing constraints. Our approach utilizes a model of computation, called Fixed-Priority Process Networks, which ensures functional determinism by unifying streaming and reactive control within a timed automata framework. The tool flow extends the open source TASTE tool-suite with model transformations to the BIP language and code generation tools. We outline the use of our flow on the design of a spacecraft on-board application running on a quad-core LEON4FT processor.
Fotios Gioulekas, Peter Poplavko, Rany Kahil, Panagiotis Katsaros, Marius Bozga, Saddek Bensalem, Pedro Palomo

Backmatter
