
About this book

Software reuse and integration has been described as the process of creating software systems from existing software rather than building them from scratch. Whereas reuse deals solely with the creation of artifacts, integration focuses on how reusable artifacts interact with the already existing parts of the specified transformation. Currently, most reuse research focuses on creating and integrating adaptable components at development time or at compile time. However, with the emergence of ubiquitous computing, reuse technologies that can support the adaptation and reconfiguration of architectures and components at runtime are in demand.

This edited book includes 15 high-quality research papers written by experts in information reuse and integration, covering the most recent advances in the field. These papers are extended versions of the best papers presented at the IEEE International Conference on Information Reuse and Integration and the IEEE International Workshop on Formal Methods Integration, held in San Francisco in August 2013.



Cloud-Based Tasking, Collection, Processing, Exploitation, and Dissemination in a Case-Based Reasoning System

The current explosion in sensor data has brought us to a tipping point in intelligence, surveillance, and reconnaissance technologies. This problem can be addressed through the insertion of novel artificial intelligence-based methodologies. This chapter proposes a novel computational intelligence methodology that can learn to map distributed heterogeneous data to actionable meaning for dissemination. The impact of this approach is that it provides a core solution to the tasking, collection, processing, exploitation, and dissemination (TCPED) problem. The expected operational performance improvements include the capture and reuse of analyst expertise, an order-of-magnitude reduction in required bandwidth, and, for the user, prioritized intelligence based on the knowledge derived from distributed heterogeneous sensing. A simple schema example is presented, and an instantiation of it shows how to practically create feature search spaces. Given the availability of high-speed parallel processors, such an arrangement allows for the effective conceptualization of non-random causality.
Stuart H. Rubin, Gordon K. Lee
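The mapping from sensed features to actionable meaning can be grounded in case-based retrieval: match the current situation against stored cases and reuse the action of the closest one. The following is a minimal, hypothetical sketch; the feature vectors, case base, and action labels are invented for illustration and are not the chapter's system.

```python
def retrieve(case_base, query):
    """Case-based retrieval: return the stored action whose feature
    vector is closest (Euclidean distance) to the query's features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    features, action = min(case_base, key=lambda case: dist(case[0], query))
    return action

# Hypothetical cases: (normalized feature vector, analyst action).
case_base = [((0.9, 0.1), "prioritize"), ((0.1, 0.8), "archive")]
result = retrieve(case_base, (0.8, 0.2))  # closest case says 'prioritize'
```

In a TCPED setting, the case base would encode captured analyst expertise, so retrieval reuses prior decisions instead of re-deriving them.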

Simulation-Based Validation for Smart Grid Environments: Framework and Experimental Results

Large and complex systems, such as the Smart Grid, are often best understood through the use of modeling and simulation. In particular, the task of assessing a complex system’s risks and testing its tolerance and recovery under various attacks has received considerable attention. However, such tedious tasks still demand a systematic approach to model and evaluate each component in complex systems. In other words, supporting formal validation and verification without needing to implement the entire system or to access the existing physical infrastructure is critical, since many elements of the Smart Grid are still in the process of becoming standardized for widespread use. In this chapter, we describe our simulation-based approach to understanding and examining the behavior of various components of the Smart Grid in the context of verification and validation. To achieve this goal, we adopt the discrete event system specification (DEVS) modeling methodology, which allows the generalization and specialization of entities in the model and supports a customized simulation with specific variables. In addition, we articulate metrics for supporting our simulation-based verification and validation and demonstrate the feasibility and effectiveness of our approach with a real-world use case.
Wonkyu Han, Mike Mabey, Gail-Joon Ahn, Tae Sung Kim
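DEVS itself defines atomic and coupled models with formal state-transition functions; as a much-reduced illustration of the discrete-event simulation style it builds on, here is a minimal event-queue loop. The event names and timings are invented, not taken from the chapter's use case.

```python
import heapq

def simulate(events, horizon):
    """Minimal discrete-event loop: pop (time, name) events in timestamp
    order, letting each handler schedule follow-up events."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, name = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, name))
        # A hypothetical periodic component: each 'meter_read' event
        # schedules the next read 15 time units later.
        if name == "meter_read":
            heapq.heappush(queue, (time + 15, "meter_read"))
    return log

log = simulate([(0, "meter_read"), (5, "attack_probe")], horizon=40)
# events fire in timestamp order: reads at 0, 15, 30; the probe at 5
```

A full DEVS model would attach state, time-advance, and transition functions to each component; this loop only shows the scheduling backbone such simulators share.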

An Institution for Alloy and Its Translation to Second-Order Logic

Lightweight formal methods, of which Alloy is a prime example, offer the rigour of mathematics without compromising simplicity of use or suitable tool support. In some cases, however, the verification of safety- or mission-critical software entails the need for more sophisticated technologies, typically based on theorem provers. This explains a number of attempts, documented in the literature, to connect Alloy to specific theorem provers. This chapter, however, takes a different perspective: instead of focusing on one more combination of Alloy with yet another prover, it lays out the foundations to fully integrate this system in the Hets platform, which supports a huge network of logics, logic translators, and provers. This makes it possible for Alloy specifications to “borrow” the power of several non-dedicated proof systems. The chapter extends the authors’ previous work on this subject by developing in full detail the semantic foundations for this integration, including a formalisation of Alloy as an institution, and by introducing a new, more general translation of the latter to second-order logic.
Renato Neves, Alexandre Madeira, Manuel Martins, Luís Barbosa

A Framework for Verification of SystemC Designs Using SystemC Waiting State Automata

The SystemC waiting-state automaton is a compositional abstract formal model for verifying properties of SystemC at the transaction level within a delta-cycle: the smallest simulation time unit in SystemC. In this chapter, we first show how to extract automata for SystemC components, distinguishing between threads and methods in SystemC. Then, we propose an approach based on a combination of symbolic execution and computing fixed points via predicate abstraction to infer relations between predicates generated during symbolic execution. Finally, we define how to apply model checking to prove the correctness of the abstract analysis.
Nesrine Harrath, Bruno Monsuez, Kamel Barkaoui
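The fixed-point computation at the heart of such abstract analyses can be illustrated independently of SystemC: iterate an abstract transformer until the result stabilises. A minimal Kleene-style iteration over a toy transition relation; the states and transitions are invented for illustration.

```python
def fixpoint(step, start):
    """Iterate a monotone transformer until it stabilises (Kleene iteration)."""
    current = start
    while True:
        nxt = step(current)
        if nxt == current:
            return current
        current = nxt

# Toy successor relation for three hypothetical automaton states.
succ = {"init": {"wait"}, "wait": {"wait", "run"}, "run": {"wait"}}

# Reachable-state set: add successors until nothing new appears.
reachable = fixpoint(lambda s: s | {t for n in s for t in succ.get(n, set())},
                     {"init"})
```

Predicate abstraction replaces the concrete state set with a finite set of predicates, which is what guarantees this iteration terminates.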

Formal MDE-Based Tool Development

Model-driven engineering (MDE) focuses on creating and exploiting (specific) domain models. It is common to use domain-specific languages (DSLs) to describe the concrete elements of such models. MDE tools can easily build DSLs, although it is not clear how to capture dynamic semantics or how to formally verify properties. Formal methods are a well-known solution for providing correct software, but human-machine interaction is usually not addressed. Several industries, particularly the safety-critical ones, use mathematical representations to deal with their problem domains. Such DSLs are difficult to capture using MDE tools alone, because they have specific semantics that provide the desired (core) expected behavior. Thus, we propose a rigorous methodology to create GUI-based (Graphical User Interface) formal tools for DSLs. We aim at providing a productive and trustworthy development methodology to safety-critical industries.
Robson Silva, Alexandre Mota, Rodrigo Rizzi Starr

Formal Modeling and Analysis of Learning-Based Routing in Mobile Wireless Sensor Networks

Limited energy supply is a major concern when dealing with wireless sensor networks (WSNs). Therefore, routing protocols for WSNs should be designed to be energy efficient. This chapter considers a learning-based routing protocol for WSNs with mobile nodes, which is capable of handling both centralized and decentralized routing. A priori knowledge of the movement patterns of the nodes is exploited to select the best routing path, using a Bayesian learning algorithm. While simulation tools cannot generally prove that a protocol is correct, formal methods can explore all possible behaviors of network nodes to search for failures. We develop a formal model of the learning-based protocol and use the rewriting logic tool Maude to analyze both the correctness and the efficiency of the model. Our experimental results show that the decentralized approach is twice as energy-efficient as the centralized scheme. It also outperforms power-sensitive AODV (PS-AODV), an efficient but non-learning routing protocol. Our formal model of Bayesian learning integrates a real dataset, which forces the model to conform to the real data. This technique seems useful beyond the case study of this chapter.
Fatemeh Kazemeyni, Olaf Owe, Einar Broch Johnsen, Ilangko Balasingham
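The Bayesian route-selection idea can be sketched with a Beta-Bernoulli model of per-link delivery: observed successes and failures update a posterior delivery probability, and the route with the highest product of link estimates wins. This is a generic sketch, not the chapter's protocol; the counts and routes are invented.

```python
def update(successes, failures, delivered):
    """One Bernoulli observation of a link delivering (or dropping) a packet."""
    return (successes + 1, failures) if delivered else (successes, failures + 1)

def link_estimate(successes, failures):
    """Posterior mean delivery probability under a uniform Beta(1,1) prior."""
    return (successes + 1) / (successes + failures + 2)

def best_route(routes):
    """Pick the route maximizing the product of its per-link estimates."""
    def score(route):
        p = 1.0
        for s, f in route:
            p *= link_estimate(s, f)
        return p
    return max(range(len(routes)), key=lambda i: score(routes[i]))

# Two hypothetical routes, each a list of (successes, failures) per link.
routes = [[(8, 2), (9, 1)], [(3, 7), (9, 0)]]
choice = best_route(routes)  # route 0: (9/12)*(10/12) beats (4/12)*(10/11)
```

The prior keeps estimates well-defined for links with no history, which matters when mobility constantly introduces new links.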

On the Use of Anaphora Resolution for Workflow Extraction

In this chapter we present three anaphora resolution approaches for workflow extraction. We introduce a lexical approach and two further approaches based on a set of association rules created during a statistical analysis of a corpus of workflows. We implement these approaches in our generic workflow extraction framework. The framework makes it possible to derive a formal workflow-based representation from textual descriptions of instructions, for instance, aircraft repair procedures from a maintenance manual. The framework applies a pipes-and-filters architecture and uses Natural Language Processing (NLP) tools to perform information extraction steps automatically. We evaluate the anaphora resolution approaches in the cooking domain. Two different evaluation functions, which compare the extraction result with a gold standard, are used. The syntactic function is strictly limited to syntactic comparison, while the semantic evaluation function can use an ontology to infer a semantic distance. The evaluation shows that the most advanced anaphora resolution approach performs best. In addition, a comparison of the two evaluation functions shows that the semantic one is better suited for evaluating the anaphora resolution approaches.
Pol Schumacher, Mirjam Minor, Erik Schulte-Zurhausen
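A purely lexical, recency-based resolver for cooking steps gives a feel for the simplest of the three approaches. This toy sketch (the heuristic, pronoun list, and lexicon are simplifications invented here, not the chapter's association-rule methods) resolves "it"/"them" to the most recently mentioned ingredient.

```python
def resolve_anaphora(steps, lexicon):
    """Resolve 'it'/'them' in each step to the most recently mentioned
    lexicon term from earlier text (a recency-based lexical heuristic)."""
    resolved, mentioned = [], []
    for step in steps:
        words = step.lower().replace(",", "").split()
        has_pronoun = "it" in words or "them" in words
        antecedent = mentioned[-1] if has_pronoun and mentioned else None
        resolved.append((step, antecedent))
        # Record this step's mentions only after resolving its pronouns.
        mentioned.extend(w for w in words if w in lexicon)
    return resolved

steps = ["Chop the onion", "Fry it in oil", "Add the tomatoes", "Stir them gently"]
out = resolve_anaphora(steps, {"onion", "tomatoes", "oil"})
# 'it' -> 'onion', 'them' -> 'tomatoes'
```

Recording mentions only after resolution prevents a step's own nouns ("oil" in "Fry it in oil") from capturing its pronoun, which is the classic pitfall of recency heuristics.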

A Minimum Description Length Technique for Semi-Supervised Time Series Classification

In recent years the plunging costs of sensors and storage have made it possible to obtain vast amounts of medical telemetry, both in clinical settings and, more recently, even in patients’ own homes. However, for this data to be useful, it must be annotated. This annotation, which requires the attention of medical experts, is very expensive and time consuming, and remains the critical bottleneck in medical analysis. Semi-supervised learning is the obvious way to reduce the need for human labor; however, most such algorithms are designed for intrinsically discrete objects such as graphs or strings, and do not work well in this domain, which requires the ability to deal with real-valued objects arriving in a streaming fashion. In this work we make two contributions. First, we demonstrate that in many cases a surprisingly small set of human-annotated examples is sufficient to perform accurate classification. Second, we devise a novel parameter-free stopping criterion for semi-supervised learning. We evaluate our work with a comprehensive set of experiments on diverse medical data sources, including electrocardiograms. Our experimental results suggest that our approach can typically construct accurate classifiers even when given only a single annotated instance.
Nurjahan Begum, Bing Hu, Thanawin Rakthanmanon, Eamonn Keogh
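The self-training loop that such a stopping criterion would govern can be sketched on one-dimensional points: repeatedly adopt the unlabeled instance nearest to any labeled one, inheriting its label. This sketch omits the MDL criterion itself (a fixed step budget stands in for it), and the data points and labels are invented.

```python
def self_train(labeled, unlabeled, steps):
    """Greedy 1-NN self-training on 1-D points: at each step, label the
    unlabeled instance closest to any labeled one with that neighbor's label."""
    labeled = dict(labeled)            # point -> label
    unlabeled = set(unlabeled)
    for _ in range(min(steps, len(unlabeled))):
        u, nearest = min(
            ((u, l) for u in unlabeled for l in labeled),
            key=lambda pair: abs(pair[0] - pair[1]),
        )
        labeled[u] = labeled[nearest]  # inherit the nearest label
        unlabeled.discard(u)
    return labeled

seed = {0.0: "normal", 10.0: "arrhythmia"}
grown = self_train(seed, [1.0, 9.0, 4.0], steps=3)
# 1.0 and 4.0 inherit 'normal'; 9.0 inherits 'arrhythmia'
```

The chapter's contribution is deciding *when* to stop this loop without a parameter; a description-length measure replaces the fixed `steps` budget used here.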

Interpreting Random Forest Classification Models Using a Feature Contribution Method

Model interpretation is one of the key aspects of the model evaluation process. The explanation of the relationship between model variables and outputs is relatively easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance. For “black box” models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows for the determination of the influence of each variable on the model prediction for an individual instance. By analysing feature contributions for a training dataset, the most significant variables can be determined, and their typical contributions towards predictions made for individual classes, i.e., class-specific feature contribution “patterns”, can be discovered. These patterns represent a standard behaviour of the model and allow for an additional assessment of the model reliability for new data. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of the results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models.
Anna Palczewska, Jan Palczewski, Richard Marchese Robinson, Daniel Neagu
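The core idea, crediting each split's change in predicted class probability to the feature tested at that split, can be sketched for a single decision tree; a forest would average these per-tree contributions. The tree below is hand-built for illustration and the node layout is an assumption, not the chapter's data structures.

```python
def feature_contributions(tree, x, n_features):
    """Walk one decision tree; the change in the positive-class fraction at
    each split is credited to the feature tested there."""
    contrib = [0.0] * n_features
    node = tree
    while "feature" in node:                          # internal node
        go_left = x[node["feature"]] <= node["threshold"]
        child = node["left"] if go_left else node["right"]
        contrib[node["feature"]] += child["value"] - node["value"]
        node = child
    return node["value"], contrib    # prediction = root value + sum(contrib)

# Hypothetical tree: each node stores its positive-class fraction as 'value'.
tree = {
    "feature": 0, "threshold": 2.0, "value": 0.5,
    "left": {
        "feature": 1, "threshold": 1.0, "value": 0.7,
        "left": {"value": 0.9}, "right": {"value": 0.4},
    },
    "right": {"value": 0.1},
}
pred, contrib = feature_contributions(tree, x=[1.5, 0.5], n_features=2)
# pred 0.9; each feature contributed +0.2 over the root's 0.5
```

Because the contributions sum exactly to the gap between the root prediction and the leaf prediction, they decompose an individual prediction rather than give a global importance score.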

Towards a High Level Language for Reuse and Integration

The modeling and design of complex systems continues to face grand challenges in feedback and control. Existing languages and tools, whether textual or graphical, bring some improvement for such purposes, but much remains to be done to readily ensure scalability. In this chapter, we propose a language that combines specialization and composition properties. It is our belief that these properties provide the capabilities necessary to overcome the difficulties raised when developing such systems. The language is designed to be specific to complex system domains. It supports a component-based structure that lends itself to user-friendly component assembly. The proposed structure is based on static, dynamic, functional, and parametric parts. It is conceived in the spirit of SysML concepts. The language supports both textual and graphical specification. The specified models generate Internal Block Diagrams. A modeling tool is built on the basis of the Eclipse framework.
Thouraya Bouabana-Tebibel, Stuart H. Rubin, Kadaouia Habib, Asmaa Chebba, Sofia Mellah, Lynda Allata

An Exploratory Case Study on Exploiting Aspect Orientation in Mobile Game Porting

Portability is a crucial requirement in the mobile game domain. Aspect-oriented programming has been shown to be a promising solution to implement the portability concerns, and more generally, to be a key technical enabler to transition mobile application development toward systematic software reuse. In this chapter, we report an exploratory case study that critically examines how aspect orientation is practiced in industrial-strength mobile game applications. Our analysis takes into account technical artifacts, organizational structures, and their relationships. Altogether these complementary and synergistic viewpoints allow us to formulate a set of hypotheses and to offer some concrete insights into developing information reuse and integration strategies in the rapidly changing landscape of mobile software development.
Tanmay Bhowmik, Vander Alves, Nan Niu

Developing Frameworks from Extended Feature Models

Frameworks are composed of concrete and abstract classes implementing the functionality of a domain. Applications can reuse framework design and code to improve their quality and be developed more efficiently. However, framework development is a complex task, since a framework must be adaptable enough to be reused by several applications. In this chapter we present the From Features to Framework (F3) approach, which aims to facilitate the development of frameworks. The approach is divided into two steps: Domain Modeling, in which the framework domain is defined in an extended feature model; and Framework Construction, in which the framework is designed and implemented by following a set of patterns derived from its feature model. Since these steps can be applied systematically, we also present the design of a tool that supports the use of the F3 approach in framework development. Moreover, we performed an experiment showing that the F3 approach makes framework development easier and more efficient.
Matheus Viana, Rosângela Penteado, Antônio do Prado, Rafael Durelli

About Handling Non-conflicting Additional Information

The focus in this chapter is on logic-based Artificial Intelligence (A.I.) systems that must accommodate some incoming symbolic knowledge that is not inconsistent with the initial beliefs but that nevertheless requires a form of belief change. First, we investigate situations where the incoming knowledge is both more informative and deductively follows from the preexisting beliefs: the system must get rid of the existing logically subsuming information. Likewise, we consider situations where the new knowledge must replace or amend some previous beliefs. When the A.I. system is equipped with standard-logic inference capabilities, merely adding this incoming knowledge into the system is not appropriate. In the chapter, this issue is addressed within a Boolean standard-logic representation of knowledge and reasoning. In particular, we show that a prime implicates representation of beliefs is an appealing specific setting in this respect.
Éric Grégoire
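In a clausal setting, one concrete instance of discarding logically redundant beliefs is subsumption removal: a clause whose literals form a subset of another's makes the larger clause redundant. A minimal sketch, not the chapter's prime-implicates machinery; the clause base is invented.

```python
def drop_subsumed(clauses):
    """Keep only clauses not strictly subsumed by another clause
    (C subsumes D when C's literals are a strict subset of D's)."""
    clauses = [frozenset(c) for c in clauses]
    kept = []
    for c in clauses:
        if not any(other < c for other in clauses):   # strict-subset test
            kept.append(c)
    return kept

# {p} subsumes {p, q}: once the stronger clause {p} is present,
# the weaker {p, q} carries no extra information and can be dropped.
kb = [{"p", "q"}, {"p"}, {"q", "r"}]
reduced = drop_subsumed(kb)  # -> [{'p'}, {'q', 'r'}]
```

A prime implicates representation makes such redundancies directly visible, which is one reason the chapter finds it an appealing setting for this kind of belief change.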

A Multi-Layer Moving Target Defense Approach for Protecting Resource-Constrained Distributed Devices

Techniques aimed at continuously changing a system’s attack surface, usually referred to as Moving Target Defense (MTD), are emerging as powerful tools for thwarting cyber attacks. Such mechanisms increase the uncertainty, complexity, and cost for attackers, limit the exposure of vulnerabilities, and ultimately increase overall resiliency. In this chapter, we propose an MTD approach for protecting resource-constrained distributed devices through fine-grained reconfiguration at different architectural layers. We introduce a coverage-based security metric to quantify the level of security provided by each system configuration: such a metric, along with other performance metrics, can be adopted to identify the configuration that best meets the current requirements. In order to show the feasibility of our approach in real-world scenarios, we study its application to Wireless Sensor Networks (WSNs), introducing two different reconfiguration mechanisms. Finally, we show how the proposed mechanisms are effective in reducing the probability of successful attacks.
Valentina Casola, Alessandra De Benedictis, Massimiliano Albanese
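A coverage-style metric can be sketched as the fraction of known threat classes mitigated by at least one mechanism in a configuration; the configuration with the highest score is preferred. The threat names and configurations below are invented for illustration and are not the chapter's metric definition.

```python
def coverage(config, threats):
    """Fraction of the known threat classes covered by at least one
    mechanism in the configuration (each mechanism = a set of threats
    it mitigates)."""
    covered = set().union(*config) if config else set()
    return len(covered & threats) / len(threats)

# Hypothetical threat classes for a WSN deployment.
threats = {"eavesdrop", "replay", "node_capture", "jamming"}

# Two candidate configurations, each a list of mechanisms.
configs = {
    "A": [{"eavesdrop"}, {"replay"}],
    "B": [{"eavesdrop", "replay"}, {"node_capture"}],
}
best = max(configs, key=lambda name: coverage(configs[name], threats))
# B covers 3/4 threat classes, A covers 2/4
```

In an MTD loop, the defender would periodically switch among high-coverage configurations, trading this security score against performance metrics.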

Protocol Integration for Trust-Based Communication

In mobile ad hoc networks (MANETs), nodes have to cooperate in order to accomplish routing tasks. Nevertheless, they have limited resources and may behave in a selfish way. On the other hand, the networking infrastructure supporting routing is quite weak in the face of such misbehaviors. In this chapter, we propose a trust model for reactive routing in MANETs. The proposed solution applies to any source routing protocol. It is based on mechanisms inspired by the CONFIDANT protocol to install and update trust in the network. The model also integrates new protocols to improve trust in the selected routes. It can adapt to topology changes caused by the mobility of nodes and takes into account new routes learned after the route request phase. Finally, it improves the choice of the safest route towards the destination. Both fundamental and elaborate tests demonstrate the efficiency of the solution.
Fatma Laidoui, Thouraya Bouabana-Tebibel
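A reputation-style trust update and threshold-based route filtering can be sketched as follows. This is a simplified illustration in the spirit of such schemes, not the chapter's model; node names, the update rule, and all parameters are invented.

```python
def update_trust(trust, node, behaved_well, alpha=0.3):
    """Exponential moving average of observed behaviour; alpha weights
    the newest observation, starting from a neutral 0.5 prior."""
    current = trust.get(node, 0.5)
    trust[node] = (1 - alpha) * current + alpha * (1.0 if behaved_well else 0.0)
    return trust[node]

def safest_route(routes, trust, threshold=0.4):
    """Among routes whose nodes all exceed the trust threshold, pick the
    one with the highest minimum trust (its weakest link)."""
    ok = [r for r in routes if all(trust.get(n, 0.5) > threshold for n in r)]
    return max(ok, key=lambda r: min(trust.get(n, 0.5) for n in r)) if ok else None

trust = {}
update_trust(trust, "n1", True)    # n1 observed forwarding -> trust rises
update_trust(trust, "n2", False)   # n2 observed dropping -> trust falls
route = safest_route([["n1", "n2"], ["n1", "n3"]], trust)
# n2 fell below the threshold, so the route through n3 is chosen
```

Filtering on the weakest link reflects that a single selfish node on a source route is enough to compromise delivery.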

