
About this book

This book constitutes the refereed proceedings of the 50th International Conference on Objects, Models, Components, Patterns, TOOLS Europe 2012, held in Prague, Czech Republic, during May 29-31, 2012. The 24 revised full papers presented were carefully reviewed and selected from 77 submissions. The papers discuss all aspects of object technology and related fields and demonstrate practical applications backed up by formal analysis and thorough experimental evaluation. In particular, every topic in advanced software technology is addressed within the scope of TOOLS.

Table of Contents


Integrating Efficient Model Queries in State-of-the-Art EMF Tools

Model-driven development tools built on industry standard platforms, such as the Eclipse Modeling Framework (EMF), heavily use model queries in various use cases, such as model transformation, well-formedness constraint validation and domain-specific model execution. As these queries are executed rather frequently in interactive modeling applications, they have a significant impact on the runtime performance of the tool, and also on the end user experience. However, due to their complexity, they can also be time consuming to implement and optimize on a case-by-case basis. The aim of the EMF-IncQuery framework is to address these shortcomings by using declarative queries over EMF models and executing them effectively using a caching mechanism.
In the current paper, we present the new and significantly extended version of the EMF-IncQuery Framework, with new features and runtime extensions that speed up the development and testing of new queries by both IDE and API improvements.
We demonstrate how our high performance queries can be easily integrated with other EMF tools using an entirely new case study in which EMF-IncQuery is deeply integrated into the EMF modeling infrastructure to facilitate the incremental evaluation of derived EAttributes and EReferences.
Gábor Bergmann, Ábel Hegedüs, Ákos Horváth, István Ráth, Zoltán Ujhelyi, Dániel Varró
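The caching mechanism described above can be sketched in plain Java (hypothetical names, not the actual EMF-IncQuery API): the cached result set is patched on every model change notification, so reading the query result is a cheap lookup rather than a full re-evaluation.

```java
import java.util.*;
import java.util.function.Predicate;

// Sketch of incremental query caching (illustrative only; not the
// EMF-IncQuery API). The match set is maintained under model changes
// instead of being recomputed from scratch on each query.
public class IncrementalQueryCache {
    private final Set<String> matches = new HashSet<>();
    private final Predicate<String> query;

    public IncrementalQueryCache(Predicate<String> query) {
        this.query = query;
    }

    // Called by the model's change notifier when an element is added.
    public void elementAdded(String element) {
        if (query.test(element)) matches.add(element);
    }

    // Called by the change notifier when an element is removed.
    public void elementRemoved(String element) {
        matches.remove(element);
    }

    // Reading the result is a cache lookup, not a model traversal.
    public Set<String> getMatches() {
        return Collections.unmodifiableSet(matches);
    }
}
```

The same pattern underlies the paper's derived-feature use case: a derived EReference can expose such a cached match set as its value.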

Poporo: A Formal Methods Tool for Fast-Checking of Social Network Privacy Policies

The increase in use of Smart mobile devices has allowed for an ever growing market for services providers. These services are increasingly used to connect to users’ on-line private information through social networking sites that share and personalise on-line information. This leads to the problem of privacy leaks stemming from an application’s non-adherence to a predefined set of privacy policies. This paper presents a formal methods tool to reliably restrict the access to content in on-line social network services. The Poporo tool builds upon a previous work in which we provided a predicate calculus definition for social networking in B that models social-network content, privacy policies, and social-network friendship relations. This paper presents the implementation and the functionality of our Poporo tool through a running example in the domain of social networking sites.
Néstor Cataño, Sorren Hanvey, Camilo Rueda

DroidSense: A Mobile Tool to Analyze Software Development Processes by Measuring Team Proximity

Understanding the dynamics of a software development process is of paramount importance for managers to identify the most important patterns, to predict potential quality and productivity issues, and to plan and implement corrective actions. Currently, major techniques and tools in this area specialize in acquiring and analyzing data using software metrics, leaving unaddressed the issue of modeling the "physical" activities that developers perform. In this paper, we present DroidSense, a non-invasive tool that runs on Android-based mobile phones and collects data about developers' involvement in Agile software development activities, e.g. Pair Programming, daily stand-ups, or the planning game, by measuring their proximity to computers and to other developers. DroidSense collects data automatically via the Bluetooth signals emitted by other phones, personal computers, and other devices. We explain the detailed design and implementation of the tool. Finally, to show a possible application of DroidSense, we present the results of a case study.
Luis Corral, Alberto Sillitti, Giancarlo Succi, Juri Strumpflohner, Jelena Vlasenko

TimeSquare: Treat Your Models with Logical Time

TimeSquare is an Eclipse and model-based environment for the specification, analysis and verification of causal and temporal constraints. It implements the MARTE Time Model and its specification language, the Clock Constraint Specification Language (ccsl). Both MARTE and ccsl heavily rely on logical time, made popular by its use in distributed systems and synchronous languages. Logical time provides a relaxed form of time that is functional, elastic (can be abstracted or refined) and multiform. TimeSquare is based on the latest model-driven technology, so that more than 60% of its code is automatically generated. It provides an Xtext-based editor of constraints, a polychronous clock calculus engine able to process a partial order conforming to the set of constraints, and it supports several simulation policies. It has been devised to be connected to several back-ends, developed as new plugins, to produce timing diagrams, animate UML models, or execute Java code, amongst others.
Julien DeAntoni, Frédéric Mallet

Quality Evaluation of Object-Oriented and Standard Mutation Operators Applied to C# Programs

Mutation testing is a kind of fault injection approach that can be used to generate tests or to assess the quality of test sets. For object-oriented languages, like C#, both object-oriented and standard (traditional) mutation operators should be applied. The methods that can contribute to reducing the number of applied operators and lowering the costs of mutation testing were experimentally investigated. We extended the CREAM mutation tool to support selective testing, sampling and clustering of mutants, and combining code coverage with mutation testing. We propose an approach to quality evaluation and present experimental results of mutation operators applied to C# programs.
Anna Derezińska, Marcin Rudnik
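As an illustration of the standard (traditional) operators the paper evaluates, here is a minimal sketch (our own toy code, not the CREAM tool) of an arithmetic-operator-replacement mutant generator together with the usual mutation-score computation:

```java
import java.util.*;

// Illustrative sketch of standard mutation testing (not CREAM's API):
// an arithmetic-operator-replacement (AOR) mutant generator over a source
// string, plus the conventional mutation-score formula.
public class MutationSketch {

    // Generate one mutant per occurrence of '+', replacing it with '-'.
    public static List<String> aorMutants(String source) {
        List<String> mutants = new ArrayList<>();
        for (int i = 0; i < source.length(); i++) {
            if (source.charAt(i) == '+') {
                mutants.add(source.substring(0, i) + "-" + source.substring(i + 1));
            }
        }
        return mutants;
    }

    // Mutation score = killed mutants / all generated mutants.
    public static double mutationScore(int killed, int total) {
        return total == 0 ? 0.0 : (double) killed / total;
    }
}
```

Selective testing, sampling, and clustering, as investigated in the paper, all aim to reduce the size of the mutant list such a generator produces while keeping the score informative.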

101companies: A Community Project on Software Technologies and Software Languages

101companies is a community project in computer science (or software science) with the objective of developing a free, structured, wiki-accessible knowledge resource including an open-source repository for different stakeholders with interests in software technologies, software languages, and technological spaces; notably: teachers and learners in software engineering or software languages as well as software developers, software technologists, and ontologists. The present paper introduces the 101companies Project. In fact, the present paper is effectively a call for contributions to the project and a call for applications of the project in research and education.
Jean-Marie Favre, Ralf Lämmel, Thomas Schmorleiz, Andrei Varanovich

An Object-Oriented Application Framework for the Development of Real-Time Systems

The paper presents an object-oriented application framework that supports the development of real-time systems. The framework consists of a set of architectural abstractions that allow time-related aspects to be explicitly treated as first-class objects at the application level. Both the temporal behavior of an application and the way the application deals with information placed in a temporal context can be modeled by means of such abstractions, thus narrowing the semantic gap between specification and implementation. Moreover, the framework carefully separates behavioral policies from implementation details improving portability and simplifying the realization of adaptive systems.
Francesco Fiamberti, Daniela Micucci, Francesco Tisato

Measuring Test Case Similarity to Support Test Suite Understanding

In order to support test suite understanding, we investigate whether we can automatically derive relations between test cases. In particular, we search for trace-based similarities between (high-level) end-to-end tests on the one hand and fine-grained unit tests on the other. Our approach uses the shared word count metric to determine similarity. We evaluate our approach in two case studies and show which relations between end-to-end and unit tests are found by our approach, and how this information can be used to support test suite understanding.
Michaela Greiler, Arie van Deursen, Andy Zaidman
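The shared word count metric above can be sketched as follows (a naive reconstruction, not the authors' implementation): method names appearing in two execution traces are split into words, and similarity is the size of the intersection of the two word sets.

```java
import java.util.*;

// Sketch of a shared-word-count similarity between two test cases,
// based on the words occurring in their execution traces.
// Names and splitting heuristic are illustrative.
public class SharedWordCount {

    // Split each traced call into lower-case words (on '.' and camelCase).
    public static Set<String> words(List<String> trace) {
        Set<String> w = new HashSet<>();
        for (String call : trace) {
            for (String part : call.split("(?<=[a-z])(?=[A-Z])|\\.")) {
                if (!part.isEmpty()) w.add(part.toLowerCase());
            }
        }
        return w;
    }

    // Similarity = number of words the two traces share.
    public static int similarity(List<String> traceA, List<String> traceB) {
        Set<String> shared = words(traceA);
        shared.retainAll(words(traceB));
        return shared.size();
    }
}
```

An end-to-end test whose trace shares many words with a unit test's trace is then a candidate "related test" for navigation and understanding.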

Enhancing OSGi with Explicit, Vendor Independent Extra-Functional Properties

Current industry and research organisations invest considerable effort to adopt component-based programming, which promises a rapid development process. Several issues, however, hinder its wider adoption. One of them is the practical use of extra-functional properties (EFPs), which the research community aims to integrate into component composition but for which industrial applications are still rare. When extra-functional properties are not considered or are misinterpreted, inconsistencies in application performance, security, reliability, etc. can result at run-time. As a possible solution, we have proposed a general extra-functional properties system called EFFCC. In this paper we show how it can be applied to an industrial component model, namely the OSGi framework. This work analyses OSGi from the extra-functional properties viewpoint and shows how it can be enhanced by EFPs, expressed as OSGi capabilities. The proposed benefits of the presented approach are seamless integration of such properties into an existing framework and consistency of their interpretation among different vendors. This should support easier adoption of extra-functional properties in practice.
Kamil Ježek, Premek Brada, Lukáš Holý

Efficient Method Lookup Customization for Smalltalk

Programming languages are still evolving: new languages and language features are being designed and implemented every year. Since it is not a trivial task to provide a runtime system for a new language, existing runtime systems such as the Java Virtual Machine or the Common Language Runtime are used to host the new language.
However, most of the high-performance runtime systems were designed for a specific language with a specific semantics. Therefore, if the new language semantics differs from the semantics hard-coded in a runtime system, it has to be emulated on top of features supported by the runtime.
The emulation causes performance overhead.
To overcome the limitations of emulation, a runtime system may provide a meta-object protocol to alter the runtime semantics. Such a protocol must balance opposing goals: it should be flexible, easy to use, fast, and easy to implement at the same time.
We propose a simple meta-object protocol for customization of method lookup in Smalltalk. A programmer may define a custom method lookup routine in Smalltalk and let the runtime system call it when needed. There is therefore no need to modify the runtime system itself. Our solution provides reasonable performance thanks to low-level support in the runtime system; nevertheless, the changes to the runtime system are small and local. At the same time, it provides the flexibility to implement a wide range of features present in modern programming languages.
The presented approach has been implemented and validated on a Smalltalk virtual machine.
Jan Vraný, Jan Kurš, Claus Gittinger
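The lookup customization idea can be sketched in plain Java (hypothetical names; the actual protocol is defined in Smalltalk): a class carries an optional lookup hook that the runtime consults before falling back to the default dictionary lookup.

```java
import java.util.*;
import java.util.function.BiFunction;

// Plain-Java sketch of a customizable method lookup (illustrative, not
// the paper's Smalltalk protocol). The "VM" first asks the optional
// custom lookup routine; only if it declines does the default
// selector-dictionary lookup run.
public class LookupMop {
    final Map<String, String> methods = new HashMap<>();     // selector -> method body
    BiFunction<LookupMop, String, String> customLookup;      // optional user-defined hook

    String lookup(String selector) {
        if (customLookup != null) {
            String result = customLookup.apply(this, selector);
            if (result != null) return result;               // hook handled the lookup
        }
        return methods.getOrDefault(selector, "doesNotUnderstand"); // default path
    }
}
```

Because only the dispatch entry point changes, a runtime can add such a hook with small, local modifications, which is the property the paper emphasizes.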

Fake Run-Time Selection of Template Arguments in C++

C++ does not support run-time resolution of template type arguments. To circumvent this restriction, we can instantiate a template for all possible combinations of type arguments at compile time and then select the proper instance at run time by evaluation of some provided conditions. However, for templates with multiple type parameters such a solution may easily result in a branching code bloat. We present a template metaprogramming algorithm called for_id that allows the user to select the proper template instance at run time with theoretical minimum sustained complexity of the branching code.
Daniel Langr, Pavel Tvrdík, Tomáš Dytrych, Jerry P. Draayer

Supporting Compile-Time Debugging and Precise Error Reporting in Meta-programs

Compile-time meta-programming is an advanced language feature enabling programmers to mix programs with definitions that are executed at compile-time and may generate source code to be put in their place. Such definitions are called meta-programs, and their evaluation constitutes a compilation stage. As meta-programs are also programs, programmers should be supported in handling compile-time and runtime errors, which introduces challenges to the entire tool chain along two lines. Firstly, the source point of a compile error may well be the outcome of a series of compilation stages, thus never appearing within the original program. Effectively, this requires a compiler to track down the error chain across all involved stages so as to provide a meaningful, descriptive and precise error report. Secondly, every compilation stage is instantiated by the execution of the respective staged program. Thus, typical full-fledged source-level debugging for any particular stage should be facilitated during the compilation process. Existing implementations suffer on both counts, providing poor error messages overall while lacking the support required to debug meta-programs of any staging depth. In this paper we first outline an implementation of a meta-programming system offering all the mentioned facilities. Then, we detail the required amendments to the compilation process. Finally, we discuss the necessary interoperation points between the compiler and the tool chain (IDE).
Yannis Lilis, Anthony Savidis

Identifying a Unifying Mechanism for the Implementation of Concurrency Abstractions on Multi-language Virtual Machines

Supporting all known abstractions for concurrent and parallel programming in a virtual machine (VM) is a futile undertaking, but it is required to give programmers appropriate tools and performance. Instead of supporting all abstractions directly, VMs need a unifying mechanism similar to INVOKEDYNAMIC for JVMs.
Our survey of parallel and concurrent programming concepts identifies concurrency abstractions as the ones benefiting most from support in a VM. Currently, their semantics is often weakened, reducing their engineering benefits. They require a mechanism to define flexible language guarantees.
Based on this survey, we define an ownership-based meta-object protocol as candidate for VM support. We demonstrate its expressiveness by implementing actor semantics, software transactional memory, agents, CSP, and active objects. While the performance of our prototype confirms the need for VM support, it also shows that the chosen mechanism is appropriate to express a wide range of concurrency abstractions in a unified way.
Stefan Marr, Theo D’Hondt

Verification of Snapshotable Trees Using Access Permissions and Typestate

We use access permissions and typestate to specify and verify a Java library that implements snapshotable search trees, as well as some client code. We formalize our approach in the Plural tool, a sound modular typestate checking tool. We describe the challenges to verifying snapshotable trees in Plural, give an abstract interface specification against which we verify the client code, provide a concrete specification for an implementation and describe proof patterns we found. We also relate this verification approach to other techniques used to verify this data structure.
Hannes Mehnert, Jonathan Aldrich

Multiparty Session C: Safe Parallel Programming with Message Optimisation

This paper presents a new efficient programming toolchain for message-passing parallel algorithms which can fully ensure, for any typable program and for any execution path, deadlock-freedom, communication safety and global progress through static checking. The methodology is embodied as a multiparty session-based programming environment for C and its runtime libraries, which we call Session C. Programming starts from specifying a global protocol for a target parallel algorithm, using a protocol description language. From this global protocol, the projection algorithm generates endpoint protocols, based on which each endpoint C program is designed and implemented with a small number of concise session primitives. The endpoint protocol can further be refined to a more optimised protocol through subtyping for asynchronous communication, preserving the original safety guarantees. The underlying theory ensures that the complexity of the toolchain stays polynomial in the size of programs. We apply this framework to representative parallel algorithms with complex communication topologies. The benchmark results show that Session C performs competitively against MPI.
Nicholas Ng, Nobuko Yoshida, Kohei Honda

Non-interference on UML State-Charts

Non-interference is a semantically well-defined property that allows one to reason about the security of systems with respect to information flow policies for groups of users. Many security problems in implementations could already be spotted at design time if information flow were a concern in early phases of software development. In this paper we propose a methodology for automatically verifying the interaction of objects whose behaviour is described by deterministic UML State-charts with respect to information flow policies, based on the so-called unwinding theorem. We have extended this theorem to cope with the particularities of state-charts: the use of variables, guards, actions and hierarchical states, and we have derived results about its compositionality. In order to validate our approach, we report on an implementation of our enhanced unwinding techniques and its application to scenarios from the Smart Metering domain.
Martín Ochoa, Jan Jürjens, Jorge Cuéllar

Representing Uniqueness Constraints in Object-Relational Mapping: The Natural Entity Framework

Object-oriented languages model data as transient objects, while relational databases store data persistently using a relational data model. The process of making objects persistent by storing their state as relational tuples is called object-relational mapping (ORM). This process is nuanced and complex as there are many fundamental differences between the relational model and the object model. In this work we address the difficulties in representing entity identity and uniqueness consistently, efficiently, and succinctly in ORM. We introduce the natural entity framework, which: (1) provides a strong concept of value-based persistent object identity; (2) allows the programmer to simultaneously specify natural and surrogate key constraints consistently in the object and relational representations; (3) provides object constructors and initializers that disambiguate the semantics of persistent object creation and retrieval; and (4) automates the mapping of inheritance hierarchies that respect natural key constraints and allows for efficient polymorphic queries and associations.
Mark J. Olah, David Mohr, Darko Stefanovic
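The natural-key versus surrogate-key distinction the framework manages can be sketched in plain Java (hypothetical class, not the natural entity framework's own API): value-based identity is defined by the natural key, while the database-assigned surrogate key stays out of `equals`/`hashCode`.

```java
import java.util.Objects;

// Sketch of value-based persistent identity (illustrative only).
// The natural key (email) defines identity in both the object and the
// relational representation; the surrogate id is assigned on persist
// and deliberately excluded from equality.
public class Person {
    private Long surrogateId;          // database-assigned; not part of identity
    private final String email;        // natural key: immutable, defines identity

    public Person(String email) {
        this.email = Objects.requireNonNull(email);
    }

    void assignSurrogateId(long id) {
        this.surrogateId = id;         // e.g. set by the ORM after INSERT
    }

    @Override public boolean equals(Object o) {
        return o instanceof Person && email.equals(((Person) o).email);
    }

    @Override public int hashCode() {
        return email.hashCode();
    }
}
```

Keeping the natural key immutable and out of the surrogate column is what lets the same uniqueness constraint hold consistently on both sides of the mapping.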

Detection of Seed Methods for Quantification of Feature Confinement

The way features are implemented in source code has a significant influence on multiple quality aspects of a software system. Hence, it is important to regularly evaluate the quality of feature confinement. Unfortunately, existing approaches to such measurement rely on expert judgement for tracing links between features and source code, which hinders the ability to perform cost-efficient and consistent evaluations over time or on a large portfolio of systems.
In this paper, we propose an approach to automating the measurement of feature confinement by detecting the methods that play a central role in the implementations of features, the so-called seed methods, and using them as starting points for a static slicing algorithm. We show that this approach achieves the same level of performance as the use of manually identified seed methods. Furthermore, we illustrate the scalability of the approach by tracking the evolution of feature scattering and tangling in an open-source project over a period of ten years.
Andrzej Olszak, Eric Bouwers, Bo Nørregaard Jørgensen, Joost Visser
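The traversal step can be sketched as follows (an illustrative reconstruction, not the paper's full algorithm): starting from the detected seed methods, a slice follows call edges to collect the set of methods attributed to a feature.

```java
import java.util.*;

// Sketch of slicing from seed methods over a static call graph.
// The call graph maps each method to the methods it calls; the slice
// is the set of methods reachable from the seeds.
public class FeatureSlice {
    public static Set<String> slice(Map<String, List<String>> callGraph,
                                    Set<String> seeds) {
        Set<String> feature = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(seeds);
        while (!work.isEmpty()) {
            String m = work.pop();
            if (feature.add(m)) {                        // not yet visited
                work.addAll(callGraph.getOrDefault(m, List.of()));
            }
        }
        return feature;
    }
}
```

Scattering and tangling metrics can then be computed from how these per-feature method sets overlap and spread across modules.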

Assisted Behavior Driven Development Using Natural Language Processing

In Behavior Driven Development (BDD), acceptance tests provide the starting point for the software design flow and serve as a basis for the communication between designers and stakeholders. In this agile software development technique, acceptance tests are written in natural language in order to ensure a common understanding between all members of the project. As a consequence, mapping the sentences to actual source code is the first step of the design flow, which is usually done manually.
However, the scenarios described by the acceptance tests provide enough information to automate the extraction of both the structure of the implementation and the test cases. In this work, we propose an assisted flow for BDD in which the user enters into a dialog with the computer, which suggests code pieces extracted from the sentences. For this purpose, natural language processing techniques are exploited. This allows for a semi-automatic transformation from acceptance tests to source code stubs and thus provides a first step towards an automation of BDD.
Mathias Soeken, Robert Wille, Rolf Drechsler
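The mapping step being automated can be sketched with a deliberately naive heuristic (our own illustration, not the paper's NLP pipeline): a Given/When/Then sentence is turned into a camelCase method stub that the dialog would then suggest to the user.

```java
// Sketch: derive a test-method stub from an acceptance-test sentence.
// This simple camelCase transformation stands in for the NLP-based
// extraction described in the paper.
public class BddStub {
    public static String toStub(String sentence) {
        StringBuilder name = new StringBuilder();
        boolean first = true;
        // Keep letters and spaces only, then camelCase the words.
        for (String word : sentence.toLowerCase().replaceAll("[^a-z ]", "").split("\\s+")) {
            if (word.isEmpty()) continue;
            name.append(first
                    ? word
                    : Character.toUpperCase(word.charAt(0)) + word.substring(1));
            first = false;
        }
        return "public void " + name + "() { /* TODO */ }";
    }
}
```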

Learning to Classify Bug Reports into Components

Bug reports in widely used defect tracking systems contain standard and mandatory fields like product name, component name, version number and operating system. Such fields provide important information required by developers during bug fixing. Previous research shows that bug reporters often assign incorrect values to such fields, which causes problems and delays in bug fixing. We conduct an empirical study on the issue of incorrect component assignments, or component reassignments, in bug reports. We perform a case study on the open-source Eclipse and Mozilla projects and report results on various aspects such as the percentage of reassignments, the distribution of the number of assignments until closure of a bug, and the time difference between the creation and reassignment events. We perform a series of experiments using a machine learning framework for two prediction tasks: categorizing a given bug report into a pre-defined list of components, and predicting whether a given bug report will be reassigned. Experimental results demonstrate a correlation between terms present in bug reports (textual documents) and components, which can be used as linguistic indicators for the task of component prediction. We study component reassignment graphs and reassignment probabilities and investigate their usefulness for the task of component reassignment prediction.
Ashish Sureka
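The linguistic-indicator idea can be sketched with a minimal word-count classifier (our own toy, far simpler than the paper's machine learning framework): each component is scored by how many of its indicator terms appear in the bug report text.

```java
import java.util.*;

// Sketch of term-based component classification. Each component has a
// set of indicator terms; the component whose terms best match the
// report's words wins. Illustrative only.
public class ComponentClassifier {
    public static String classify(Map<String, Set<String>> indicators, String report) {
        Set<String> words = new HashSet<>(Arrays.asList(report.toLowerCase().split("\\W+")));
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, Set<String>> e : indicators.entrySet()) {
            int score = 0;
            for (String term : e.getValue()) {
                if (words.contains(term)) score++;       // indicator term present
            }
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }
}
```

A real classifier would learn term weights from historical reports rather than using hand-picked indicator sets, but the prediction step has this shape.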

Incremental Dynamic Updates with First-Class Contexts

Highly available software systems occasionally need to be updated while avoiding downtime. Dynamic software updates reduce downtime, but still require the system to reach a quiescent state in which a global update can be performed. This can be difficult for multi-threaded systems. We present a novel approach to dynamic updates using first-class contexts, called Theseus. First-class contexts make global updates unnecessary: existing threads run to termination in an old context, while new threads start in a new, updated context; consistency between contexts is ensured with the help of bidirectional transformations. We show how first-class contexts offer a practical and flexible approach to incremental dynamic updates, with acceptable overhead.
Erwann Wernli, Mircea Lungu, Oscar Nierstrasz
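The core idea, that each thread stays bound to the context that was current when it began, can be sketched in plain Java (hypothetical classes, not Theseus itself):

```java
// Sketch of context-bound threads: existing workers keep the context
// version captured when they were created, while workers created after
// an update see the new version. Illustrative only; Theseus additionally
// keeps the contexts' state consistent via bidirectional transformations.
public class Contexts {
    static volatile int currentVersion = 1;     // the "newest" context

    static class Worker extends Thread {
        final int boundVersion = currentVersion; // captured when the worker is created
        int observed;

        @Override public void run() {
            observed = boundVersion;             // behaves per its own context
        }
    }
}
```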

Elucidative Development for Model-Based Documentation

Documentation is an essential activity in software development, for source code as well as modelling artefacts. Typically, documentation is created and maintained manually, which leads to inconsistencies as documented artefacts like source code or models evolve during development. Existing approaches like literate/elucidative programming or literate modelling address these problems by deriving documentation from software development artefacts or vice versa. However, these approaches restrict themselves to a certain kind of artefact and to a certain phase of the software development life-cycle. In this paper, we propose elucidative development as a generalisation of these approaches, supporting heterogeneous kinds of artefacts as well as the analysis, design and implementation phases of the software development life-cycle. Elucidative development links source code and model artefacts into documentation and thus maintains and updates their presentation semi-automatically. We present DEFT as an integrated development environment for elucidative development. We show how DEFT can be applied to language specifications like the UML specification and helps to avoid inconsistencies caused by maintenance and evolution of such a specification.
Claas Wilke, Andreas Bartho, Julia Schroeter, Sven Karol, Uwe Aßmann

Viewpoint Co-evolution through Coarse-Grained Changes and Coupled Transformations

Multi-viewpoint modeling is an effective technique to deal with the ever-growing complexity of large-scale systems. The evolution of multi-viewpoint system specifications is currently accomplished in terms of fine-grained atomic changes. Apart from being a very low-level and cumbersome strategy, it is also quite unnatural to system modelers, who think of model evolution in terms of coarse-grained high-level changes. In order to bridge this gap, we propose an approach to formally express and manipulate viewpoint changes in a high-level fashion, by structuring atomic changes into coarse-grained composite ones. These can also be used to formally define reconciling operations that adapt dependent views, using coupled transformations. We introduce a modeling language based on graph transformations and Maude for expressing both the coarse-grained changes and the coupled transformations that propagate them to re-establish global consistency. We demonstrate the applicability of the approach in the context of RM-ODP.
Manuel Wimmer, Nathalie Moreno, Antonio Vallecillo

Turbo DiSL: Partial Evaluation for High-Level Bytecode Instrumentation

Bytecode instrumentation is a key technique for the implementation of dynamic program analysis tools such as profilers and debuggers. Traditionally, bytecode instrumentation has been supported by low-level bytecode engineering libraries that are difficult to use. Recently, the domain-specific aspect language DiSL has been proposed to provide high-level abstractions for the rapid development of efficient bytecode instrumentations. While DiSL supports user-defined expressions that are evaluated at weave-time, the DiSL programming model requires these expressions to be implemented in separate classes, thus increasing code size and impairing code readability and maintenance. In addition, the DiSL weaver may produce a significant amount of dead code, which may impair some optimizations performed by the runtime. In this paper we introduce Turbo, a novel partial evaluator for DiSL, which processes the generated instrumentation code, performs constant propagation, conditional reduction, and pattern-based code simplification, and executes pure methods at weave-time. With Turbo, it is often unnecessary to wrap expressions for evaluation at weave-time in separate classes, thus simplifying the programming model. We present Turbo’s partial evaluation algorithm and illustrate its benefits with several case studies. We evaluate the impact of Turbo on weave-time performance and on runtime performance of the instrumented application.
Yudi Zheng, Danilo Ansaloni, Lukas Marek, Andreas Sewe, Walter Binder, Alex Villazón, Petr Tuma, Zhengwei Qi, Mira Mezini
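Two of the listed transformations, constant propagation and conditional reduction, can be sketched on a toy expression form (illustrative only; DiSL's actual intermediate representation is bytecode, not this tree):

```java
// Sketch of partial evaluation: constants are propagated bottom-up, and
// a conditional whose guard folds to a constant is reduced to a single
// branch, removing dead code. Illustrative toy, not Turbo's algorithm.
public class ConstFold {
    static abstract class Expr {}
    static class Num extends Expr {
        final int value;
        Num(int v) { value = v; }
    }
    static class Add extends Expr {
        final Expr left, right;
        Add(Expr l, Expr r) { left = l; right = r; }
    }
    static class If extends Expr {
        final Expr cond, then, other;
        If(Expr c, Expr t, Expr o) { cond = c; then = t; other = o; }
    }

    static Expr fold(Expr e) {
        if (e instanceof Add) {
            Add a = (Add) e;
            Expr l = fold(a.left), r = fold(a.right);
            if (l instanceof Num && r instanceof Num) {
                return new Num(((Num) l).value + ((Num) r).value); // constant propagation
            }
            return new Add(l, r);
        }
        if (e instanceof If) {
            If i = (If) e;
            Expr c = fold(i.cond);
            if (c instanceof Num) {                                 // guard is constant:
                return fold(((Num) c).value != 0 ? i.then : i.other); // drop dead branch
            }
            return new If(c, fold(i.then), fold(i.other));
        }
        return e;                                                   // already a Num
    }
}
```

Applied to instrumentation code, such reductions shrink the woven bytecode and remove dead branches that would otherwise hinder runtime optimizations.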

