
About this book

Software engineering is understood as a broad term linking science, traditional engineering, art and management, and is additionally conditioned by social and external factors (conditioned to the point that brilliant engineering solutions based on strong science, showing artistic creativity and skillfully managed, can still fail for reasons beyond the control of the development team). Modern software engineering needs a paradigm shift commensurate with a change of the computing paradigm from:

1. Algorithms to interactions (and from procedural to object-oriented programming)
2. Systems development to systems integration
3. Products to services

Traditional software engineering struggles to address this paradigm shift to interactions, integration, and services. It offers only incomplete and disconnected methods for building information systems, with fragmentary ability to dynamically accommodate change and to grow gracefully. The principal objective of contemporary software engineering should therefore be to try to redefine the entire discipline and offer a complete set of methods, tools and techniques to address the challenges ahead that will shape the information systems of the future.



Evaluation of Novel Approaches to Software Engineering 2008


Measuring Characteristics of Models and Model Transformations Using Ontology and Graph Rewriting Techniques

In this paper, we propose an integrated technique for metrics in a Model Driven Development context. More concretely, we focus on the following three topics: 1) the application of a meta modeling technique to formally specify model-specific metrics, 2) the definition of metrics dealing with semantic aspects of models (semantic metrics) using domain ontologies, and 3) a specification technique for the metrics of model transformations based on graph rewriting systems.
Motoshi Saeki, Haruhiko Kaiya

On-the-Fly Testing by Using an Executable TTCN-3 Markov Chain Usage Model

The TestUS framework offers a range of techniques to obtain a TTCN-3 test suite starting from UML 2.0 requirement definitions. Use case diagrams that contain functional and non-functional requirements are first transformed into a Markov Chain usage model (MCUM). Probability annotations on MCUM state transitions enable the generation of TTCN-3 test cases that reflect the expected usage patterns of system users. Because compiling the associated TTCN-3 test suite can take quite a long time for a realistic SUT (System under Test), we decided to map the MCUM directly into the executable test suite without generating test cases in advance. Test cases and the evaluation of test verdicts are interpreted on the fly during execution of the test suite. We proved the concept by testing an existing DECT communication system. The compilation time for deriving an executable TTCN-3 test suite was reduced to only 15 minutes, and one can interpret as many test cases as one likes on the fly.
Winfried Dulz
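The on-the-fly derivation described in the abstract can be sketched as a probability-weighted walk over the MCUM, picking each transition according to its annotated probability until a final state is reached. The states, stimuli and probabilities below are invented for illustration; this is a minimal Python sketch, not the TestUS/TTCN-3 implementation:

```python
import random

# Hypothetical usage model: state -> [(next_state, probability, stimulus), ...]
MCUM = {
    "Idle":    [("Dialing", 0.7, "off_hook"), ("Exit", 0.3, "power_off")],
    "Dialing": [("Talking", 0.8, "connect"), ("Idle", 0.2, "abort")],
    "Talking": [("Idle", 1.0, "on_hook")],
}

def walk(start="Idle", end="Exit", seed=42):
    """Derive one test sequence on the fly by a probability-weighted walk."""
    rng = random.Random(seed)
    state, stimuli = start, []
    while state != end:
        transitions = MCUM[state]
        nxt = rng.choices(transitions, weights=[p for _, p, _ in transitions])[0]
        state = nxt[0]
        stimuli.append(nxt[2])  # the stimulus becomes one test step
    return stimuli

print(walk())
```

Each call yields one usage-weighted test sequence, so arbitrarily many test cases can be interpreted without pre-generating them.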

Language-Critical Development of Process-Centric Application Systems

The shortage of skilled IT staff, together with the technological possibilities offered by Service-oriented Architectures (SOA) and Web 2.0 applications, leads to the following consequence: working processes, job engineering and labor organization are going to be modeled and thereby made digital in the sense of IT support. This goes along with modeling working processes independently of the individual employee, in areas that are to be rationalized or not to be staffed by qualified specialists. Hence, there will be a worldwide, net-based selection of those who are able and skilled to fulfill modeled work, e.g. “handling a damage event” or “creating an optimized data structure for master data”, by means of the Unified Modeling Language (UML) in the most effective and efficient way. An enterprise will therefore neutrally manage its modeled work processes (HB-services) and IT-services (application programs), employed as computer-supported work equipment in any working process located anywhere in the world, without first assigning them to a specific performer (neutral or artificial actors). By doing so it becomes possible to control and dynamically execute working processes globally, based on the division of labor and on a database-supported administration of “bills of activities” (work plans) by means of the World Wide Web. All this requires new and dynamic (in the sense of component-based) job descriptions and other work equipment, far exceeding today’s established skill and task management.
Tayyeb Amin, Tobias Grollius, Erich Ortner

Balancing Business Perspectives in Requirements Analysis

In modern organisations, the development of an Information System answers to the strategic needs of the business. As such, it must be aligned with software and corporate goals at the same time. Capturing its requirements is a complex task, as it has to deal with a variety of different and potentially conflicting needs. In this paper, we propose a novel conceptual framework for requirements modelling and validation, based on economic and business strategy theory.
Alberto Siena, Alessio Bonetti, Paolo Giorgini

Using Fault Screeners for Software Error Detection

Fault screeners are simple software (or hardware) constructs that detect variable value errors based on unary invariant checking. In this paper we evaluate and compare the performance of three low-cost screeners (Bloom filter, bitmask, and range screener) that can be automatically integrated within a program and trained during the testing phase. While the Bloom filter has the capacity to retain virtually all variable values associated with proper program execution, this property comes with a much higher false positive rate per unit of training effort, compared to the simpler range and bitmask screeners, which compress all value information into a single lower and upper bound or a bitmask, respectively. We present a novel analytic model that predicts the false positive and false negative rates for ideal screeners (i.e., screeners that store each individual value during training) and simple (e.g., range and bitmask) screeners. We show that the model agrees with our empirical findings. Furthermore, we describe an application of all screeners in which the screener’s error detection output is used as input to a fault localization process that provides automatic feedback on the location of residual program defects during deployment in the field.
Rui Abreu, Alberto González, Peter Zoeteweij, Arjan J. C. van Gemund
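The range screener described above fits in a few lines: during the testing phase it widens a lower/upper bound per variable, and in deployment it flags any value outside that range. This is a minimal sketch; the variable and the training values are illustrative only:

```python
class RangeScreener:
    """Unary invariant: a variable's value must stay within the range seen in training."""

    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def train(self, value):
        # Testing phase: widen the bounds to cover the observed value.
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def screen(self, value):
        # Deployment: True means "suspected error" (value outside trained range).
        return not (self.lo <= value <= self.hi)

s = RangeScreener()
for v in [3, 7, 5]:               # values observed during passing test runs
    s.train(v)
print(s.screen(6), s.screen(42))  # prints: False True
```

The compression is extreme (two numbers per variable), which is exactly why it trades training effort against false negatives compared to a Bloom filter.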

Language Support for Service Interactions in Service-Oriented Architecture

The Open Services Gateway initiative (OSGi) is a platform for running service-oriented Java applications. OSGi provides a central service registry to allow application components (so-called bundles) to share functionality. From the viewpoint of programming language development, OSGi leaves a lot of room for improvement. Its service query language, for instance, bypasses important compile-time guarantees, and it works only for service metadata that never changes during the lifetime of a service. A second problem is that the event notification system requires programmers to write a considerable amount of boilerplate logic for reacting to service events. This obfuscates the business logic, which in turn decreases code comprehension and increases the odds of introducing bugs when implementing client-service interactions.
This paper evaluates OSGi as a platform for programming client-service interactions in Java. After focusing on problems that relate to OSGi’s integrated service query language and its event notification system, we propose a solution based on a programming language extension. We also show how this extension is transformed to regular Java code so as to maintain interoperability with the OSGi specification.
Sven De Labey, Jeroen Boydens, Eric Steegmans

Evaluation of Novel Approaches to Software Engineering 2009

Automating Component Selection and Building Flexible Composites for Service-Based Applications

Service Oriented Computing allows defining applications in which components (services) can be made available and selected very late in the development process, or even “discovered” at execution time. In this context, it is no longer possible to describe an application as a composite entity containing all its components; we need to perform component selection throughout the application life-cycle, including execution. This requires describing an application, at least partially, by its requirements and goals, leaving room for delayed selection; the development system and the run-time must ensure that the current component selection satisfies the application description at all times.
In this paper, we propose a concept of composite that addresses the needs of advanced and flexible service-based applications, automating component selection and building composites that satisfy the application description while enforcing minimality, completeness and consistency properties. We also propose tools and an environment supporting these concepts and mechanisms in the different phases of the application life-cycle.
Jacky Estublier, Idrissa A. Dieng, Eric Simon

An Aspect-Oriented Framework for Event Capture and Usability Evaluation

Recent work in usability evaluation has focused on automatically capturing and analysing user interface events. However, automated techniques typically require modification of the underlying software, preventing non-programmers from using these techniques. In addition, capturing events requires each event source to be modified and since these sources may be spread throughout the system, maintaining the event capture functionality can become a very arduous task. Aspect-oriented programming (AOP) is a programming paradigm that separates the concerns or behaviours of a system into discrete aspects, allowing all event capture to be contained within a single aspect. Consequently, the use of AOP for usability evaluation is currently an area of research interest, but there is a lack of a general framework. This paper describes the development of an AOP-based usability evaluation framework that can be dynamically configured to capture specific events in an application.
Slava Shekh, Sue Tyerman

Implementing Domain Specific Process Modelling

Business process modelling becomes more productive when modellers can use process modelling languages that optimally fit the application domain. This requires the proliferation and management of domain specific modelling languages and modelling tools. In this paper we address the issue of providing domain specific languages in a systematic and structured way without having to implement modelling tools for each domain specific language separately. Our approach is based on a two-dimensional meta modelling stack.
Bernhard Volz, Sebastian Dornstauder

Bin-Packing-Based Planning of Agile Releases

Agile software development represents a major approach to software engineering. Agile processes offer numerous benefits to organizations, including quicker return on investment, higher product quality, and better customer satisfaction. However, there is no sound methodological support for agile release planning, in contrast to traditional, plan-based approaches. To address this situation, we present i) a conceptual model of agile release planning, ii) a bin-packing-based optimization model, and iii) a heuristic optimization algorithm as a solution. Four real-life data sets for its application and evaluation are drawn from the lending sector. The experiment, which was supported by prototypes, demonstrates that this approach can support more informed decisions and the easy production of optimized release plans. Finally, the paper analyzes benefits and issues arising from the use of this approach in system development projects.
Ákos Szőke
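The bin-packing core of such a planner can be sketched as first-fit-decreasing packing of requirement efforts into fixed-capacity release iterations. The paper's optimization model is richer (the heuristic, priorities and real data sets are not reproduced here), and the effort and capacity numbers below are assumptions:

```python
def plan_releases(efforts, capacity):
    """First-fit-decreasing bin packing: each 'bin' is one release iteration."""
    releases = []  # each entry: [remaining_capacity, [packed efforts]]
    for effort in sorted(efforts, reverse=True):   # largest requirements first
        for rel in releases:
            if rel[0] >= effort:                   # first release it still fits in
                rel[0] -= effort
                rel[1].append(effort)
                break
        else:                                      # no existing release fits: open a new one
            releases.append([capacity - effort, [effort]])
    return [items for _, items in releases]

print(plan_releases([8, 5, 4, 3, 2, 2], 10))       # prints: [[8, 2], [5, 4], [3, 2]]
```

First-fit-decreasing is a classic bin-packing heuristic with a known approximation bound, which makes it a reasonable stand-in for the optimization step.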

A Method to Measure Productivity Trends during Software Evolution

Better measures of productivity are needed to support software process improvements. We propose and evaluate indicators of productivity trends that are based on the premise that productivity is closely related to the effort required to complete change tasks. Three indicators use change management data, while a fourth compares effort estimates of benchmarking tasks. We evaluated the indicators using data from 18 months of evolution in two commercial software projects. The productivity trend in the two projects had opposite directions according to the indicators. The evaluation showed that productivity trends can be quantified with little measurement overhead. We expect the methodology to be a step towards making quantitative self-assessment practices feasible even in low ceremony projects.
Hans Christian Benestad, Bente Anda, Erik Arisholm
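The idea of indicators built on effort per change task can be illustrated as a least-squares trend over the mean effort recorded per period; a negative slope suggests improving productivity. This is a sketch of the general idea only, not one of the paper's four indicators, and the monthly figures are invented:

```python
def effort_trend(mean_efforts):
    """Least-squares slope of mean effort per change task across periods.

    A negative slope means effort per task is falling, i.e. productivity is rising.
    """
    n = len(mean_efforts)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(mean_efforts) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, mean_efforts))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical mean effort (person-hours) per change task over four months.
print(effort_trend([10.0, 9.0, 8.5, 7.5]))  # prints: -0.8
```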

Design Pattern Detection in Java Systems: A Dynamic Analysis Based Approach

In the context of reverse engineering, the recognition of design patterns provides additional information related to the rationale behind the design. This paper presents our approach to the recognition of the behavioral design patterns based on dynamic analysis of Java software systems. The idea behind our solution is to identify a set of rules capturing information necessary to identify a design pattern instance. Rules are characterized by weights indicating their importance in the detection of a specific design pattern. The core behavior of each design pattern may be described through a subset of these rules forming a macrorule. Macrorules define the main traits of a pattern. JADEPT (JAva DEsign Pattern deTector) is our software for design pattern identification based on this idea. It captures static and dynamic aspects through a dynamic analysis of the software by exploiting the JPDA (Java Platform Debugger Architecture). The extracted information is stored in a database. Queries to the database implement the rules defined to recognize the design patterns. The tool has been validated with positive results on different implementations of design patterns and on systems such as JADEPT itself.
Francesca Arcelli, Fabrizio Perin, Claudia Raibulet, Stefano Ravani
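The weighted-rule idea can be sketched as summing the weights of the rules a candidate satisfies and comparing the total against a threshold. The rules, weights, threshold and boolean facts below are hypothetical and are not taken from JADEPT:

```python
# Hypothetical rules for an Observer-like pattern; weights state their importance.
RULES = [
    ("subject holds a collection of observers", 0.4, lambda f: f["has_listener_list"]),
    ("subject notifies observers on change",    0.4, lambda f: f["calls_notify"]),
    ("observers register at runtime",           0.2, lambda f: f["runtime_register"]),
]

def pattern_score(facts, threshold=0.6):
    """Sum the weights of satisfied rules; report a candidate above the threshold."""
    score = sum(weight for _, weight, check in RULES if check(facts))
    return score, score >= threshold

# Facts about one candidate class, as a dynamic analysis might extract them.
facts = {"has_listener_list": True, "calls_notify": True, "runtime_register": False}
print(pattern_score(facts))
```

In the tool itself the facts come from dynamic analysis via JPDA and the rules are SQL queries over the extracted database; here both are reduced to in-memory booleans.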

Formalization of the UML Class Diagrams

In this paper a formalization of static structure modeling based on the topological functioning model (TFM) is proposed. The TFM uses mathematical foundations that holistically represent the complete functionality of the problem and application domains. By using the TFM within the software development process it is possible to formally analyze a business system and to model its static structure in a formal way. After the TFM of a system’s functioning has been constructed, a problem domain object model is defined by transforming the TFM. Further transformations of the TFM within software development make it possible to introduce more formalism into Unified Modeling Language (UML) diagrams and their construction. In this paper we introduce topology into UML class diagrams.
Janis Osis, Uldis Donins

Extended KAOS Method to Model Variability in Requirements

This paper presents an approach to requirements engineering in the Cycab domain. Cycabs are public vehicles with fully automated driving capabilities. So far, few studies have dealt with expressing such requirements at the highest level of abstraction. Furthermore, while they are being built, software systems embedded in Cycabs are subject to frequent changes of requirements. Thus, we need to represent a family of Cycabs that can differ according to different options. The proposed approach addresses these issues by adopting and extending the KAOS goal-oriented method. The main objective is to provide a process to define and adapt specific requirements models from a generic model, according to the different situations made available to the stakeholders.
Farida Semmak, Christophe Gnaho, Régine Laleau

Orthographic Software Modeling: A Practical Approach to View-Based Development

Although they are significantly different in how they decompose and conceptualize software systems, one thing that all advanced software engineering paradigms have in common is that they increase the number of different views involved in visualizing a system. Managing these different views can be challenging even when a paradigm is used independently, but when they are used together the number of views and inter-dependencies quickly becomes overwhelming. In this paper we present a novel approach for organizing and generating the different views used in advanced software engineering methods that we call Orthographic Software Modeling (OSM). This provides a simple metaphor for integrating different development paradigms and for leveraging domain specific languages in software engineering. Development environments that support OSM essentially raise the level of abstraction at which developers interact with their tools by hiding the idiosyncrasies of specific editors, storage choices and artifact organization policies. The overall benefit is to significantly simplify the use of advanced software engineering methods.
Colin Atkinson, Dietmar Stoll, Philipp Bostan

Dynamic Management of the Organizational Knowledge Using Case-Based Reasoning

Software process reuse involves different aspects of the knowledge obtained from generic process models and previous successful projects. The benefit of reuse is achieved by defining an effective and systematic process to specify, produce, classify, retrieve and adapt software artifacts for use in another context. In this work we present a formal approach to software process reuse that assists the definition, adaptation and improvement of the organization’s standard process. Case-Based Reasoning technology is used to manage the collective knowledge of the organization.
Viviane Santos, Mariela Cortés, Márcia Brasil

Mapping Software Acquisition Practices from ISO 12207 and CMMI

The CMMI-ACQ and the ISO/IEC 12207:2008 are process reference models that address issues related to the best practices for software product acquisition. With the aim of offering information on how the practices described in these two models are related, and considering that the mapping is one specific strategy for the harmonization of models, we have carried out a mapping of these two reference models for acquisition. We have taken into account the latest versions of the models. Furthermore, to carry out this mapping in a systematic way, we defined a process for this purpose. We consider that the mapping presented in this paper supports the understanding and leveraging of the properties of these reference models, which is the first step towards harmonization of improvement technologies. Furthermore, since a great number of organizations are currently acquiring products and services from suppliers and developing fewer and fewer of these products in-house, this work intends to support organizations which are interested in introducing or improving their practices for acquisition of products and services using these models.
Francisco J. Pino, Maria Teresa Baldassarre, Mario Piattini, Giuseppe Visaggio, Danilo Caivano

Concept Management: Identification and Storage of Concepts in the Focus of Formal Z Specifications

Concept location is a necessary but all too often laborious task during maintenance phases. Part of the reason is that the same or similar concepts repeatedly have to be reconstructed, which is a resource- and time-consuming process. This contribution investigates the situation and suggests a framework that persistently stores conceptual elements and their dependencies in an SQL database. Using the example of formal Z specifications, it demonstrates that concept location is alleviated by simple queries that automatically identify concepts based on the database entries.
Daniela Pohl, Andreas Bollin
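The database-backed concept location can be illustrated with an in-memory SQLite store: conceptual elements and their dependencies become rows, and a concept is located by a join. The schema, table names and example elements are assumptions, not those of the paper's framework:

```python
import sqlite3

# In-memory sketch of a concept store; schema and element names are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE element (id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
    CREATE TABLE depends (src INTEGER, dst INTEGER);
    INSERT INTO element VALUES (1, 'Account', 'schema'),
                               (2, 'withdraw', 'operation'),
                               (3, 'balance', 'state');
    INSERT INTO depends VALUES (2, 1), (2, 3);
""")

# A simple query that locates a concept: every element 'withdraw' depends on.
rows = db.execute("""
    SELECT e.name FROM depends d JOIN element e ON e.id = d.dst
    WHERE d.src = (SELECT id FROM element WHERE name = 'withdraw')
    ORDER BY e.name
""").fetchall()
print([name for (name,) in rows])  # prints: ['Account', 'balance']
```

Because the dependencies persist in the database, the same concept never has to be reconstructed from scratch; later queries simply reuse the stored entries.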

A Model Driven Approach to Upgrade Package-Based Software Systems

Complex software systems are often based on the abstraction of package, brought to popularity by Free and Open Source Software (FOSS) distributions. While helpful as an encapsulation layer, packages do not solve all problems of deployment, and more generally of management, of large software collections. In particular upgrades, which can affect several packages at once due to inter-package dependencies, often fail and do not hold good transactional properties. This paper shows how to apply model driven techniques to describe and manage software upgrades of FOSS distributions. It is discussed how to model static and dynamic aspects of package upgrades—the latter being the more challenging to deal with—in order to be able to predict common causes of upgrade failures and undo residual effects of failed or undesired upgrades.
Antonio Cicchetti, Davide Di Ruscio, Patrizio Pelliccione, Alfonso Pierantonio, Stefano Zacchiroli

Coupling Metrics for Aspect-Oriented Programming: A Systematic Review of Maintainability Studies

Over the last few years, a growing number of studies have explored how Aspect-Oriented Programming (AOP) might impact software maintainability. Most of the studies use coupling metrics to assess the impact of AOP mechanisms on maintainability attributes such as design stability. Unfortunately, the use of such metrics is fraught with dangers, which have so far not been thoroughly investigated. To clarify this problem, this paper presents a systematic review of recent AOP maintainability studies. We look at the attributes most frequently used as indicators of maintainability in current aspect-oriented (AO) programs; we investigate whether coupling metrics are an effective surrogate for measuring these attributes; we study the extent to which AOP abstractions and mechanisms are covered by the coupling metrics in use; and we analyse whether AO coupling metrics meet popular theoretical validation criteria. Our review consolidates data from recent research results, highlights circumstances in which the applied coupling measures are suitable for AO programs, and draws attention to deficiencies where coupling metrics need to be improved.
Rachel Burrows, Alessandro Garcia, François Taïani

Revealing Commonalities Concerning Maintenance of Software Product Line Platform Components

Software Product Line (SPL) development provides the possibility of reusing common parts in similar software products. However, the SPL approach does not in itself improve the maintenance of the software products of a product line. This paper presents an approach for reducing the maintenance costs of SPL products by using the Software as a Service (SaaS) concept, revealing exploitable commonalities in the maintenance of SPL platform components. This SPL-SaaS approach was developed from the experience of arvato services, which has been applying the software product line concept for years. It identifies the advantageous and disadvantageous characteristics of platform components that matter for combining the two concepts. The main goal is to enable an IT architect to identify platform components that are adequate for common maintenance. Criteria for identifying platform components suitable for the approach are therefore derived from these characteristics. Furthermore, the requirements of potential service users are examined and categorized with respect to their effects on the system architecture. Special customer requirements often lead to architectural constraints that are not compatible with the approach. If both the criteria are met and the architectural constraints are compatible, the SPL-SaaS approach can be applied to a component. The whole approach is applied to an example from arvato services.
Martin Assmann, Gregor Engels, Thomas von der Massen, Andreas Wübbeke

Service Based Development of a Cross Domain Reference Architecture

An important trend in software engineering is that systems are in transition from component-based architectures towards service-centric ones. Software product line engineering techniques can help achieve quality-based and systematic reuse. This paper addresses the issue of how to perform design and quality analysis of a cross-domain reference architecture. The reference architecture is designed based on the domains’ requirements and feature modelling. We propose a service-based approach for cross-domain reference architecture development. Throughout the sections we introduce an innovative way of thinking, founded on bridging concepts from software architecture, service orientation, software product lines, and quality analysis, with the purpose of initiating and evolving software systems.
Liliana Dobrica, Eila Ovaska

