
2005 | Book

Advanced Information Systems Engineering

17th International Conference, CAiSE 2005, Porto, Portugal, June 13-17, 2005. Proceedings

Edited by: Oscar Pastor, João Falcão e Cunha

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Keynotes

Conceptual Schema-Centric Development: A Grand Challenge for Information Systems Research

The goal of automating the building of information systems was stated in the sixties. Forty years later, it is clear that the goal has not been achieved to a satisfactory degree. One of the problems has been the lack of standards in languages and platforms. In this respect, the recent efforts on standardization provide an opportunity to revive the automation goal, which is the main purpose of this paper. We have named the goal “conceptual schema-centric development” (CSCD) in order to emphasize that the conceptual schema should be the center of the development of information systems. We show that to develop an information system it is necessary to define its conceptual schema and that, therefore, the CSCD approach does not place an extra burden on developers. In CSCD, conceptual schemas would be explicit, executable in the production environment and the basis for system evolution. To achieve the CSCD goal it is necessary to solve many research problems. We identify and comment on a few problems that should be included in a research agenda for CSCD. Finally, we show that the CSCD goal can be qualified as a grand challenge for the information systems research community.

Antoni Olivé
A MDA-Compliant Environment for Developing User Interfaces of Information Systems

To cope with the ever-increasing diversity of markup languages, programming languages, toolkits and interface development environments, conceptual modeling of user interfaces could bring a framework for specifying, designing, and developing user interfaces at a level of abstraction that is higher than the level where code is merely manipulated. For this purpose, a complete environment is presented based on conceptual modeling of user interfaces of information systems structured around three axes: the models that characterize a user interface from the end user’s viewpoint and the specification language that allows designers to specify such interfaces, the method for developing interfaces in forward, reverse, and lateral engineering based on these models, and a suite of tools that support designers in applying the method based on the models. This environment is compatible with the Model-Driven Architecture recommendations in the sense that all models adhere to the principle of separation of concerns and are based on model transformation between the MDA levels. The models and the transformations of these models are all expressed in UsiXML (User Interface eXtensible Markup Language) and maintained in a model repository that can be accessed by the suite of tools. Thanks to this environment, it is possible to quickly develop and deploy a wide array of user interfaces for different computing platforms, for different interaction modalities, for different markup and programming languages, and for various contexts of use.

Jean Vanderdonckt
Toward Semantic Interoperability of Heterogeneous Biological Data Sources

Genomic researchers use a number of heterogeneous data sources including nucleotides, protein sequences, 3-D protein structures, taxonomies, and research publications such as MEDLINE. This research aims to discover as much biological knowledge as possible about the properties and functions of structures such as DNA sequences and protein structures, and to explore the connections among all the data, so that the knowledge can be used to improve human lives. Currently it is very difficult to connect all of these data sources seamlessly unless all the data is transformed into a common format with an id connecting all of them. The state-of-the-art facilities for searching these data sources provide interfaces through which scientists can access multiple databases. Most of these searches are primarily text-based, requiring users to specify keywords with which the systems search each individual data source and return results. The user is then required to create the connections between the results from each source. This is a major problem because researchers do not always know how to create these connections. To solve this problem we propose a semantics-based mechanism for automatically linking and connecting the various data sources. Our approach is based on a model that explicitly captures the semantics of the heterogeneous data sources and makes them available for searching. In this talk I will discuss issues related to capturing the semantics of biological data and using these semantics to automate the integration of diverse heterogeneous sources.

Sudha Ram

Conceptual Modeling

The Association Construct in Conceptual Modelling – An Analysis Using the Bunge Ontological Model

Associations are a widely used construct of object-oriented languages. However, the meaning of associations for conceptual modelling of application domains remains unclear. This paper employs ontological analysis to first examine the software semantics of the association construct, and shows that they cannot be transferred to conceptual modelling. The paper then explores associations as ‘semantic connections’ between objects and shows that this meaning cannot be transferred to conceptual modelling either.

As an alternative to the use of associations, the paper proposes using shared properties, a construct that is rooted directly in ontology. An example from a case study demonstrates how this is applied. The paper then shows an efficient implementation in object-oriented programming languages to maintain seamless transitions between analysis, design, and implementation.

Joerg Evermann
Computing the Relevant Instances That May Violate an OCL Constraint

Integrity checking is aimed at efficiently determining whether the state of the information base is consistent after the application of a set of structural events. One possible way to achieve efficiency is to consider only the relevant instances that may violate an integrity constraint instead of the whole population of the information base. This is the approach we follow in this paper to automatically check the integrity constraints defined in a UML conceptual schema. Since the method we propose uses only the standard elements of the conceptual schema to process the constraints, its efficiency improvement can benefit any implementation of the schema regardless of the technology used.

Jordi Cabot, Ernest Teniente
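A minimal sketch of the filtering idea in the abstract above, with hypothetical event and constraint structures (none of the names come from the paper): only instances touched by the structural events are rechecked, not the whole population.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Event:
    kind: str          # 'insert', 'update' or 'delete' (a structural event)
    entity_type: str   # class of the affected object
    instance: tuple    # the affected object, here a plain tuple

@dataclass
class Constraint:
    context_types: set              # classes whose events may violate it
    holds: Callable[[tuple], bool]  # the constraint, checked per instance

def relevant_instances(events, constraint):
    """Keep only the instances whose events may violate the constraint."""
    return {ev.instance for ev in events
            if ev.entity_type in constraint.context_types}

def check_after_events(events, constraint):
    """Recheck the constraint on the relevant instances only."""
    return [i for i in relevant_instances(events, constraint)
            if not constraint.holds(i)]

# Example: salaries must be non-negative; only the updated employee is checked.
non_negative = Constraint({"Employee"}, lambda e: e[1] >= 0)
events = [Event("update", "Employee", ("Ann", -1))]
print(check_after_events(events, non_negative))   # [('Ann', -1)]
```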
Event-Based Modeling of Evolution for Semantic-Driven Systems

Ontologies play a key role in the realization of the Semantic Web. An ontology is used as an explicit specification of a shared conceptualization of a given domain. When such a domain evolves, the ontology describing it needs to evolve too. In this paper, we present an approach that allows tracing evolution at the instance level. We use event types as an abstraction mechanism to define the semantics of changes. Furthermore, we introduce a new event-based approach to keep dependent artifacts consistent with a changing instance base.

Peter Plessers, Olga De Troyer, Sven Casteleyn

Metamodeling

Interoperability in Meta-environments: An XMI-Based Approach

In this paper we propose an approach conceived to handle interoperability in meta-environments. The paper first illustrates the relevance of model interoperability in present software engineering applications; then, it presents the proposed approach, with particular emphasis on the role MOF and XMI play in it. Finally, it illustrates a prototype we have realized to verify the applicability of the proposed approach in a real case, namely the Business Process Management domain.

Roberto Riggio, Domenico Ursino, Harald Kühn, Dimitris Karagiannis
On the Notion of Consistency in Metadata Repository Systems

Repository systems handle the management of metadata and meta-models. They act as a data store with a custom-defined and dynamically adaptable system catalogue. This feature finds a useful application in systems such as process engines, collaborative and information systems, CASE tools and transformation engines, in which custom-defined catalogues are rarely available due to their complex nature. In this context, repositories would improve those systems’ ability to adapt and allow for dynamic information discovery. Preserving the consistency of the repository data is a major challenge. Repository consistency has several aspects, the most important of which is structural consistency. It is insufficiently specified in the metadata and repository standards, and is incompletely implemented in existing systems. In this paper we propose a novel approach to enforcing structural consistency in MOF-based repositories. We describe its implementation in iRM/RMS, a prototypical OMG MOF-based repository system [35]. We show how this algorithm overcomes the deficiencies of existing approaches and products.

Ilia Petrov, Stefan Jablonski, Marc Holze
Using Text Editing Creation Time Meta Data for Document Management

Word processing systems ignore the fact that the history of a text document contains crucial information for its management. In this paper, we present database-based word processing, focusing on the incorporated document management system. During the creation of a document, meta data are gathered. This information is generated at the level of the whole document, of sections of a document, or even of individual characters, and is used for advanced retrieval by so-called dynamic folders, which are superior to advanced hierarchical file systems.

Thomas B. Hodel, Roger Hacmac, Klaus R. Dittrich

Databases

An Object-Relational Approach to the Representation of Multi-granular Spatio-Temporal Data

The notion of spatio-temporal multi-granularity is fundamental when modeling objects in GIS applications in that it supports the representation of the temporal evolution of these objects. Concepts and issues in multi-granular spatio-temporal representations have been widely investigated by the research community. However, despite the large number of theoretical investigations, no comprehensive approaches have been proposed that deal with the representation of multi-granular spatio-temporal objects in commercially available DBMSs. The goal of the work that we report in this paper is to address this gap. To achieve it, the paper first introduces an object-relational model based on OpenGIS specifications described in SQL3. Several extensions are then developed to improve the semantics and behavior of spatio-temporal data types, introducing an approach to represent the temporal dimension in this model as well as the multi-representation of spatio-temporal granularities.

Elisa Bertino, Dolores Cuadra, Paloma Martínez
Managing Inheritance Hierarchies in Object/Relational Mapping Tools

We study, in the context of object/relational mapping tools, the problem of describing mappings between inheritance hierarchies and relational schemas. To this end, we introduce a novel mapping model, called M²ORM²+HIE, and investigate its mapping capabilities. We first show that M²ORM²+HIE subsumes three well-known basic representation strategies for mapping a hierarchy to relations. We then show that M²ORM²+HIE also allows expressing further mappings, e.g., where the three basic strategies are applied independently to different parts of a multi-level hierarchy. We describe the semantics of M²ORM²+HIE in terms of how CRUD (i.e., Create, Read, Update, and Delete) operations on objects (in a hierarchy) can be translated into operations over a corresponding relational database. We also investigate correctness conditions.

Luca Cabibbo, Antonio Carosi
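For context, the three basic strategies referred to above are commonly known as the single-table, table-per-concrete-class, and joined (table-per-class) mappings. A rough sketch, with hypothetical table and column names, of how reading all Employee objects (Employee being a subclass of Person) differs under each strategy:

```python
# Hypothetical Person/Employee hierarchy; table and column names are assumed.

def read_single_table():
    # One table for the whole hierarchy, with a discriminator column.
    return "SELECT id, name, salary FROM person WHERE kind = 'employee'"

def read_table_per_concrete_class():
    # One self-contained table per concrete class; no join needed.
    return "SELECT id, name, salary FROM employee"

def read_joined():
    # One table per class; subclass rows are completed by joining the parent.
    return ("SELECT p.id, p.name, e.salary "
            "FROM person p JOIN employee e ON e.id = p.id")

for read in (read_single_table, read_table_per_concrete_class, read_joined):
    print(read())
```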
BInXS: A Process for Integration of XML Schemata

This paper presents a detailed integration process for XML schemata called BInXS. BInXS adopts a global-as-view integration approach that builds a global schema from a set of heterogeneous XML schemata related to the same application domain. This bottom-up approach maps all element and attribute definitions in the XML schemata to corresponding concepts in the global schema, allowing access to all data available at the XML sources. The integration process is semi-automatically performed over conceptual representations of the XML schemata, which provides a better understanding of the semantics of the XML data to be unified. A conceptual schema is generated by a set of conversion rules that are applied to a schema definition for XML data. Because this conceptual schema is the result of a meticulous analysis of the XML logical model, it is able to abstract the particularities of semi-structured and XML data, like elements with mixed contents and elements with alternative representations. Therefore, the further unification of such conceptual schemata implicitly deals with the structural conflicts inherent to semi-structured and XML data. In addition, BInXS supports a mapping strategy based on XPath expressions in order to maintain correspondences between global concepts and data at the XML sources.

Ronaldo dos Santos Mello, Carlos Alberto Heuser

Query Processing

Query Processing Using Ontologies

Recently, the database and AI research communities have paid increased attention to ontologies. The main motivating reason is that ontologies promise solutions for complex problems caused by the lack of a good understanding of the semantics of data in many cases. In particular, ontologies have extensively been used to overcome the interoperability problem during the integration of heterogeneous information sources. Moreover, many efforts have been put into developing ontology-based techniques for improving the query answering process in database and information systems.

In this paper, we present a new approach for query processing within single (object) relational databases using ontology knowledge. Our goal is to process database queries in a semantically more meaningful way. In fact, our approach shows how an ontology can be effectively exploited to rewrite a user query into another one such that the new query provides more meaningful results satisfying the intention of the user. To this end, we develop a set of transformation rules which rely on semantic information extracted from the ontology associated with the database. In addition, we propose a semantic model and a set of criteria to prove the validity of the transformation results. We also address the necessary mappings between an ontology and its underlying database w.r.t. our framework.

Chokri Ben Necib, Johann-Christoph Freytag
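A minimal sketch of one such rewriting, assuming a toy subclass ontology and table (neither is from the paper): the query term is expanded with its transitive subclasses so the rewritten query also returns semantically matching rows.

```python
# Toy ontology fragment: term -> direct subclasses (assumed for illustration).
subclasses = {
    "vehicle": {"car", "truck"},
    "car": {"coupe", "sedan"},
}

def expand(term):
    """Return the term together with all of its transitive subclasses."""
    result, stack = {term}, [term]
    while stack:
        for child in subclasses.get(stack.pop(), ()):
            if child not in result:
                result.add(child)
                stack.append(child)
    return result

def rewrite(term, column="category"):
    """Rewrite an equality condition into an IN over the expanded term set."""
    values = ", ".join(f"'{v}'" for v in sorted(expand(term)))
    return f"SELECT * FROM product WHERE {column} IN ({values})"

print(rewrite("vehicle"))
# SELECT * FROM product WHERE category IN ('car', 'coupe', 'sedan', 'truck', 'vehicle')
```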
Estimating Recall and Precision for Vague Queries in Databases

In vague queries, a user enters a value that represents some real-world object and expects as the result the set of database values that represent this real-world object, even without exact matching. The problem appears in databases that collect data from different sources or databases where different users enter data directly. Query engines usually rely on some type of similarity metric to support data with inexact matching. The problem of building query engines to execute vague queries has already been studied, but an important problem still remains open, namely that of defining the threshold to be used when a similarity scan is performed over a database column. From the literature it is known that the threshold depends on the similarity metric and also on the set of values being queried. Thus, it is unrealistic to expect the user to supply a threshold at query time. In this paper we propose a process for estimating recall/precision values for several thresholds for a database column. The idea is that this process is started by a database administrator in a pre-processing phase using samples extracted from the database. The meta-data collected by this process may be used in the optimization phase of query processing. The paper describes this process as well as experiments that were performed in order to evaluate it.

Raquel Kolitski Stasiu, Carlos A. Heuser, Roberto da Silva
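A minimal sketch of the pre-processing phase described above, assuming a labelled sample of value pairs and difflib's ratio as the similarity metric (the paper's metrics and sampling procedure may differ):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

# Labelled sample: (value_1, value_2, same real-world object?) -- assumed given.
sample = [
    ("Joao Silva", "João Silva", True),
    ("Joao Silva", "J. Silva", True),
    ("Joao Silva", "Maria Souza", False),
]

def recall_precision(threshold):
    """Estimate recall and precision of a similarity scan at this threshold."""
    matched = [(a, b, same) for a, b, same in sample
               if similarity(a, b) >= threshold]
    true_pos = sum(1 for _, _, same in matched if same)
    relevant = sum(1 for _, _, same in sample if same)
    recall = true_pos / relevant if relevant else 0.0
    precision = true_pos / len(matched) if matched else 0.0
    return recall, precision

# The resulting table is the kind of meta-data handed to query optimization.
for t in (0.5, 0.7, 0.9):
    r, p = recall_precision(t)
    print(f"threshold={t:.1f}  recall={r:.2f}  precision={p:.2f}")
```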
Querying Tree-Structured Data Using Dimension Graphs

Tree structures provide a popular means to organize the information on the Web. Taxonomies of thematic categories, concept hierarchies, and e-commerce product catalogs are examples of such structures. Querying multiple data sources that use tree structures to organize their data is a challenging issue due to name mismatches, structural differences and structural inconsistencies that occur in such structures, even for a single knowledge domain. In this paper, we present a method to query tree-structured data. We introduce dimensions, which are sets of semantically related nodes in tree structures. Based on dimensions, we suggest dimension graphs. Dimension graphs can be automatically extracted from trees and abstract their structural information. They are semantically rich constructs that provide query guidance to pose and evaluate queries on trees. We design a query language to query tree-structured data. A key feature of this language is that queries are not restricted by the structure of the trees. We present a technique for evaluating queries and we provide necessary and sufficient conditions for checking query unsatisfiability. We also show how dimension graphs can be used to query multiple trees in the presence of structural differences and inconsistencies.

Dimitri Theodoratos, Theodore Dalamagas

Process Modeling and Workflow Systems

Workflow Resource Patterns: Identification, Representation and Tool Support

In the main, the attention of workflow researchers and workflow developers has focussed on the process perspective, i.e., control-flow. As a result, issues associated with the resource perspective, i.e., the people and machines actually doing the work, have been largely neglected. Although the process perspective is of most significance, appropriate consideration of the resource perspective is essential for successful implementation of workflow technology. Previous work has identified recurring, generic constructs in the control-flow and data perspectives, and presented them in the form of control-flow and data patterns. The next logical step is to describe workflow resource patterns that capture the various ways in which resources are represented and utilised in workflows. These patterns include a number of distinct groupings such as push patterns (“the system pushes work to a worker”) and pull patterns (“the worker pulls work from the system”) to describe the many ways in which work can be distributed. By delineating these patterns in a form that is independent of specific workflow technologies and modelling languages, we are able to provide a comprehensive treatment of the resource perspective and we subsequently use these patterns as the basis for a detailed comparison of a number of commercially available workflow management systems.

Nick Russell, Wil M. P. van der Aalst, Arthur H. M. ter Hofstede, David Edmond
A Declarative Foundation of Process Models

In this paper, a declarative foundation for process models is proposed. Three issues in process management and modeling are identified: business orientation, traceability, and flexibility. It is shown how these issues can be addressed by basing process models on business models, where a business model focuses on the transfer of value between agents. As a bridge between business models and process models, the notion of activity dependency model is introduced, which identifies, classifies, and relates activities needed for executing and coordinating value transfers.

Birger Andersson, Maria Bergholtz, Ananda Edirisuriya, Tharaka Ilayperuma, Paul Johannesson
Synchronizing Copies of External Data in Workflow Management Systems

Workflow management systems integrate applications and data resources in business processes. Frequently they have to keep local copies of data in the so-called workflow repository. We introduce and define synchronization policies for keeping these copies consistent with their external data sources, ranging from the provision of fully synchronized replications of external data sources to the unsynchronized storage of read results. We analyze in detail the effects of combining various pull and push policies and show in an example how these policies can be used in different situations.

Johann Eder, Marek Lehmann
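A minimal sketch of such policies, with hypothetical names (the paper's policy catalogue is richer): the repository copy can be refreshed on every read, refreshed only when stale, or kept up to date purely by pushes from the source.

```python
import time

class RepositoryCopy:
    """Local copy of external data, synchronized under a chosen policy."""

    def __init__(self, source, policy="pull_on_read", max_age=60.0):
        self.source = source     # callable that reads the external data source
        self.policy = policy     # 'pull_on_read', 'pull_if_stale' or 'push'
        self.max_age = max_age   # staleness bound in seconds
        self.value, self.stamp = None, 0.0

    def push(self, new_value):
        # Invoked by the external source when it changes (push policy).
        self.value, self.stamp = new_value, time.time()

    def read(self):
        stale = (time.time() - self.stamp) > self.max_age
        if self.policy == "pull_on_read" or (self.policy == "pull_if_stale" and stale):
            self.push(self.source())   # pull: refresh the local copy now
        return self.value              # push policy: trust the replica as-is

copy = RepositoryCopy(lambda: "EUR/USD = 1.21", policy="pull_if_stale")
print(copy.read())   # the first read finds a stale copy, so the value is pulled
```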

Requirements Engineering

Understanding the Requirements on Modelling Techniques

The focus of this paper is not on the requirements of an information system to be developed, but rather on the requirements that apply to the modelling techniques used during information system development. We claim that, in the past as well as the present, many information systems modelling techniques have been developed without a proper understanding of the requirements that follow from the development processes in which these techniques are to be used. This paper provides a progress report on our research efforts to obtain a fundamental understanding of the requirements mentioned. We discuss the underlying research issues, the research approach we use, the way of thinking (weltanschauung) that will be employed in finding the answers, and some first results.

S. J. B. A. Hoppenbrouwers, H. A. Proper, Th. P. van der Weide
A Process for Generating Fitness Measures

It is widely acknowledged that the system functionality captured in a system model has to match the organisational requirements available in the business model. However, fitness measures are rarely integrated in design methodologies. The paper proposes a framework to ease the generation of fitness measures adapted to a given methodology, in order to quantify the extent to which the business and the system fit. The framework comprises a generic level and a specific level. The former provides generic evaluation criteria and metrics expressed on the basis of business and system ontologies. The specific level deals with a specific set of metrics adapted to specific business and system models. The paper presents the process for generating a specific set of measures from the generic set, illustrates it with two specific models and shows how the use of the generated metrics can help in making design decisions in the development of a hotel room booking system.

Anne Etien, Colette Rolland
A Concern-Oriented Requirements Engineering Model

Traditional requirements engineering approaches suffer from the tyranny of the dominant decomposition, with functional requirements serving as the base decomposition and non-functional requirements cutting across them. In this paper, we propose a model that decomposes requirements in a uniform fashion regardless of their functional or non-functional nature. This makes it possible to project any particular set of requirements on a range of other requirements, hence supporting a multi-dimensional separation. The projections are achieved through composition rules employing informal, often concern-specific, actions and operators. The approach supports establishment of early trade-offs among crosscutting and overlapping requirements. This, in turn, facilitates negotiation and decision-making among stakeholders.

Ana Moreira, João Araújo, Awais Rashid

Model Transformation

Generating Transformation Definition from Mapping Specification: Application to Web Service Platform

In the first part of this paper, we present our proposal for mapping specification and the generation of transformation definitions in the context of Model Driven Architecture (MDA). In the second part, we present the application of our proposal to the Web Services platform. We propose a metamodel for mapping specification and its implementation as a plug-in for Eclipse. Once mappings are specified between two metamodels (e.g. UML and WSDL), transformation definitions are generated automatically using transformation languages such as the Atlas Transformation Language (ATL). We have applied this tool to edit mappings between UML and Web Services. We have then used these mappings to generate ATL code to achieve transformations from UML into Web Services.

Denivaldo Lopes, Slimane Hammoudi, Jean Bézivin, Frédéric Jouault
A General Approach to the Generation of Conceptual Model Transformations

In data integration, a Merge operator takes as input a pair of schemas in some conceptual modelling language, together with a set of correspondences between their constructs, and produces as output a single integrated schema. In this paper we present a new approach to implementing the Merge operator that improves upon previous work by considering a wider range of correspondences between schema constructs and by defining a generic and formal framework for the generation of schema transformations. This is used as a basis for deriving transformations over high-level models. The approach is demonstrated in this paper by generating transformations for ER and relational models.

Nikolaos Rizopoulos, Peter McBrien
Building a Software Factory for Pervasive Systems Development

The rise in the number and complexity of pervasive systems is a fact. Pervasive systems developers need advanced development methods in order to build better systems more easily. Software Factories and the Model Driven Architecture (MDA) are two important trends in the software engineering field. This paper applies the guidelines and strategies described by these proposals in order to build a methodological approach for pervasive systems development. Software Factories are based on the definition of software families supported by frameworks. Individual system requirements are specified by means of domain-specific languages. Following this strategy, our approach defines a framework and a domain-specific language for pervasive systems. We use the MDA guidelines to support the development of our domain-specific language and the automatic generation of the specific source code of a particular system. The approach presented in this paper raises the abstraction level in the development of pervasive systems and provides highly reusable assets that reduce the effort in development projects.

Javier Muñoz, Vicente Pelechano

Knowledge Management and Verification

Alignment and Maturity Are Siblings in Architecture Assessment

Current architecture assessment models focus on either architecture maturity or architecture alignment, considering the other as an explaining sub-variable. Based on an exploratory study, we conjecture that both alignment and maturity are equally important variables in properly assessing architecture organizations. Our hypothesis is that these variables conceptually differ and correlate, but do not explain one another. In this paper we describe our Multi-dimensional Assessment model for architecture Alignment and architecture Maturity (MAAM), which contains six main interrelated sub-variables that explain both alignment and maturity. We used existing models, literature from the business and IS domains, and knowledge gained from previous research to identify the explaining variables. We constructed MAAM using structured modeling techniques. We are currently using a structured questionnaire method to construct an Internet survey with which we gather data to empirically validate our model. Our goal is to develop an architecture assessment process and a supporting tool based on MAAM.

Bas van der Raadt, Johan F. Hoorn, Hans van Vliet
Verification of EPCs: Using Reduction Rules and Petri Nets

Designing business models is a complicated and error-prone task. On the one hand, business models need to be intuitive and easy to understand. On the other hand, ambiguities may lead to different interpretations and false consensus. Moreover, to configure process-aware information systems (e.g., a workflow system), the business model needs to be transformed into an executable model. Event-driven Process Chains (EPCs), like other informal languages, are intended to support the transition from a business model to an executable model. Many researchers have assigned formal semantics to EPCs and are using these semantics for execution and verification. In this paper, we use a different tactic. We propose a two-step approach where first the informal model is reduced and then verified in an interactive manner. This approach acknowledges that some constructs are correct or incorrect no matter what interpretation is used, and that the remaining constructs require human judgment to assess correctness. This paper presents a software tool that supports this two-step approach and thus allows for the verification of real-life EPCs, as illustrated by two case studies.

B. F. van Dongen, W. M. P. van der Aalst, H. M. W. Verbeek
Measurement Practices for Knowledge Management: An Option Perspective

This article develops an option pricing model to evaluate knowledge management (KM) activities from the following perspectives: knowledge creation, knowledge conversion, knowledge circulation, and knowledge carry-out. This paper makes three important contributions: (1) it provides a formal theoretical grounding for the validity of the Black-Scholes model as applied to KM; (2) it proposes a measurement framework to enable leveraging knowledge assets effectively and efficiently; (3) it presents the first application of the Black-Scholes model that uses a real-world business situation involving KM as its test bed. The results show that the option pricing model can act as a measurement guideline for the whole range of KM activities.

An-Pin Chen, Mu-Yen Chen
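For reference, the valuation the abstract builds on is the standard Black-Scholes formula for a European call; how its inputs are mapped to knowledge assets is the paper's contribution and is not reproduced here.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Standard European call value: S spot, K strike, T maturity in years,
    r risk-free rate, sigma volatility."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf   # standard normal cumulative distribution
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative numbers only; they carry no KM interpretation from the paper.
print(f"{black_scholes_call(S=100, K=95, T=1.0, r=0.05, sigma=0.2):.2f}")
```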

Web Services

An Ontological Approach for Eliciting and Understanding Needs in e-Services

The lack of a good understanding of customer needs within e-service initiatives has caused severe financial losses in the Norwegian energy sector, resulting in the failure of e-service initiatives offering packages of independent services. One of the causes was poor elicitation and understanding of the e-services at hand. In this paper, we propose an ontologically founded approach (1) to describe customer needs and the necessary e-services that satisfy such needs, and (2) to bundle elementary e-services into needs-satisfying e-service bundles. The ontology as well as the associated reasoning mechanisms are codified in RDFS to enable software support for need elicitation and service bundling. A case study from the Norwegian energy sector is used to demonstrate how we put our theory into practice.

Ziv Baida, Jaap Gordijn, Hanne Sæle, Hans Akkermans, Andrei Z. Morch
Developing Adapters for Web Services Integration

The push toward business process automation has generated the need for integrating different enterprise applications involved in such processes. The typical approach to integration and to process automation is based on the use of adapters and message brokers. The need for adapters in Web services mainly comes from two sources: one is the heterogeneity at the higher levels of the interoperability stack, and the other is the high number of clients, each of which can support different interfaces and protocols, thereby generating the need for providing multiple interfaces to the same service. In this paper, we characterize the problem of adaptation of Web services by identifying and classifying different kinds of adaptation requirements. Then, we focus on business protocol adapters, and we classify the different ways in which two protocols may differ. Next, we propose a methodology for developing adapters in Web services, based on the use of mismatch patterns and service composition technologies.

Boualem Benatallah, Fabio Casati, Daniela Grigori, Hamid R. Motahari Nezhad, Farouk Toumani
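A minimal sketch of one protocol mismatch of the kind classified above, with hypothetical interfaces: the client speaks a one-message protocol while the service expects two calls, so the adapter splits the message.

```python
class SupplierService:
    """Assumed target interface: expects a login call before each order."""
    def login(self, user):
        print(f"login({user})")
    def order(self, item):
        print(f"order({item})")

class OrderAdapter:
    """Bridges the client's one-shot protocol to the service's two-step one."""
    def __init__(self, service):
        self.service = service
    def place_order(self, user, item):
        self.service.login(user)   # message split: one incoming request ...
        self.service.order(item)   # ... becomes two outgoing calls

OrderAdapter(SupplierService()).place_order("ann", "book-42")
```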
Efficient: A Toolset for Building Trusted B2B Transactions

The paper introduces an approach to the specification, verification and validation of B2B transactions. Based on the use of a subset of formally defined UML diagrams complemented with business rules, we introduce two facilities offered by the supporting Efficient toolset: the checking of formal properties expected from the produced models, and an animation tool allowing business experts to understand and ‘play’ with business transaction models before they are implemented. The overall approach is illustrated through the experience gained in a real transactional Import/Export business case.

Amel Mammar, Sophie Ramel, Bertrand Grégoire, Michael Schmitt, Nicolas Guelfi

Web Engineering

Separation of Structural Concerns in Physical Hypermedia Models

In this paper we propose a modeling and design approach for building physical hypermedia applications, i.e. those mobile applications in which physical and digital objects are related and explored using the hypermedia paradigm. We show that by separating the geographical and domain concerns we gain in modularity and ease of evolution. We first review the state of the art of this kind of software system, arguing for the need for a systematic modeling approach; we next present a light extension to the OOHDM design approach, incorporating physical objects and “walkable” links; next we generalize our approach and show how to improve concern separation and integration in hypermedia design models. We compare our approach with others in the field of physical and ubiquitous hypermedia and in the more generic software engineering field. Some concluding remarks and further work are finally presented.

Silvia Gordillo, Gustavo Rossi, Daniel Schwabe
Integrating Unnormalised Semi-structured Data Sources

Semi-structured data sources, such as XML, HTML or CSV files, present special problems when performing data integration. In addition to the hierarchical structure of the semi-structured data, the data integration must deal with the redundancy in semi-structured data, where the same fact may be repeated in a data source but should map into a single fact in a global integrated schema. We term semi-structured data containing such redundancy an unnormalised data source, and we define a normal form for semi-structured data that may be used when defining global schemas. We introduce special functions to relate object identifiers used in the global data model to object identifiers in unnormalised data sources, and demonstrate how to use these functions in query processing, update processing and integration of these data sources.

Sasivimol Kittivoravitkul, Peter McBrien
Model Transformations in the Development of Data–Intensive Web Applications

Over the last few years, Web-based systems have become commonplace. Despite the complexity and the economic significance of such applications, current practice does not always apply robust and well-understood principles. Model driven architecture (MDA) separates the application logic from the underlying platform technology and represents them with precise semantic models. Web application development therefore has potentially the most to gain from adopting such techniques, which can offer a greater return on development time and quality factors than traditional approaches. In particular, the paper presents model-driven transformations between platform-independent (conceptual descriptions of Web applications) and platform-specific (Model-View-Controller conformant) models. The design of such transformations is documented (and possibly animated) through mathematically rigorous specifications given by means of Abstract State Machines.

Davide Di Ruscio, Alfonso Pierantonio

Software Testing

Automated Reasoning on Feature Models

Software Product Line (SPL) Engineering has proved to be an effective method for software production. However, in the SPL community it is well recognized that variability in SPLs is increasing by the thousands. Hence, automated support is needed to deal with variability in SPLs. Most of the current proposals for automatic reasoning on SPLs are not devised to cope with extra-functional features. In this paper we introduce a proposal to model and reason on an SPL using constraint programming. We take into account functional and extra-functional features, improve on current proposals and present a running, yet feasible implementation.

David Benavides, Pablo Trinidad, Antonio Ruiz-Cortés
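A minimal sketch of the mapping from a feature model to constraints, with a hypothetical four-feature model and brute-force enumeration standing in for a constraint solver:

```python
from itertools import product

features = ["mobile", "calls", "camera", "highres_screen"]

def valid(f):
    """Feature-model rules encoded as boolean constraints over a selection f."""
    return (f["mobile"]                                    # root always present
            and f["calls"]                                 # mandatory feature
            and (not f["camera"] or f["highres_screen"]))  # camera requires hi-res

# Enumerate every feature selection and keep the ones satisfying the rules.
products = [f for f in (dict(zip(features, bits))
                        for bits in product([True, False], repeat=len(features)))
            if valid(f)]

print(len(products), "valid products")   # 3 with these toy constraints
for p in products:
    print(sorted(name for name, chosen in p.items() if chosen))
```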
A Method for Information Systems Testing Automation

This paper presents MODEST, a MethOD to hElp System Testing. MODEST can reduce the overall effort required during software construction, using an extended design specification produced in a UP-like software process. This specification is used to automate test generation and execution, decreasing the effort required during test activities. The method deals with Information Systems that follow an architecture composed of a user interface layer, a business rule layer and a storage mechanism abstracted by a persistence layer.

Pedro Santos Neto, Rodolfo Resende, Clarindo Pádua
Model-Based System Testing of Software Product Families

In software product family engineering, reusable artifacts are produced during domain engineering and applications are built from these artifacts during application engineering. Modeling variability of current and future applications is the key for enabling reuse. The proactive reuse leads to a reduction in development costs and a shorter time to market. Up to now, these benefits have been realized for the constructive development phases, but not for testing. This paper presents the ScenTED technique (Scenario-based TEst case Derivation), which aims at reducing effort in product family testing. ScenTED is a model-based, reuse-oriented technique for test case derivation in the system test of software product families. Reuse of test cases is ensured by preserving variability during test case derivation. Thus, concepts known from model-based testing in single-system engineering, e.g., coverage metrics, must be adapted. Experiences with our technique gained from an industrial case study are discussed and prototypical tool support is illustrated.

Andreas Reuys, Erik Kamsties, Klaus Pohl, Sacha Reis

Software Quality

Quality-Based Software Reuse

Work in software reuse focuses on reusing artifacts. In this context, finding a reusable artifact is driven by a desired functionality. This paper proposes a change to this common view. We argue that it is possible and necessary to also look at reuse from a non-functional (quality) perspective. Combining ideas from reuse, goal-oriented requirements, aspect-oriented programming and quality management, we obtain a goal-driven process to enable quality-based reuse.

Julio Cesar Sampaio do Prado Leite, Yijun Yu, Lin Liu, Eric S. K. Yu, John Mylopoulos
On the Lightweight Use of Goal-Oriented Models for Software Package Selection

Software package selection can be seen as a process of matching the products available in the marketplace with the requirements stated by an organization. This process may involve hundreds of requirements and products and therefore we need a framework abstract enough to focus on the most important factors that influence the selection. Due to their strategic nature, goal-oriented models are good candidates to be used as a basis of such a framework. They have demonstrated their usefulness in contexts like early requirements engineering, organizational analysis and business process reengineering. In this paper, we identify three different types of goal-oriented models useful in the context of package selection when some assumptions hold.

Market segment models provide a shared view of all the packages in the segment; software package models are derived from them. The selection can then be seen as a process of matching between the organizational model and the other models; in our proposal this matching is lightweight, since no model checking is performed. We define our approach rigorously by means of a UML conceptual data model.

Xavier Franch
Measuring IT Infrastructure Project Size: Infrastructure Effort Points

Our objective is to design a metric that can be used to measure the size of projects that install and configure COTS stand-alone software, firmware and hardware components. We call these IT infrastructure, as these components often form the foundation of the information system that is built on top of it. At the moment no accepted size metric exists for the installation and configuration of stand-alone software, firmware and hardware components. The proposed metric promises to be a viable instrument to assess the effectiveness and efficiency of IT infrastructure projects.

Joost Schalken, Sjaak Brinkkemper, Hans van Vliet
Backmatter
Metadata
Title
Advanced Information Systems Engineering
Edited by
Oscar Pastor
João Falcão e Cunha
Copyright year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32127-9
Print ISBN
978-3-540-26095-0
DOI
https://doi.org/10.1007/b136788
