
2005 | Book

On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE

OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2005, Agia Napa, Cyprus, October 31 - November 4, 2005, Proceedings, Part I

Edited by: Robert Meersman, Zahir Tari

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

OTM 2005 Keynotes

Probabilistic Ontologies and Relational Databases

The relational algebra and calculus do not take the semantics of terms into account when answering queries. As a consequence, not all tuples that should be returned in response to a query are always returned, leading to low recall. In this paper, we propose the novel notion of a constrained probabilistic ontology (CPO). We developed the concept of a CPO-enhanced relation in which each attribute of a relation has an associated CPO. These CPOs describe relationships between terms occurring in the domain of that attribute. We show that the relational algebra can be extended to handle CPO-enhanced relations. This allows queries to yield sets of tuples, each of which has a probability of being correct.
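The extended selection operator described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the CPO is modeled as a table of pairwise term probabilities (all names and the example data are hypothetical), and a query for one term also returns tuples whose attribute values are ontologically related, each tagged with a probability.

```python
# Hypothetical CPO for a 'disease' attribute: probability that two terms
# refer to semantically related concepts.
cpo_disease = {
    ("flu", "flu"): 1.0,
    ("flu", "influenza"): 0.95,    # near-synonym
    ("flu", "common cold"): 0.30,  # loosely related
}

patients = [
    {"name": "Ann", "disease": "influenza"},
    {"name": "Bob", "disease": "common cold"},
    {"name": "Eve", "disease": "fracture"},
]

def prob_select(relation, attr, term, cpo, threshold=0.2):
    """Extended selection: return (tuple, probability) pairs for tuples whose
    attribute value is related to `term` with probability >= threshold."""
    out = []
    for t in relation:
        # Look the pair up in either order; unrelated terms default to 0.
        p = cpo.get((term, t[attr]), cpo.get((t[attr], term), 0.0))
        if p >= threshold:
            out.append((t, p))
    return out

for t, p in prob_select(patients, "disease", "flu", cpo_disease):
    print(t["name"], p)
```

A plain relational selection on disease = "flu" would return nothing here; the CPO-enhanced version recovers the semantically related tuples with an attached probability of correctness.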

Octavian Udrea, Deng Yu, Edward Hung, V. S. Subrahmanian
Intelligent Web Service – From Web Services to “Plug&Play” Service Integration

The service oriented architecture and its implementation by Web services have reached a considerable degree of maturity and also a wide adoption in different application domains. This is true for the R&D as well as for the industrial community. Standards for the description, activation, and combination of Web services have been established; UDDI registries are in place for the management of services, and development environments support the software engineer in the creation of Web services.

However, the major benefit of service oriented architectures, the loose coupling of services, is still seldom explored in real world settings. The reason is the heterogeneity on different levels within the service oriented architecture. The heterogeneity problems reach from the semantics of service descriptions to compatibility problems between workflows, which have to be connected via service interfaces. In spite of compatible service signatures, workflows might, for example, not be compatible in their semantics.

This talk discusses challenges and solutions for a real “Plug&Play” service infrastructure, i.e. a Web service infrastructure where integration of new Web services becomes as simple and straightforward as plugging a USB stick into your laptop. To achieve this goal, various issues have to be addressed:

Semantics of services as a foundation for intelligent service mediation and usage

Effective, automatic, and intelligent service discovery taking into account application context

Dynamic context-aware composition of services into processes

The challenges and approaches for a “Plug&Play” service infrastructure are illustrated with a real world example.

Erich Neuhold, Thomas Risse, Andreas Wombacher, Claudia Niederée, Bendick Mahleko
Process Modeling in Web Applications

While Web applications evolve towards ubiquitous, enterprise-wide or multi-enterprise information systems, their features must cover new requirements, such as the capability of managing complex processes spanning multiple users and organizations, by interconnecting software provided by different organizations. Significant efforts are currently being invested in application integration, to support the composition of business processes of different companies, so as to create complex, multi-party business scenarios. In this setting, Web applications, which were originally conceived to enable user-to-system dialogue, are extended with Web services, which enable system-to-system interaction, and with process control primitives, which permit the implementation of the required business constraints. This talk presents new Web engineering methods for the high-level specification of applications featuring business processes and remote service invocation. Process- and service-enabled Web applications benefit from the high-level modeling and automatic code generation techniques that have been fruitfully applied to conventional Web applications, broadening the class of Web applications that take advantage of these powerful software engineering techniques. All the concepts presented in this talk are fully implemented within a CASE tool.

Stefano Ceri

Cooperative Information Systems (CoopIS) 2005 International Conference

Workflow

Let’s Go All the Way: From Requirements Via Colored Workflow Nets to a BPEL Implementation of a New Bank System

This paper describes the use of the formal modeling language Colored Petri Nets (CPNs) in the development of a new bank system. As a basis for the paper, we present a requirements model, in the form of a CPN, which describes a new bank work process that must be supported by the new system. This model has been used to specify, validate, and elicit user requirements. The contribution of this paper is to describe two translation steps that go from the requirements CPN to an implementation of the new system. In the first translation step, a workflow model is derived from the requirements model. This model is represented in terms of a so-called Colored Workflow Net (CWN), which is a generalization of the classical workflow nets to CPN. In the second translation step, the CWN is translated into implementation code. The target implementation language is BPEL4WS deployed in the context of IBM WebSphere. A semi-automatic translation of the workflow model to BPEL4WS is possible because of the structural requirements imposed on CWNs.

W. M. P. van der Aalst, J. B. Jørgensen, K. B. Lassen
A Service-Oriented Workflow Language for Robust Interacting Applications

In a service-oriented world, a long-running business process can be implemented as a set of stateful services that represent the individual but coordinated steps that make up the overall business activity. These service-based business processes can then be combined to form loosely-coupled distributed applications where the participants interact by calling on each other’s services. A key concern is to ensure that these interacting service-based processes work correctly in all cases, including maintaining consistency of both their stored data and the status of the joint activities. We propose a new model and notation for expressing such business processes which helps the designer avoid many common sources of errors, including inconsistency. Unlike most existing orchestration or workflow languages used for expressing business processes, we do not separate the normal case from exceptional activity, nor do we treat exceptional activity as a form of failure that requires compensation. Our model has been demonstrated by developing prototype systems.

(This work was completed while the author was working at CSIRO.)

Surya Nepal, Alan Fekete, Paul Greenfield, Julian Jang, Dean Kuo, Tony Shi
Balancing Flexibility and Security in Adaptive Process Management Systems

Process-aware information systems (PAIS) must provide sufficient flexibility to their users to support a broad spectrum of application scenarios. As a response to this need, adaptive process management systems (PMS) have emerged, supporting both ad-hoc deviations from the predefined process schema and the quick adaptation of the PAIS to business process changes. This newly gained runtime flexibility, however, imposes challenging security issues as the PMS becomes more vulnerable to misuse. Process changes must be restricted to authorized users, but without nullifying the advantages of a flexible system by handling authorizations in an overly rigid way. This paper discusses requirements relevant in this context and proposes a comprehensive access control (AC) model with special focus on adaptive PMS. On the one hand, our approach allows the compact definition of user-dependent access rights restricting process changes to authorized users only. On the other hand, the definition of process-type-dependent access rights is supported to allow only those change commands which are applicable within a particular process context. Respective AC mechanisms will be key ingredients in future adaptive PMS.

Barbara Weber, Manfred Reichert, Werner Wild, Stefanie Rinderle

Workflow and Business Processes

Enabling Business Process Interoperability Using Contract Workflow Models

Business transactions are governed by legally established contracts. Contractual obligations are to be fulfilled by executing business processes of the involved parties. To enable this, contract terms and conditions need to be semantically mapped to process concepts and then analyzed for compliance with existing process models. To solve this problem, we propose a methodology that, using a layered contract ontology, derives from contract requirements a high-level process description named the Contract Workflow Model (CWM). By applying a set of transformation rules, the CWM is then compared for compliance with existing, executable process models. The methodology thereby enables comprehensive identification and evolution of requirements for interoperability between the processes of the contracting parties.

Jelena Zdravkovic, Vandana Kabilan
Resource-Centric Worklist Visualisation

Business process management, and in particular workflow management, is a major area of ICT research. At present, no coherent approach has been developed to address the problem of workflow visualisation to aid workers in the process of task prioritisation. In this paper we describe the development of a new, coherent approach to worklist visualisation, via analysis and development of a resource-centric view of the worklist information. We then derive appropriate visualisations for worklists and the relevant resources to aid workers in decision making. A worklist visualisation system has been implemented as an extension to an open-source workflow system, YAWL (Yet Another Workflow Language).

Ross Brown, Hye-young Paik
CoopFlow: A Framework for Inter-organizational Workflow Cooperation

The work we present here is in line with a novel approach for inter-organizational workflow cooperation spanning several organizations without being managed by one physical organization. Our approach consists of three steps: workflow advertisement, workflow interconnection, and workflow cooperation. Hence, to implement a virtual organization it is important to provide a mechanism whereby organizations can advertise their workflow parts, and other organizations can inspect them and interconnect them with their own workflows. In this paper, we present CoopFlow, a workflow cooperation framework supporting dynamic plugging and cooperation between heterogeneous workflow management systems (WfMS). Any WfMS that is able to invoke external applications (programs, Web services, etc.) and that allows external applications to invoke any step within a workflow it manages can be connected to CoopFlow. CoopFlow presents many advantages. First, it provides powerful means for inter-organizational workflow cooperation and code reduction. In fact, partners can change their WfMS without changing the global proxy behaviour. Furthermore, it permits dynamic interconnection and disconnection of participating organizations. In addition, it preserves the privacy and autonomy of process participants by reducing inter-visibility to the minimum required for cooperation, based on the view principle. Finally, our framework preserves established workflows: participants do not modify their internal systems; instead, they just have to implement a proxy to integrate CoopFlow.

Issam Chebbi, Samir Tata

Mining and Filtering

Process Mining and Verification of Properties: An Approach Based on Temporal Logic

Information systems are facing conflicting requirements. On the one hand, systems need to be adaptive and self-managing to deal with rapidly changing circumstances. On the other hand, legislation such as the Sarbanes-Oxley Act is putting increasing demands on monitoring activities and processes. As processes and systems become more flexible, both the need for, and the complexity of, monitoring increases. Our earlier work on process mining has primarily focused on process discovery, i.e., automatically constructing models describing knowledge extracted from event logs. In this paper, we focus on a different problem complementing process discovery. Given an event log and some property, we want to verify whether the property holds. For this purpose we have developed a new language based on Linear Temporal Logic (LTL) and we combine this with a standard XML format to store event logs. Given an event log and an LTL property, our LTL Checker verifies whether the observed behavior matches the (un)expected/(un)desirable behavior.
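The kind of check the abstract describes can be sketched in a few lines. This is not the authors' LTL Checker or its log format; it is a hedged illustration of one common finite-trace property, "every request is eventually followed by an approve" (the pattern G(request → F approve)), checked case by case over a hypothetical event log.

```python
def eventually_follows(trace, trigger, response):
    """True iff every occurrence of `trigger` is later followed by `response`."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True

# Hypothetical event log: case id -> ordered list of activity names.
event_log = {
    "case1": ["register", "request", "check", "approve", "archive"],
    "case2": ["register", "request", "check", "reject"],
}

violations = [case for case, trace in event_log.items()
              if not eventually_follows(trace, "request", "approve")]
print(violations)  # case2 contains a request that is never approved
```

A full checker would parse properties expressed in the LTL-based language and traverse each trace once per temporal operator; the loop above is the specialization of that idea to a single response pattern.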

W. M. P. van der Aalst, H. T. de Beer, B. F. van Dongen
A Detailed Investigation of Memory Requirements for Publish/Subscribe Filtering Algorithms

Various filtering algorithms for publish/subscribe systems have been proposed. One distinguishing characteristic is their internal representation of Boolean subscriptions: subscriptions are either converted into DNFs (canonical approaches) or exploited directly in event filtering (non-canonical approaches).

In this paper, we present a detailed analysis and comparison of the memory requirements of canonical and non-canonical filtering algorithms. This includes a theoretical analysis of space usages as well as a verification of our theoretical results by an evaluation of a practical implementation. This practical analysis also considers time (filter) efficiency, which is the other important quality measure of filtering algorithms. By correlating the results of space and time efficiency, we conclude when to use non-canonical and canonical approaches.

Sven Bittner, Annika Hinze
Mapping Discovery for XML Data Integration

The interoperability of heterogeneous data sources is an important issue in many applications such as mediation systems or web-based systems. In these systems, each data source exports a schema and each application defines a target schema representing its needs. The way instances of the target schema are derived from the sources is described through mappings. Generating such mappings is a difficult task, especially when the schemas are semi-structured. In this paper, we propose an approach for mapping generation in an XML context; the basic idea is to decompose the target schema into subtrees and to find mappings, called partial mappings, for each of them; the mappings for the whole target schema are then produced by combining the partial mappings and checking that the structure of the target schema is preserved. We also present a tool supporting our approach and some experimental results.

Zoubida Kedad, Xiaohui Xue

Petri Nets and Process Management

Colored Petri Nets to Verify Extended Event-Driven Process Chains

Business processes are becoming more and more complex and at the same time their correctness is becoming a critical issue: The costs of errors in business information systems are growing due to the growing scale of their application and the growing degree of automation. In this paper we consider Extended Event-driven Process Chains (eEPCs), a language which is widely used for modeling business processes, documenting industrial reference models and designing workflows. We describe how to translate eEPCs into timed colored Petri nets in order to verify processes given by eEPCs with the CPN Tools.

Kees van Hee, Olivia Oanea, Natalia Sidorova
Web Process Dynamic Stepped Extension: Pi-Calculus-Based Model and Inference Experiments

Web Processes combine traditional workflow management with Web Services technology. A key challenge in supporting dynamic composition of Web Processes is to resolve the conflicts between process deployment and process execution caused by their inner dependencies. To address this, we have presented a dynamic extension pattern, termed the Web Process Dynamic Stepped Extension (WPDSE). In this approach the process is divided into multiple sub-processes, and each sub-process is defined and deployed at a different time during process execution, based on the requirements. A rigorous mathematical modeling language, pi-calculus, is used to define the framework and extension units of the WPDSE. The primary benefit of using the pi-calculus is that both the correctness and the dynamic performance of the WPDSE model can be effectively verified and analyzed in a mathematically sound way. This is done using a pi-calculus inference prototype tool called the Interactive Inferring Tool (InferTool).

Li Zhang, Zhiwei Yu
Petri Net + Nested Relational Calculus = Dataflow

In this paper we propose a formal, graphical workflow language for dataflows, i.e., workflows where large amounts of complex data are manipulated and the structure of the manipulated data is reflected in the structure of the workflow. It is a common extension of Petri nets, which are responsible for the organization of the processing tasks, and the nested relational calculus, which is a database query language over complex objects and is responsible for handling collections of data items (in particular, for iteration) and for the typing system.

We demonstrate that dataflows constructed in a hierarchical manner, according to a set of refinement rules we propose, are sound: initiated with a single token (which may represent a complex scientific data collection) in the input node, they terminate with a single token in the output node (which represents the output data collection). In particular, they always process all of the input data, leave no “debris data” behind, and the output is always eventually computed.

Jan Hidders, Natalia Kwasnikowska, Jacek Sroka, Jerzy Tyszkiewicz, Jan Van den Bussche

Information Access and Integrity

On the Controlled Evolution of Access Rules in Cooperative Information Systems

For several reasons enterprises are frequently subject to organizational change. Respective adaptations may concern business processes, but also other components of an enterprise architecture. In particular, changes of organizational structures often become necessary.

The information about organizational entities and their relationships is maintained in organizational models. Therefore the quick and correct adaptation of these models is fundamental to adequately cope with changes. However, model changes alone are not sufficient to guarantee consistency. Since organizational models also provide the basis for defining access rules (e.g., actor assignments in workflow management systems or access rules in document–centered applications) this information has to be adapted accordingly (e.g., to avoid non-resolvable actor assignments). Current approaches do not adequately address this problem, which often leads to security gaps and delayed change adaptations.

In this paper we present a comprehensive approach for the controlled evolution of organizational models in cooperative information systems. First, we introduce a set of operators with well-defined semantics for defining and changing organizational models. Second, we present an advanced approach for the semi-automated adaptation of access rules when the underlying organizational model is changed. This includes a formal part concerning both the evolution of organizational models and the adaptation of related access rules.

Stefanie Rinderle, Manfred Reichert
Towards a Tolerance-Based Technique for Cooperative Answering of Fuzzy Queries Against Regular Databases

In this paper, we present a cooperative approach for avoiding empty answers to fuzzy relational queries. We propose a relaxation mechanism generating more tolerant queries. This mechanism rests on a transformation that applies a tolerance relation to the fuzzy predicates contained in the query. A particular tolerance relation, which can be conveniently modeled in terms of a parameterized proximity relation, is discussed. The modified fuzzy predicate is obtained by a simple arithmetic operation on fuzzy numbers. We show that this proximity relation can be defined in a relative or in an absolute way. In each case, the main features of the resulting weakening mechanism are investigated. We also show that the limits of the transformation, which guarantee that the weakened query is not semantically too far from the original one, can be handled in a rigorous, non-empirical way without requiring any additional information from the user. Lastly, an example is considered to illustrate our proposal.

Patrick Bosc, Allel Hadjali, Olivier Pivert
Filter Merging for Efficient Information Dissemination

In this paper we present a generic formal framework for filter merging in content-based routers. The proposed mechanism is independent of the filtering language and routing data structure used. We assume that the routing structure computes the minimal cover set. The mechanism supports merging of filters from local clients, hierarchical routing, and peer-to-peer routing. It is also transparent and does not require modifications to other routers in the distributed system to achieve benefits. In addition to content-based routers, the system may also be used in firewalls and auditing gateways. We present and analyze experimental results for the system.
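The minimal cover set mentioned above rests on a covering relation between filters. The paper's framework is language-independent; as a hedged illustration only, the sketch below instantiates it for simple conjunctive attribute-interval filters: filter f covers filter g if every event matched by g is also matched by f, and covered filters can be dropped from the routing table.

```python
def covers(f, g):
    """f, g: dicts mapping attribute -> (lo, hi) interval constraint.
    f covers g iff, for every attribute f constrains, g constrains it too
    and g's interval lies inside f's (so any event matching g matches f)."""
    return all(a in g and f[a][0] <= g[a][0] and g[a][1] <= f[a][1]
               for a in f)

def minimal_cover(filters):
    """Keep only filters not covered by another, distinct filter."""
    return [f for i, f in enumerate(filters)
            if not any(j != i and f != g and covers(g, f)
                       for j, g in enumerate(filters))]

subscriptions = [
    {"price": (0, 100)},                     # broad subscription
    {"price": (10, 50)},                     # covered by the first
    {"price": (10, 50), "volume": (0, 10)},  # also covered by the first
    {"volume": (5, 20)},                     # independent, must be kept
]
print(minimal_cover(subscriptions))
```

Only the broad price filter and the independent volume filter survive; the two covered subscriptions need not be forwarded to neighboring routers, which is the source of the memory and traffic savings the paper quantifies.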

Sasu Tarkoma, Jaakko Kangasharju

Heterogeneity

Don’t Mind Your Vocabulary: Data Sharing Across Heterogeneous Peers

The strong dynamics of peer-to-peer networks, coupled with the diversity of peer vocabularies, makes query processing in peer database systems a very challenging task. In this paper, we propose a framework for translating expressive relational queries across heterogeneous peer databases. Our framework avoids an integrated global schema or centralized structures common to the involved peers. The cornerstone of our approach is the use of both syntax- and instance-level schema mappings that each peer constructs and shares with other peers. Based on this user-provided mapping information, our algorithm applies generic translation rules to translate SQL queries. Our approach supports both query translation and propagation among the peers, preserving the autonomy of individual peers. The proposal combines both syntax- and instance-level mappings into a more general framework for query translation across heterogeneous boundaries. We have developed a prototype as a query service layer wrapped around a basic service providing heterogeneity management. The prototype has been evaluated on a small peer-to-peer network to demonstrate the viability of the approach.

Md. Mehedi Masud, Iluju Kiringa, Anastasios Kementsietsidis
On the Usage of Global Document Occurrences in Peer-to-Peer Information Systems

There exist a number of approaches for query processing in Peer-to-Peer information systems that efficiently retrieve relevant information from distributed peers. However, very few of them take into consideration the overlap between peers: as the most popular resources (e.g., documents or files) are often present at most of the peers, a large fraction of the documents eventually received by the query initiator are duplicates. We develop a technique based on the notion of global document occurrences (GDO) that, when processing a query, penalizes frequent documents increasingly as more and more peers contribute their local results. We argue that the additional effort to create and maintain the GDO information is reasonably low, as the necessary information can be piggybacked onto the existing communication. Early experiments indicate that our approach significantly decreases the number of peers that have to be involved in a query to reach a certain level of recall and, thus, decreases user-perceived latency and the wastage of network resources.
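The penalization idea can be sketched as follows. The function names and the exact discount formula are assumptions for illustration, not the paper's model: a document's local relevance score is discounted by its global occurrence count, and the discount grows as more peers have already contributed results.

```python
def merge_results(peer_results, gdo, peers_seen):
    """peer_results: doc -> local relevance score from one peer.
    gdo: doc -> number of peers known to hold the document.
    The penalty grows with both the document's GDO and the number of
    peers already consulted, demoting ubiquitous duplicates."""
    merged = {}
    for doc, score in peer_results.items():
        penalty = 1.0 / (1.0 + gdo.get(doc, 0) * peers_seen)
        merged[doc] = score * penalty
    return merged

# Hypothetical data: a document held by 90 peers vs. one held by 2.
gdo = {"popular.pdf": 90, "rare.pdf": 2}
peer1 = {"popular.pdf": 0.9, "rare.pdf": 0.8}
print(merge_results(peer1, gdo, peers_seen=3))
```

With three peers already consulted, the rare document outranks the popular one despite its lower local score, which is exactly the effect the abstract describes: fewer peers need to be contacted before new (non-duplicate) results appear.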

Odysseas Papapetrou, Sebastian Michel, Matthias Bender, Gerhard Weikum
An Approach for Clustering Semantically Heterogeneous XML Schemas

In this paper we illustrate an approach for clustering semantically heterogeneous XML Schemas. The proposed approach is driven mainly by the semantics of the involved Schemas, defined by means of the interschema properties existing among the concepts represented therein. An important feature of our approach is its capability to be integrated with almost all clustering algorithms already proposed in the literature.

Pasquale De Meo, Giovanni Quattrone, Giorgio Terracina, Domenico Ursino

Semantics

Semantic Schema Matching

We view match as an operator that takes two graph-like structures (e.g., XML schemas) and produces a mapping between the nodes of these graphs that correspond semantically to each other. Semantic schema matching is based on two ideas: (i) we discover mappings by computing semantic relations (e.g., equivalence, more general); (ii) we determine semantic relations by analyzing the meaning (concepts, not labels) which is codified in the elements and the structures of schemas. In this paper we present basic and optimized algorithms for semantic schema matching, and we discuss their implementation within the S-Match system. We also validate the approach and evaluate S-Match against three state-of-the-art matching systems. The results look promising, in particular concerning quality and performance.

Fausto Giunchiglia, Pavel Shvaiko, Mikalai Yatskevich
Unified Semantics for Event Correlation over Time and Space in Hybrid Network Environments

The recent evolution of ubiquitous computing has brought with it a dramatic increase in the event monitoring capabilities of wireless devices and sensors. Such systems require new, more sophisticated event correlation over time and space. This new paradigm implies composition of events in heterogeneous network environments, where network and resource conditions vary. Event correlation will be a multi-step operation from event sources to final subscribers, combining information collected by wireless devices into higher-level information or knowledge. Most extant approaches to defining event correlation lack a formal mechanism for establishing complex temporal and spatial relationships among correlated events. Here, we focus on two subjects. First, we define generic composite event semantics, which extend traditional event composition with data aggregation in wireless sensor networks (WSNs). This work bridges data aggregation in WSNs with event correlation services over distributed systems. Second, we introduce interval-based semantics for event detection, precisely defining complex timing constraints among correlated event instances.

Eiko Yoneki, Jean Bacon
Semantic-Based Matching and Personalization in FWEB, a Publish/Subscribe-Based Web Infrastructure

The web is a vast graph built of hundreds of millions of web pages and over a billion links. Directly or indirectly, each of these links has been written by hand, and, despite the amount of duplication among links, is the result of an enormous effort by web authors.

One has to ask if it is possible that some of this labour can be automated. That is, can we automate some of the effort required to create and maintain links between pages? In recent work, we described FWEB, a system capable of automating link creation using publish/subscribe communication among a peer-to-peer network of web servers. This allowed web servers to match information about link requirements and page content in circumstances where we specify an anchor in terms of what content we want to link to, rather than a specific URL. When such a match is successful, a link between the pages is automatically created.

However, this system relied on simple keyword-based descriptions, and has several drawbacks, verified by experiment. In this paper, we show how the use of shared ontologies can improve the process of matching the content requirements for links and the descriptions of web pages. We report on our experience of using FWEB and, in addition, show how the capabilities of the FWEB architecture can be extended to include link personalization and explicit backlinks.

Simon Courtenage, Steven Williams

Querying and Content Delivery

A Cooperative Model for Wide Area Content Delivery Applications

Content delivery is a major task in wide area environments, such as the Web. Latency, the time that elapses from when the user sends a request until the server’s response is received, is a major concern in many applications. Therefore, minimizing latency is an obvious target in wide area environments, and one of the more common solutions in practice is the use of client-side caching. Collaborative caching is used to further enhance content delivery, but unfortunately, it often fails to provide significant improvements. In this work, we explore the limitations of collaborative caching, analyze the existing literature, and suggest a cooperative model for which cache content sharing shows more promise. We propose a novel approach, based on the observation that clients can specify their tolerance towards content obsolescence using a simple-to-use method, and servers can supply content update patterns. The cache uses a cost model to determine which of the following three alternatives is most promising: delivery of a local copy, delivery of a copy from a cooperating cache, or delivery of a fresh copy from the origin server. Our experiments reveal that using the proposed model, it becomes possible to meet client needs with reduced latency. We also show the benefit of cache cooperation in increasing hit ratios and thus reducing latency further. Specifically, we show that cache collaboration is particularly useful for users with high demands regarding both latency and consistency.

Rami Rashkovits, Avigdor Gal
A Data Stream Publish/Subscribe Architecture with Self-adapting Queries

In data stream applications, streams typically arise from a geographically distributed collection of producers and may be queried by consumers, which may be distributed as well. In such a setting, a query can be seen as a subscription asking to be informed of all tuples that satisfy a specific condition. We propose to support the publishing and querying of distributed data streams by a publish/subscribe architecture.

To enable such a system to scale to a large number of producers and consumers requires the introduction of republishers, which collect together data streams and make the merged stream available. If republishers consume from other republishers, a hierarchy of republishers results.

We present a formalism that allows distributed data streams, published by independent stream producers, to be integrated as views on a mediated schema. We use the formalism to develop methods to adapt query plans to changes in the set of available data streams and allow consumers to dynamically change which streams they subscribe to.
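The "query as subscription" view can be sketched minimally. The class and attribute names below are illustrative, not from the paper: a republisher merges incoming producer streams and forwards each tuple only to consumers whose registered condition it satisfies.

```python
class Republisher:
    """Merge point for producer streams; forwards tuples to matching queries."""

    def __init__(self):
        self.subscriptions = []  # list of (condition, callback) pairs

    def subscribe(self, condition, callback):
        """Register a consumer query as a predicate over tuples."""
        self.subscriptions.append((condition, callback))

    def publish(self, tuple_):
        """Check an incoming tuple against every subscription."""
        for condition, callback in self.subscriptions:
            if condition(tuple_):
                callback(tuple_)

hub = Republisher()
alerts = []
# Consumer query: "inform me of all readings with temp > 30".
hub.subscribe(lambda t: t["temp"] > 30.0, alerts.append)

# Two producer streams merged at the republisher.
for reading in ({"sensor": "s1", "temp": 21.5},
                {"sensor": "s2", "temp": 35.2}):
    hub.publish(reading)
print(alerts)
```

A hierarchy arises naturally: a `Republisher` can itself subscribe to another republisher with a broad condition and republish the merged stream, which is the scaling mechanism the abstract describes.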

Alasdair J. G. Gray, Werner Nutt
Containment of Conjunctive Queries with Arithmetic Expressions

We study the problem of query containment for conjunctive queries with arithmetic constraints (QWAE). Such queries arise naturally in conventional database applications, information integration, and cooperative information systems. Given two such queries Q1 and Q2, we propose an algorithm that decides the containment Q2 ⊑ Q1. The proposed algorithm returns a QWAE Q2′ obtained by rewriting Q2 such that Q2′ ⊑ Q2. This provides partial answers to the QWAE Q1, which would otherwise be discarded by existing standard or extended techniques for query containment.
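The intuition behind the rewriting can be shown on a drastically simplified case, which is an illustration and not the paper's algorithm: assume both queries share the same relational part and constrain one variable with a range. Then Q2 ⊑ Q1 holds when Q2's range lies inside Q1's, and intersecting the two ranges yields a tightened query Q2′ (with Q2′ ⊑ Q2) whose answers are guaranteed to also be answers to Q1.

```python
def contained(q2, q1):
    """Each query is a (lo, hi) range constraint on the same variable.
    Q2 is contained in Q1 iff Q2's range lies inside Q1's."""
    return q1[0] <= q2[0] and q2[1] <= q1[1]

def rewrite(q2, q1):
    """Tighten Q2 to the part of its range guaranteed to satisfy Q1;
    returns None if the ranges are disjoint (no partial answers)."""
    lo, hi = max(q2[0], q1[0]), min(q2[1], q1[1])
    return (lo, hi) if lo <= hi else None

q1 = (0, 50)   # Q1: 0 <= x <= 50
q2 = (30, 80)  # Q2: 30 <= x <= 80, so Q2 is not contained in Q1
print(contained(q2, q1))  # False
print(rewrite(q2, q1))    # (30, 50): answers in this range also satisfy Q1
```

The paper's setting is far more general (full conjunctive queries, arbitrary arithmetic expressions), but the payoff is the same: instead of rejecting the containment outright, the rewritten query salvages partial answers.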

Ali Kiani, Nematollaah Shiri

Web Services, Agents

Multiagent Negotiation for Fair and Unbiased Resource Allocation

This paper proposes a novel solution for the n-agent cake-cutting (resource allocation) problem. We propose a negotiation protocol for dividing a resource among n agents and then provide an algorithm for allotting portions of the resource. We prove that this protocol can enable distribution of the resource among n agents in a fair manner. The protocol enables agents to choose portions based on their internal utility function, which they do not have to reveal. In addition to being fair, the protocol has desirable features such as being unbiased and verifiable while allocating resources. In the case where the resource is two-dimensional (a circular cake) and uniform, it is shown that each agent can get close to 1/n of the whole resource.

Karthik Iyer, Michael Huhns
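As background for the fairness guarantee mentioned above, the classical baseline for n-agent proportional division is the Banach–Knaster "last diminisher" protocol, in which each agent may trim a proposed piece down to 1/n of its own (private) valuation and the last trimmer takes the piece. The sketch below implements that classical protocol, not the paper's negotiation protocol:

```python
# Classical Banach-Knaster "last diminisher" protocol on the interval [0, 1].
# This is a standard baseline for proportional fair division, NOT the
# negotiation protocol proposed in the paper. Each agent i supplies a private
# utility u_i(a, b) = value of piece [a, b] and never reveals it to others.

def last_diminisher(utilities):
    """utilities: list of functions u(a, b) -> nonnegative piece value.
    Returns {agent: (a, b)}; each agent values its piece at >= u(0,1)/n."""
    agents = list(range(len(utilities)))
    left, pieces = 0.0, {}
    while len(agents) > 1:
        cut, last = 1.0, None
        for i in agents:
            fair = utilities[i](0.0, 1.0) / len(utilities)
            # Binary search for the smallest right end worth exactly `fair`.
            lo, hi = left, cut
            for _ in range(50):
                mid = (lo + hi) / 2
                if utilities[i](left, mid) >= fair:
                    hi = mid
                else:
                    lo = mid
            if hi < cut - 1e-12 or last is None:
                cut, last = hi, i          # agent i trims (or opens) the piece
        pieces[last] = (left, cut)         # last diminisher takes the piece
        agents.remove(last)
        left = cut
    pieces[agents[0]] = (left, 1.0)        # final agent takes the remainder
    return pieces

# Two agents with a uniform valuation: each should get half of [0, 1].
uniform = lambda a, b: b - a
alloc = last_diminisher([uniform, uniform])
```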
QoS-Based Service Selection and Ranking with Trust and Reputation Management

QoS-based service selection mechanisms will play an essential role in service-oriented architectures, as e-Business applications want to use services that most accurately meet their requirements. Standard approaches in this field typically are based on the prediction of services’ performance from the quality advertised by providers as well as from feedback of users on the actual levels of QoS delivered to them. The key issue in this setting is to detect and deal with false ratings by dishonest providers and users, which has only received limited attention so far. In this paper, we present a new QoS-based semantic web service selection and ranking solution with the application of a trust and reputation management method to address this problem. We will give a formal description of our approach and validate it with experiments which demonstrate that our solution yields high-quality results under various realistic cheating behaviors.

Le-Hung Vu, Manfred Hauswirth, Karl Aberer
An Integrated Alerting Service for Open Digital Libraries: Design and Implementation

Alerting services can provide a valuable support for information seeking in Digital Libraries (DL). Several systems have been proposed. Most of them have serious drawbacks, such as limited expressiveness, limited coverage, and poor support of federated and distributed collections.

In this paper, we present the detailed design and implementation for a comprehensive alerting service for digital libraries. We demonstrate typical user interactions with the system. Our alerting service is open to other event sources, supports a rich variety of event types, and works on distributed as well as on federated DL collections.

Annika Hinze, Andrea Schweer, George Buchanan

Security, Integrity and Consistency

Workflow Data Guards

Workflow management systems (WfMSs) frequently use data to coordinate the execution of workflow instances. A WfMS evaluates conditions defined on data to make control flow decisions, i.e., selecting the next activity or deciding on an actor. However, data, both within and outside of a running workflow instance, may change dynamically. Modifications of data needed for past control flow decisions may invalidate these decisions. We analyze the desired synchronization policies and propose a mechanism called data guards to selectively guarantee that significant changes in data are recognized and handled by the data management system, ensuring the correctness of workflow execution in the face of asynchronous updates.

Johann Eder, Marek Lehmann
Consistency Between e 3 -value Models and Activity Diagrams in a Multi-perspective Development Method

Multi-perspective approaches to the analysis and design of business information systems are used to manage the complexity of the development process. A perspective contains a partial specification of the system from a particular stakeholder's standpoint. This separation of concerns leads to potential inconsistencies between specifications from different perspectives, resulting in non-implementable systems. In this paper, a consistency relationship between the economic value and business processes perspectives of a design framework for networked businesses is proposed, based on an equivalence of a common semantic model.

Zlatko Zlatev, Andreas Wombacher
Maintaining Global Integrity in Federated Relational Databases Using Interactive Component Systems

The maintenance of global integrity constraints in database federations is still a challenge, since traditional integrity constraint management techniques cannot be applied to such a distributed management of data. In this paper we present a concept of global integrity maintenance that migrates the concepts of active database systems to a collection of interoperable relational databases. We introduce Active Component Systems, which are able to interact with each other using direct connections established from within their database management systems. Global integrity constraints are decomposed into sets of partial integrity constraints, which are enforced directly by the affected Active Component Systems without the need for a global component.

Christopher Popfinger, Stefan Conrad

Chain and Collaboration Management

RFID Data Management and RFID Information Value Chain Support with RFID Middleware Platform Implementation

Radio Frequency Identification (RFID) middleware is a new breed of software system that facilitates data communication between automatic identification equipment, such as RFID readers, and enterprise applications. It provides a distributed environment to process the data coming from tags, filter it, and then deliver it to a variety of backend applications via various communication protocols, including web services. In this paper, we focus on the information flow that converts raw RFID data into useful information, which may even lead to automated business process execution and, further, to knowledge that supports decision making; we call this flow the 'RFID Information Value Chain (RFID IVC)'. We examine the elements and associated activities of the RFID IVC and also introduce the RFID middleware ecosystem, which not only provides a seamless environment spanning from the edge of the enterprise network to the enterprise systems, but also supports the activities arising along the RFID IVC. The RFID middleware ecosystem consists of the RFID middleware, a rule engine that generates business semantic events, and an orchestration engine that coordinates the business processes invoked by RFID tag data capture events. Moreover, we introduce the implementations of each system residing in the RFID middleware ecosystem and demonstrate the relationship between the RFID middleware ecosystem and the RFID IVC.

Taesu Cheong, Youngil Kim
A Collaborative Table Editing Technique Based on Transparent Adaptation

Tables are an efficient means to organize information. Collaboration is important for table editing. In this paper, we report an innovative technique, called CoTable, for supporting collaborative table editing in both table-centric and word-centric complex documents. This collaborative table editing technique is based on the Transparent Adaptation approach and hence applicable to commercial off-the-shelf single-user editing applications. Key technical elements of the CoTable technique include: (1) techniques for adapting a variety of table-related data address models, accessible from the single-user Application Programming Interface (API), to that of the underlying Operational Transformation (OT) technique; and (2) techniques for translating user-level table editing operations into the primitive operations supported by OT. The CoTable technique has been implemented in the CoWord system, and CoWord-specific table processing issues and techniques are discussed in detail as well.

Steven Xia, David Sun, Chengzheng Sun, David Chen
Inter-enterprise Collaboration Management in Dynamic Business Networks

The agility to collaborate in several business networks has become essential for the success of enterprises. The dynamic nature of collaborations and the autonomy of enterprises create new challenges for the operational computing environment. This paper describes the web-Pilarcos B2B middleware solutions for managing the life-cycle of dynamic business networks in an inter-enterprise environment. The use of B2B middleware moves the management challenges away from the individual enterprise applications to more global infrastructure services, and provides a level of automation in their establishment and maintenance. The middleware services aim for a rigorous level of transparent interoperability support, including awareness of collaboration processes and collaboration-level adaptation to breaches in operation.

Lea Kutvonen, Janne Metso, Toni Ruokolainen
CoopIS 2005 PC Co-chairs’ Message

This volume contains the proceedings of the Thirteenth International Conference on Cooperative Information Systems, CoopIS 2005, held in Agia Napa, Cyprus, November 2 – 4, 2005.

CoopIS is the premier conference for researchers and practitioners concerned with the vital task of providing easy, flexible, and intuitive access and organization of information systems for every type of need. This conference draws on several research areas, including CSCW, Internet data management, electronic commerce, human-computer interaction, workflow management, web services, agent technologies, and software architectures.

These proceedings contain 33 original papers out of 137 submissions. Papers went through a rigorous reviewing process (3 reviewers per paper) and were sometimes discussed by email. The papers cover the fields of Workflows, Web Services, Peer-to-Peer interaction, Semantics, Querying, Security, Mining, Clustering, and Integrity. We wish to thank the authors for their excellent papers, the Program Committee members, and the referees for their effort. They all made the success of CoopIS 2005 possible.

Mohand-Said Hacid, John Mylopoulos, Barbara Pernici

Distributed Objects and Applications (DOA)2005 International Conference

Web Services and Service-Oriented Architectures

Developing a Web Service for Distributed Persistent Objects in the Context of an XML Database Programming Language

The development of data centric applications should be performed in a high-level and transparent way. In particular, aspects concerning the persistency and distribution of business objects should not influence or restrict the application design. Furthermore, applications should be platform independent and should be able to exchange data independently of their programming language of origin.

There are several approaches to an architecture for distributed objects; CORBA is one example. JDO and EJB provide specifications for distributed persistent objects, offering transparent persistency to a certain degree. Nevertheless, the programmer is still forced to write explicit code for making objects persistent or for connecting to distributed objects.

In contrast to existing approaches, the $\mbox{\textbf{XOBE}}_{\mbox{\scriptsize{DBPL}}}$ project develops a database programming language with transparency with respect to types, and persistency and distribution with respect to objects. Application development is performed on a high-level business object level only. A web service for realizing distributed persistency and data exchange is internal and completely integrated into the $\mbox{\textbf{XOBE}}_{\mbox{\scriptsize{DBPL}}}$ runtime environment. Although the $\mbox{\textbf{XOBE}}_{\mbox{\scriptsize{DBPL}}}$ language is an extension of the Java programming language, the introduced concepts can easily be transferred to other object-oriented programming languages.

Henrike Schuhart, Dominik Pietzsch, Volker Linnemann
Comparing Service-Oriented and Distributed Object Architectures

Service-Oriented Architectures have been proposed as a replacement for the more established Distributed Object Architectures as a way of developing loosely-coupled distributed systems. While superficially similar, we argue that the two approaches exhibit a number of subtle differences that, taken together, lead to significant differences in terms of their large-scale software engineering properties such as the granularity of service, ease of composition and differentiation – properties that have a significant impact on the design and evolution of enterprise-scale systems. We further argue that some features of distributed objects are actually crucial to the integration tasks targeted by service-oriented architectures.

Seán Baker, Simon Dobson
QoS-Aware Composition of Web Services: An Evaluation of Selection Algorithms

A composition arranges available services resulting in a defined flow of executions. Before the composition is carried out, a discovery service identifies candidate services. Then, a selection process chooses the optimal candidates. This paper discusses how the selection can consider different Quality-of-Service (QoS) categories as selection criteria to select the most suitable candidates for the composition. If more than one category is used for optimisation, a multi-dimensional optimisation problem arises which results in an exponential computation effort for computing an optimal solution. We explain the problem and point out similarities to other combinatorial problems – the knapsack problem and the resource constraint project scheduling problem (RCPSP). Based on this discussion, we describe possible heuristics for these problems and evaluate their efficiency when used for web service candidate selection.

Michael C. Jaeger, Gero Mühl, Sebastian Golze
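The selection problem described in the abstract above is knapsack-like: each task has several candidate services, and choosing the best candidates under a global QoS budget is a multi-dimensional combinatorial problem. A common family of heuristics for such problems ranks candidates by value density; the sketch below illustrates that idea under a single cost budget (the scoring scheme and names are our own illustration, not one of the heuristics the paper evaluates):

```python
# Greedy density heuristic for QoS-aware candidate selection under a global
# cost budget -- an illustration of the knapsack-style heuristics discussed
# in the abstract, not the paper's actual algorithms.

def select_candidates(tasks, budget):
    """tasks: list of candidate lists; each candidate is (name, utility, cost).
    Greedily picks, per task, the feasible candidate with the highest
    utility-per-cost density. Returns (names, total_cost) or None."""
    selection, total = [], 0.0
    for candidates in tasks:
        ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
        for name, utility, cost in ranked:
            if total + cost <= budget:        # respect the global budget
                selection.append(name)
                total += cost
                break
        else:
            return None   # no candidate for this task fits the remaining budget
    return selection, total

# Two tasks in a composition, two candidate services each.
tasks = [
    [("fast-but-pricey", 0.9, 5.0), ("cheap", 0.6, 1.0)],
    [("premium", 0.8, 4.0), ("basic", 0.5, 2.0)],
]
result = select_candidates(tasks, budget=4.0)  # (["cheap", "basic"], 3.0)
```

An exact solution would require exponential effort in the worst case, which is exactly why the paper turns to heuristics such as those for the knapsack problem and the RCPSP.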

Multicast and Fault Tolerance

Extending the UMIOP Specification for Reliable Multicast in CORBA

OMG has published an unreliable multicast specification for distributed applications developed in CORBA (UMIOP). This mechanism can be implemented on top of IP Multicast, a best-effort protocol that provides no guarantees about message delivery. However, many fault-tolerant or groupware applications demand more restrictive agreement and ordering guarantees (for instance, reliable multicast with FIFO, causal, or total ordering) from the available group communication support. OMG has not yet provided any specification supporting those requirements, and this paper presents an important contribution in this direction. We propose ReMIOP, an extension of the UMIOP/OMG protocol, for the conception of a reliable multicast mechanism in CORBA middleware. Performance measurements comparing ReMIOP, UMIOP, and UDP sockets for IP multicast communication are presented to show the costs of adding reliable and unreliable multicast at the middleware level.

Alysson Neves Bessani, Joni da Silva Fraga, Lau Cheuk Lung
Integrating the ROMIOP and ETF Specifications for Atomic Multicast in CORBA

OMG published a draft specification for a reliable ordered multicast inter-ORB protocol (ROMIOP) to be used by distributed applications developed in CORBA. This specification was created to meet the demand of applications that need stronger reliability and ordering guarantees, since the existing specification (UMIOP) lacks these features. This paper presents how ROMIOP was implemented, as well as the modifications that were made to the specification to make it possible to implement it according to the ETF (Extensible Transport Framework) specification. Performance measurements comparing ROMIOP with other protocols, such as UMIOP, show its characteristics and its cost.

Daniel Borusch, Lau Cheuk Lung, Alysson Neves Bessani, Joni da Silva Fraga
The Design of Real-Time Fault Detectors

This paper presents real-time fault detectors: their design, implementation, and scheduling under a Fixed Priority/High Priority First policy. Two types of real-time detectors are described: primary detectors and secondary (meta) detectors. A primary detector is designed for the detection of simple faults and failures (Worst Case Execution Time, Worst Case Response Time, Latest Response Time, and Activation Overrun events). These events occur when a task uses more resources than have been catered for. The secondary type of detector, called a meta detector, is used to detect more complicated events called meta-events. Meta-events are based on a set of primary detectors and their interrelations. The Real-Time Specification Language (RTSL) is used for the description of meta-events, including relations between primary events such as precedence (THEN) and other logical relations (AND, OR, TIMES). Primary and meta fault detectors must be admitted to the system as periodic or sporadic real-time threads. We present a method for the feasibility analysis of each detector type. These principles are integrated within a Minimum Real-Time CORBA prototype called RT-SORBET.

Serge Midonnet

Communication Services (Was Messaging and Publish/Subscribe)

A CORBA Bidirectional-Event Service for Video and Multimedia Applications

The development of multimedia applications using the CORBA A/V Streaming architecture suffers from a complex software design. This is not a minor drawback in a middleware architecture intended to simplify the software development process. One source of complexity is the absence of a flexible signaling mechanism to communicate application-dependent control information. As a consequence, developed applications must implement parallel communication processes between end points, which obscures the design. Another shortcoming identified is the rigid flow establishment process, which does not allow the selection of an asynchronous connection setup. In this paper we present an extension of the A/V Streaming service which addresses these issues. The proposed service provides access to the applications through an integrated bidirectional event-based signaling mechanism. The A/V Streaming extension offers this functionality by means of a CORBA Bidirectional Event Service, also presented in this paper. The A/V Streaming extension is implemented and comparatively evaluated against the original service in the CORBA ACE/TAO distribution. Benchmark results validate our proposal and encourage its practical utilization.

Felipe Garcia-Sanchez, Antonio-Javier Garcia-Sanchez, P. Pavon-Mariño, J. Garcia-Haro
GREEN: A Configurable and Re-configurable Publish-Subscribe Middleware for Pervasive Computing

In this paper we present GREEN, a highly configurable and re-configurable publish-subscribe middleware to support pervasive computing applications. Such applications must embrace both heterogeneous networks and heterogeneous devices: from embedded devices in wireless ad-hoc networks to high-power computers in the Internet. Publish-subscribe is a paradigm well suited to applications in this domain. However, present-day publish-subscribe middleware does not adequately address the configurability and re-configurability requirements of such heterogeneous and changing environments. As one prime example, current platforms cannot be configured to operate in diverse network types (e.g. infrastructure-based fixed networks and mobile ad-hoc networks). Hence, we present the design and implementation of GREEN (Generic & Re-configurable EvEnt Notification service), a next-generation publish-subscribe middleware that addresses this particular deficiency. We demonstrate the configurability and re-configurability of GREEN through a worked example consisting of a vehicular ad-hoc network for safe driving coupled with a fixed wide-area network for vehicular traffic monitoring. Finally, we evaluate the performance of this highly dynamic middleware under different environmental conditions.

Thirunavukkarasu Sivaharan, Gordon Blair, Geoff Coulson
Transparency and Asynchronous Method Invocation

This article focuses on transparency in the context of asynchronous method invocation. It describes two solutions available to provide full transparency, where asynchronism is entirely masked from the developer. The main contribution of this paper is to clearly present the drawbacks of this approach: exception handling and developer awareness are two problems inherent to full transparency that make it at best hard to use, at worst useless. This paper defends explicit asynchronous method invocation and proposes semi-transparency: almost all the complexity of asynchronism is masked from the developer except the asynchronism itself.

Pierre Vignéras

Techniques for Application Hosting

COROB: A Controlled Resource Borrowing Framework for Overload Handling in Cluster-Based Service Hosting Center

The paper proposes a resource framework, COROB, for overload handling in a component application hosting center through dynamic resource borrowing among hosted applications. The main idea is to utilize the fine-grained idle server resources of other applications to absorb surging workload, while keeping the resource borrowing under control so as not to violate the SLA of the donor application. The contribution of the paper is two-fold: (1) a queuing analysis-based resource borrowing algorithm is proposed for overloaded applications to acquire as exact an amount of resource as possible; (2) an adaptive threshold-driven algorithm is presented to drive the overload handling, with threshold values adaptively tuned according to the changing workload. Empirical data is presented to demonstrate the efficacy of COROB for overload handling and response time guarantees in a prototype service hosting cluster environment.

Yufeng Wang, Huaimin Wang, Dianxi Shi, Bixin Liu
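The threshold-driven borrowing idea in the abstract above can be sketched in a few lines: when an application's utilization crosses a high-water mark, move capacity from the least-loaded donor, but never below the donor's SLA reservation. The policy and all names below are our own illustration, not COROB's actual algorithms (which are queuing-analysis based):

```python
# Sketch of threshold-driven resource borrowing with SLA protection.
# Illustrative only: COROB's real algorithms use queuing analysis and
# adaptively tuned thresholds, which this toy fixed-threshold loop omits.

def rebalance(apps, high=0.8):
    """apps: name -> {'alloc': servers, 'load': demand, 'sla_min': reserved}.
    Moves whole server units from lightly loaded donors to overloaded apps,
    never leaving a donor below its SLA reservation or above the threshold."""
    for name, a in apps.items():
        while a["load"] / a["alloc"] > high:             # app overloaded?
            donors = [d for d in apps.values()
                      if d is not a
                      and d["alloc"] - 1 >= d["sla_min"]          # SLA safe
                      and d["load"] / (d["alloc"] - 1) <= high]   # donor safe
            if not donors:
                break                          # nothing can be borrowed safely
            donor = min(donors, key=lambda d: d["load"] / d["alloc"])
            donor["alloc"] -= 1                # controlled borrowing: one unit
            a["alloc"] += 1
    return {n: a["alloc"] for n, a in apps.items()}

apps = {
    "web":   {"alloc": 4, "load": 6.0, "sla_min": 2},  # surging workload
    "batch": {"alloc": 6, "load": 1.0, "sla_min": 2},  # mostly idle donor
}
allocs = rebalance(apps)
```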
Accessing X Applications over the World-Wide Web

The X Protocol, an asynchronous network protocol, was developed at MIT to provide a network-transparent graphical user interface, primarily for the UNIX operating system. Current Open Source implementations of the X server require specific software to be downloaded and installed on the end-user's workstation. To avoid this and other issues involved in the conventional X setup, this paper proposes a new solution: a protocol bridge that translates the conventional X Protocol to an HTTP-based one. This approach makes an X application accessible from any web browser. With the goal of leveraging the enormous browser install base, the web-based X server supports multiple web browsers and has been tested with a number of X clients.

Arno Puder, Siddharth Desai
Exploiting Application Workload Characteristics to Accurately Estimate Replica Server Response Time

Our proposition, presented in this paper, consists of a function estimating the response time and a method for applying it to different application workloads. The function combines the application demands for various resources (such as CPU, disk I/O, and network bandwidth) with the resource capabilities and availabilities on the replica servers. The main benefits of our approach include: simplicity and transparency, from the perspective of the clients, who do not have to specify the resource requirements themselves; estimation accuracy, by considering the application's real needs and the current degree of resource usage determined by concurrent applications; and flexibility, with respect to the precision with which the resource-concerned parameters are specified.

The experiments we conducted show two positive results. Firstly, our estimator provides a good approximation of the real response time obtained by measurements. Secondly, the ordering of the servers according to our estimation function values matches with high accuracy the ordering determined by the real response times.

Corina Ferdean, Mesaac Makpangou
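The estimator described above combines per-resource application demands with each replica's capabilities and current availability, then ranks servers by the estimated response time. A minimal sketch of that shape (the additive formula and all field names are our own illustration, not the paper's actual function):

```python
# Minimal sketch of a replica response-time estimator: each resource
# contributes its demand divided by the capacity currently left free on the
# server. The additive formula is illustrative, not the paper's function.

def estimate_response_time(demands, server):
    """demands: resource -> work units (e.g. CPU-seconds, MB of I/O).
    server:  resource -> (capacity, utilization in [0, 1))."""
    total = 0.0
    for resource, work in demands.items():
        capacity, utilization = server[resource]
        free = capacity * (1.0 - utilization)  # capacity left by concurrent apps
        total += work / free                   # time this resource costs us
    return total

# One application workload, two candidate replica servers.
demands = {"cpu": 2.0, "disk": 10.0, "net": 50.0}
servers = {
    "replica-a": {"cpu": (4.0, 0.5), "disk": (100.0, 0.0), "net": (100.0, 0.5)},
    "replica-b": {"cpu": (2.0, 0.0), "disk": (50.0, 0.5),  "net": (100.0, 0.0)},
}

# Rank replicas by estimated response time, as the paper's selection does.
ranked = sorted(servers, key=lambda s: estimate_response_time(demands, servers[s]))
```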

Mobility

Automatic Introduction of Mobility for Standard-Based Frameworks

The computerization of industrial design processes raises software engineering problems that are addressed by distributed component frameworks. But these frameworks are subject to antagonistic constraints between performance and reusability of the components. To take up this challenge, we study how mobile code technology enables the improvement of performance without harming the components' reusability. Our approach relies on a transparent, fully automatic introduction of mobility into programs. This transformation is a local optimization based on a static analysis, implemented within a compiler. An experimental study shows how the approach can help increase the efficiency of the framework, enabling the use of standards that, as of today, lack efficiency.

Grégory Haïk, Jean-Pierre Briot, Christian Queinnec
Empirical Evaluation of Dynamic Local Adaptation for Distributed Mobile Applications

Distributed mobile applications operate on devices with diverse capabilities, in heterogeneous environments, where parameters such as processor, memory and network utilisation, are constantly changing. In order to maintain efficiency in terms of performance and resource utilisation, such applications should be able to adapt to their environment. Therefore, this paper proposes and empirically evaluates a local adaptation strategy for mobile applications, with ‘local’ referring to a strategy that operates independently on each node in the distributed application. The strategy is based upon a series of formal adaptation models and a suite of mobile application metrics introduced by the authors in a recent paper. The experiments demonstrate the potential practical application of the local adaptation strategy using a number of distinct scenarios involving runtime changes in processor, memory and network utilisation. In order to maintain application efficiency in response to these changing operating conditions, the system reacts by rearranging the object topology of the application by dynamically moving objects between nodes.

Pablo Rossi, Caspar Ryan
Middleware for Distributed Context-Aware Systems

Context-aware systems represent extremely complex and heterogeneous distributed systems, composed of sensors, actuators, application components, and a variety of context processing components that manage the flow of context information between the sensors/actuators and applications. The need for middleware to seamlessly bind these components together is well recognised. Numerous attempts to build middleware or infrastructure for context-aware systems have been made, but these have provided only partial solutions; for instance, most have not adequately addressed issues such as mobility, fault tolerance or privacy. One of the goals of this paper is to provide an analysis of the requirements of a middleware for context-aware systems, drawing from both traditional distributed system goals and our experiences with developing context-aware applications. The paper also provides a critical review of several middleware solutions, followed by a comprehensive discussion of our own PACE middleware. Finally, it provides a comparison of our solution with the previous work, highlighting both the advantages of our middleware and important topics for future research.

Karen Henricksen, Jadwiga Indulska, Ted McFadden, Sasitharan Balasubramaniam
Timely Provisioning of Mobile Services in Critical Pervasive Environments

Timeliness in conventional real-time systems is addressed by employing well-known scheduling techniques that guarantee the execution of a number of tasks within certain deadlines. However, these classical scheduling techniques do not take into account basic features that characterize today’s critical pervasive computing environments.

In this paper, we revisit the issue of timeliness in the context of pervasive computing environments. We propose a middleware service that addresses the timely provisioning of services, while taking into account both the mobility of the entities that constitute pervasive computing environments and the existence of multiple alternative entities, providing semantically compatible services. Specifically, we model the overall behavior of mobile entities in terms of the entities’ lifetime. The lifetime of an entity is the duration for which the entity is present and available to other entities. Given a new request coming from a mobile client and a number of semantically compatible mobile entities that can fulfill the request, one of them must be selected. The proposed service realizes three different policies that facilitate the selection. With respect to the first policy, the selection is realized solely on the basis of the client’s and the server’s lifetimes. The second policy additionally considers the load of each server towards selecting the one that guarantees to serve the new request within the lifetime of both the client and the server. The third policy further deals with periodic service requests.

Filippos Papadopoulos, Apostolos Zarras, Evaggelia Pitoura, Panos Vassiliadis
Mobility Management and Communication Support for Nomadic Applications

There is an increasing demand for communication services for nomadic environments, capable of providing applications with mobility management facilities and application-aware adaptation support. This paper proposes a novel mobility management and communication architecture specifically suited for nomadic environments, offering communication facilities and adaptation support by means of an API named NCSOCKS. The driving idea is to provide application and middleware developers of nomadic services with essential mobility-enabled communication support, while hiding network heterogeneity in terms of wireless technology and improving the availability of communication in spite of transient signal degradations. Transient signal degradations, due to device movement and/or shadowing, have the effect of increasing the handoff frequency. The proposed architecture integrates a novel mechanism to improve connection availability by reducing the number of unnecessary handover procedures. To evaluate the proposal, an approach based on the combined use of simulation and prototype-based measurements is adopted.

Marcello Cinque, Domenico Cotroneo, Stefano Russo
Platform-Independent Object Migration in CORBA

Object mobility is the basis for highly dynamic distributed applications. This paper presents the design and implementation of mobile objects on the basis of the CORBA standard. Our system is compatible with the CORBA Life Cycle Service specification and thus provides object migration between different language environments and computer systems. Unlike others, our Life Cycle Service implementation does not need vendor-specific extensions and relies only on standard CORBA features such as servant managers and value types. Our implementation is portable; objects can migrate even between different ORBs. It supports object developers with a simple programming model that defines the state of an object as a value type, provides coordination of concurrent threads in case of migration, and takes care of location-independent object addressing. Additionally, we seamlessly integrated our implementation with a dynamic code-loading service.

Rüdiger Kapitza, Holger Schmidt, Franz J. Hauck
DOA 2005 PC Co-chairs’ Message

Welcome to the Proceedings of the 2005 International Conference on Distributed Objects and Applications (DOA). Some of the world’s most important and critical software systems are based on distribution technologies. For example, distributed objects run critical systems in industries such as telecommunication, manufacturing, finance, insurance, and government. When a phone call is made or a financial transaction performed, chances are that distributed objects are acting in the background. Although existing distribution technologies, such as CORBA, DCOM and Java-based technologies have been widely successful, they are still evolving and serving as the basis for emerging technologies and standards, such as CORBA Components, J2EE, .NET, and Web Services. Regardless of the specifics of each approach, they all aim to provide openness, reliability, scalability, distribution transparency, security, ease of development, and support for heterogeneity between applications and platforms. Also, of utmost importance today is the ability to integrate distributed object systems with other technologies such as the web, multimedia systems, databases, message-oriented middleware, the Global Information Grid, and peer-to-peer systems. However, significant research and development continues to be required in all of these areas in order to continue to advance the state of the art and broaden the scope of the applicability of distribution technologies.

Ozalp Babaoglu, Arno Jacobsen, Joe Loyall
Backmatter
Metadata
Title: On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE
Edited by: Robert Meersman, Zahir Tari
Copyright year: 2005
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-540-32116-3
Print ISBN: 978-3-540-29736-9
DOI: https://doi.org/10.1007/11575771