
Table of Contents

Frontmatter

CoopIS 2007 International Conference (International Conference on Cooperative Information Systems)

Frontmatter

Keynote

The Internet Service Bus

Service-oriented architectures (SOA) increasingly form the basis for cooperative information systems, and many enterprises and application solutions have been practicing SOA for years. SOA is an architectural style that IT practitioners can layer on many technologies: messaging, RPC, etc. Most SOA solutions are evolving to exploit an "enterprise service bus" for integrating and connecting applications.

Web services are a set of standards that simplify building cooperative information systems. The standards eliminate the need to make the infrastructure cooperate before enabling application cooperation.

Web services' exploitation of Internet protocols and concepts creates an additional opportunity. Many, if not most, cooperating information systems have elements in different organizations: enterprises, sites, lines of business, etc. The same architectural style that led to an enterprise service bus will yield an Internet Service Bus.

This presentation will explain the Internet Service Bus concepts using a scenario. The scenario will also highlight many of the benefits of an ISB. There are several interesting technical and architectural challenges for building and using an ISB, which the presentation will discuss.

Donald F. Ferguson

Process Analysis and Semantics

Soundness Verification of Business Processes Specified in the Pi-Calculus

Recent research in the area of business process management (BPM) introduced the application of a process algebra—the π-calculus—for the formal description of business processes and interactions among them. Especially in the area of service-oriented architectures, the key architecture for today's BPM systems, the π-calculus—as well as other process algebras—has shown its benefits in representing dynamic topologies. What is missing, however, are investigations regarding the correctness, i.e. soundness, of process algebraic formalizations of business processes. Since most existing soundness properties are given for Petri nets, these cannot be applied directly. This paper closes the gap by giving characterizations of invariants on the behavior of business processes in terms of bisimulation equivalence. Since bisimulation equivalence is a well-known concept in the world of process algebras, the characterizations can be applied directly to π-calculus formalizations of business processes. In particular, we investigate the characterization of five major soundness properties, i.e. easy, lazy, weak, relaxed, and classical soundness.

Frank Puhlmann

Extending BPMN for Modeling Complex Choreographies

Capturing the interaction behavior between two or more business parties has major importance in the context of business-to-business (B2B) process integration. The Business Process Modeling Notation (BPMN), being the de facto standard for modeling intra-organizational processes, also includes capabilities for describing cross-organizational collaboration. However, as this paper will show, BPMN fails to capture advanced choreography scenarios. Therefore, this paper proposes extensions to broaden the applicability of BPMN. The proposal is validated using the Service Interaction Patterns.

Gero Decker, Frank Puhlmann

Semantics of Standard Process Models with OR-Joins

The Business Process Modeling Notation (BPMN) is an emerging standard for capturing business processes. Like its predecessors, BPMN lacks a formal semantics and many of its features are subject to interpretation. One construct of BPMN that has an ambiguous semantics is the OR-join. Several formal semantics of this construct have been proposed for similar languages such as EPCs and YAWL. However, these existing semantics are computationally expensive. This paper formulates a semantics of the OR-join in BPMN for which enablement of an OR-join in a process model can be evaluated in quadratic time in terms of the total number of elements in the model. This complexity can be reduced to linear time after materializing a quadratic-sized data structure at design time. The paper also shows how to efficiently detect the enablement of an OR-join incrementally as the execution of a process instance unfolds.

Marlon Dumas, Alexander Grosskopf, Thomas Hettel, Moe Wynn
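Editor's note: the enablement question in the abstract above can be made concrete with a simplified sketch. This is not the paper's algorithm; the graph encoding, the token representation, and the `or_join_enabled` helper are all assumptions for illustration. The intuition it captures is common to OR-join semantics: an OR-join with at least one marked incoming edge may fire only if no token elsewhere can still reach an unmarked incoming edge.

```python
from collections import deque

def or_join_enabled(nodes, edges, marked, join):
    """Illustrative (simplified) OR-join enablement check.

    nodes: iterable of node ids; edges: set of (src, dst) pairs;
    marked: set of edges currently carrying a token; join: the OR-join node.
    Enabled iff some incoming edge is marked and no token on another
    edge can still reach an unmarked incoming edge of the join.
    """
    succ = {n: [] for n in nodes}
    for s, d in edges:
        succ[s].append(d)

    incoming = {(s, d) for (s, d) in edges if d == join}
    if not any(e in marked for e in incoming):
        return False  # nothing to synchronize yet

    def reaches(start_edge, target_edge):
        # BFS over edges, never routing a token through the join itself
        seen, q = set(), deque([start_edge])
        while q:
            e = q.popleft()
            if e == target_edge:
                return True
            if e in seen or e[1] == join:
                continue
            seen.add(e)
            q.extend((e[1], d) for d in succ[e[1]])
        return False

    for empty in incoming - marked:
        for tok in marked - incoming:
            if reaches(tok, empty):
                return False  # a token may still arrive on 'empty': wait
    return True
```

The published semantics achieves quadratic time (linear after design-time precomputation); this naive reachability sketch is cruder but conveys the idea.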

Pattern-Based Design and Validation of Business Process Compliance

In this paper we present a novel approach for the modeling and implementation of Internal Controls in Business Processes. The approach is based on formally modeling Internal Controls in the validation process using frequently recurring control patterns. The main idea is the introduction of a semantic layer in which the process instances are interpreted according to an independently designed set of controls. This ensures separation of business and control objectives in a Business Process. A prototypical implementation of the approach is presented.

Kioumars Namiri, Nenad Stojanovic

Process Modeling

Constraint-Based Workflow Models: Change Made Easy

The degree of flexibility of workflow management systems heavily influences the way business processes are executed. Constraint-based models are considered to be more flexible than traditional models because of their semantics: everything that does not violate constraints is allowed. Although constraint-based models are flexible, changes to process definitions might be needed to comply with evolving business domains and exceptional situations. Flexibility can be increased by run-time support for dynamic changes (transferring instances to a new model) and ad-hoc changes (changing the process definition for one instance). In this paper we propose a general framework for a constraint-based process modeling language and its implementation. Our approach supports both ad-hoc and dynamic change, and the transfer of instances can be done more easily than in traditional approaches.

M. Pesic, M. H. Schonenberg, N. Sidorova, W. M. P. van der Aalst

Dynamic, Extensible and Context-Aware Exception Handling for Workflows

This paper presents the realisation, using a Service Oriented Architecture, of an approach for dynamic, flexible and extensible exception handling in workflows, based not on proprietary frameworks, but on accepted ideas of how people actually work. The resultant service implements a detailed taxonomy of workflow exception patterns to provide an extensible repertoire of self-contained exception-handling processes called exlets, which may be applied at the task, case or specification levels. When an exception occurs at runtime, an exlet is dynamically selected from the repertoire depending on the context of the exception and of the particular work instance. Both expected and unexpected exceptions are catered for in real time, so that 'manual handling' is avoided.

Michael Adams, Arthur H. M. ter Hofstede, Wil M. P. van der Aalst, David Edmond

Understanding the Occurrence of Errors in Process Models Based on Metrics

Business process models play an important role for the management, design, and improvement of process organizations and process-aware information systems. Despite the extensive application of process modeling in practice, there are hardly any empirical results available on quality aspects of process models. This paper aims to advance the understanding of this matter by analyzing the connection between formal errors (such as deadlocks) and a set of metrics that capture various structural and behavioral aspects of a process model. In particular, we discuss the theoretical connection between errors and metrics, and provide a comprehensive validation based on an extensive sample of EPC process models from practice. Furthermore, we investigate the capability of the metrics to predict errors in a second independent sample of models. The high explanatory power of the metrics has considerable consequences for the design of future modeling guidelines and modeling tools.

Jan Mendling, Gustaf Neumann, Wil van der Aalst

Data-Driven Modeling and Coordination of Large Process Structures

In the engineering domain, the development of complex products (e.g., cars) necessitates the coordination of thousands of (sub-) processes. One of the biggest challenges for process management systems is to support the modeling, monitoring and maintenance of the many interdependencies between these sub-processes. The resulting process structures are large and can be characterized by a strong relationship with the assembly of the product; i.e., the sub-processes to be coordinated can be related to the different product components. So far, sub-process coordination has been mainly accomplished manually, resulting in high effort and inconsistencies. IT support is required to utilize the information about the product and its structure for deriving, coordinating and maintaining such data-driven process structures. In this paper, we introduce the COREPRO framework for the data-driven modeling of large process structures. The approach reduces modeling effort significantly and provides mechanisms for maintaining data-driven process structures.

Dominic Müller, Manfred Reichert, Joachim Herbst
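Editor's note: the data-driven derivation described above can be sketched in miniature. The function name, the tree encoding, and the "finished-before-parent" dependency rule are illustrative assumptions, not COREPRO's actual model: given a product assembly tree, emit one sub-process per component plus a dependency from each subcomponent's process to its parent's, since a parent can only be assembled after its parts.

```python
def derive_process_structure(assembly):
    """Hypothetical sketch of data-driven process-structure derivation.

    assembly: dict mapping a component to its list of subcomponents.
    Returns (processes, dependencies), where each dependency
    (p, q) means sub-process p must finish before q.
    """
    processes = set()
    dependencies = []

    def visit(component):
        processes.add(f"assemble:{component}")
        for part in assembly.get(component, []):
            visit(part)
            # a component's process depends on all of its parts' processes
            dependencies.append((f"assemble:{part}", f"assemble:{component}"))

    # roots are components that appear as keys but never as someone's part
    roots = set(assembly) - {p for parts in assembly.values() for p in parts}
    for r in roots:
        visit(r)
    return processes, dependencies
```

Changing the product structure and re-running the derivation is what makes the approach "data-driven": the process structure follows the data, rather than being remodeled by hand.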

Supporting Ad-Hoc Changes in Distributed Workflow Management Systems

Flexible support of distributed business processes is a characteristic challenge for any workflow management system (WfMS). Scalability in the presence of high loads as well as the capability to dynamically adapt running process instances are essential requirements. If the latter is not met, the WfMS will not have the necessary flexibility to cover the wide range of process-oriented applications deployed in many organizations. Scalability and flexibility have, for the most part, been treated separately in the literature thus far. Even though they are basic needs for a WfMS, the requirements related to them are totally different. To achieve satisfactory scalability, on the one hand, the system needs to be designed such that a workflow (WF) instance can be controlled by several WF servers that are as independent from each other as possible. Dynamic WF changes, on the other hand, necessitate a (logically) central control instance which knows the current and global state of a WF instance. This paper presents methods which allow ad-hoc modifications (e.g., to insert, delete, or shift steps) to be performed correctly in a distributed WfMS, i.e., in a WfMS with partitioned WF execution graphs and distributed WF control. It is especially noteworthy that the system realizes the full functionality of the central case while, at the same time, achieving favorable behavior with respect to communication costs.

Manfred Reichert, Thomas Bauer

P2P

Acquaintance Based Consistency in an Instance-Mapped P2P Data Sharing System During Transaction Processing

The paper presents a transaction processing mechanism in a peer-to-peer (P2P) database environment that combines P2P and database management system functionalities. We assume that each peer has an independently created relational database and that data heterogeneity between two peers is resolved by data-level mappings. For such an environment, the paper first introduces the execution semantics of a transaction and shows the challenges for concurrent execution of transactions, initiated from a peer, over the network. The paper then presents a correctness criterion that ensures the correct execution of transactions over the P2P network. We present two approaches to ensuring the correctness criterion and finally discuss the implementation issues.

Md Mehedi Masud, Iluju Kiringa

Enabling Selective Flooding to Reduce P2P Traffic

We propose a P2P cooperation policy to increase the effectiveness of the flooding-based approach used to retrieve information over pure P2P networks. Flooding consists of propagating the original query from the source peer to "known" peers, and, in turn, to other peers, producing, in the general case, an exponential growth of the search traffic in the network. According to our policy, each peer involved in the flooding process propagates the query only toward peers hopefully capable of satisfying it. The crucial point is: how to detect such good candidates? Of course, "local" properties of similarity between peers that do not satisfy the transitivity property cannot be used for this purpose, due to the necessity of propagating queries. Our solution relies on recovering some transitivity behavior in similarity-based P2P information retrieval approaches by considering neighborhood semantic properties. Experimental results show that the selective flooding so obtained is effective in the sense that traffic is drastically reduced w.r.t. standard flooding (as in Gnutella), with no loss of query success.

Francesco Buccafurri, Gianluca Lax
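Editor's note: the contrast between standard and selective flooding can be illustrated with a toy simulation. The `flood` function, the set-overlap similarity, and the threshold are assumptions for illustration only; the paper's actual policy is based on neighborhood semantic properties, not this naive profile overlap.

```python
def flood(neighbors, profiles, query, start, ttl, threshold=None):
    """Toy TTL-bounded flooding. With `threshold` set, a peer forwards
    the query only to neighbors whose profile overlap with the query
    terms is high enough (a crude stand-in for selective flooding).

    neighbors: dict peer -> list of peers; profiles: dict peer -> set of
    terms; query: set of terms. Returns (messages_sent, peers_reached).
    """
    messages = 0
    visited = {start}
    frontier = [start]
    for _ in range(ttl):
        nxt = []
        for p in frontier:
            for q in neighbors[p]:
                if threshold is not None:
                    overlap = len(profiles[q] & query) / max(len(query), 1)
                    if overlap < threshold:
                        continue  # skip unpromising neighbor
                messages += 1
                if q not in visited:
                    visited.add(q)
                    nxt.append(q)
        frontier = nxt
    return messages, visited
```

Even on a three-peer triangle, the selective variant sends fewer messages while still reaching every peer whose profile matches the query.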

Improving the Dependability of Prefix-Based Routing in DHTs

Under frequent node arrival and departure (churn) in an overlay network structure, the problem of preserving accessibility is addressed by maintaining valid entries in the routing tables towards nodes that are alive. However, if the system fails to replace the entries of dead nodes with entries of live nodes in the routing tables soon enough, requests may fail. In such cases, mechanisms to route around failures are required to increase the tolerance to node failures.

Existing Distributed Hash Table (DHT) overlays include extensions to provide fault tolerance when looking up keys; however, these are often insufficient. We analyze the case of greedy routing, often preferred for its simplicity, but with limited dependability even when extensions are applied.

The main idea is that fault tolerance aspects need to be dealt with already at design time of the overlay. We thus propose a simple overlay that offers support for alternative paths, and we create a routing strategy which takes advantage of all these paths to route requests while keeping maintenance cost low. Experimental evaluation demonstrates that our approach provides excellent resilience to failures.

Sabina Serbu, Peter Kropf, Pascal Felber
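Editor's note: the idea of routing around failures via alternative paths can be illustrated with a toy hypercube-style prefix router (not the overlay proposed in the paper; node-id encoding and the `route` helper are assumptions). Each hop sets one differing bit of the current id to the destination's bit, so every hop makes progress; when the preferred (leftmost-bit) neighbor is dead, any other differing bit yields an alternative path.

```python
def route(src, dst, alive, max_hops=32):
    """Toy prefix routing sketch over bit-string node ids.

    At each hop, fix one bit on which the current node differs from
    dst, preferring the leftmost; if that neighbor is not alive, fall
    back to another differing bit (an alternative path). Returns the
    path as a list of node ids, or None if all next hops are dead.
    """
    cur, path = src, [src]
    for _ in range(max_hops):
        if cur == dst:
            return path
        diff = [i for i in range(len(dst)) if cur[i] != dst[i]]
        for i in diff:  # leftmost first, alternatives after
            cand = cur[:i] + dst[i] + cur[i + 1:]
            if cand in alive:
                cur = cand
                path.append(cur)
                break
        else:
            return None  # every candidate next hop has failed
    return None
```

Because every candidate hop reduces the Hamming distance to the destination by one, fallback hops never lengthen the route, which is the kind of design-time support for alternative paths the abstract argues for.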

Collaboration

Social Topology Analyzed

A growing trend is that Internet users not only consume, but also produce and share content. This leads to content of great diversity. Personalized navigation is an approach to help users discover interesting content; it can be based on both search and recommendation systems. The Socialized.Net is a social peer-to-peer network infrastructure supporting personalized navigation. In this paper, a data trace of a popular file-sharing site is analyzed and shown to contain semantically close users. The data trace is further used for simulations in The Socialized.Net. Evidence is given that a fully distributed social network can be created based on traffic analysis. This can provide a powerful platform for personalized content navigation.

Njål T. Borch, Anders Andersen, Lars K. Vognild

Conflict Resolution of Boolean Operations by Integration in Real-Time Collaborative CAD Systems

Boolean operations are widely used in CAD applications to construct complex objects out of primitive ones. Conflict resolution of Boolean operations is a special and challenging issue in real-time collaborative CAD systems, which allow a group of geographically dispersed users to jointly perform design tasks over computer networks. In this paper, we contribute a novel conflict resolution technique that can retain the effects of individual conflicting Boolean operations by integrating them. This technique, named CRIBO (Conflict Resolution by Integration for Boolean Operations), is in sharp contrast to others that either discard the effects of some operations or keep the effects of different operations in different versions of the design. It is particularly well suited to collaborative CAD applications, where integration of different mindsets is a main source of creativity and innovation. This technique lays a good foundation for resolving conflicting operations in design-oriented collaborative applications that require collective wisdom and creative stimulus.

Yang Zheng, Haifeng Shen, Steven Xia, Chengzheng Sun

Trust Extension Device: Providing Mobility and Portability of Trust in Cooperative Information Systems

One method for establishing a trust relationship between a server and its clients in a cooperative information system is to use a digital certificate. The use of digital certificates bound to a particular machine works well under the assumption that the underlying computing and networking infrastructure is managed by a single enterprise. Furthermore, managed infrastructures are assumed to have a controlled operational environment, including execution of a standard set of applications and a standard operating system. These assumptions are also valid for recent proposals on establishing trust using hardware-supported systems based on a Trusted Platform Module (TPM) cryptographic microcontroller. However, these assumptions do not hold in today's cooperative information systems: clients are mobile and work over network connections that go beyond the administrative boundaries of the enterprise. In this paper, we propose a novel technology, called Trust Extension Device (TED), which enables mobility and portability of trust in cooperative information systems and works in heterogeneous environments. The paper provides an overview of the technology by describing its design, a conceptual implementation and its use in an application scenario.

Surya Nepal, John Zic, Hon Hwang, David Moreland

Organizing Meaning Evolution Supporting Systems Using Semantic Decision Tables

DOGMA-MESS (Meaning Evolution Support System) is a system and methodology for supporting scalable, community-grounded ontology engineering. It uses a socio-technical process of meaning negotiation to tackle the scalability problems in ontology engineering. In order to improve the effectiveness of DOGMA-MESS, we adopt the idea of the Semantic Decision Table (SDT). An SDT contains semantically rich decision rules that guide DOGMA-MESS micro-processes, and it separates the decision rules from DOGMA-MESS. SDTs with different decision rules result in different final decisions, which can be evaluated. In this paper, we illustrate how SDTs are used and apply our approach in the domain of Human Resource Management.

Yan Tang, Robert Meersman

Business Transactions

Extending Online Travel Agency with Adaptive Reservations

Current online ticket booking systems either do not allow customers to reserve a ticket with a locked price, or grant a fixed reservation timespan, typically 24 hours. The former often leads to false availability: when a customer decides to purchase a ticket after a few queries, she finds that either the ticket is no longer available or the price has gone up. The latter, on the other hand, may result in unnecessary holdback: a customer cannot purchase a ticket because someone else is holding it, who then cancels the reservation after an excessively long period of time. False availability and holdback routinely lead to loss of revenues, credibility and, above all, customers. To rectify these problems, this paper introduces a transaction model for e-ticket systems to support a reservation functionality: customers can reserve tickets with a locked price, for a timespan that is determined by the demand for the tickets rather than being fixed for all kinds of tickets. We propose a method for implementing the model, based on hypothetical queries and triggers. We also show how to adjust the reservation timespan w.r.t. demand. We experimentally verify that our model and methods effectively reduce both false availability and holdback rates. These yield a practical approach to improving not only e-ticket systems but also other e-commerce systems.

Yu Zhang, Wenfei Fan, Huajun Chen, Hao Sheng, Zhaohui Wu
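Editor's note: the demand-adjusted timespan idea can be sketched with a one-line pricing-pressure rule. The function, its parameters, and the specific formula are editorial assumptions, not the paper's method (which is based on hypothetical queries and triggers): the more queries per remaining ticket, the shorter the price lock, which curbs holdback on hot tickets while keeping a generous window for low-demand ones.

```python
def reservation_timespan(demand_rate, seats_left, base_hours=24.0,
                         min_hours=1.0):
    """Hypothetical demand-adjusted reservation timespan.

    demand_rate: recent queries for this ticket; seats_left: remaining
    inventory. High pressure (queries per seat) shrinks the lock
    window from base_hours down toward min_hours.
    """
    pressure = demand_rate / max(seats_left, 1)  # queries per seat
    hours = base_hours / (1.0 + pressure)
    return max(min_hours, hours)
```

Any monotone decreasing function of pressure would serve here; the point is that the lock window is a function of demand rather than a fixed 24 hours.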

A Multi-level Model for Activity Commitments in E-contracts

An e-contract is a contract modeled, specified, executed, controlled and monitored by a software system. A contract is a legal agreement involving parties, activities, clauses and payments. The goals of an e-contract include precise specification of the activities of the contract, mapping them into deployable workflows, and providing transactional support in their execution. Activities in a contract are complex and interdependent. They may be executed by different parties autonomously and in a loosely coupled fashion. They may be compensated and/or re-executed at different times relative to the execution of other activities. Both the initial specification of the activities and the later verification of their executions with respect to compliance with the clauses are tedious and complicated. We believe that an e-contract should reflect both the specification and the execution aspects of the activities at the same time, where the former is about the composition logic and the latter about the transactional properties. To facilitate this, we propose a multi-level composition model for activities in e-contracts. Our model allows for the specification of a number of transactional properties, like atomicity and commitment, for activities at all levels of the composition. In addition to their novelty, the transactional properties help to coordinate payments and the eventual closure of the contract.

K. Vidyasankar, P. Radha Krishna, Kamalakar Karlapalem

Decentralised Commitment for Optimistic Semantic Replication

We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and/or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically-meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.

Pierre Sutra, João Barreto, Marc Shapiro
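Editor's note: the majority-or-plurality election described above can be illustrated with a minimal tally. This sketch (the `elect` function, the single-label candidates, the tie handling) is an editorial assumption, not the paper's protocol: a candidate wins outright with a majority, and otherwise the plurality leader wins once the votes not yet received could no longer overturn its lead, which is what lets each site decide the outcome locally and asynchronously.

```python
from collections import Counter

def elect(proposals, n_sites):
    """Tally an election among candidate labels.

    proposals: votes received so far (one label per voting site);
    n_sites: total number of sites. Returns the winning label, or
    None if the election is still undecided.
    """
    votes = Counter(proposals)
    (lead, lead_n), *rest = votes.most_common()
    if lead_n > n_sites / 2:
        return lead  # majority: safe to commit immediately
    outstanding = n_sites - sum(votes.values())
    runner_up = rest[0][1] if rest else 0
    if lead_n > runner_up + outstanding:
        return lead  # plurality lead can no longer be overturned
    return None      # must wait for more proposals
```

Because the decision depends only on votes seen and the known total number of sites, every site reaches the same verdict without synchronizing, mirroring the fully asynchronous commitment the abstract describes.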

Coordinate BPEL Scopes and Processes by Extending the WS-Business Activity Framework

In the Web services world, the Web Services Business Process Execution Language (WS-BPEL) is the standard used to compose Web services into business processes. These processes are often long-running; therefore WS-BPEL employs a long-running transaction model to handle the internal transactions of a WS-BPEL process. WS-Business Activity (WS-BA) is a set of mechanisms and protocols to coordinate a set of Web services into a long-running, compensation-based transaction. Up to now, it has not been possible to let parts of a WS-BPEL process participate in a WS-BA coordination. We show how WS-BA needs to be extended to allow parts of a WS-BPEL process to participate in a WS-BA coordination that is supervised by an external coordinator. In addition, our approach allows external partners to participate in these modified internal WS-BA transactions initiated by a WS-BPEL process, and also allows for easy incorporation of BPEL sub-processes into the proposed coordination model. The architecture of a prototype implementing our approach is sketched.

Stefan Pottinger, Ralph Mietzner, Frank Leymann

Verifying Composite Service Transactional Behavior Using Event Calculus

A key challenge of Web service (WS) composition is how to ensure reliable execution. The lack of techniques supporting non-functional features such as execution reliability is widely recognized as a barrier preventing widespread adoption. Therefore, there is growing interest in verification techniques that help to prevent WS composition execution failures.

In this paper, we propose an event-driven approach to validate the transactional behavior of WS compositions. Using the Event Calculus to formally specify and check the consistency of the transactional behavior of WS compositions, our approach provides a logical foundation to ensure the consistency of recovery mechanisms at design time and to report execution deviations at runtime.

Walid Gaaloul, Mohsen Rouached, Claude Godart, Manfred Hauswirth

Short Papers

Matching Cognitive Characteristics of Actors and Tasks

Acquisition, application and testing of knowledge by actors trying to fulfill knowledge-intensive tasks is becoming increasingly important for organizations due to trends such as globalization, the emergence of virtual organizations and growing product complexity. An actor's management of basic cognitive functions, however, is at stake because of this increase in the need to acquire, apply and test knowledge during daily work. This paper specifically focuses on matchmaking between the cognitive characteristics supplied by an actor and the cognitive characteristics required to fulfill a certain knowledge-intensive task. This is based on a categorization and characterization of actors and knowledge-intensive tasks. A framework for a cognitive matchmaker system is introduced to compute actual match values and to reason about the suitability of a specific actor for a task of a certain type.

S. J. Overbeek, P. van Bommel, H. A. (Erik) Proper, D. B. B. Rijsenbrij

The OpenKnowledge System: An Interaction-Centered Approach to Knowledge Sharing

The information that is made available through the semantic web will be accessed through complex programs (web services, sensors, etc.) that may interact in sophisticated ways. Composition guided simply by the specifications of programs' inputs and outputs is insufficient to obtain reliable aggregate performance; hence the recognised need for process models to specify the interactions required between programs. These interaction models, however, are traditionally viewed as a consequence of service composition rather than as the focal point for facilitating composition. We describe an operational system that uses models of interaction as the focus for knowledge exchange. Our implementation adopts a peer-to-peer architecture, thus making minimal assumptions about centralisation of knowledge sources, discovery and interaction control.

Ronny Siebes, Dave Dupplaw, Spyros Kotoulas, Adrian Perreau de Pinninck, Frank van Harmelen, David Robertson

Ontology Enrichment in Multi Agent Systems Through Semantic Negotiation

Ontologies play a key role in the development of Multi-Agent Systems (MASs) for the Semantic Web, providing a conceptual description of the agents' world. However, especially in open MASs, agents use different ontologies, and this often leads to communication failures. Semantic negotiation is a recent framework which provides an effective solution to this problem, but it is too heavy a framework to implement in large agent communities. In this paper, we deal with the inefficiency of semantic negotiations, and we show how a possible solution is to build a common representation of the different terms used by the agents. We argue that a reasonable compromise is to combine a common ontology with semantic negotiation, and we propose an algorithm which implements this idea in the recent HISENE semantic negotiation framework. Moreover, in our proposal, semantic negotiation is exploited to dynamically enrich the global ontology.

Salvatore Garruzzo, Domenico Rosaci

A Relaxed But Not Necessarily Constrained Way from the Top to the Sky

As P2P systems are a very popular approach to connecting a possibly large number of peers, efficient query processing plays an important role. Appropriate strategies have to take the characteristics of these systems into account. Due to the possibly large number of peers, extensive flooding is not feasible. The application of routing indexes is a commonly used technique to avoid flooding. Promising techniques to further reduce execution costs are query operators such as top-N and skyline, constraints, and the relaxation of exactness and/or completeness. In this paper, we propose strategies that take all these aspects into account. The choice is left to the user whether, and to what extent, he is willing to relax exactness or apply constraints. We provide a thorough evaluation that uses two types of distributed data summaries as examples of routing indexes.

Katja Hose, Christian Lemke, Kai-Uwe Sattler, Daniel Zinn

Collaborative Filtering Based on Opportunistic Information Sharing in Mobile Ad-Hoc Networks

Personal mobile devices and mobile ad-hoc networks can support interesting forms of opportunistic information sharing in user communities based on spatio-temporal proximity. We show how this could be used to realise a novel decentralised collaborative filtering (CF) approach in a mobile environment.

Alexandre de Spindler, Moira C. Norrie, Michael Grossniklaus

Policy-Based Service Registration and Discovery

The WS-Policy framework has been introduced to allow policies to be expressed and associated with Web services, thereby enabling organizations to manage the quality of their services. How the specified policies are kept consistent with the organization's regulations, and how to match service and client policy requirements for effective service discovery, are issues yet to be addressed. In this paper, we present a new approach that allows for the automatic verification and matching of policies, using a service registry that serves as a policy storage and management facility, a policy checkpoint during service publication, and a policy matchmaker during service discovery. We extend WS-Policy with a policy conformance operator for policy verification and use WS-Policy Intersection for policy matching. We develop a policy information model and policy processing logic for the registry. An implementation of a policy-enabled service registry is also introduced.

Tan Phan, Jun Han, Jean-Guy Schneider, Tim Ebringer, Tony Rogers

Business Process Quality Metrics: Log-Based Complexity of Workflow Patterns

We believe that analysis tools for BPM should provide analytical capabilities beyond verification. Namely, they should provide mechanisms to analyze the complexity of workflows. High complexity in workflows may result in poor understandability, errors, defects, and exceptions, causing processes to need more time to develop, test, and maintain. Therefore, excessive complexity should be avoided. The major goal of this paper is to describe a quality metric for analyzing the complexity of workflow patterns from a log-based perspective.

Jorge Cardoso
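Editor's note: to give a flavor of workflow complexity metrics, here is a design-time control-flow complexity score in the spirit of split-connector weighting; the function name and exact weights are editorial assumptions, and this is not the log-based metric the paper proposes. The intuition: an XOR-split contributes its fan-out (one branch is taken), an OR-split contributes 2^n - 1 (any non-empty subset of branches may activate), and an AND-split contributes 1 (all branches always activate).

```python
def control_flow_complexity(splits):
    """Toy control-flow complexity score for a workflow model.

    splits: list of (kind, fan_out) pairs, kind in {"xor", "or", "and"}.
    Each split contributes the number of distinct branch activations
    it can produce; higher totals indicate harder-to-understand models.
    """
    score = 0
    for kind, fan_out in splits:
        if kind == "xor":
            score += fan_out          # one of fan_out branches
        elif kind == "or":
            score += 2 ** fan_out - 1  # any non-empty subset of branches
        elif kind == "and":
            score += 1                # always all branches
        else:
            raise ValueError(f"unknown split kind: {kind}")
    return score
```

A log-based metric, by contrast, would score the patterns actually observed in execution logs rather than the model's connectors.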

CoopIS 2007 PC Co-chairs’ Message

Welcome to the proceedings of the 15th International Conference on Cooperative Information Systems (CoopIS 2007), held in Vilamoura, Portugal, November 28-30, 2007.

The CoopIS conferences provide a forum for exchanging ideas and results on scientific research from a variety of areas, such as CSCW, Internet data management, electronic commerce, human–computer interaction, business process management, agent technologies, P2P systems, and software architectures, to name but a few. We encourage the participation of both researchers and practitioners in order to facilitate exchange and cross-fertilization of ideas and to support the transfer of knowledge to research projects and products. Towards this goal, we accepted both research and experience papers.

Francisco Curbera, Frank Leymann, Mathias Weske

Distributed Objects and Applications (DOA) 2007 International Conference

Frontmatter

Keynote

WS-CAF: Contexts, Coordination and Transactions for Web Services

As Web services have evolved as a means to integrate processes and applications at an inter-enterprise level, traditional transaction semantics and protocols have proven to be inappropriate. Web services-based transactions, colloquially termed Business Transactions, differ from traditional transactions in that they execute over long periods, they require commitments to the transaction to be “negotiated” at runtime, and isolation levels have to be relaxed. A solution to this problem has to work over HTTP and include existing transaction processing technologies of all types: database management systems, application servers, message queuing systems and packaged applications. In this paper we’ll look at the WS-CAF standardization effort and show how it is attempting to address this important and difficult subject. We’ll also consider how the architecture defined by WS-CAF fits into the evolving architecture of Web services.

Mark Little

Dependability and Security

Resilient Security for False Event Detection Without Loss of Legitimate Events in Wireless Sensor Networks

When large-scale wireless sensor networks are deployed in hostile environments, the adversary may compromise some sensor nodes and use them to generate false sensing reports or to modify the reports sent by other nodes. Such false events can cause the user to make bad decisions. They can also waste a significant amount of network resources. Unfortunately, most current security designs have drawbacks; they either require their own routing protocols to be used, or lose legitimate events stochastically and completely break down when more than a fixed threshold number of nodes are compromised. We propose a new method for detecting false events that does not suffer from these problems. When we set the probability of losing legitimate events to 1%, our proposed method can detect more false events than related methods can. We demonstrate this through mathematical analysis and simulation.

Yuichi Sei, Shinichi Honiden

Formal Verification of a Group Membership Protocol Using Model Checking

The development of safety-critical embedded applications in domains such as automotive or avionics is an exceedingly challenging intellectual task. This task can, however, be significantly simplified through the use of middleware that offers specialized fault-tolerant services. This middleware must provide a high assurance level that it operates correctly. In this paper, we present a formal verification of a protocol for one such service, a Group Membership Service, using model checking. Through this verification we discovered that although the protocol specification is correct, a previously proposed implementation is not.

Valério Rosset, Pedro F. Souto, Francisco Vasques

Revisiting Certification-Based Replicated Database Recovery

Certification-based database replication protocols are a good means for supporting transactions with the snapshot isolation level. This kind of replication protocol does not require readset propagation and allows the use of a symmetric algorithm for terminating transactions, thus eliminating the need for a final voting phase. Recovery mechanisms especially adapted to certification-based replication protocols have not been thoroughly studied in previous works. In this paper we propose two recovery techniques for this kind of replication protocol and analyze their performance. The first technique consists of dividing the recovery into two stages, reducing the certification load and the amount of information to be recovered in the second stage. The second technique scans and compacts the set of items to transfer, sending only the latest version of each item. We show that these techniques can be easily combined, thus reducing the recovery time.

M. I. Ruiz-Fuertes, J. Pla-Civera, J. E. Armendáriz-Iñigo, J. R. González de Mendívil, F. D. Muñoz-Escoí

A Survey of Fault Tolerant CORBA Systems

CORBA is an OMG standard for distributed object computing; but despite being a standard with wide-scale acceptance in industry, it lacks the ability to meet the high quality-of-service (QoS) demands required for building reliable, fault-tolerant distributed systems. To tackle these issues, in 2001 the OMG incorporated fault tolerance mechanisms, QoS policies and services into its standard interfaces, as described in its Fault Tolerant CORBA (FT-CORBA) specification. The FT-CORBA architecture uses the notion of object replication to provide reliable and fault-tolerant services. In this paper, we survey the different approaches for building FT-CORBA based distributed systems, with their merits and limitations. We give an overview of the FT-CORBA specification, its requirements and limitations, and the FT-CORBA architecture. We also revise the existing categorization of FT-CORBA systems by incorporating a fourth approach, the Reflective Approach, into the categorization taxonomy. A comparison between different types of replication and FT-CORBA based systems is conducted to provide quick insight into their features.

Muhammad Fahad, Aamer Nadeem, Michael R. Lyu

Middleware and Web Services

Flexible Reuse of Middleware Infrastructures in Heterogeneous IT Environments

Middleware systems and adapters integrate remote systems and provide uniform access to them. Middleware infrastructures consist of different types of middleware systems, e.g. application servers or federated database systems, and different types of adapters, e.g. J2EE connectors or SQL wrappers. Different adapter technologies are incompatible with each other, which requires writing new adapters where existing ones should instead be reused. Therefore, we introduce a virtualization tier that allows adapters of different middleware platforms to be handled and accessed uniformly and that reuses existing adapter deployments, avoiding redundant administration tasks. Moreover, the virtualization tier can also reuse complete middleware infrastructures, such that adapter deployment and adapter execution remain in the respective middleware system. This allows middleware infrastructures to be flexibly reused and facilitates the realization of new integration scenarios at reduced expense.

Ralf Wagner, Bernhard Mitschang

Self-optimization of Clustered Message-Oriented Middleware

Today’s enterprise-level applications are often built as an assembly of distributed components that provide the basic services required by the application logic. As the scale of these applications increases, coarse-grained components will need to be decoupled and will use message-based communication, often supported by Message-Oriented Middleware, or MOMs.

In the Java world, a standardized interface exists for MOMs: the Java Message Service, or JMS. Like other middleware, some JMS implementations use clustering techniques to provide some level of performance and fault tolerance. One such implementation is JORAM, which is open source and hosted by the ObjectWeb consortium.

In this paper, we describe performance modeling of various clustering configurations and validate our model with performance evaluation in a real-life cluster. In doing that, we observed that the resource-efficiency of the clustering methods can be very poor due to local instabilities and/or global load variations.

To solve these issues, we provide insight into how to build autonomic capabilities on top of the JORAM middleware. Specifically, we describe a methodology to (i) dynamically adapt the load distribution among the servers (load-balancing aspect) and (ii) dynamically adapt the replication level (provisioning aspect).

Christophe Taton, Noël De Palma, Daniel Hagimont, Sara Bouchenak, Jérémy Philippe

Minimal Traffic-Constrained Similarity-Based SOAP Multicast Routing Protocol

SOAP, a de-facto communication protocol of Web services, is popular for its interoperability across organisations. However, SOAP is based on XML and therefore inherits XML’s disadvantage of having voluminous messages. When there are many transactions requesting similar server operations, using conventional SOAP unicast to send SOAP response messages can generate a very large amount of traffic [7]. This paper presents a traffic-constrained SMP routing protocol, called tc-SMP, which is an extension of our previous work on a similarity-based SOAP multicast protocol (SMP) [11]. Tc-SMP looks at the network optimization aspect of SMP and proposes alternative message delivery paths that minimize total network traffic. A tc-SMP algorithm, based on an incremental approach, is proposed and compared for its efficiency and performance advantages over SMP. Simple heuristic methods are also implemented to improve the results. Extensive experiments show that tc-SMP achieves a minimum of 25% reduction in total network traffic compared to SMP, with a trade-off of a 10% increase in average response time. Compared to conventional unicast, bandwidth consumption can be reduced by up to 70% when using tc-SMP and 50% when using SMP.

Khoi Anh Phan, Peter Bertok, Andrew Fry, Caspar Ryan

Implementing a State-Based Application Using Web Objects in XML

In this paper we introduce Web Objects in XML (WOX) as a web protocol for distributed objects, which uses HTTP as its transport protocol and XML as its format representation. It allows remote method invocations on web objects, and remote procedure calls on exposed web services. WOX uses URIs to represent object references, inspired by the principles of the representational state transfer (REST) architectural style. Using URIs in this way allows parameters to be passed, and values returned, either by value or by reference. We present a case study, in which an existing chart application is exposed over the Internet using three different technologies: RMI, SOAP, and WOX. WOX proves to be the simplest way to implement this application, requiring less program code to be written or modified than RMI or SOAP. Furthermore, as a consequence of its REST foundations, WOX is particularly transparent, since any objects that persist after a WOX call may be inspected with any XML-aware web browser. It is also possible to invoke methods of persistent objects through a web browser.

Carlos R. Jaimez González, Simon M. Lucas

Aspects and Development Tools

Experience with Dynamic Crosscutting in Cougaar

Component-based middleware frameworks that support distributed agent societies have proven to be very useful in a variety of domains. Such frameworks must include support for both agents to implement business logic and runtime adaptation to overcome the inherent limitations of unreliable, resource-constrained environments. Regardless of how any particular middleware framework is organized into components, the business logic and adaptation support will inevitably require some crosscutting of the dominant decomposition. In this paper, we discuss a spectrum of dynamic crosscutting techniques in support of runtime adaptation that we have implemented in Cougaar, a component-based service-oriented architecture. We describe these crosscutting techniques and show how they can be used to enhance the flexibility and survivability of agent-based applications.

John Zinky, Richard Shapiro, Sarah Siracuse, Todd Wright

Property-Preserving Evolution of Components Using VPA-Based Aspects

Protocols that govern the interactions between software components are a popular means to support the construction of correct component-based systems. Previous studies have, however, almost exclusively focused on static component systems that are not subject to evolution. Evolution of component-based systems with explicit interaction protocols can be defined quite naturally using aspects (in the sense of AOP) that modify component protocols. A major question then is whether aspect-based evolutions preserve fundamental correctness properties, such as compatibility and substitutability relations between software components.

In this paper we discuss how such correctness properties can be proven in the presence of aspect languages that allow matching of traces satisfying interaction protocols and enable limited modifications to protocols. We show how common evolutions of distributed components can be modeled using VPA-based aspects [14] and be proven correct directly in terms of properties of operators of the aspect language. We first present several extensions to an existing language for VPA-based aspects that facilitate the evolution of component systems. We then discuss different proof techniques for the preservation of composition properties of component-based systems that are subject to evolution using protocol-modifying aspects.

Dong Ha Nguyen, Mario Südholt

Multi-stage Aspect-Oriented Composition of Component-Based Applications

The creation of distributed applications requires sophisticated compositions, as various components — supporting application logic or non-functional requirements — must be assembled and configured in an operational application. Aspect-oriented middleware has contributed to improving the modularization of such complex applications, by supporting a component model that offers aspect-oriented composition alongside the traditional composition of provided and required interfaces. One of the recent advances in AO middleware is the ability to express dynamic compositions that depend on the evaluation of available context information — some of this information may only be available at deployment time.

The search for high level composition mechanisms is an ongoing track in the research community, yet the composition logic of a real world application remains complex and it would greatly pay off if composition logic — traditionally encoded in monolithic deployment descriptors — could be reused over ranges of applications and even be gradually refined for specific applications.

This paper presents M-Stage, an AO component and composition model that supports the reuse and adaptation of compositions in distributed applications that are built on AO middleware. We illustrate the power of M-Stage by applying the model in a realistic distributed application where we analyze the reuse and adaptation potential of the M-Stage model.

Bert Lagaisse, Eddy Truyen, Wouter Joosen

An Eclipse-Based Tool for Symbolic Debugging of Distributed Object Systems

After over thirty years of distributed computing, debugging distributed applications is still regarded as a difficult task. While it could be argued that this condition stems from the complexity of distributed executions, the fast pace of evolution witnessed with distributed computing technologies has also played its role by shortening the life-span of many useful debugging tools. In this paper we present an extensible Eclipse-based tool which brings distributed threads and symbolic debuggers together, resulting in a simple and useful debugging aid. This extensible tool is based on a technique that is supported by elements that are common to synchronous-call middleware implementations, making it a suitable candidate for surviving technology evolution.

Giuliano Mega, Fabio Kon

Mobility and Distributed Algorithms

A Bluetooth-Based JXME Infrastructure

Over the last years, research efforts have led the way to embed computation into the environment. Much attention is drawn to technologies supporting dynamicity and mobility over small devices which can follow the user anytime, anywhere. The Bluetooth standard particularly fits this idea, by providing a versatile and flexible wireless network technology with low power consumption.

In this paper, we describe an implementation of a novel framework named JXBT (JXME over Bluetooth), which allows the JXME infrastructure to use Bluetooth as the communication channel. By exploiting the JXME functionalities we can overcome Bluetooth limitations, such as the maximum number of interconnectable devices (7 according to the Bluetooth standard) and the maximum transmission range (10 or 100 meters, depending on the version). To test the lightness of JXBT, we designed and evaluated BlueIRC, an application running on top of JXBT. This application enables the setup of a chat among Bluetooth-enabled mobile devices, without requiring them to be within transmission range.

Carlo Blundo, Emiliano De Cristofaro

Agreements and Policies in Cooperative Mobile Agents: Formalization and Implementation

Organization of mobile agents into groups has appeared as a new paradigm for the dynamic deployment of composite services. However, it has not been discussed how multiple mobile agents cooperate with each other while handling conflicts in their requirements. In response to this problem, this study proposes a model for cooperative mobility based on the notion of agreements. Agent behavior defined in the proposed model involves agreement establishment and enforcement for cooperative mobility. Such behavior can be customized merely by specifying the requirements and constraints of each agent, eliminating the need to write down the whole behavior for handling agreements. The model is described formally, using Event Calculus, and it is proved that the model leads to no occurrence of the defined inconsistencies. The model has been implemented on an existing agent framework, Freedia, combined with its dynamic partner management mechanism.

Fuyuki Ishikawa, Nobukazu Yoshioka, Shinichi Honiden

An Adaptive Coupling-Based Algorithm for Internal Clock Synchronization of Large Scale Dynamic Systems

This paper proposes an internal clock synchronization algorithm which combines the gossip-based paradigm with a nature-inspired approach coming from the coupled oscillators phenomenon. The proposed solution allows a very large number of clocks to self-synchronize without any central control, despite node departures and arrivals. This addresses the needs of an emergent class of large-scale peer-to-peer applications that have to operate without any assumptions on the underlying infrastructure. Empirical evaluation shows extremely good convergence and stability under different network settings.

Roberto Baldoni, Angelo Corsaro, Leonardo Querzoni, Sirio Scipioni, Sara Tucci-Piergiovanni

Reviewing Amnesia Support in Database Recovery Protocols

Replication is used to provide highly available and fault-tolerant information systems, which are constructed on top of replication and recovery protocols. An important aspect when designing these systems is the failure model assumed. Recent trends in the replicated-database literature consist of adopting the crash-recovery with partial amnesia failure model, because in most cases it shortens recovery times. But despite the widespread use of this failure model, we consider that most of these works do not handle the amnesia phenomenon accurately. Therefore, in this paper we survey several works, analyzing their amnesia support.

Rubén de Juan-Marín, Luis H. García-Muñoz, J. Enrique Armendáriz-Íñigo, Francesc D. Muñoz-Escoí

Frameworks, Patterns, and Testbeds

The Conceptualization of a Configurable Multi-party Multi-message Request-Reply Conversation

Organizations, to function effectively and expand their boundaries, require a deep insight into both process orchestration and the choreography of cross-organization business processes. The set of requirements for service interactions is significant, and has not yet been sufficiently refined. The Service Interaction Patterns studies by Barros et al. demonstrate this point. However, they overlook some important aspects of service interaction of a bilateral and multilateral nature. Furthermore, the definitions of these patterns are not precise due to the absence of a formal semantics. In this paper, we analyze and present a set of patterns formed around the subset of patterns documented by Barros et al. concerned with Request-Reply interactions, and extend these ideas to cover multiple parties and multiple messages. We concentrate on the interaction between multiple parties, and analyze issues of non-guaranteed responses and different aspects of message handling. We propose one configurable, formally defined, conceptual model to describe and analyze options and variants of request-reply patterns. Furthermore, we propose a graphical notation to depict every pattern variant, and formalize the semantics by means of Coloured Petri Nets. In addition, we apply this pattern family to evaluate WS-BPEL v2.0 and check how selected pattern variants can be operationalized in Oracle BPEL PM.

Nataliya Mulyar, Lachlan Aldred, Wil M. P. van der Aalst

Building Adaptive Systems with Service Composition Frameworks

Frameworks that support the implementation and execution of service compositions are a fundamental component of middleware infrastructures that support the design of adaptive systems. This paper discusses the requirements imposed by adaptive middleware on service composition frameworks, and discusses how they have been addressed by previous work. As a result, it describes the design of a novel adaptation-friendly service composition framework that takes into consideration the requirements at three different levels: service programming model level, adaptation-friendly services level, and kernel mechanisms level.

Liliana Rosa, Luís Rodrigues, Antónia Lopes

Invasive Patterns for Distributed Programs

Software patterns have evolved into a commonly used means to design and implement software systems. Programming patterns, architecture and design patterns have been quite successful in the context of sequential as well as (massively) parallel applications but much less so in the context of distributed applications over irregular communication topologies and heterogeneous synchronization requirements.

In this paper, we propose a solution for one of the main issues in this context: the need to complement distributed patterns with access to the execution state on which they depend but which is frequently not directly available at the sites where the patterns are to be applied. To this end we introduce invasive patterns, which couple well-known computation and communication patterns, like pipelining and farming out computations, with facilities to access non-local state. We present the following contributions: (i) a motivation for such invasive patterns in the context of a real-world application, the JBoss Cache framework for transactional replicated caching, (ii) a proposal of language support for such invasive patterns, (iii) a prototypical implementation of this pattern language using AWED, an aspect language for distributed programming, and (iv) an evaluation of our proposal for the refactoring of JBoss Cache.

Luis Daniel Benavides Navarro, Mario Südholt, Rémi Douence, Jean-Marc Menaud

NSLoadGen– A Testbed for Notification Services

During the past years, a lot of work on Notification Services has focused on features like scalability, transactions, persistence, routing algorithms, caching, mobility, etc. However, less work has been invested in how to evaluate or compare such systems. The selection of the most appropriate Notification Service for a particular application scenario is crucial, and the tools available today are bound to a particular implementation. If the Notification Service under test does not fulfill the application requirements, then a new attempt with another Notification Service needs to be started from scratch: the description of the workload characterization and its injection cannot be reused.

In this paper we introduce NSLoadGen (Notification Services Load Generator), a testbed platform that supports the definition of real-life scenarios, the simulation of these scenarios against notification services, and the generation of extensive data that can be used to evaluate them precisely. NSLoadGen is not targeted at any specific Notification Service, but rather is generic and adaptable. It has been designed to support a wide variety of Notification Service characteristics, hiding the many differences among messaging products and specifications (e.g. the Java Message Service [1]) and, at the same time, it is easily extensible to support new implementations. This paper covers the different steps the tool follows (scenario definition, scenario simulation and result collection), the proposed approach, as well as relevant design and implementation details.

Diego Palmisano, Mariano Cilia

DOA 2007 PC Co-chairs’ Message

Welcome to the Ninth International Symposium on Distributed Objects, Middleware and Applications (DOA 2007), held in Vilamoura, Portugal, November 25-30, 2007.

The DOA conferences provide a forum for exchanging the latest research results on distributed objects, components, services, middleware and applications. To emphasize the increasing importance and proliferation of higher-level software abstractions and the associated general-purpose middleware, the term ‘middleware’ was added to the title of DOA this year. Research in objects, middleware and their application establishes new principles that open the way to solutions that can meet the requirements of tomorrow’s applications. Conversely, practical experience in real-world projects drives this same research by exposing new ideas and posing new types of problems to be solved. With DOA 2007 we explicitly intended to provide a forum to help this mutual interaction occur, and to trigger and foster it. Submissions were therefore welcomed along both these dimensions: research (fundamentals, concepts, principles, evaluations, patterns, and algorithms) and practice (applications, experience, case studies, and lessons). Contributions attempting to cross the gap between these two dimensions were particularly encouraged. Toward this goal, we accepted both research and experience papers.

Pascal Felber, Calton Pu, Aad van Moorsel

Ontologies, Databases and Applications of Semantics (ODBASE) 2007 International Conference

Frontmatter

Keynote

Towards Next Generation Value Networks

We are moving towards a services economy where more and more value in an economy is created through services. A key enabler for such an economy is the transformation of services themselves into tradable goods similar to products. As organizations focus on core competences, we will see that (i) focused organizations will provide more specialized services (service providers), (ii) the composition and coordination of services provided by different service providers into value-added services will become an important business opportunity (service brokers/coordinators), and (iii) organizations will be willing to “buy” more services from service providers and integrate them into their business operations (service consumers). This leads to longer and deeper service value chains consisting of a large number of services. A value chain may consist of services provided by a diversity of service providers, thus resulting in an increased complexity in coordinating services provided by multiple service providers.

The German funded TEXO project runs under the umbrella of the THESEUS research programme and addresses the challenges imposed by next generation value networks. The interdisciplinary TEXO consortium is coordinated by SAP and includes a number of partners having technical, economical and legal competencies. In this talk I will give examples of existing value networks, present the vision of the TEXO project for next generation value networks, present the interdisciplinary approach to address the technical, economical and legal challenges, and illustrate scenarios in which the TEXO solution adds value.

York Sure

Ontology Mapping

Combining the Semantic Web with the Web as Background Knowledge for Ontology Mapping

We combine the Semantic Web with the Web, as background knowledge, to provide a more balanced solution for Ontology Mapping. The Semantic Web can provide mappings that are missed by the Web, which can provide many more, but noisy, mappings. We present a combined technique that is based on variations of existing approaches. Our experimental results in two real-life thesauri are compared with previous work, and they reveal that a combined approach to Ontology Mapping can provide more balanced results in terms of precision, recall and confidence measure of mappings. We also discover that a reduced set of 3 appropriate Hearst patterns can eliminate noise in the list of discovered mappings, and thus techniques based exclusively in the Web can be improved. Finally, we also identify open questions derived from building a combined approach.

Ruben Vazquez, Nik Swoboda

Discovering Executable Semantic Mappings Between Ontologies

Creating executable semantic mappings is an important task for ontology-based information integration. Although it is argued that mapping tools may require interaction from humans (domain experts) for best accuracy, in general, automatic ontology mapping is an AI-complete problem. Finding matchings (correspondences) between the concepts of two ontologies is the first step towards solving this problem, but matchings are normally not directly executable for data exchange or query translation. This paper presents a systematic approach that combines ontology matching, object reconciliation and multi-relational data mining to find executable mapping rules in a highly automatic manner. Our approach starts with an iterative process to search for matchings and perform object reconciliation for ontologies with data instances. The result of this iterative process is then used for mining frequent queries. Finally, the semantic mapping rules can be generated from the frequent queries. The results show our approach is highly automatic without losing much accuracy compared with human-specified mappings.

Han Qin, Dejing Dou, Paea LePendu

Interoperability of XML Schema Applications with OWL Domain Knowledge and Semantic Web Tools

Several standards are expressed using XML Schema syntax, since XML is the default standard for data exchange on the Internet. However, several applications need the semantic support offered by domain ontologies and Semantic Web tools like logic-based reasoners. Thus, there is a strong need for interoperability between XML Schema and OWL. This can be achieved if the XML Schema constructs are expressed in OWL, where enrichment with OWL domain ontologies and further semantic processing are possible. After semantic processing, the derived OWL constructs should be converted back to instances of the original schema. We present in this paper XS2OWL, a model and a system that allow the transformation of XML Schemas to OWL-DL constructs. These constructs can be used to drive the automatic creation of OWL domain ontologies and individuals. The XS2OWL transformation model allows the correct conversion of the derived knowledge from OWL-DL back to XML constructs valid according to the original XML Schemas, in order to be used transparently by the applications that follow the XML Schema syntax of the standards.

Chrisa Tsinaraki, Stavros Christodoulakis

Semantic Querying

Query Expansion and Interpretation to Go Beyond Semantic P2P Interoperability

In P2P data management systems, semantic interoperability between any two peers that do not share the same ontology relies on ontology matching. The established correspondences, i.e. the “shared” parts of the ontologies, are indeed essential for exchanging information. But to what extent can the “unshared” parts contribute to information exchange? In this paper, we address this question. We focus on a P2P document management system, where documents and queries are represented by semantic vectors. We propose a specific query expansion step at the query initiator’s side and a query interpretation step at the document provider’s side. Through these steps, unshared concepts contribute to evaluating the relevance of documents with respect to a given query. The experiments show that the proposed method correctly evaluates the relevance of a document even if concepts of a query are not shared. In some cases, we are able to find up to 90% of the documents that would be selected if all the concepts were shared.

Anthony Ventresque, Sylvie Cazalens, Philippe Lamarre, Patrick Valduriez
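The semantic-vector idea underlying this paper can be sketched in a few lines: documents and queries as weighted concept vectors, cosine similarity as relevance, and an expansion step that propagates query weights to related concepts before the query is sent. This is a minimal illustration only, not the authors' actual algorithm; all concept names, weights, and the `expand` propagation rule are invented for the example.

```python
# Minimal sketch: semantic vectors as dicts mapping concept -> weight.
# The expansion rule (propagate weight scaled by similarity) is a
# simplifying assumption, not the method from the paper.
from math import sqrt

def cosine(q, d):
    # Relevance of document vector d with respect to query vector q.
    dot = sum(w * d.get(c, 0.0) for c, w in q.items())
    nq = sqrt(sum(w * w for w in q.values()))
    nd = sqrt(sum(w * w for w in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def expand(query, related):
    # Query expansion at the initiator's side: give related (possibly
    # unshared) concepts part of the original concept's weight.
    expanded = dict(query)
    for c, w in query.items():
        for r, sim in related.get(c, []):
            expanded[r] = max(expanded.get(r, 0.0), w * sim)
    return expanded

query = {"car": 1.0}
related = {"car": [("vehicle", 0.8), ("engine", 0.5)]}
doc = {"vehicle": 0.9, "road": 0.4}

print(cosine(query, doc))                    # → 0.0 (no shared concepts)
print(cosine(expand(query, related), doc))   # nonzero after expansion
```

Without expansion the document scores zero because the query concept is unshared; after expansion the related concept carries the match, which is the intuition behind evaluating relevance across non-shared ontology parts.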

SPARQL++ for Mapping Between RDF Vocabularies

Lightweight ontologies in the form of RDF vocabularies such as SIOC, FOAF, vCard, etc. have recently been used and exported by an increasing number of “serious” applications. Such vocabularies, together with query languages like SPARQL, also make it possible to syndicate the resulting RDF data from arbitrary Web sources and open the path to finally bringing the Semantic Web into operation. Considering, however, that many of the promoted lightweight ontologies overlap, the lack of suitable standards for describing these overlaps in a declarative fashion becomes evident. In this paper we argue that one does not necessarily need to delve into the huge body of research on ontology mapping for a solution; rather, SPARQL itself might — with extensions such as external functions and aggregates — serve as a basis for declaratively describing ontology mappings. We provide the semantic foundations and a path towards implementation for such a mapping language by means of a translation to Datalog with external predicates.

Axel Polleres, François Scharffe, Roman Schindlauer

OntoPath: A Language for Retrieving Ontology Fragments

In this work we introduce a novel retrieval language, named OntoPath, for specifying and retrieving relevant ontology fragments. This language is intended to extract customized self-standing ontologies from very large, general-purpose ones. Through OntoPath, users can specify the desired level of detail in the concept taxonomies as well as the properties between concepts that are required by the target applications. The syntax and aims of OntoPath resemble XPath’s in that they are simple enough to be handled by non-expert users and are designed to be included in other XML-based applications (e.g. transformation sheets, semantic annotation of Web services, etc.). OntoPath has been implemented on top of the graph-based database G, for which a Protégé OWL plug-in has been designed to access and retrieve ontology fragments.

E. Jiménez-Ruiz, R. Berlanga, V. Nebot, I. Sanz

Taxonomy Construction Using Compound Similarity Measure

Taxonomy learning is one of the major steps in the ontology learning process. Manual construction of taxonomies is a time-consuming and cumbersome task. Recently many researchers have focused on automatic taxonomy learning, but the quality of the generated taxonomies is still not satisfactory. In this paper we propose a new compound similarity measure. This measure is based on both knowledge-poor and knowledge-rich approaches to finding word similarity. We also use a neural network model to combine several similarity methods. We have compared our method with a simple syntactic similarity measure; our measure considerably improves the precision and recall of automatically generated taxonomies.

Mahmood Neshati, Leila Sharif Hassanabadi

Ontology Development

r3 – A Foundational Ontology for Reactive Rules

In this paper we present the r3 ontology, a foundational ontology for reactive rules, aiming at coping with language heterogeneity at the rule (component) level. This (OWL-DL) ontology is at a low (structural) abstraction level, thus fostering its extension. Although focusing on reactive rules (reactive derivation rules not excluded), the r3 ontology defines a vocabulary that also allows for the definition of rule (component) languages to model other types of rules such as production, integrity, or logical derivation rules.

José Júlio Alferes, Ricardo Amador

Heuristics for Constructing Bayesian Network Based Geospatial Ontologies

Bayesian Network based ontologies enable the specification of partial relations between concepts, an advantage over conventional ontologies based on description logics. In the context of geospatial ontologies, such specifications facilitate encoding relations between action and entity concepts. This paper presents a case study of transportation ontologies based on the traffic code texts of two different countries. We construct ontologies of both geospatial entities and actions using the BayesOWL approach. Thereafter we employ heuristics based on verb-noun co-occurrence evidence, available from the analysis of formal texts, to construct linkages between the two types of concepts. This approach enables high recall and precision for queries on concepts, and enables rich inferences such as the most similar and most dissimilar concepts. The results of our experiments are verified with human-subject testing. Such heuristics-based probabilistic approaches to geospatial ontology specification and reasoning can be utilized for concept mapping within and across geospatial ontologies, as well as to quantify the naming heterogeneities in two given ontologies.

Sumit Sen, Antonio Krüger

OntoCase - A Pattern-Based Ontology Construction Approach

As the technologies facilitating the Semantic Web become more and more mature, they are also adopted by the business world. When developing semantic applications, constructing the underlying ontologies is a crucial part. Construction of enterprise ontologies needs to be semi-automatic in order to reduce the effort required and the need for expert ontology engineers. Another important issue is to introduce knowledge reuse into the ontology construction process. By basing our semi-automatic method on the principles of case-based reasoning, we envision a novel semi-automatic ontology construction process. The approach is based on automatic selection and application of patterns, but also includes ontology evaluation and revision, as well as pattern candidate discovery. The development of OntoCase is still ongoing work; in this paper we report mainly on the initial realisation and first experiments concerning the retrieval and reuse phases.

Eva Blomqvist

Towards Community-Based Evolution of Knowledge-Intensive Systems

This article addresses the need for a research effort and framework that studies and embraces the novel, difficult, but crucial issues of adapting knowledge resources to their respective user communities, and vice versa, as a fundamental property of knowledge-intensive Internet systems. Through a deep understanding of real-time, community-driven evolution of so-called ontologies, a knowledge-intensive system can be made operationally relevant and sustainable over longer periods of time. To bootstrap our framework, we adopt and extend the DOGMA ontology framework, and its community-grounded ontology engineering methodology DOGMA-MESS, with an ontology that models community concepts such as business rules, norms, policies, and goals as first-class citizens of the ontology evolution process. In this way, ontology evolution can be tailored to the needs of a particular community. Finally, we illustrate with an example from an actual real-world problem setting, viz. the interorganisational exchange of HR-related knowledge.

Pieter De Leenheer, Robert Meersman

ImageNotion: Methodology, Tool Support and Evaluation

The content of image archives changes rapidly. This makes the traditional separation of ontology development and image annotation steps no longer feasible. In this paper, we present an approach, termed ImageNotion that allows for the collaborative development of domain ontologies directly by domain experts with minimal ontology experience. ImageNotion is both a methodology based on the idea of the ontology maturing process model, and the name of the tool supporting this methodology. ImageNotion embeds the creation of ontology entities, termed imagenotions, into the work process of creating semantic annotations of images and their parts. Both the creation of imagenotions and the creation of image annotations are visual, user friendly processes, implemented by a web application that integrates all of the required functionality in one consistent framework. Besides the theoretical concepts, this paper also presents the results of our evaluation of the system with experienced image annotators and librarians having minimal ontology background.

Andreas Walter, Gábor Nagypál

Learning and Text Mining

Optimal Learning of Ontology Mappings from Human Interactions

Lexical-similarity-based ontology mappings are useful for obtaining semantic translations of database schemas across application domains. Incremental improvement of such mappings can be obtained from human inputs of ontology mappings. Manual mappings are labor intensive and need to be assisted by machine-generated mappings in a semi-automated approach. Heuristics-based approaches allow multiple strategies to learn human expertise in concept mappings. Such learning improves the level of automation of the mapping process. We analyze heuristics-based Bayesian learning of manual mappings to improve the effectiveness of machine-generated mappings. Our results show that human-based mappings contribute a higher improvement in the machine-generated values of lexical similarity than in those of structural similarity. The optimal weight for structural similarity learning is inversely proportional to the complexity of the given ontology graphs.

Sumit Sen, Dolphy Fernandes, N. L. Sarda

Automatic Feeding of an Innovation Knowledge Base Using a Semantic Representation of Field Knowledge

In this paper, by considering a particular application field, the innovation, we propose an automatic system to feed an innovation knowledge base (IKB) starting from texts located on the Web.

To facilitate the extraction of concepts from texts, we distinguish two knowledge types in our work: primitive knowledge and definite knowledge. Each is represented separately. Primitive knowledge is directly extracted from natural-language texts and temporarily organized in a specific base called the TKB (Temporary Knowledge Base). The IKB is fed with the knowledge filtered from the TKB by some specified rules. After each filtering step, the TKB is emptied in order to start new extractions from other text sources.

The filtering rules are specified using variables representing interesting concepts. Their specifications result from the semantics of the innovation operators involved in the innovation process. The variables are initialized from a semantic representation of the operators. The content of the IKB can be displayed as text annotations. Hence the feeding system is coupled with a user interface allowing the exploration of these annotations through their dynamic insertion into the associated texts.

In this paper, we present the application field and our approach for representing and feeding the IKB innovation base. We also provide a number of experimental results and indicate the work we plan to undertake in order to improve our system.

Issam Al Haj Hasan, Michel Schneider, Grigore Gogu

Ontology Learning for Search Applications

Ontology learning tools help us build ontologies more cheaply by applying sophisticated linguistic and statistical techniques to domain text. For ontologies used in search applications, class concepts and hierarchical relationships at the appropriate level of detail are vital to the quality of retrieval. In this paper, we discuss an unsupervised keyphrase extraction system for ontology learning and evaluate the resulting ontology as part of an ontology-driven search application. Our analysis shows that even though the ontology is slightly inferior to manually constructed ontologies, the quality of search is only marginally affected when using the learned ontology. Keyphrase extraction may not be sufficient for ontology learning in general, but is surprisingly effective for ontologies specifically designed for search.

Jon Atle Gulla, Hans Olaf Borch, Jon Espen Ingvaldsen

Annotation and Metadata Management

MultiBeeBrowse – Accessible Browsing on Unstructured Metadata

The growing abundance of information on the Internet, especially the Next Generation Internet, poses ever greater challenges for efficient information management; hence it has drawn researchers’ attention to faceted navigation. Existing solutions, however, do not address the majority of users, who are still inexperienced in using faceted navigation, who do not understand the underlying concepts of Semantic Web technologies, or both. The query refinement process, when using a faceted navigation interface, is more complex than, e.g., refining a simple keyword-based query.

In this article we present MultiBeeBrowse (MBB), an accessible faceted navigation solution that solves the aforementioned problems in the browsing environment. We present how to improve users’ access to their history of refinements; we discuss how users can share their browsing experience. And last but not least, we present an adaptable user interface, which aims to decrease information overload.

Sebastian Ryszard Kruk, Adam Gzella, Filip Czaja, Władysław Bultrowicz, Ewelina Kruk

Matching of Ontologies with XML Schemas Using a Generic Metamodel

Schema matching is the task of automatically computing correspondences between schema elements. A multitude of schema matching approaches exists for various scenarios using syntactic, semantic, or instance information. The schema matching problem is aggravated by the fact that the models to be matched are often represented in different modeling languages, e.g. OWL, XML Schema, or SQL DDL. Consequently, besides being able to match models in the same metamodel, a schema matching tool must be able to compute reasonable results when matching models in heterogeneous modeling languages. Therefore, we developed a matching component as part of our model management system GeRoMeSuite, which is based on our generic metamodel GeRoMe. As GeRoMe provides a unified representation of models, the matcher is able to match models represented in different languages with each other. In this paper, we show in particular the results for matching XML Schemas with OWL ontologies, as is often required for the semantic annotation of existing XML data sources. GeRoMeSuite allows for flexible configuration of the matching system; various matching algorithms for element- and structure-level matching are provided and can be combined freely using different ways of aggregation and filtering in order to define new matching strategies. This makes the matcher highly configurable and extensible. We evaluated our system with several pairs of XML Schemas and OWL ontologies and compared its performance with results from other systems. The results are considerably better, which shows that a matching system based on a generic metamodel is favorable for heterogeneous matching tasks.

Christoph Quix, David Kensche, Xiang Li

Labeling Data Extracted from the Web

We consider finding descriptive labels for anonymous, structured datasets, such as those produced by state-of-the-art Web wrappers. We give a probabilistic model to estimate the affinity between attributes and labels, and describe a method that uses a Web search engine to populate the model. We discuss a method for finding good candidate labels for unlabeled datasets. Ours is the first unsupervised labeling method that does not rely on mining the HTML pages containing the data. Experimental results with data from 8 different domains show that our methods achieve high accuracy even with very few search engine accesses.

Altigran S. da Silva, Denilson Barbosa, João M. B. Cavalcanti, Marco A. S. Sevalho
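One common way to turn search-engine hit counts into an attribute/label affinity score is pointwise mutual information (PMI). The sketch below illustrates that general idea only; the paper's actual probabilistic model may differ, and all hit counts, labels, and the corpus size `N` here are invented.

```python
# Hypothetical sketch: PMI over (made-up) search-engine hit counts as an
# affinity score between a column of anonymous values and candidate labels.
from math import log

N = 1_000_000  # assumed total number of indexed documents

def pmi(hits_a, hits_b, hits_ab):
    # PMI(a, b) = log( P(a,b) / (P(a) * P(b)) ), probabilities estimated
    # as hit counts over corpus size N.
    if not (hits_a and hits_b and hits_ab):
        return float("-inf")
    return log((hits_ab / N) / ((hits_a / N) * (hits_b / N)))

hits_values = 15_000  # hits for sample values of the anonymous column
candidates = {
    # label: (hits(label), hits(label AND sample values))
    "price":  (120_000, 9_000),
    "height": (80_000, 120),
}

best = max(candidates,
           key=lambda l: pmi(candidates[l][0], hits_values, candidates[l][1]))
print(best)  # → price
```

A label that co-occurs with the sample values far more often than chance gets a positive PMI and wins; a plausible-looking but unrelated label is pushed below zero.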

Ontology Applications

Data Quality Enhancement of Databases Using Ontologies and Inductive Reasoning

The objective of this paper is twofold: to create domain ontologies by induction on source databases, and to enhance data quality features in relational databases using these ontologies. The proposed method consists of the following steps: (1) transforming domain-specific controlled terminologies into Semantic Web compliant Description Logics, (2) associating new axioms with concepts of these ontologies based on inductive reasoning on source databases, and (3) providing domain experts with an ontology-based tool to enhance the data quality of the source databases. This last step aggregates tuples using ontology concepts and checks the characteristics of those tuples against the concepts’ properties. We present a concrete example of this solution in a medical application using well-established drug-related terminologies.

Olivier Curé, Robert Jeansoulin

A Web Services-Based Annotation Application for Semantic Annotation of Highly Specialised Documents About the Field of Marketing

The field of marketing is ever-changing. Each shift in the focus of marketing may have an impact on the current terminology in use, and therefore compiling marketing terminology and knowledge can help marketing managers and scholars keep track of the ongoing evolution of the field. However, processing highly specialised documents about a particular domain is a delicate and very time-consuming activity performed by domain experts and trained terminologists, and one which cannot easily be delegated to automatic tools. This paper presents a Web services-based application to automate the semantic annotation and text categorisation of highly specialised documents, where domain knowledge is encoded as OWL domain ontology fragments that are used as the inputs and outputs of Web services. The approach presented outlines the use of OWL-S and OWL’s XML presentation syntax to obtain Web services that easily deal with terminological background knowledge. To validate the proposal, the research has focused on expert-to-expert documents from the marketing field. The emphasis of the research approach presented is on the end users (marketing experts and trained terminologists), who are not computer experts and not familiar with Semantic Web technologies.

Mercedes Argüello Casteleiro, Mukhtar Abusa, Maria Jesus Fernandez Prieto, Veronica Brookes, Fonbeyin Henry Abanda

Ontology Based Categorization in eGovernment Application

Applications in the eGovernment domain often manage a knowledge base containing information about citizens. In this domain, many commissions have to rule on citizens’ conditions in order to grant some assistance. Processes for classifying citizens have to be proposed to simplify the work of these commissions.

We propose a method for automatically classifying instances of concepts in knowledge bases. In this categorization, instances may themselves belong to a category defined by a rule, or may be associated with specific instances or concepts defined in an ontology. We consider three types of classification that can be applied to the main criteria of social care applications.

We also present a module allowing tools to get all required information about the categorization of elements in a knowledge base and in particular the categorization of citizens.

Claude Moulin, Fathia Bettahar, Jean-Paul Barthès, Marco Luca Sbodio

Semantic Matching Based on Enterprise Ontologies

Semantic Web technologies have in recent years also started to find their way into the world of commercial enterprises. Enterprise ontologies can be used as a basis for determining the relevance of information with respect to the enterprise, and the interests of individuals can be expressed by means of the enterprise ontology. The main contribution of our approach is the integration of point-set distance measures with a modified semantic distance measure for pair-wise concept distance calculation. Our combined measure can be used to determine the intra-ontological distance between sub-ontologies.

Andreas Billig, Eva Blomqvist, Feiyu Lin

ODBASE 2007 PC Co-chairs’ Message

As in recent years, the focus of the ODBASE conference lies in addressing research issues that bridge traditional boundaries between disciplines such as databases, artificial intelligence, networking, data extraction, and mobile computing. There has been an increasing focus on semantic technologies in ODBASE. Work on semantic modeling technologies is progressively being scaled up to handle millions of triples, permitting the adoption of semantic applications within a few days. The envelope is progressively being pushed to enable even faster, wider, and broader enterprise-wide and Web-scale applications. ODBASE 2007 also encouraged the submission of papers that examine the information needs of various applications, including electronic commerce, electronic government, mobile systems, and bioinformatics.

Tharam Dillon, Michele Missikoff, Steffen Staab

Backmatter
