
2006 | Book

On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE

OTM Confederated International Conferences, CoopIS, DOA, GADA, and ODBASE 2006, Montpellier, France, October 29 - November 3, 2006. Proceedings, Part I

Edited by: Robert Meersman, Zahir Tari

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Cooperative Information Systems (CoopIS) 2006 International Conference

CoopIS 2006 International Conference (International Conference on Cooperative Information Systems) PC Co-chairs’ Message

Welcome to the Proceedings of the 14th International Conference on Cooperative Information Systems (CoopIS 2006), which was held in Montpellier, France, from October 29 to November 3, 2006.

The CoopIS conferences provide a forum for exchanging ideas and results on scientific research from a variety of areas, such as CSCW, Internet data management, electronic commerce, human-computer interaction, workflow management, agent technologies, P2P systems, and software architectures, to name but a few. We encourage the participation of both researchers and practitioners in order to facilitate exchange and cross-fertilization of ideas and to support the transfer of knowledge to research projects and products. Towards this goal, we accepted both research and experience papers.

Mike Papazoglou, Louiqa Raschid, Rainer Ruggaber

Keynote

Workflow-Based Coordination and Cooperation in a Service World

One of the most important roles of workflow technology in a service-oriented environment is that of providing an easy-to-use technology for service composition (so-called “orchestration”). Another important composition model in this domain is based on the technology of “coordination protocols”. We sketch the relation between orchestration and coordination protocols by describing two application areas of both technologies: the introduction of subprocesses to the service-oriented world, and facilitating outsourcing by making the splitting of processes much easier. Cooperation aspects of workflow technology are emphasized by sketching the inclusion of human tasks in orchestrations. Finally, the benefit of combining semantic technologies with orchestrations (“semantic processes”) is outlined, which aims at simplifying the creation of orchestrations.

Frank Leymann

Distributed Information Systems I

Distributed Triggers for Peer Data Management

A network of peer database management systems differs from conventional multidatabase systems in assuming the absence of any central control, no global schema, transient connection of participating peer DBMSs, and evolving coordination among databases. We describe distributed triggers to support data coordination in this setting. The execution of our triggers requires coordination among the involved peer databases. We present an SQL3-compatible trigger language for the P2P setting, and extend the SQL3 processing mechanism accordingly. Our trigger processing mechanism consists of an execution semantics, a set of termination protocols to deal with peer transiency, and a set of protocols for managing peer acquaintances in the presence of distributed triggers. We show preliminary experimental results on our mechanism.

Verena Kantere, Iluju Kiringa, Qingqing Zhou, John Mylopoulos, Greg McArthur
Satisfaction-Based Query Load Balancing

We consider the query allocation problem in open and large distributed information systems. Provider sources are heterogeneous, autonomous, and have finite capacity to perform queries. A main objective in query allocation is to obtain good response time. Most of the work towards this objective has dealt with finding the most efficient providers, but little attention has been paid to satisfying the providers’ interest in performing certain queries. In this paper, we address both sides of the problem. We propose a query allocation approach which allows providers to express their intention to perform queries based on their preference and satisfaction. We compare our approach to both query load balancing and economic approaches. The experimental results show that our approach yields high efficiency while supporting the providers’ preferences in adequacy with the query load. We also show that our approach guarantees interesting queries to providers even under low query arrival rates. In the context of open distributed systems, our approach outperforms traditional query load balancing approaches as it encourages providers to stay in the system, thus preserving the full system capacity.

Jorge-Arnulfo Quiané-Ruiz, Philippe Lamarre, Patrick Valduriez
Efficient Dynamic Operator Placement in a Locally Distributed Continuous Query System

In a distributed processing environment, the static placement of query operators may result in unsatisfactory system performance due to unpredictable factors such as changes in servers’ load, data arrival rates, etc. The problem is exacerbated for continuous (and long-running) monitoring queries over data streams, as any suboptimal placement will affect the system for a very long time. In this paper, we formalize and analyze the operator placement problem in the context of a locally distributed continuous query system. We also propose a solution, asynchronous and local, to dynamically manage the load across the system nodes. Essentially, during runtime, we migrate query operators/fragments from overloaded nodes to lightly loaded ones to achieve better performance. Heuristics are also proposed to maintain good data flow locality. Results of a performance study show the effectiveness of our technique.

Yongluan Zhou, Beng Chin Ooi, Kian-Lee Tan, Ji Wu

Distributed Information Systems II

Views for Simplifying Access to Heterogeneous XML Data

We present XyView, a practical solution for fast development of user-oriented (web forms) and machine-oriented (web services) applications over a repository of heterogeneous schema-free XML documents. XyView provides the means to view such a repository as an array, queried using a QBE-like interface or through simple selection/projection queries. Close to the concept of the universal relation, it extends it in two main ways: (i) the input is not a relational schema but a potentially large set of XML data guides; (ii) the view is not defined explicitly by a query but implicitly by various mappings so as to avoid data loss and duplicates generated by joins. Developed on top of the Xyleme content management system, XyView can easily be adapted to any system supporting XQuery.

Dan Vodislav, Sophie Cluet, Grégory Corona, Imen Sebei
SASMINT System for Database Interoperability in Collaborative Networks

In most suggested systems aiming to enable interoperability and collaboration among heterogeneous databases, schema matching and integration are performed manually. The SASMINT system introduced in this paper proposes a (semi-)automated approach to tackle the following: 1) identification of the syntactic/semantic/structural similarities between the donor and recipient schemas to resolve their heterogeneities, 2) suggestion of corresponding mappings among the pairs of matched components, 3) facilitation of user interaction with the system, necessary for validation/enhancement of results, and 4) generation of a proposed integrated schema, and a set of derivation rules for each of its components to support query processing against integrated sources. Unlike other systems that typically apply one specific algorithm, SASMINT applies a hybrid approach for schema matching that combines a selection of algorithms from NLP and graph theory. Furthermore, SASMINT exploits the user-validated schema matching results in its semi-automatic generation of the integrated schema and its necessary derivations.

Ozgul Unal, Hamideh Afsarmanesh
Querying E-Catalogs Using Content Summaries

With the rapid development of e-services on the Web, an increasing number of e-catalogs are becoming accessible to users. Many e-catalogs provide information about similar types of products/services. To simplify users’ information-searching effort, data integration systems have been developed to integrate e-catalogs providing similar types of information, so that users can query those e-catalogs through a mediator with a uniform query interface. The conventional approach to answering a query received by a mediator is to select e-catalogs purely based on their query capabilities, i.e., their query interface specifications. However, an e-catalog having the capability to answer a query does not mean it has relevant answers to the query. To avoid wasting resources on querying catalogs that do not generate an answer, in this paper we propose to use a catalog content summary as a filter and to select the relevant e-catalogs for a given query based not only on their query capabilities but also on their content relevance to the query. A multi-attribute content (MAC) summary is proposed to describe an e-catalog with respect to its content. With a MAC summary, an e-catalog is selected to answer a query only if it is likely to have answers to the query. The MAC summary can be constructed and updated using answers returned from e-catalogs, and therefore the e-catalogs need not be cooperative. We evaluated the MAC summary on 50 e-catalogs, and the experimental results were promising.

Aixin Sun, Boualem Benatallah, Mohand-Saïd Hacid, Mahbub Hassan

Workflow Modelling

WorkflowNet2BPEL4WS: A Tool for Translating Unstructured Workflow Processes to Readable BPEL

This paper presents WorkflowNet2BPEL4WS, a tool to automatically map a graphical workflow model expressed in terms of Workflow Nets (WF-nets) onto BPEL. The Business Process Execution Language for Web Services (BPEL) has emerged as the de-facto standard for implementing processes and is supported by an increasing number of systems (cf. the IBM WebSphere Choreographer and the Oracle BPEL Process Manager). While being a powerful language, BPEL is difficult to use. Its XML representation is very verbose and readable only to the trained eye. It offers many constructs, and typically things can be implemented in many ways, e.g., using links and the flow construct or using sequences and switches. As a result, only experienced users are able to select the right construct. Some vendors offer a graphical interface that generates BPEL code. However, the graphical representations are a direct reflection of the BPEL code and not easy to use by end-users. Therefore, we provide a mapping from WF-nets to BPEL. This mapping builds on the rich theory of Petri nets and can also be used to map other languages (e.g., UML, EPC, BPMN, etc.) onto BPEL. To evaluate WorkflowNet2BPEL4WS we used more than 100 processes modeled using Protos (the most widely used business process modeling tool in the Netherlands), automatically converted them into CPN Tools, and applied our mapping. The results of this evaluation are very encouraging and show the applicability of our approach.

Kristian Bisgaard Lassen, Wil M. P. van der Aalst
Let’s Dance: A Language for Service Behavior Modeling

In Service-Oriented Architectures (SOAs), software systems are decomposed into independent units, namely services, that interact with one another through message exchanges. To promote reuse and evolvability, these interactions are explicitly described right from the early phases of the development lifecycle. Up to now, emphasis has been placed on capturing structural aspects of service interactions. Gradually though, the description of behavioral dependencies between service interactions is gaining increasing attention as a means to push forward the SOA vision. This paper deals with the description of these behavioral dependencies during the analysis and design phases. The paper outlines a set of requirements that a language for modeling service interactions at this level should fulfill, and proposes a language whose design is driven by these requirements.

Johannes Maria Zaha, Alistair Barros, Marlon Dumas, Arthur ter Hofstede
Dependability and Flexibility Centered Approach for Composite Web Services Modeling

The interest surrounding the Web services (WS) composition issue has been growing tremendously. In the near future, it is expected to prompt a veritable shift in the history of distributed computing by making the Service-Oriented Architecture (SOA) a reality. Yet, the way ahead is still long. A careful investigation of a major part of the solutions proposed so far reveals that they follow a workflow-like composition approach and that they view failures as exceptional situations that need not be a primary concern. In this paper, we claim that obeying these assumptions in the WS realm may critically constrain the chances of achieving a high dependability level and may significantly hamper flexibility. Motivated by these arguments, we propose a WS composition modeling approach that accepts the inevitability of failures and enriches the composition with concepts that can add flexibility and dependability but that are not part of the WS architecture pillars, namely, the state, the transactional behavior, the vitality degree, and the failure recovery. In addition, we describe a WS composition in terms of definition rules, composability rules, and ordering rules, and we introduce a graphical and a formal notation to ensure that a WS composition is easily and dynamically adaptable to best suit the requirements of a continuously changing environment. Our approach can be seen as a higher level of abstraction over many of the current solutions, since it extends them with the support required to achieve higher flexibility, dependability, and expressive power.

Neila Ben Lakhal, Takashi Kobayashi, Haruo Yokota
Aspect-Oriented Workflow Languages

Most available aspect-oriented languages today are extensions to programming languages. However, aspect-orientation, which is a paradigm for decomposition and modularization, is not only applicable in that context. In this paper, we introduce aspect-oriented software development concepts to workflow languages in order to improve the modularity of workflow process specifications with respect to crosscutting concerns and crosscutting changes. In fact, crosscutting concerns such as data validation and security cannot be captured in a modular way when using the constructs provided by current workflow languages. We will propose a concern-based decomposition of workflow process specifications and present the main concepts of aspect-oriented workflow languages using AO4BPEL, which is an aspect-oriented workflow language for Web Service composition.

Anis Charfi, Mira Mezini

Workflow Management and Discovery

A Portable Approach to Exception Handling in Workflow Management Systems

Although the efforts of the Workflow Management Coalition (WfMC) led to the definition of a standard process definition language (XPDL), there is still no standard for the definition of expected exceptions in workflows. Moreover, the very few Workflow Management Systems (WfMSs) capable of managing exceptions provide a proprietary exception handling unit, preventing workflow exception definitions from being portable from one system to another.

In this paper, we show how generic process definitions based on XPDL can be seamlessly enriched with standard-conformant exception handling constructs, starting from a high-level event-condition-action language. We further introduce a suitable rule compiler, which yields portable process and exception definitions in a fully automated way.

Carlo Combi, Florian Daniel, Giuseppe Pozzi
Methods for Enabling Recovery Actions in Ws-BPEL

Self-healing is an emerging requirement for information systems, where processes are ever more complicated and many autonomous actors are involved. Roughly, self-healing mechanisms can be viewed as a set of automatic recovery actions fired at run-time according to the detected fault. These actions can be at the infrastructure level, i.e., transparent to the process, or they can be defined in the workflow model and executed by the workflow engine. In the Service-Oriented Computing world, Ws-BPEL is the most widely used language for web-service orchestration, but the standard recovery mechanisms provided by Ws-BPEL are not enough to implement, with reasonable effort, many suitable recovery actions.

This paper presents an approach where a designer defines a Ws-BPEL process annotated with information about recovery actions; a preprocessing phase then generates, from this “annotated” Ws-BPEL, a “standard” Ws-BPEL file understandable by a standard Ws-BPEL engine. This approach has the advantage of avoiding any change to the engine, using its standard capabilities to define specific behaviors that realize recovery actions but in the end are still a set of Ws-BPEL basic and structured activities.

Stefano Modafferi, Eugenio Conforti
BPEL Processes Matchmaking for Service Discovery

The capability to easily find useful services (software applications, software components, scientific computations) is becoming increasingly critical in several fields. Current approaches for service retrieval are mostly limited to matching their inputs/outputs. Recent works have demonstrated that this approach is not sufficient to discover relevant components. In this paper we argue that, in many situations, service discovery should be based on the specification of service behavior. The underlying idea is to develop matching techniques that operate on behavior models and allow delivery of partial matches and evaluation of the semantic distance between these matches and the user requirements. Consequently, even if a service satisfying the user requirements exactly does not exist, the most similar ones will be retrieved and proposed for reuse by extension or modification. To do so, we reduce the problem of behavioral matching to a graph matching problem and adapt existing algorithms for this purpose. A prototype is presented which takes as input two BPEL models and evaluates the semantic distance between them; the prototype also provides the script of edit operations that can be used to alter the first model to render it identical to the second one.

Juan Carlos Corrales, Daniela Grigori, Mokrane Bouzeghoub
Evaluation of Technical Measures for Workflow Similarity Based on a Pilot Study

Service discovery of state-dependent services has to take workflow aspects into account. To increase the usability of service discovery, the result list of services should be ordered with regard to the relevance of the services. Means of ordering a list of workflows according to their similarity with regard to a query are missing. In this paper, different similarity measures are presented and evaluated based on a pilot of an empirical study. In particular, the different measures are compared with the study results. It turns out that the quality of the different measures differs significantly.

Andreas Wombacher

Dynamic and Adaptable Workflows

Evolution of Process Choreographies in DYCHOR

Process-aware information systems have to be frequently adapted due to business process changes. One important challenge not adequately addressed so far concerns the evolution of process choreographies, i.e., the change of interactions between partner processes in a cross-organizational setting. If respective modifications are applied in an uncontrolled manner, inconsistencies or errors may subsequently occur. In particular, modifications of private processes performed by a single party may affect the implementation of the private processes of partners as well. In this paper we present the DYCHOR (DYnamic CHOReographies) framework, which allows process engineers to detect how changes of private processes may affect related public views and, if so, how they can be propagated to the public and private processes of partners. In particular, DYCHOR exploits the semantics of the applied changes in order to automatically determine the adaptations necessary for the partner processes. Altogether, our framework provides an important contribution towards the realization of adaptive, cross-organizational processes.

Stefanie Rinderle, Andreas Wombacher, Manfred Reichert
Worklets: A Service-Oriented Implementation of Dynamic Flexibility in Workflows

This paper presents the realisation, using a Service-Oriented Architecture, of an approach for dynamic flexibility and evolution in workflows through the support of flexible work practices, based not on proprietary frameworks but on accepted ideas of how people actually work. A set of principles has been derived from a sound theoretical base and applied to the development of worklets, an extensible repertoire of self-contained sub-processes aligned to each task, from which a dynamic runtime selection is made depending on the context of the particular work instance.

Michael Adams, Arthur H. M. ter Hofstede, David Edmond, Wil M. P. van der Aalst
Change Mining in Adaptive Process Management Systems

The widespread adoption of process-aware information systems has resulted in a wealth of computerized information about real-world processes. This data can be utilized for process performance analysis as well as for process improvement. In this context, process mining offers promising perspectives. So far, existing mining techniques have been applied to operational processes, i.e., knowledge is extracted from execution logs (process discovery), or execution logs are compared with some a-priori process model (conformance checking). However, execution logs constitute only one kind of data gathered during process enactment. In particular, adaptive processes provide additional information about process changes (e.g., ad-hoc changes of single process instances) which can be used to enable organizational learning. In this paper we present an approach for mining change logs in adaptive process management systems. The change process discovered through process mining provides an aggregated overview of all changes that have happened so far. This, in turn, can serve as a basis for all kinds of process improvement actions, e.g., it may trigger process redesign or better control mechanisms.

Christian W. Günther, Stefanie Rinderle, Manfred Reichert, Wil van der Aalst

Services Metrics and Pricing

A Link-Based Ranking Model for Services

The number of services on the web is growing every day and finding useful and efficient ranking methods for services has become an important issue in modern web applications. In this paper we present a link-based importance model and efficient algorithms for distributed services collaborating through service calls. We adapt the PageRank algorithm and define a service importance that reflects its activity and its contribution to the quality of other services.

Camelia Constantin, Bernd Amann, David Gross-Amblard
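The abstract above adapts PageRank to a graph of service calls. As a rough illustration only (not the authors’ importance model), a generic PageRank-style iteration over a hypothetical service call graph could look like the following sketch; the call graph, damping factor, and convergence threshold are all illustrative assumptions:

```python
def service_rank(calls, damping=0.85, tol=1e-9):
    """Generic PageRank-style iteration over a service call graph.

    calls: dict mapping each service to the list of services it calls.
    Returns a dict of importance scores (hypothetical example, not the
    paper's model).
    """
    services = list(calls)
    n = len(services)
    rank = {s: 1.0 / n for s in services}
    while True:
        new = {}
        for s in services:
            # Importance flows from callers to the services they call,
            # split evenly among each caller's outgoing calls.
            incoming = sum(rank[c] / len(calls[c])
                           for c in services if s in calls[c])
            new[s] = (1 - damping) / n + damping * incoming
        if max(abs(new[s] - rank[s]) for s in services) < tol:
            return new
        rank = new
```

For a symmetric two-service graph the scores converge to equal values; in general, a service called by many active services accumulates a higher score, which is the intuition the abstract describes.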
Quality Makes the Information Market

In this paper we consider information exchange via the Web to be an information market. The notion of quality plays an important role on this information market. We present a model of quality and discuss how this model can be operationalized.

This leads us to quality measurement, interpretation of measurements and the associated accuracy. An illustration in the form of a basic quality assessment system is presented.

B. van Gils, H. A. (Erik) Proper, P. van Bommel, T. P. van der Weide
Bid-Based Approach for Pricing Web Service

We consider a problem of Web service resource allocation in an economic setting. We assume that different requestors have different valuations for services and a deadline for executing a service, after which it is no longer required. We formally show an optimal offline allocation that maximizes the total welfare, denoted as the total benefit of the requestors. We then propose a bid-based approach to resource allocation and pricing for Web services. Using a detailed simulation, we analyze its behavior and performance compared to other known algorithms. We empirically show that flexibility in service price benefits both the provider in terms of profit and the requestors in terms of welfare.

Our problem motivation stems from the expanding use of Service-Oriented Architecture (SOA) for outsourcing enterprise activities. While the most common method for pricing a Web service nowadays is a fixed-price policy (with a price of 0 in many cases), service-oriented architectures will increasingly generate competition among providers, underscoring the importance of finding methodologies for pricing Web service execution.

Inbal Yahav, Avigdor Gal, Nathan Larson
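To make the bid-based idea concrete, here is a deliberately simplified greedy sketch, not the paper’s mechanism or its optimal offline allocation: requests carry a bid and a deadline, the provider serves at most one request per time step, and welfare is the sum of accepted bids. All names and the single-slot-per-step assumption are hypothetical.

```python
def allocate(requests, horizon):
    """Greedy bid-based allocation with deadlines (illustrative sketch).

    requests: list of (request_id, bid, deadline) tuples, where deadline
    is the last time step at which the request may still run.
    Returns ({time_step: request_id}, total_welfare).
    """
    pending = sorted(requests, key=lambda r: -r[1])  # highest bid first
    schedule, served, welfare = {}, set(), 0.0
    for t in range(horizon):
        for rid, bid, deadline in pending:
            # Serve the highest unserved bid whose deadline has not passed.
            if rid not in served and t <= deadline:
                schedule[t] = rid
                served.add(rid)
                welfare += bid
                break
    return schedule, welfare
```

Note that this greedy rule can miss the welfare-optimal schedule (it may serve a high bid early and let a tight-deadline request expire), which is precisely why the paper compares bid-based allocation against other algorithms.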

Formal Approaches to Services

Customizable-Resources Description, Selection, and Composition: A Feature Logic Based Approach

The heterogeneity of users’ preferences in distributed systems often forces resource suppliers to offer customizable resources in order to fulfill different customer needs. We present in this paper a Feature Logic based approach to customizable-resource description, selection, and composition. In our approach, resources and requests are both specified in a logical framework by feature terms. The feature-term unification technique allows reasoning on these specifications in order to select, and possibly compose, the resources that are candidates to satisfy a client request.

Yacine Sam, François-Marie Colonna, Omar Boucelma
Defining and Modelling Service-Based Coordinated Systems

This paper introduces MEO, a model for securing service-based coordinated systems. The model uses constraints for expressing the application logic of a coordinated system and its required security strategies. Coordination activities are the key concepts used for controlling the execution of participating services. Constraints are specified as pre- and postconditions of these coordination activities.

Thi-Huong-Giang Vu, Christine Collet, Genoveva Vargas-Solar
Web Service Mining and Verification of Properties: An Approach Based on Event Calculus

Web services are becoming more and more complex, involving numerous interacting business objects within complex distributed processes. In order to fully exploit Web service business opportunities while ensuring correct and reliable execution, Web service interactions must be analyzed and tracked so that they can be well understood and controlled. The work described in this paper is a contribution to these issues for Web-service-based process applications.

This article describes a novel way of applying process mining techniques to Web service logs in order to enable “Web service intelligence”. Our work applies Web service log-based analysis and process mining techniques in order to provide semantic knowledge about the context of, and the reasons for, discrepancies between process models and related instances.

Mohsen Rouached, Walid Gaaloul, Wil M. P. van der Aalst, Sami Bhiri, Claude Godart

Trust and Security in Cooperative IS

Establishing a Trust Relationship in Cooperative Information Systems

One method for establishing a trust relationship between two servers in a cooperative information system is to use a mutual attestation protocol based on hardware that implements the Trusted Computing Group’s TPM specification. In developing an eHealth demonstration system, we found the efficiency of such a protocol to be relatively low. This inefficiency was a result of the high number of TPM function calls in response to the large number of protocol messages that must be sent by the end server systems to establish mutual trust between them prior to sending each application message (in our case, a medical record). To address this inefficiency, we developed a session-based mutual attestation protocol, in which multiple application messages are sent over an interval of time during which an established trust relationship holds. Moreover, the protocol partially addresses the security flaw due to the time interval between the time of attestation and the time of use. This paper presents this new protocol, once again utilizing TPM microcontroller hardware, and compares its performance with that of our previous (per-record) mutual attestation protocol.

Julian Jang, Surya Nepal, John Zic
A Unifying Framework for Behavior-Based Trust Models

Trust models have been touted to facilitate cooperation among unknown entities. Existing behavior-based trust models typically include a fixed evaluation scheme to derive the trustworthiness of an entity from knowledge about its behavior in previous interactions. This paper in turn proposes a framework for behavior-based trust models for open environments with the following distinctive characteristic. Based on a relational representation of behavior-specific knowledge, we propose a trust-policy algebra allowing for the specification of a wide range of trust-evaluation schemes. A key observation is that the evaluation of the standing of an entity in the network of peers requires centrality indices, and we propose a first-class operator of our algebra for computation of centrality measures. This paper concludes with some preliminary performance experiments that confirm the viability of our approach.

Christian von der Weth, Klemens Böhm
A WS-Based Infrastructure for Integrating Intrusion Detection Systems in Large-Scale Environments

The growing need for information sharing among partnering organizations or members of virtual organizations poses a great security challenge. One of the key aspects of this challenge is deploying intrusion detection systems (IDSs) that can operate in heterogeneous, large-scale environments. This is particularly difficult because the different networks involved generally use IDSs that have not been designed to work in a cooperative fashion. This paper presents a model for integrating intrusion detection systems in such environments. The main idea is to build compositions of IDSs that work as unified systems, using a service-oriented architecture (SOA) based on Web Services technology. The necessary interoperability among the elements of the compositions is achieved through the use of standardized specifications, mainly those developed by the IETF, W3C, and OASIS. Dynamic compositions are supported through service orchestration. We also describe a prototype implementation of the proposed infrastructure and analyze some results obtained through experimentation with this prototype.

José Eduardo M. S. Brandão, Joni da Silva Fraga, Paulo Manoel Mafra, Rafael R. Obelheiro

P2P Systems

An Adaptive Probabilistic Replication Method for Unstructured P2P Networks

We present APRE, a replication method for unstructured Peer-to-Peer overlays. The goal of our method is to achieve real-time replication of even the most sparsely located content relative to demand. APRE adaptively expands or contracts the replica set of an object in order to improve the sharing process and achieve a low load distribution among the providers. To achieve that, it utilizes search knowledge to identify possible replication targets inside query-intensive areas of the overlay. We present detailed simulation results in which APRE exhibits both efficiency and robustness with respect to the number of requesters and the respective request rates. The scheme proves particularly useful in the event of flash crowds, managing to quickly adapt to sudden surges in load.

Dimitrios Tsoumakos, Nick Roussopoulos
Towards Truthful Feedback in P2P Data Structures

Peer-to-Peer data structures (P2P data structures) let a large number of anonymous peers share the data-management workload. A common assumption behind such systems is that peers behave cooperatively. But as with many distributed systems where participation is voluntary, and the participants are not clearly observable, unreliable behavior is the dominant strategy. This calls for reputation systems that help peers choose reliable peers to interact with. However, if peers exchange feedback on experiences with other peers, spoof feedback becomes possible, compromising the reputation system. In this paper we propose and evaluate measures against spoof feedback in P2P data structures. While others have investigated mechanisms for truthtelling recently, we are not aware of any studies in P2P environments. The problem is more difficult in our context because detecting unreliable peers is more difficult as well. On the other hand, a peer can observe the utility of feedback obtained from other peers, and our approach takes advantage of this. To assess the effectiveness of our approach, we have conducted extensive analytical and experimental evaluations. As a result, truthful feedback tends to have a much higher weight than spoof feedback, and collaboration attacks are difficult to carry out under our approach.

Erik Buchmann, Klemens Böhm, Christian von der Weth
Efficient Peer-to-Peer Belief Propagation

In this paper, we present an efficient approach for distributed inference. We use belief propagation’s message-passing algorithm on top of a DHT storing a Bayesian network. Nodes in the DHT run a variant of the spring relaxation algorithm to redistribute the Bayesian network among them. Thereafter, correlated data is stored close together, reducing the message cost for inference. We simulated our approach in Matlab and show the message reduction and the achieved load balance for random, tree-shaped, and scale-free Bayesian networks of different sizes.

As a possible application, we envision a distributed software knowledge base maintaining encountered software bugs under users’ system configurations, together with possible solutions for other users having similar problems. Users would not only be able to repair their systems but also to foresee possible problems before installing software updates or new applications.

Roman Schmidt, Karl Aberer

Collaborative Systems Design and Development

Designing Cooperative IS: Exploring and Evaluating Alternatives

In the early stages of cooperative information system development, one of the major problems is to explore the space of alternative ways of assigning and delegating goals among system actors. The exploration process should be guided by a number of criteria to determine whether the adopted alternative is good enough. This paper frames the problem of designing actor dependency networks as a multi-agent planning problem and adopts an off-the-shelf planner to offer a tool (P-Tool) that generates alternative actor dependency networks and evaluates them in terms of metrics derived from the Game Theory literature. We also offer preliminary experimental results on the scalability of the approach.

Volha Bryl, Paolo Giorgini, John Mylopoulos
Natural MDA: Controlled Natural Language for Action Specifications on Model Driven Development

Current technologies are continuously evolving, and software companies need to adapt their processes to these changes. Such adaptation often requires new investments in training and development. To address this issue, the OMG defined a model driven development (MDD) approach which insulates business and application logic from technology evolution. Current MDD approaches fall short of fully deriving implementations from models described at a high level of abstraction. We propose a controlled natural language to complement UML models as an action specification language. In this article, we describe the language, its impact on systems development, and the tools developed to support it. To demonstrate the language’s usability, we present an application example.

Luciana N. Leal, Paulo F. Pires, Maria Luiza M. Campos, Flávia C. Delicato
Managing Distributed Collaboration in a Peer-to-Peer Network

Shared mutable information objects called u-forms provide an attractive foundation on which to build collaborative systems. As we scale up such systems from small fully-connected workgroups to large, highly distributed, and partially disconnected groups, we have found that peer-to-peer technology and optimistic replication strategies provide a cost-effective mechanism for maintaining good performance. Unfortunately, such systems present well-known coordination and consistency problems. This paper discusses strategies for addressing those difficulties at different levels of the system design, focusing on providing solutions in the information architecture rather than at the infrastructure layer. Addressing problems at this higher layer allows greater freedom in design, and simplifies moving from one infrastructural base to another as technology evolves. Our primary strategy is to enable robust decentralized and asynchronous collaboration while designing architectures that do not rely on two users writing to the same u-form at the same time in different venues. Techniques are provided for simple messaging, collaborative maintenance of collections, indexing supporting rich query, and stand-off annotation and elaboration of third-party datasets. We outline the application of these techniques in a working collaborative system.

Michael Higgins, Stuart Roth, Jeff Senn, Peter Lucas, Dominic Widdows

Collaborative Systems Development

Developing Collaborative Applications Using Sliverware

Despite computers’ widespread use for personal applications, very few programming frameworks exist for creating synchronous collaborative applications. Existing research in CSCW (computer supported cooperative work), specifically approaches that attempt to make current application implementations collaboration-aware, is difficult to apply for two reasons: the systems are focused too narrowly (e.g., on Internet-only applications), or the systems are simply too complicated to be adopted (e.g., they are hard to set up and adapt to concrete applications). Enabling real-time collaboration demands lightweight, modular middleware—sliverware—that enables the fine-grained interactions required by collaborative applications. In this paper, we introduce sliverware and give a specific example in the guise of a distributed keyboard that multiplexes input from several users into a single stream that each user receives just like input from a normal keyboard. The result is simple, real-time collaboration based on a shared, distributed view of data that enables rapid development of highly coupled coordinating applications.

Seth Holloway, Christine Julien
A Framework for Building Collaboration Tools by Leveraging Industrial Components

Groupware applications allow a distributed group of human users to work apart together over a computer network. They are difficult to develop due to the need to suit a range of collaboration tasks that often have diverse and evolving requirements. To address this problem, we propose a new framework in which shared data components conforming to a well-defined interface can be dynamically plugged in for flexible sharing, and a simple transformation tool is provided such that the myriad of industrial collaboration-transparent components can be transformed into shared components. The validity of our framework is evaluated by building a suite of typical collaboration tools such as group editors. Under our framework, most components in the Java Development Kit (JDK) can be transformed automatically for prototyping collaboration tools. With minimal manual work, these tools can be adapted to achieve advanced flexibility; e.g., data and control components can be bound dynamically to switch control protocols.

Du Li, Yi Yang, James Creel, Blake Dworaczyk
Evaluation of a Conceptual Model-Based Method for Discovery of Dependency Links

In practice, dependency management often suffers from the labor intensity and complexity of creating and maintaining dependency relations. Our method targets projects where developers are geographically distributed and a wide range of tools is used. A conceptual domain model is used to inter-relate the development objects and to automate dependency link discovery. The proposed method is based on associating development objects with concepts from the domain model. These associations are used to compute dependencies among development objects and are stepwise refined into direct dependency links.

A preliminary empirical evaluation of the method was conducted, covering both performance and psychological variables. The evaluation was performed in laboratory settings using two real cases. The results, although preliminary, provide positive evidence of the ability of our method to automate the discovery of dependency relations; the analysis also indicates that the method is perceived as easy to use and useful by its potential users.

Darijus Strasunskas, Sari Hakkarainen

Cooperative IS Applications

Advanced Recommendation Models for Mobile Tourist Information

Personalized recommendations in a mobile tourist information system suffer from a number of limitations. Most pronounced is the amount of initial user information needed to build a user model. In this paper, we adopt and extend the basic concepts of recommendation paradigms by exploiting a user’s personal information (e.g., preferences, travel histories) to replace the missing information. The designed algorithms are embedded as recommendation services in our TIP prototype. We report on the results of our analysis regarding effectiveness and performance of the recommendation algorithms. We show how a number of limiting factors were successfully eliminated by our new recommender strategies.

Annika Hinze, Saijai Junmanee
Keeping Track of the Semantic Web: Personalized Event Notification

The semantic web will not be a static collection of formats, data, and metadata but will be highly dynamic in every aspect. This paper proposes a personalized event notification system for semantic web documents (ENS-SW). The system can intelligently detect and filter changes in semantic web documents by exploiting the semantic structure of those documents. In our prototype, we combine the functionalities of user profiles and distributed authoring systems. On their own, both approaches would lack the ability to handle semantic web documents.

This paper introduces the design and implementation of our event notification system for semantic web documents that handles the XML representation of RDF. We analyzed our prototype regarding accuracy and efficiency in change detection. Our system supports sophisticated change detection including partial deletion, awareness for document restructuring, and approximate filter matches.

Annika Hinze, Reuben Evans
A Gestures and Freehand Writing Interaction Based Electronic Meeting Support System with Handhelds

In this work, we present an Electronic Meeting Support system for handhelds. The design principles applied in developing the system aim to reduce the problems associated with interacting through a small screen. The human-handheld interaction is based only on gestures and freehand writing, avoiding the need for widgets and virtual keyboards. The content of the generated documents is organized as concept maps, which gives more flexibility to reorganize and merge the contributions of the meeting attendees. Our system is based on handhelds interconnected over an ad-hoc wireless network. The system architecture is peer-to-peer, avoiding the need for central repositories and thus allowing meetings to take place anywhere.

Gustavo Zurita, Nelson Baloian, Felipe Baytelman, Mario Morales

Ontologies, Databases and Applications of Semantics (ODBASE) 2006 International Conference

ODBASE 2006 International Conference (Ontologies, DataBases, and Applications of Semantics) PC Co-chairs’ Message

Welcome to the Fifth International Conference on Ontologies, Databases, and Applications of Semantics (ODBASE 2006). This year’s ODBASE conference was held in Montpellier, France from October 29 to November 3, 2006.

The ODBASE conferences provide a forum for exchanging the latest research results on ontologies, data semantics, and other areas of computing related to the Semantic Web. We encourage participation of both researchers and practitioners in order to facilitate exchange of ideas and results on semantic issues in Web information systems. Towards this goal, we accepted both research and experience papers.

Maurizio Lenzerini, Erich Neuhold, V. S. Subrahmanian

Keynote

SomeWhere: A Scalable Peer-to-Peer Infrastructure for Querying Distributed Ontologies

In this invited talk, we present the SomeWhere approach and infrastructure for building semantic peer-to-peer data management systems based on simple personalized ontologies distributed at a large scale. SomeWhere is based on a simple class-based data model in which the data is a set of resource identifiers (e.g., URIs), the schemas are (simple) definitions of classes possibly constrained by inclusion, disjunction or equivalence statements, and mappings are inclusion, disjunction or equivalence statements between classes of different peer ontologies. In this setting, query answering over peers can be done by distributed query rewriting, which can be equivalently reduced to distributed consequence finding in propositional logic. It is done by using the message-passing distributed algorithm that we have implemented for consequence finding of a clause w.r.t. a set of distributed propositional theories. We summarize its main properties (soundness, completeness and termination), and we report experiments showing that it already scales up to a thousand peers. Finally, we mention ongoing work on extending the current data model to RDF(S) and on handling possible inconsistencies between the ontologies of different peers.

M. -C. Rousset, P. Adjiman, P. Chatalic, F. Goasdoué, L. Simon

Foundations

Querying Ontology Based Database Using OntoQL (An Ontology Query Language)

Nowadays, ontologies are used in several research domains, offering the means to describe and represent concepts of information sources. Consequently, several approaches and systems storing ontologies and their instances in the same repository (database) have been proposed, and defining a query language for ontology-based databases (OBDBs) has become a challenge for the database community. In this paper, we present OntoQL, an ontology query language for OBDBs. First, we formally present the OBDB data model supported by this language. Second, we give an overview of the algebra defining the semantics of the operators used in OntoQL. Several query examples showing the interest of this language compared to traditional database query languages are given throughout the paper. Finally, we present a prototype implementation of OntoQL.

Stéphane Jean, Yamine Aït-Ameur, Guy Pierra
Description Logic Reasoning with Syntactic Updates

Various data sources on the Web tend to be highly dynamic; this is evident in prominent Web services frameworks, in which devices register or deregister their descriptions quite rapidly, and in Semantic Web portals, which allow content authors to modify or extend underlying ontologies and submit content. Such applications often leverage Description Logic (DL) reasoning for a variety of tasks (e.g., classifying Web service descriptions); however, this can introduce substantial overhead due to content fluctuation, as DL reasoners have so far only been considered for relatively static knowledge bases. This work aims to provide more efficient DL reasoning techniques for frequently changing instance bases (ABoxes). More specifically, we investigate the process of incrementally updating tableau completion graphs used for reasoning in the expressive DLs $\mathcal{SHOQ}$ and $\mathcal{SHIQ}$, which correspond to a large subset of the W3C standard Web Ontology Language, OWL-DL. We present an algorithm for updating completion graphs under the syntactic addition and removal of ABox assertions. We also provide an empirical analysis of the approach through an implementation in the OWL-DL reasoner Pellet.

Christian Halashek-Wiener, Bijan Parsia, Evren Sirin
From Folksologies to Ontologies: How the Twain Meet

Ontologies are instruments for capturing and using formal semantics, and are often the result of a “central committee controlled” style of working. A new trend on the Web is the increasing popularity of folksologies in the form of social bookmarking sites. Folksologies provide informal semantics and can be created and adopted by anybody, anytime, anywhere on the Internet. Shared meaning in a folksology emerges through the use of tags that are used to bookmark web pages, their usage frequency being considered a reliable indicator of their usefulness and acceptance.

Rather than choosing either ontologies or folksologies, hybrid emergent semantics systems are needed that combine elements of both perspectives, depending on the particular application. There is a need to analyse the larger picture, including the full range of semantics’ functionalities in their context of use.

In this paper, we outline a number of key design characteristics of emergent semantics systems (ESS). We examine the functionalities of two existing examples of well-known ESSs: del.icio.us and Piggy Bank. Using the results of this comparison, we introduce DogmaBank as a proof of concept implementation of a next-generation ESS that introduces a more advanced combination of lexical and conceptual emergent semantics functionalities.

Peter Spyns, Aldo de Moor, Jan Vandenbussche, Robert Meersman
Transactional Behavior of a Workflow Instance

Workflow management systems usually interpret a workflow definition rigidly, allowing no deviations during execution. However, there are real-life situations where users should be allowed to deviate from the prescribed static workflow definition for various reasons, including lack of information about parameter values and unavailability of required resources. To make workflow execution more flexible, this paper proposes an exception handling mechanism that allows the execution to proceed where it would otherwise have been stopped. The proposal is introduced as a set of extensions to OWL-S that capture the information required for the flexibilization mechanism. In particular, this paper focuses on the transactional behavior of a workflow instance, in the sense that it guarantees that either all actions executed by the instance terminate correctly or they are all abandoned.

Tatiana A. S. C. Vieira, Marco A. Casanova

Metadata

An Open Architecture for Ontology-Enabled Content Management Systems: A Case Study in Managing Learning Objects

An important goal of a content management system (CMS) is to acquire and organise content from different data sources in order to intelligently answer ad-hoc requests from users as well as from peer systems. Existing commercial CMSs address this issue by deploying structured metadata (e.g., XML) to categorise content and produce search indices. Unfortunately, these metadata are not expressive enough to represent content for sophisticated searching. This paper presents an open architecture framework and a Java-based reference implementation for an Ontology-enabled Content Management System (OeCMS). The reference implementation uses an open-source CMS called OpenCMS, the Protégé OWL library, and the RacerPro reasoning engine. The implemented system is a web-based management system for learning objects derived from the course and instructional materials used in several postgraduate taught courses. We believe that our OeCMS architecture and implementation provide a strong platform for developing semantic web portals in general.

Duc Minh Le, Lydia Lau
Ontology Supported Automatic Generation of High-Quality Semantic Metadata

Large amounts of data in modern information systems, such as the World Wide Web, require innovative information retrieval techniques to effectively satisfy users’ information needs. A promising approach is to exploit document semantics in the IR process. For this purpose, high-quality semantic metadata is needed. This paper introduces a method to automatically create semantic metadata by using ontologically enhanced versions of common information extraction methods, such as named entity recognition and coreference resolution. Furthermore, this work proposes the application of ontology-specific heuristic rules to further improve the quality of the generated metadata. The results of our method were evaluated using a small test collection.

Ümit Yoldas, Gábor Nagypál
Brokering Multisource Data with Quality Constraints

Access to multisource heterogeneous data is a fundamental research issue in a variety of contexts, including syndicated data retrieval, Web service selection, and cooperative information systems. In these variable contexts, the brokering approach to multisource data access provides greater flexibility than the more traditional data integration approach. The general brokering model assumes that the broker is submitted a query and has the responsibility to optimize the response along specified parameters such as time efficiency, completeness, and consistency. This paper takes a data quality perspective on data brokering and considers data accuracy. The data quality literature assumes that metadata are associated with data to describe their quality; metadata support data selection without viewing and assessing the data directly. Previous brokering approaches, on the contrary, view data. This paper compares previous results with those of a metadata-based brokering approach in which actual data are transparent to the broker. Test results comparing the delta between the data-visibility and data-transparency approaches to brokering are presented.

Danilo Ardagna, Cinzia Cappiello, Chiara Francalanci, Annalisa Groppi

Design

Enhancing the Business Analysis Function with Semantics

This paper outlines a prototypical workbench which offers semantically enhanced analytical capabilities to the business analyst. The business case for such an environment is outlined, and user scenario development is used to illustrate system requirements. Based upon ideas from meta-discourse and exploiting advances within the fields of ontology engineering, annotation, natural language processing, and personal knowledge management, the Analyst Work Bench offers the automated identification of, and linking between, business discourse items with possible propositional content. The semantically annotated results are presented visually, allowing personalised report path traversal marked up against the original source.

Sean O’Riain, Peter Spyns
Ontology Engineering: A Reality Check

The theoretical results achieved in the ontology engineering field in the last fifteen years are of incontestable value for the prospected large-scale take-up of semantic technologies. Their range of application in real-world projects is, however, so far comparatively limited, despite the growing number of ontologies available online. This restricted impact was confirmed in a three-month empirical study in which we examined over 34 contemporary ontology development projects from a process- and cost-oriented perspective. In this paper we give an account of the results of this study. We conclude that ontology engineering research should strive for a unified, lightweight, and component-based methodological framework, principally targeted at domain experts, in addition to consolidating the existing approaches.

Elena Paslaru Bontas Simperl, Christoph Tempich
Conceptual Design for Domain and Task Specific Ontology-Based Linguistic Resources

Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by development efforts, which focus on content, implementation, and time-saving issues rather than on the software engineering aspects of constructing linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach to knowledge representation.

Antonio Vaquero, Fernando Sáenz, Francisco Alvarez, Manuel de Buenaga

Ontology Mappings

Model-Driven Tool Interoperability: An Application in Bug Tracking

Interoperability of heterogeneous data sources has been extensively studied in data integration applications. However, the increasing number of tools that produce data in very different formats, such as bug tracking, version control, etc., gives rise to many different kinds of semantic heterogeneities. These heterogeneities can be expressed as mappings between the tools’ metadata, which describe the data manipulated by the tools. However, the semantics of complex mappings (n:1, 1:m and n:m relationships) is hard to support. These mappings are usually coded directly as executable transformations using arithmetic expressions, and there is no mechanism to create and reuse complex mappings. In this paper we propose a novel approach to capture different kinds of complex mappings using correspondence models. The main advantage is the use of high-level specifications for the correspondence models that enable representing different kinds of mappings. The correspondence models may be used to automatically produce executable transformations. To validate our approach, we report on an experiment with a real-world scenario using bug tracking tools.

Marcos Didonet Del Fabro, Jean Bézivin, Patrick Valduriez
Reducing the Cost of Validating Mapping Compositions by Exploiting Semantic Relationships

Defining and composing mappings are fundamental operations required in any data sharing architecture (e.g. data warehouse, data integration). Mapping composition is used to generate new mappings from existing ones and is useful when no direct mapping is available. The complexity of mapping composition depends on the amount of syntactic and semantic information in the mapping. The composition of mappings has proven to be inefficient to compute in many situations unless the mappings are simplified to binary relationships that represent “similarity” between concepts. Our contribution is an algorithm for composing metadata mappings that capture explicit semantics in terms of binary relationships. Our approach allows the hard cases of mapping composition to be detected and semi-automatically resolved, and thus reduces the manual effort required during composition. We demonstrate how the mapping composition algorithm is used to produce a direct mapping between schemas from independently produced schema-to-ontology mappings. An experimental evaluation shows that composing semantic mappings results in a more accurate composition result compared to composing mappings as morphisms.

Eduard Dragut, Ramon Lawrence
Using Fuzzy Conceptual Graphs to Map Ontologies

This paper presents a new ontology mapping method. The method addresses the case in which a non-structured ontology is to be mapped to a structured one. Both ontologies are composed of triplets of the form (object, characteristic, value). Structured means that the values describing the objects according to a given characteristic are hierarchically organized using the “a kind of” relation. The proposed method uses fuzzy conceptual graphs [8] to represent and map objects from a source ontology to a target one. First, we establish a correspondence between characteristics of the source ontology and characteristics of the target ontology based on a comparison of their associated values. Then, we propose an original way of translating the description of an object of the source ontology using characteristics and values of the target ontology. The description thus translated is represented as a fuzzy conceptual graph. Finally, a new projection operation is used to find mappings between translated objects and actual objects of the target ontology. The method has been implemented, and the results of an experiment concerning the mapping of ontologies in the field of food risk are presented.

David Doussot, Patrice Buche, Juliette Dibie-Barthélemy, Ollivier Haemmerlé
Formalism-Independent Specification of Ontology Mappings – A Metamodeling Approach

Recently, the advantages of metamodeling for the graphical specification of ontologies have been recognized by the semantic web community. This has led to a number of activities concerned with the development of graphical modeling approaches for the Web Ontology Language based on the Meta Object Facility (MOF) and the Unified Modeling Language (UML). An aspect that has not been addressed so far is the need to specify mappings between heterogeneous ontologies. With an increasing number of ontologies available, the problem of specifying mappings is becoming more important, and the rationale for providing model-based graphical modeling support for mappings is the same as for the ontologies themselves. In this paper, we therefore propose a MOF-based metamodel for mappings between OWL DL ontologies.

Saartje Brockmans, Peter Haase, Heiner Stuckenschmidt

Information Integration

Virtual Integration of Existing Web Databases for the Genotypic Selection of Cereal Cultivars

The paper presents the development of a virtual database for the genotypic selection of cereal cultivars starting from phenotypic traits.

The database is realized by integrating two existing web databases, Gramene and Graingenes, and a pre-existing data source developed by the Agrarian Faculty of the University of Modena and Reggio Emilia. The integration process gives rise to a virtual integrated view of the underlying sources. This integration is obtained using the MOMIS system (Mediator envirOnment for Multiple Information Sources), a framework developed by the Database Group of the University of Modena and Reggio Emilia (www.dbgroup.unimo.it). MOMIS performs information extraction and integration from both structured and semistructured data sources. Information integration is performed in a semi-automatic way, by exploiting the knowledge in a Common Thesaurus (defined by the framework) and the descriptions of source schemas with a combination of clustering and Description Logics techniques. MOMIS allows querying information in a way that is transparent for the user, regardless of the specific languages of the sources. The result obtained by applying MOMIS to the Gramene and Graingenes web databases is a queriable virtual view that integrates the two sources and allows performing genotypic selection of cultivars of barley, wheat, and rice based on phenotypic traits, regardless of the specific languages of the web databases. The project is conducted in collaboration with the Agrarian Faculty of the University of Modena and Reggio Emilia and is funded by the Regional Government of Emilia Romagna.

Sonia Bergamaschi, Antonio Sala
SMOP: A Semantic Web and Service Driven Information Gathering Environment for Mobile Platforms

In this paper, we introduce a mobile services environment, SMOP, in which semantic web based service capability matching and location-aware information gathering are both used to develop mobile applications. Domain independence and support for semantic matching of mobile service capabilities are the innovative features of the proposed environment. The environment’s built-in semantic matching engine supports the addition of new service domain ontologies, which is critical for system extensibility. The environment is therefore generic in terms of developing various mobile applications and provides the most relevant services for mobile users by applying semantic capability matching in service lookups. GPS (Global Positioning System) and map service utilization make it possible to find nearby services in addition to capability-relevant ones. The software architecture and the extensibility support of the environment are discussed in the paper. A real-life implementation of the environment for the estate domain is also given as a case study in the evaluation section of the paper.

Özgür Gümüs, Geylani Kardas, Oguz Dikenelli, Riza Cenk Erdur, Ata Önal
Integrating Data from the Web by Machine-Learning Tree-Pattern Queries

Efficient and reliable integration of web data requires building programs called wrappers. Writing wrappers by hand is tedious and error-prone, and constant changes on the web imply that wrappers need to be constantly refactored. Machine learning has proven to be useful, but current techniques are either limited in expressivity, require non-intuitive user interaction, or do not allow for n-ary extraction. We study the use of tree-patterns as an n-ary extraction language and propose an algorithm for learning such queries. It calculates the most information-conservative tree-pattern that is a generalization of two input trees. A notable aspect is that the approach allows learning queries containing both child and descendant relationships between nodes. More importantly, the proposed approach does not require any labeling other than the data which the user effectively wants to extract. The reported experiments show the effectiveness of the approach.

Benjamin Habegger, Denis Debarbieux
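As a rough illustration of generalizing two input trees into a tree-pattern, one can generalize node labels and recurse over children. This is only a toy sketch, not the authors' information-conservative algorithm, and it omits the descendant-edge handling the paper supports.

```python
# Toy sketch: generalize two labeled trees into a tree-pattern.
# Trees are (label, [children]) tuples; "*" is a wildcard label.
def generalize(t1, t2):
    label1, kids1 = t1
    label2, kids2 = t2
    label = label1 if label1 == label2 else "*"
    # Generalize children pairwise; surplus children are simply dropped,
    # which is one (lossy) policy -- the paper instead keeps as much
    # information as possible.
    kids = [generalize(a, b) for a, b in zip(kids1, kids2)]
    return (label, kids)

page1 = ("tr", [("td", []), ("b", [("a", [])])])
page2 = ("tr", [("td", []), ("i", [("a", [])])])
print(generalize(page1, page2))  # ('tr', [('td', []), ('*', [('a', [])])])
```

Running the generalizer on two example records yields a pattern that matches both, which is the core intuition behind learning extraction queries from a couple of user-marked examples.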

Agents

HISENE2: A Reputation-Based Protocol for Supporting Semantic Negotiation

A key issue in open multiagent systems is the difficulty an agent faces in understanding messages coming from other agents having different ontologies. Semantic negotiation is a new way of facing this issue, exploiting techniques that allow the agents of a MAS to reach mutually acceptable agreements on the exchanged terms. The resulting scenario is similar to that of human discussions, where human beings resolve situations in which the involved terms are not mutually understandable by negotiating the semantics of these terms. HISENE is a recent JADE-based protocol effectively supporting semantic negotiation. It is based on the idea that an agent that does not understand a term can automatically request the help of other agents that it considers particularly reliable. However, HISENE takes into account neither the possibility of wrong answers coming from the queried agents nor the fact that a term can have different meanings. In order to cover these two important issues, in this paper we present an extension of HISENE, called HISENE2, and we show experimentally that it outperforms HISENE with respect to both the quality and the efficiency of the semantic negotiation.

Salvatore Garruzzo, Domenico Rosaci
An HL7-Aware Multi-agent System for Efficiently Handling Query Answering in an e-Health Context

In this paper we present a multi-agent system aimed at supporting patients in searching for health care services of interest in an e-health scenario. The proposed system is HL7-aware in that it represents both patient and service information according to the directives of HL7, the information management standard adopted in the medical context. We illustrate the technical characteristics of our system and compare it with related systems already proposed in the literature.

Pasquale De Meo, Gabriele Di Quarto, Giovanni Quattrone, Domenico Ursino
PersoNews: A Personalized News Reader Enhanced by Machine Learning and Semantic Filtering

In this paper, we present PersoNews, a web-based, machine-learning-enhanced news reader. Its main advantages are the aggregation of many different news sources; machine-learning filtering offering personalization not only per user but also per feed a user is subscribed to; and the ability for every user to follow a more abstract topic of interest through a simple form of semantic filtering over a taxonomy of topics.

E. Banos, I. Katakis, N. Bassiliades, G. Tsoumakas, I. Vlahavas

Contexts

An Ontology-Based Approach for Managing and Maintaining Privacy in Information Systems

The use of ontologies in the fields of information retrieval and the semantic web is well known. Researchers have long been trying to find ontological representations of diverse laws, in order to have a mechanism for retrieving fine-grained legal information about diverse legal cases. One of the common problems software systems face in constitutional states is adapting to the diverse privacy directives. This is a very complex task due to shortcomings in current software solutions, especially from the architectural point of view. In fact, we lack software solutions that manage privacy directives in a central instance in a structured manner. Moreover, such a solution should provide a fine-grained access control mechanism on the data entities to ensure that every aspect of the privacy directives can be reflected, and the whole system should be transparent, comprehensible, and modifiable at runtime. This paper provides a novel ontology-based solution to this problem. The usage of ontologies in our approach differs from the conventional form in that it focuses on generating access control policies, which our software framework applies to provide fine-grained access to the diverse data sources.

Dhiah el Diehn I. Abou-Tair, Stefan Berlik
Ontology-Based User Context Management: The Challenges of Imperfection and Time-Dependence

Robust and scalable user context management is the key enabler for emerging context- and situation-aware applications, and ontology-based approaches have shown their usefulness for capturing context information at a high level of abstraction. So far, however, the problem has not been approached as a data management problem, which is key to scalability and robustness. The specific challenges lie in the imperfection of high-level context information, its time-dependence, and the variability in the dynamics between its different elements. This paper presents a layered data model which structures these problems and is geared towards flexible and efficient query processing, combining relational database and logic-based techniques. The techniques have been successfully applied to context-aware corporate learning support.

Andreas Schmidt
Moving Towards Automatic Generation of Information Demand Contexts: An Approach Based on Enterprise Models and Ontology Slicing

This paper outlines first experiences with an approach for automatically deriving information demands, in order to provide users with demand-driven information supply and decision support. The presented approach is based on the idea that information demands with respect to work activities can be identified by examining the contexts in which they exist, and that Enterprise Models are a suitable source for such contexts. However, deriving contexts manually from large and complex models is very time consuming; we therefore propose, starting from an Enterprise Model, to produce a domain ontology and from it automatically derive the information demand contexts that exist in the model.

Tatiana Levashova, Magnus Lundqvist, Michael Pashkin

Similarity and Matching

Semantic Similarity of Ontology Instances Tailored on the Application Context

The paper proposes a framework to assess the semantic similarity among instances within an ontology. It aims to define a sensitive measure of semantic similarity which takes into account the different hints hidden in the ontology definition and explicitly considers the application context. The similarity measure is computed by combining and extending existing similarity measures and tailoring them according to criteria induced by the context. Experiments and an evaluation of the similarity assessment are provided.

Riccardo Albertoni, Monica De Martino
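A context-tailored combination of measures, as this abstract describes at a high level, can be sketched as a weighted aggregation. The individual measures and weights below are hypothetical placeholders, not the paper's actual criteria.

```python
# Illustrative sketch: combine several similarity measures with
# context-dependent weights (all measures and weights are invented).
def combined_similarity(inst_a, inst_b, measures, context_weights):
    """Weighted aggregation of per-aspect similarity scores in [0, 1]."""
    total_weight = sum(context_weights.values())
    score = sum(context_weights[name] * measure(inst_a, inst_b)
                for name, measure in measures.items())
    return score / total_weight

measures = {
    "attribute": lambda a, b: 1.0 if a["type"] == b["type"] else 0.0,
    "extension": lambda a, b: 0.5,  # stand-in for a relation-based measure
}
a, b = {"type": "lake"}, {"type": "lake"}
# A context that cares mostly about attribute-level similarity:
print(combined_similarity(a, b, measures, {"attribute": 3, "extension": 1}))
```

Changing the weight dictionary is the "tailoring" step: different application contexts emphasize different aspects of the ontology.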
Finding Similar Objects Using a Taxonomy: A Pragmatic Approach

Several authors have suggested similarity measures for objects labeled with terms from a hierarchical taxonomy. We generalize this idea with a definition of information-theoretic similarity for taxonomies that are structured as directed acyclic graphs from which multiple terms may be used to describe an object. We discuss how our definition should be adapted in the presence of ambiguity, and introduce new similarity measures based on our definitions.

We present an implementation of our measures that is integrated with a relational database and scales to large taxonomies and datasets. We evaluate our measures by applying them to an object-matching problem from bioinformatics, and show that, for this task, our new measures outperform those reported in the literature. We also verified the scalability of our approach by applying it to patent similarity search, using patents classified with terms from the taxonomy defined by the United States Patent and Trademark Office.

Peter Schwarz, Yu Deng, Julia E. Rice
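An information-theoretic taxonomy similarity of the kind this abstract generalizes can be illustrated with a Lin-style measure over a toy tree. The frequencies are invented, and the paper's setting (DAG-structured taxonomies, multiple terms per object, ambiguity) is more general than this sketch.

```python
# Lin-style information-theoretic similarity over a toy taxonomy
# (invented probabilities; a tree here, while the paper handles DAGs).
import math

PARENT = {"dog": "mammal", "cat": "mammal", "trout": "fish",
          "mammal": "animal", "fish": "animal"}
# P[c]: probability that a random object is labeled with c or a descendant.
P = {"dog": 0.1, "cat": 0.1, "trout": 0.1,
     "mammal": 0.3, "fish": 0.2, "animal": 1.0}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def lin_similarity(a, b):
    # Most specific common ancestor along the paths to the root.
    ancestors_b = set(path_to_root(b))
    lcs = next(c for c in path_to_root(a) if c in ancestors_b)
    return 2 * math.log(P[lcs]) / (math.log(P[a]) + math.log(P[b]))

print(round(lin_similarity("dog", "cat"), 3))  # 0.523
print(lin_similarity("dog", "trout"))          # "animal" is the only shared ancestor
```

Intuitively, the rarer (more informative) the shared ancestor, the more similar the two terms; sharing only the root yields similarity zero.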
Towards an Inductive Methodology for Ontology Alignment Through Instance Negotiation

The Semantic Web needs methodologies for achieving actual commitment to shared ontologies among the different actors in play. In this paper, we propose a machine learning approach to this issue, relying on classified instance exchange and inductive reasoning. The approach is based on the idea that, whenever two (or more) software entities need to align their ontologies (which amounts, from the point of view of each entity, to adding one or more new concept definitions to its own ontology), the new concept definitions can be learned starting from shared individuals, i.e. individuals already described in terms of both ontologies, for which the entities have statements about classes and related properties. These individuals, arranged into sets of positive and negative examples for the target definition, are used to solve a learning problem whose solution is the definition of the target concept in terms of the ontology used for the learning process. The method has been applied in a preliminary prototype for a small multi-agent scenario (where the two entities mentioned above are instantiated as two software agents). Following the presentation of the prototype, we report on the experimental results we obtained and draw some conclusions.

Ignazio Palmisano, Luigi Iannone, Domenico Redavid, Giovanni Semeraro
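The idea of inducing a concept definition from positive and negative shared individuals can be illustrated crudely with a set-based learner. This is only a sketch of the general intuition, not the authors' Description Logics-based method, and all example names are invented.

```python
# Toy illustration: induce a concept definition from shared individuals
# described as property sets (not the authors' DL-based learning method).
def learn_concept(positives, negatives):
    """Conjunction of properties common to all positive examples.

    A negative example is excluded as long as it lacks at least one
    property of the learned conjunction.
    """
    common = set.intersection(*positives)
    if any(common <= neg for neg in negatives):
        raise ValueError("examples are inconsistent with a conjunctive concept")
    return common

positives = [{"hasEngine", "hasWheels", "carriesPassengers"},
             {"hasEngine", "hasWheels"}]
negatives = [{"hasWheels"}]  # e.g. a bicycle
print(sorted(learn_concept(positives, negatives)))
```

The learned conjunction ("hasEngine and hasWheels") plays the role of the target concept definition expressed in the learner's own vocabulary.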
Combining Web-Based Searching with Latent Semantic Analysis to Discover Similarity Between Phrases

Determining semantic similarity between words, concepts and phrases is important in many areas within Artificial Intelligence. This includes the general areas of information retrieval, data mining, and natural language processing. Existing approaches have primarily focused on noun to noun synonym comparison. We propose a new technique for the comparison of general expressions that combines web searching with Latent Semantic Analysis. This technique is more general than previous approaches, as it is able to match similarities between multi-word expressions, abbreviations, and alpha-numeric phrases. Consequently, it can be applied to more complex comparison problems such as ontology alignment.

Sean M. Falconer, Dmitri Maslov, Margaret-Anne Storey
A Web-Based Novel Term Similarity Framework for Ontology Learning

Given that pairwise similarity computations are essential in ontology learning and data mining, we propose a similarity framework based on a conventional Web search engine. Utilizing a Web search engine has two main benefits. First, we can obtain the freshest content for each term, representing up-to-date knowledge about it; this is particularly useful for dynamic ontology management, since ontologies must evolve over time as new concepts or terms appear. Second, in comparison with approaches that use a fixed corpus of crawled Web documents, our method is less sensitive to the problem of data sparseness because we access as much content as possible through the search engine. At the core of the proposed methodology, we present two measures for similarity computation: a mutual-information-based metric and a feature-based metric. Moreover, we show how the proposed metrics can be utilized for modifying existing ontologies. Finally, we compare the extracted similarity relations with semantic similarity computed using WordNet. Experimental results show that our method can extract topical relations between terms that are not present in conventional concept-based ontologies.

Seokkyung Chung, Jongeun Jun, Dennis McLeod
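A mutual-information-based similarity from search-engine hit counts, one plausible instance of the metric family this abstract mentions, can be sketched as follows. The hit counts and index size below are invented, and the paper's actual metric may be defined differently.

```python
# Sketch: pointwise mutual information (PMI) between two terms, estimated
# from (invented) search-engine hit counts.
import math

TOTAL_PAGES = 1_000_000                      # assumed index size
HITS = {"jaguar": 10_000, "cougar": 8_000}   # single-term hit counts
JOINT = {frozenset({"jaguar", "cougar"}): 2_000}  # co-occurrence hit count

def pmi(term_a, term_b):
    """PMI of two terms from hit-count probability estimates."""
    p_a = HITS[term_a] / TOTAL_PAGES
    p_b = HITS[term_b] / TOTAL_PAGES
    p_ab = JOINT[frozenset({term_a, term_b})] / TOTAL_PAGES
    return math.log(p_ab / (p_a * p_b))

print(round(pmi("jaguar", "cougar"), 3))  # 3.219
```

A positive PMI means the terms co-occur on far more pages than independence would predict, which is the signal such web-based metrics exploit.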

Errata

Erratum: Web Service Mining and Verification of Properties: An Approach Based on Event Calculus

The paper entitled "Web Service Mining and Verification of Properties: An Approach Based on Event Calculus", starting on page 408 of this publication, has been retracted. A significant part of the paper was copied from four pieces of work by K. Mahbub and G. Spanoudakis. The pieces of work in question are:

A Framework for Requirements Monitoring of Service Based Systems

http://dx.doi.org/10.1145/1035167.1035181

Requirements Monitoring for Service-Based Systems: Towards a Framework Based on Event Calculus

http://dx.doi.org/10.1109/ASE.2004.1342769

Run-time Monitoring of Requirements for Systems Composed of Web-Services: Initial Implementation and Evaluation Experience

http://dx.doi.org/10.1109/ICWS.2005.100

A Scheme for Requirements Monitoring of Web Service Based Systems

http://www.soi.city.ac.uk/project/DOC_TechReport/TR_2004_DOC_02.pdf

Plagiarism was committed by the first author, Mohsen Rouached. The other authors were not aware of this. Moreover, the contribution of the third author, Wil M. P. van der Aalst, had nothing to do with the part of the work that was plagiarized.

Mohsen Rouached, Walid Gaaloul, Wil M. P. van der Aalst, Sami Bhiri, Claude Godart
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE
edited by
Robert Meersman
Zahir Tari
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48289-5
Print ISBN
978-3-540-48287-1
DOI
https://doi.org/10.1007/11914853