Table of Contents

Frontmatter

Posters of the 2005 CoopIS (Cooperative Information Systems) International Conference

Checking Workflow Schemas with Time Constraints Using Timed Automata

Nowadays, the ability to provide automated support for the management of business processes is widely recognized as a key competitive factor for companies. One of the most critical resources to deal with is time, but, unfortunately, the time management support offered by available workflow management systems is rather rudimentary. We focus our attention on the modeling and verification of workflows extended with time constraints. We propose (finite) timed automata as an effective tool to specify timed workflow schemas and to check their consistency. More precisely, we reduce the consistency problem for workflow schemas to the emptiness problem for timed automata, making it possible to exploit the machinery developed to solve the latter to address the former.
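The flavour of this reduction can be conveyed with a toy discrete-time analogue (this is only an illustrative sketch under strong assumptions: integer task durations, a single sequential workflow, and one global deadline; real timed automata use dense time and zone-based emptiness checking):

```python
# Hypothetical sketch: a workflow schema with per-task duration
# intervals is consistent iff some schedule meets the deadline.
# We phrase this as a reachability/emptiness question over the set
# of reachable clock values, echoing the emptiness check for timed
# automata in a simplified, discrete setting.

def consistent(tasks, deadline):
    """tasks: list of (min_dur, max_dur) integer intervals.
    Consistent iff some execution finishes within `deadline`."""
    reachable = {0}                      # reachable elapsed-time values
    for lo, hi in tasks:
        reachable = {t + d for t in reachable
                     for d in range(lo, hi + 1)
                     if t + d <= deadline}
        if not reachable:                # empty set => no valid execution
            return False
    return True

print(consistent([(2, 4), (1, 3)], 6))   # True: e.g. durations 2 and 1
print(consistent([(5, 6), (4, 7)], 8))   # False: minimum total is 9
```

If the reachable set becomes empty before all tasks are placed, the schema admits no time-respecting execution, mirroring how an empty timed-automaton language signals an inconsistent schema.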

Elisabetta De Maria, Angelo Montanari, Marco Zantoni

Cooperation Between Utility IT Systems: Making Data and Applications Work Together

The ongoing optimization of work processes requires a close cooperation of IT systems within an enterprise. Originating from requirements of the utility industry we present a concept of interoperability of utility software systems and its corresponding data. Our solution builds on industry standards – Common Information Model (CIM) as a power system domain data model and SOAP as a standard for messaging and interface specification. Together they provide a basis for translating data between applications and are seamlessly bound to a communication infrastructure.
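The combination of a domain data model with SOAP messaging can be sketched as follows; note that the element names below are illustrative stand-ins, not the actual CIM XML schema, and only the SOAP envelope namespace is real:

```python
# Minimal sketch: wrapping a CIM-style power-system object in a SOAP
# envelope using only the standard library. "Breaker", "name" and
# "status" are illustrative, not taken from the real CIM schema.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def cim_message(breaker_name, status):
    env = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(env, ET.QName(SOAP_NS, "Body"))
    breaker = ET.SubElement(body, "Breaker")   # illustrative CIM class
    ET.SubElement(breaker, "name").text = breaker_name
    ET.SubElement(breaker, "status").text = status
    return ET.tostring(env, encoding="unicode")

msg = cim_message("BRK-17", "open")
print(msg)
```

Because both sides agree on the data model (CIM) and the transport format (SOAP), translating between applications reduces to mapping their local schemas onto this shared message structure.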

Claus Vetter, Thomas Werner

Adapting Rigidly Specified Workflow in the Absence of Detailed Ontology

Adaptive workflow approaches (for example, [1]) promise to provide flexibility in web service composition. However, definition-time adaptive workflow approaches (for example, exception handlers and alternative flow selection) do not account for service environment dynamics, such as the availability of new services and changing QoS parameters of services. This paper introduces a new method of automatic, run-time adaptation, called workflow convergence. It utilises ontology in early development; ontology that reflects service message structures but is not semantically rich enough to support purely semantics-based discovery and composition [2,3]. Industry is yet to widely embark on developing the complex semantic models that are fundamental to these approaches. Our workflow adaptation approach ensures that ontology is useful and adds value at all stages of development, thus providing an added incentive for industry to adopt such ontology modelling efforts.

The convergence approach is introduced in the following section. Convergence relies on a service description approach, introduced in the last section, that utilises ontology in early development.

Gregory Craske, Caspar Ryan

Modelling and Streaming Spatiotemporal Audio Data

In this paper, we describe a special application domain of data management: the production of high-quality spatial sound. The IOSONO system, developed by Fraunhofer IDMT, is based on wave field synthesis. Here, a large number of loudspeakers is installed around the listening room. A rendering component computes the signal for each individual speaker from the position of the audio source in a scene and the characteristics of the listening room. In this way, we can create the impression that sound sources are located at specific positions in the listening room.

Thomas Heimrich, Katrin Reichelt, Hendrik Rusch, Kai-Uwe Sattler, Thomas Schröder

Enhancing Project Management for Periodic Data Production Management

When data itself is the product, the management of data production is quite different from traditional goods production management. Production status, the quality of the products, product identifiers, deviations, and due dates are defined in terms of volatile data and are handled strictly so that the resulting reports can be produced within the allotted time. This paper outlines how the information gathering process for a data production management system can be automated. The system's architecture is based on ideas from project management: milestones are enriched with production information. The major benefits are the following. Operators easily understand this style of management. They can concentrate on production itself, yet are provided with reliable management information without manual effort. Additionally, with this solution a production plan is automatically created in advance.

Anja Schanzenberger, Dave R. Lawrence, Thomas Kirsche

Cooperating Services in a Mobile Tourist Information System

Complex information systems are increasingly required to support the flexible delivery of information to mobile devices. Studies of these devices in use have demonstrated that the information displayed to the user must be limited in size, focussed in content [1], and adaptable to the user's needs [2]. Furthermore, the presented information is often dynamic, even changing continuously. Event-based communication provides strong support for selecting relevant information for dynamic information delivery.

Annika Hinze, George Buchanan

Posters of the 2005 DOA (Distributed Objects and Applications) International Conference

Flexible and Maintainable Contents Activities in Ubiquitous Environment

In future ubiquitous environments, contents (data, movies, text, graphics, etc.) will be more sophisticated and context-aware so that they can enrich the user experience. We have proposed the Active Contents (AC) framework, which is based on encapsulating contents with program and aspect definitions to allow contents to behave actively. AC can be seen as a software component with the several viewpoints of its contributors (planner, designer, programmer, etc.). The problem concerns the maintainability of AC, which is modified by the contributors based on their own viewpoints.

In this position paper, we propose a mechanism that allows such contributors to modify AC with context-aware aspects. In our mechanism, based on location-binding analysis for AC, parallel executions to be performed at separate locations are detected and automatically executed using workflow-aware communication.

Kazutaka Matsuzaki, Nobukazu Yoshioka, Shinichi Honiden

Using Model-Driven and Aspect-Oriented Development to Support End-User Quality of Service

Nowadays, more and more applications are distributed and require runtime guarantees from their underlying environment. To provide these guarantees, Quality of Service (QoS) information must be available. The development of QoS-aware applications requires the control and management of low-level resources. It is a complex task to correlate high-level, domain-specific QoS requirements with low-level, system-dependent characteristics. Various middleware architectures have been proposed to provide QoS support [1][2] and to free the designer from low-level resource concerns. Some architectures [3] use dynamic adaptation of applications or a component-based design [4]. Our approach uses the Model Driven Architecture [5]: the mapping of QoS constraints to platform-specific resource management can be automated during the transformation steps leading from the model to the software implementation. In this paper, we present a solution for modelling high-level, user-oriented QoS constraints, and we demonstrate that MDA, combined with Aspect-Oriented Software Development, can ease the design of QoS-aware applications.

David Durand, Christophe Logé

A Generic Approach to Dependability in Overlay Networks

Overlay networks are virtual communication structures that are logically “laid over” underlying hosting networks such as the Internet. They are implemented by deploying application-level topology maintenance and routing functionality at strategic places in the hosting network [1,2]. In terms of dependability, most overlays offer proprietary “self-repair” functionality to recover from situations in which their nodes crash or are unexpectedly deleted. This functionality is typically orthogonal to the purpose of the overlay, and a systematic and complete approach to dependability is rarely taken because it is not the focus of the work. We therefore propose to offer dependability as a service to any overlay.

Barry Porter, Geoff Coulson

An XML-Based Cross-Language Framework

We introduce XMLVM, a Turing-complete XML-based programming language based on a stack-based virtual machine. We show how XMLVM can be created automatically from Java class files and .NET's Intermediate Language. While the programmer is never directly exposed to XMLVM, we provide tools based on XMLVM for tasks such as cross-language functional testing or code migration.
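The core idea of an XML-encoded stack machine can be illustrated with a toy interpreter (the instruction names below are illustrative, not the real XMLVM schema):

```python
# Toy interpreter for an XMLVM-style stack-based instruction set.
# Instructions are XML elements; operands travel on an operand stack,
# as in the JVM and .NET IL from which XMLVM is generated.
import xml.etree.ElementTree as ET

def run(xml_program):
    stack = []
    for instr in ET.fromstring(xml_program):
        if instr.tag == "push":
            stack.append(int(instr.get("value")))
        elif instr.tag == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr.tag == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

program = """
<program>
  <push value="2"/>
  <push value="3"/>
  <add/>
  <push value="4"/>
  <mul/>
</program>"""
print(run(program))   # (2 + 3) * 4 = 20
```

Because the program is plain XML, standard tooling (XSLT, XPath, schema validation) can be applied to it, which is what makes cross-language transformation and testing tractable.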

Arno Puder

Software Design of Electronic Interlocking System Based on Real-Time Object-Oriented Modeling Technique

Electronic interlocking systems using micro-computers are developed to overcome the problems of conventional relay interlocking systems and to minimize the cost and maintenance requirements when the system needs to be rebuilt or expanded [1]. However, it is very difficult to diagnose the root cause of a single device problem, since there are multiple possible causes. Therefore, guaranteeing stability at a level equivalent to relay interlocking systems is the main requirement for electronic interlocking systems to be beneficial. This can be accomplished by a careful design of both the hardware and software and their interface. The stability of the interlocking software is determined not only by its reliability and the efficiency of the interlocking implementation but also by the convenience of maintenance. Conventional real-time and data-centred development methods suffer from shortcomings in error detection and recovery, exception handling, software reusability, and maintainability. Object-centred real-time software development methodologies address these shortcomings: by applying object-oriented concepts, they play an important role in mastering system complexity and in coping with maintenance demands and the growing volume of software. However, because these methodologies put more emphasis on analysis (in particular, on object structure) than on design, they still have lacking aspects for real-time software development. In this paper, a design approach for developing interlocking software that improves on the problems of existing systems is proposed, namely a design and modeling strategy based on the Real-time Object-Oriented Modeling (ROOM) [2] procedure, which is the most appropriate approach in the initial stage of real-time software development.
Although ROOM is an object-oriented method, it is a top-down design method similar to structural analysis and is effective for real-time problems; it is therefore not only convenient for standardization, expansion, and maintenance, but can also contribute to improved reliability and stability of the electronic interlocking system.

Jong-Sun Kim, Ji-Yoon Yoo, Hack-Youp Noh

Posters of the 2005 ODBASE (Ontologies, Databases, and Applications of Semantics) International Conference

Ontology Based Negotiation Case Search System for the Resolution of Exceptions in Collaborative Production Planning

In this paper, we present an ontology-based negotiation case search system that helps contractors resolve exceptions generated during the operation of a supply chain.

Chang Ouk Kim, Young Ho Cho, Jung Uk Yoon, Choon Jong Kwak, Yoon Ho Seo

Enhanced Workflow Models as a Tool for Judicial Practitioners

In the past, attempts were made to make law and justice more accessible to a general audience and to legal practitioners using models of legal texts. We present a new approach to making judicial workflows easier to understand. By using process modelling methods, the developed representation emphasises improving transparency, promoting mutual trust, and formalising models for verification. To design the semi-formal models, interviews are conducted and legal texts are consulted. These models are formalised in a second step. The models are enhanced with hierarchies, modules, and the generation of different views. Language problems are also treated. The resulting formalised models are used to verify trigger events and the timing of judicial workflows, which have very specific requirements in terms of time periods and fixed dates. A new tool, Lexecute, is presented which gives new perspectives on justice and reveals new potential for modelling methods in the field of justice.

Jörn Freiheit, Susanne Münch, Hendrik Schöttle, Grozdana Sijanski, Fabrice Zangl

Semantics of Information Systems Outsourcing

Businesses are by nature dynamic and change continuously. Depending on their economic prospects, they grow in size and portfolio or, conversely, have to reduce one of these aspects. There are several ways to accomplish growth or reduction. A smooth way may consist of outsourcing parts of one's non-core business processes to specialized parties in the market. A variety of outsourcing models have been developed ([6]). Outsourcing can range from having all business processes (such as development, maintenance, and operations) performed by an outsourcing partner, down to having a contract with a partner performing only a single business task. In our work we concentrate on conceptual modeling of outsourcing information systems, where outsourcing in the context of information systems is defined as delegating a part of the functionality of the original system to an existing outside party (the supplier). Such functionality typically involves one or more operations (or services), where each operation satisfies certain input and output requirements. These requirements are defined in terms of the ruling service level agreements (SLAs). We provide a formal means to ensure that the outsourcing relationship between the outsourcing party and the supplier, determined by an SLA, satisfies specific correctness criteria. These correctness criteria are defined in terms of consistency and completeness between the outsourced operation and the associated operation offered by the supplier. Our correctness criterion concerns mappings between an existing outsourcer schema and an existing supplier schema, and addresses both semantic and ontological aspects pertaining to outsourcing. Formal specifications as offered in our work can prove their value in the setup and evaluation of outsourcing contracts. We perform our analysis within a modeling framework based on the UML/OCL formalism ([8,9]).
The Object Constraint Language OCL offers a textual means to enhance UML diagrams, providing formal precision in combination with high expressiveness. In [1] it has been demonstrated that OCL has at least the same expressive power as relational algebra (the theoretical core of the relational query language SQL), making OCL a very powerful language for the specification of constraints, queries, and views.
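The consistency/completeness idea can be sketched in a few lines (a hedged toy model, not the paper's formalism: operations are reduced to sets of required inputs and guaranteed outputs, and set containment stands in for the OCL constraints):

```python
# Sketch: a supplier operation correctly implements an outsourced one
# if it demands no more inputs than the outsourcer provides
# (consistency) and guarantees no fewer outputs than the outsourcer
# needs (completeness). All names below are illustrative.

def correct_outsourcing(outsourced, supplier):
    out_in, out_out = outsourced    # (required inputs, needed outputs)
    sup_in, sup_out = supplier      # (accepted inputs, delivered outputs)
    consistent = sup_in <= out_in   # supplier accepts what we can send
    complete = out_out <= sup_out   # supplier delivers what we need
    return consistent and complete

billing = ({"customer_id", "usage"}, {"invoice"})
provider = ({"customer_id"}, {"invoice", "audit_log"})
print(correct_outsourcing(billing, provider))   # True
```

In the paper's setting the same containments would be expressed as OCL invariants over the mapped outsourcer and supplier schemas rather than over flat sets.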

H. Balsters, G. B. Huitema

Integration of Heterogeneous Knowledge Sources in the CALO Query Manager

We report on our effort to build a real system for integrating heterogeneous knowledge sources with different query answering and reasoning capabilities. We are conducting this work in the context of CALO (Cognitive Assistant that Learns and Organizes), a multidisciplinary project funded by DARPA to create cognitive software systems.

José Luis Ambite, Vinay K. Chaudhri, Richard Fikes, Jessica Jenkins, Sunil Mishra, Maria Muslea, Tomas Uribe, Guizhen Yang

Context Knowledge Discovery in Ubiquitous Computing

This article introduces the concept of context knowledge discovery process, and presents a middleware architecture which eases the task of ubiquitous computing developers, while supporting data mining and machine learning techniques.

Kim Anh Pham Ngoc, Young-Koo Lee, Sung-Young Lee

Ontology-Based Integration for Relational Data

Motivation.

Recent years have witnessed significant progress in database integration, including several commercial implementations. However, existing work makes strong assumptions about mapping representations while remaining weak on formal semantics and reasoning. Current research and practical applications call for more formal approaches to managing semantic heterogeneity [3].

Dejing Dou, Paea LePendu

Workshop on Agents, Web Services and Ontologies Merging (AWeSOMe)

AWeSOMe 2005 PC Co-chairs’ Message

We wish to extend a warm welcome to AWeSOMe’05, The First International Workshop on Agents, Web Services and Ontologies Merging. This workshop will be held in conjunction with the On The Move Federated Conferences and Workshops 2005 (OTM’05).

Current and future software needs point towards the development of large and complex Intelligent Networked Information Systems, covering a wide range of issues required for the deployment of Internet- and intranet-based systems in organizations and for e-business. The OTM'05 Federated Conferences and Workshops provide an opportunity for researchers and practitioners to apply their backgrounds to emerging areas such as Data and Web Semantics, Distributed Objects, Web Services, Databases, Workflow, Cooperation, Interoperability and Mobility.

Web services are a rapidly expanding approach to building distributed software systems across networks such as the Internet. A Web service is an operation typically addressed via a URI, declaratively described using widely accepted standards, and accessed via platform-independent XML-based messages.

Pilar Herrero, Gonzalo Méndez, Lawrence Cavedon, David Martin

Document Flow Model: A Formal Notation for Modelling Asynchronous Web Services Composition

This paper presents a formal notation for modelling asynchronous web services composition, using context and coordination mechanisms. Our notation specifies the messages that can be handled by different web services, and describes a system of inter-related web services as the flow of documents between them. The notation allows the typical web services composition pattern, asynchronous messaging, and has the capability to deal with long-running service-to-service interactions and dynamic configuration behaviors.

Jingtao Yang, Corina Cîrstea, Peter Henderson

Realising Personalised Web Service Composition Through Adaptive Replanning

The emergence of fully-automated Web service composition as a potential facilitator of both eBusiness and ambient or ubiquitous computing is to be welcomed. However, this emergence has exposed the need for flexibility and adaptivity, due to the fundamentally unreliable nature of the networks and infrastructure on which the component services rely. Furthermore, a key to driving forward the acceptance and adoption of this growing set of technologies is the improvement of the user's overall experience with them. Our experimentation has shown that out-of-the-box AI planners readily generate inflexible and only partially adaptive service compositions. Because modifying the planner is beyond the scope of our research, we seek to use pre-processing and post-analysis to enable AI planners to produce adaptive compositions. In this paper, the current state of our research is presented along with a proposed direction for improving the reconciliation of user needs with the available services.

Steffen Higel, David Lewis, Vincent Wade

Semantic Web Services Discovery in Multi-ontology Environment

Web services are becoming the basis for electronic commerce of all forms. The number of services being provided is increasing, but different service providers use different ontologies for their service descriptions. This makes it difficult for service discovery agents to compare and locate the desired services. Inputs and outputs are important pieces of information that can be used when searching for needed services. Therefore, in this paper, to help users and software agents discover Web services in multi-ontology environments, we propose an approach for determining the semantic similarity of service inputs/outputs that are described by different ontologies.

Sasiporn Usanavasin, Shingo Takada, Norihisa Doi

Security and Semantics

On the Application of the Semantic Web Rule Language in the Definition of Policies for System Security Management

The adoption of a policy-based approach for the dynamic regulation of a system or service (e.g. a security, QoS or mobility service) requires an appropriate policy representation and processing. In the context of the Semantic Web, the representational power of languages enriched with semantics (i.e. semantic languages), together with the availability of suitable interpreters, makes such languages well suited to policy representation. In this paper, we describe our proposal for combining the CIM-OWL ontology (i.e., the mapping of the DMTF Common Information Model into OWL) with the Semantic Web Rule Language as the basis for a semantically rich security policy language that can be used to formally describe the desired security behaviour of a system or service. An example of a security policy in this language and the reasoning over it are also presented.

Félix J. García Clemente, Gregorio Martínez Pérez, Juan A. Botía Blaya, Antonio F. Gómez Skarmeta

On Secure Framework for Web Services in Untrusted Environment

In this paper we identify trust relationships among users and systems. We try to adhere to the principle of simplicity in our modelling of the system. Using a simple model and free, lightweight technologies, we show that it is possible to implement secure Web applications and services. The paper also addresses some security problems and issues in implementing Web Services.

Sylvia Encheva, Sharil Tumin

An Approach for Semantic Query Processing with UDDI

UDDI is not suitable for handling semantic markups for Web services due to its flat data model and limited search capabilities. In this paper, we introduce an approach that supports semantic service descriptions and queries using registries that conform to the UDDI V3 specification. Specifically, we discuss how to store complex semantic markups in the UDDI data model and use that information to perform semantic query processing. Our approach does not require any modification to existing UDDI registries. The add-on modules reside only on clients who wish to take advantage of semantic capabilities. This approach is completely backward compatible and can integrate seamlessly into existing infrastructure.
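A client-side semantic layer over flat registry data can be sketched as follows (a hedged toy: the category-bag structure is simplified, the key names are invented, and a real system would consult an OWL ontology rather than a hard-coded subclass table):

```python
# Sketch of client-side semantic matching over UDDI-style category
# bags. A tiny subclass hierarchy lets a query for the broad concept
# "Vehicle" match a service registered under the narrower "Car",
# which flat keyword search in the registry itself cannot do.

SUBCLASS = {"Car": "Vehicle", "Vehicle": "Thing"}   # child -> parent

def subsumes(query, concept):
    """True if `concept` equals `query` or is a descendant of it."""
    while concept is not None:
        if concept == query:
            return True
        concept = SUBCLASS.get(concept)
    return False

services = [
    {"name": "RentalService", "categoryBag": [("output", "Car")]},
    {"name": "BookSeller",    "categoryBag": [("output", "Book")]},
]

def find(query_concept):
    return [s["name"] for s in services
            if any(subsumes(query_concept, v)
                   for k, v in s["categoryBag"] if k == "output")]

print(find("Vehicle"))   # ['RentalService']
```

The registry only stores the annotations; all the reasoning lives in the client add-on, which is what keeps the approach backward compatible with unmodified UDDI registries.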

Jim Luo, Bruce Montrose, Myong Kang

Agents for Web Service Support

A Multiagent-Based Approach for Progressive Web Map Generation

Demand for web mapping services has been increasing worldwide since the rise of the Internet, which has become a growing medium for disseminating geospatial information. However, these services need to be more reliable, accessible, and personalized. They also need to be improved in terms of data format, interoperability, and on-the-fly processing and transfer. In this paper we propose a multiagent-based approach to generating maps on-the-fly in the context of web mapping applications. Our approach is able to adapt the contents of maps in real time to users' needs and display terminals. It also speeds up map generation and transfer. Our approach, called progressive automatic map generation based on layers of interest, combines different techniques: multiagent systems, cartographic generalization, and multiple representations.

Nafaâ Jabeur, Bernard Moulin

Semantics of Agent-Based Service Delegation and Alignment

In this paper we concentrate on the conceptual modeling and semantics of service delegation and alignment in information systems. In delegation, a source company wishes to hand over parts of its functionality, together with the related responsibilities, to a supplying party. From the side of the outsourcer, the search for a suitable supplier will mostly be a manual process, with all the consequences of a long time to market as well as trial and error before a good fit is obtained between the related parties. This paper addresses an agent-based solution for improving this match-making process in B2B markets. Part of the match-making process is the alignment of business processes on the side of the outsourcer as well as on the side of the supplier. We provide a formal means to ensure that the delegation relationship, determined by a ruling service level agreement (SLA), satisfies specific correctness criteria. These correctness criteria are defined in terms of consistency and completeness between the delegated operation and the associated operation offered by the supplier. Our correctness criterion concerns mappings between an existing delegator schema and an existing supplier schema, and addresses both semantic and ontological aspects pertaining to delegation and alignment. Agent-based delegation together with formal specifications can prove its value in the process of constructing delegation contracts. Our analysis is performed within a modeling framework based on the UML/OCL formalism. The concepts discussed in this paper are illustrated by an example of companies delegating billing services to Billing Service Providers.

H. Balsters, G. B. Huitema, N. B. Szirbik

Workshop on Context-Aware Mobile Systems (CAMS)

CAMS 2005 PC Co-chairs’ Message

Context awareness is increasingly becoming one of the key strategies for delivering effective information services in mobile settings. The limited screen displays of many mobile devices mean that content must be carefully selected to match the user's needs and expectations, and context provides one powerful means of performing such tailoring. Context-aware mobile systems will almost certainly become ubiquitous: already in the United Kingdom, affordable 'smartphones' include GPS location support. With this hardware comes the opportunity for 'on-board' applications to use location data to provide new services that until recently could only be created with complex and expensive components. Furthermore, the current 'mode' of the phone (e.g. silent, meeting, outdoors), the contents of the built-in calendar, etc. can all be used to provide a rich context for the user's immediate environment.

Annika Hinze, George Buchanan

Personalising Context-Aware Applications

The immaturity of the field of context-aware computing means that little is known about how to incorporate appropriate personalisation mechanisms into context-aware applications. One of the main challenges is how to elicit and represent complex, context-dependent requirements, and then use the resulting representations within context-aware applications to support decision-making processes. In this paper, we characterise several approaches to personalisation of context-aware applications and introduce our research on personalisation using a novel preference model.

Karen Henricksen, Jadwiga Indulska

Management of Heterogeneous Profiles in Context-Aware Adaptive Information System

Context-awareness is a fundamental aspect of the ubiquitous computing paradigm. In this framework, a relevant problem that has received little attention is the large heterogeneity of formats used to express context information: text files in ad-hoc formats, HTTP headers, XML files over specific DTDs, RDF, CC/PP, and so on. Many applications therefore have difficulty interpreting and integrating context information coming from different sources. In this paper we propose an approach to this problem. We first present a general architecture for context-aware adaptation that is able to take into account different coordinates of adaptation. We then show how, in this framework, external profiles are dynamically captured and translated into a uniform common representation that is used by the system to meet the requirements of adaptation. We also present a prototype application that implements the proposed approach.

Roberto De Virgilio, Riccardo Torlone

Context-Aware Recommendations on the Mobile Web

Recently, there has been a significant increase in the use of data via the mobile web. Since the user interfaces for mobile devices are inconvenient for browsing through many pages and searching their contents, many studies have focused on ways to recommend content or menus that users prefer. However, the mobile usage pattern of content or services differs according to context. In this paper, we apply context information—location, time, identity, activity, and device—to recommend services or content on the mobile web. A Korean mobile service provider has implemented context-aware recommendations. The usage logs of this service are analyzed to show the performance of context-aware recommendations.
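The role of context in ranking mobile content can be sketched as follows (a hedged toy model with made-up data: real systems mine large usage logs and weight the context dimensions, rather than counting exact matches):

```python
# Sketch: rank content by overlap between the current context
# (location, time, activity, device) and the contexts in which each
# item was historically used. All items and log entries are invented.

CURRENT = {"location": "downtown", "time": "evening",
           "activity": "dining", "device": "phone"}

usage_log = {
    "RestaurantGuide": [{"location": "downtown", "time": "evening",
                         "activity": "dining", "device": "phone"}],
    "TrafficNews":     [{"location": "highway", "time": "morning",
                         "activity": "commuting", "device": "phone"}],
}

def score(item):
    """Best context match over the item's recorded usage contexts."""
    return max(sum(ctx[k] == CURRENT[k] for k in CURRENT)
               for ctx in usage_log[item])

ranked = sorted(usage_log, key=score, reverse=True)
print(ranked)   # ['RestaurantGuide', 'TrafficNews']
```

Evaluating such a recommender against real service logs, as the paper does, is what reveals whether context dimensions like location and time actually improve the ranking.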

Hong Joo Lee, Joon Yeon Choi, Sung Joo Park

Querying and Fetching

Capturing Context in Collaborative Profiles

In various application areas for alerting systems, the context and knowledge of several parties affect profile definition and filtering. For example, in healthcare, nurses, doctors, and patients influence the treatment process. Thus, profiles for alerting systems have to be generated through the explicit collaboration of several parties who may not know each other directly.

We propose the new concept of collaborative profiles to capture these different conditions and contexts. These profiles exploit each party's expert knowledge for defining the context under which (health-related) alerting is required. Challenges include the definition and refinement of profiles as well as conflict detection in context definitions.

Doris Jung, Annika Hinze

Using Context of a Mobile User to Prefetch Relevant Information

Providing mobile users with relevant and up-to-date information on the move through wireless communication requires taking the current context of the user into account. In this paper, the context of a user with respect to both movement behaviour and device characteristics is under investigation. In outdoor areas, particularly urban areas, there is often sufficient communication bandwidth available. In some areas, though, especially rural ones, bandwidth coverage is often poor. Providing users in such areas with relevant information, and making this information available in time, is a major challenge. Prefetching tries to overcome these problems by using predefined user context settings. In situations where resource restrictions such as limited bandwidth or insufficient memory apply, strategies come into play to optimize the process. Such strategies are discussed. Evaluating the different types of users supports the approach of getting the relevant information to the user at the right time and in the right place.

Holger Kirchner

Development and Engineering

Location-Based Mobile Querying in Peer-to-Peer Networks

Peer-to-peer (P2P) networks are receiving increasing attention in a variety of current applications. In this paper, we concentrate on applications where a mobile user queries peers to find either data (e.g., a list of restaurants) or services (e.g., a reservation service). We classify location-based queries in categories depending on parameters such as the user’s velocity, the nature of the desired information, and the anticipated proximity of this information. We then propose query routing strategies to ensure the distributed query evaluation on different peers in the application while optimizing the device and network energy consumption.
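The classification-then-routing idea can be sketched as follows (the categories and thresholds below are illustrative assumptions, not taken from the paper, which defines its own query categories and strategies):

```python
# Illustrative sketch: classify a location-based query by the user's
# velocity and the anticipated proximity of the target, then pick a
# routing strategy that trades result freshness against the energy
# spent querying peers. Strategy names and thresholds are invented.

def routing_strategy(velocity_kmh, target_within_m):
    if velocity_kmh > 30:             # fast-moving user: results go stale
        return "flood-nearby-peers"   # broad but shallow search
    if target_within_m < 500:         # target likely within one hop
        return "direct-neighbour-query"
    return "greedy-geographic-forwarding"

print(routing_strategy(50, 200))     # flood-nearby-peers
print(routing_strategy(5, 100))      # direct-neighbour-query
print(routing_strategy(5, 2000))     # greedy-geographic-forwarding
```

The point of such a classification is that no single routing strategy minimizes device and network energy consumption across all combinations of user mobility and information proximity.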

Michel Scholl, Marie Thilliez, Agnès Voisard

Seamless Engineering of Location-Aware Services

In this paper we present a novel approach to designing and implementing applications that provide location-aware services. We show how a clear separation of design concerns (e.g. application-specific, context-specific) helps to improve modularity. We stress that by using dependency mechanisms among the key components, we can get rid of an explicit rule-based approach, thus simplifying evolution and maintenance. We first discuss some related work in this field. Next, we introduce a simple exemplary scenario and present the big picture of our architectural approach. Then we detail the process of service definition and activation. A discussion of communication and composition mechanisms follows, and we end by presenting some concluding remarks and further work.

Gustavo Rossi, Silvia Gordillo, Andrés Fortier

Location

Context-Aware Negotiation for Reconfigurable Resources with Handheld Devices

Next-generation handhelds are expected to be multi-functional devices capable of executing a broad range of compute-intensive applications. These handheld devices are constrained by their physical size, their computational power and their networking ability. These factors could hinder the future performance and versatility of handheld devices. Reconfigurable hardware incorporated within distributed servers can help address portable device constraints. This paper proposes a context-based negotiation and bidding technique enabling a handheld device to intelligently utilise its surrounding reconfigurable resources. The negotiation protocol enables handhelds to optimally offload their computational tasks. The contextual aspect of the protocol uses the location of a mobile device to identify the urgency of a user request. This helps to optimise the quality of service experienced by a handheld user. An architectural framework currently being deployed and tested, as well as overall future objectives, are outlined.

Timothy O’Sullivan, Richard Studdert
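As a toy illustration of how location-derived urgency might steer such a negotiation, the following sketch scores incoming bids by a weighted mix of price and latency. The bid format, the weights, and the scoring rule are assumptions for illustration, not the protocol from the paper:

```python
def choose_server(bids, urgency):
    """Pick the winning bid. bids: (server, price, latency_ms) tuples;
    urgency in [0, 1] - high urgency weights latency over price."""
    def score(bid):
        _server, price, latency_ms = bid
        return (1 - urgency) * price + urgency * (latency_ms / 100.0)
    return min(bids, key=score)[0]

bids = [("cheap_server", 1.0, 500), ("fast_server", 5.0, 50)]
print(choose_server(bids, urgency=0.9))  # → fast_server (urgent request)
print(choose_server(bids, urgency=0.1))  # → cheap_server (request can wait)
```

With high urgency the handheld pays a premium for the low-latency server; with low urgency the same bids resolve in favour of the cheaper resource.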

Location-Aware Web Service Architecture Using WLAN Positioning

The steady rise of mobile computing devices and wireless local-area networks (WLANs) has fostered a growing interest in location-aware systems and services (LBS). Context-awareness in mobile systems gives users more convenience, with services specific to their preferences, behaviour and physical position. This paper presents an architecture for LBSs in WLANs, describing the essential location determination component. An example service using Web Service techniques is described to illustrate efficient service invocation and interaction.

Ulf Rerrer

A Light-Weight Framework for Location-Based Services

Context-aware mobile systems aim at delivering information and services tailored to the current user's situation [1], [10]. One major application area of these systems is the tourism domain, assisting tourists especially during their vacation through location-based services (LBS) [4], [7]. Consequently a proliferation of approaches [2], [5], [8], [9], [12], [15], [17], [18] can be observed, whereby an in-depth study of related work has shown that some of these existing mobile tourism information systems exhibit a few limitations [3], [19]: First, existing approaches often use proprietary interfaces to other systems (e.g. a Geographic Information System – GIS) and employ their own data repositories, thus falling short in portability and having to deal with time-consuming content maintenance. Second, thick clients are often used that may lack out-of-the-box usage. Third, existing solutions are sometimes inflexible concerning the configuration capabilities of the system. To deal with those deficiencies, we present a lightweight framework for LBS that can be used across various application domains. This framework builds on existing GIS standards, incorporates already available Web content, can be employed out of the box, and is configurable through a Web-based interface. The applicability of the framework is demonstrated by means of a prototype of a mobile tourist guide.

W. Schwinger, Ch. Grün, B. Pröll, W. Retschitzegger

Architecture and Models

Context Awareness for Music Information Retrieval Using JXTA Technology

The development of mobile devices and wireless networks has made it possible for users to seamlessly use different devices and to recognize changes in their computing environment. In this paper, we propose a music information retrieval system (MIRS) that exploits various types of contextual information. Our system is based on JXTA technology, which enables peers to join at any time and to communicate with each other directly. It supports the personalized retrieval of music information at specific moments and locations. Each peer can run a context interpreter, and each edge peer functions as a query requester or a query responder. The query requester's context interpreter analyzes the user's context and customizes the search result. The query responder's context interpreter analyzes the pattern of the query melody and selects the requested music information.

Hyosook Jung, Seongbin Park

A Model of Pervasive Services for Service Composition

We propose a formal definition of a pervasive service model targeting the highly dynamic environments typical of mobile application scenarios. The model is based on a requirements analysis and evolves from existing service definitions. These are extended with pervasive features that allow for the modeling of context awareness, and with the pervasive functionality needed for the dynamic (re-)composition of services. In order to demonstrate its value, we present an implementation of a pervasive service platform that makes use of the service model.

Caroline Funk, Christoph Kuhmünch, Christoph Niedermeier

Selection Using Non-symmetric Context Areas

This paper targets applications running on mobile devices and using context information. Following previous studies by other authors, we extend the notion of context area by replacing the distance function with a cost function. Using this extension, we exhibit three different modes of selection and demonstrate their differences on a mobile application: the museum visit.

Diane Lingrand, Stéphane Lavirotte, Jean-Yves Tigli
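The shift from a distance function to a cost function can be sketched in a few lines. The museum layout, the wall penalty, and the selection budget below are invented for illustration and are not taken from the paper:

```python
import math

# Hypothetical museum layout: (name, position, wall between user and item).
user = (0.0, 0.0)
artworks = [("statue", (3.0, 4.0), False),    # 5 m away, same room
            ("painting", (2.0, 0.0), True)]   # 2 m away, behind a wall

def distance(a, b):
    return math.dist(a, b)

def cost(a, b, wall_between):
    # Cost = walking distance plus a fixed detour penalty for a wall.
    return distance(a, b) + (10.0 if wall_between else 0.0)

budget = 6.0
near_by_distance = [n for n, p, _w in artworks if distance(user, p) <= budget]
near_by_cost = [n for n, p, w in artworks if cost(user, p, w) <= budget]
print(near_by_distance)  # → ['statue', 'painting']
print(near_by_cost)      # → ['statue']  (the wall puts the painting out of reach)
```

The cost-based selection excludes the nearby painting because reaching it requires a detour, which a purely Euclidean context area cannot express.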

Sharing Context Information in Semantic Spaces

In a highly mobile and active society, ubiquitous access to information and services is a general desire. The context of users is of high importance for an application to adapt automatically to changing situations and to provide the relevant data. We present a combination of space-based computing and the Semantic Web to provide a communication infrastructure that simplifies sharing data in dynamic and heterogeneous systems. The infrastructure is called Semantic Spaces.

Reto Krummenacher, Jacek Kopecký, Thomas Strang

Grid Computing Workshop (GADA)

GADA 2005 PC Co-chairs’ Message

We wish to extend a warm welcome to GADA’05, The Second International Workshop on Grid Computing and its Application to Data Analysis, held in Ayia Napa (Cyprus), in conjunction with the On The Move Federated Conferences and Workshops 2005 (OTM’05).

This time around we have been fortunate enough to receive an even larger number of highly informative and engaging papers from all corners of the globe, covering a wide range of scientific subjects and computational tool designs. It has been very inspiring to observe the extensive and well-formulated work being done in the development of processing and resource management tools to meet the ever-increasing computational requirements of today's and future projects.

Pilar Herrero, María S. Pérez, Victor Robles, Jan Humble

Web Services Approach in the Grid

Coordinated Use of Globus Pre-WS and WS Resource Management Services with GridWay

The coexistence of different Grid infrastructures and the advent of Grid services based on Web Services open an interesting debate about the coordinated harnessing of resources based on different middleware implementations and even different Grid service technologies. In this paper, we present the loosely-coupled architecture of GridWay, which allows the coordinated use of different Grid infrastructures, even when based on different Grid middlewares and services, as well as straightforward resource sharing. This architecture eases the gradual migration from pre-WS Grid services to WS ones, and even the long-term coexistence of both. We demonstrate its suitability by evaluating the coordinated use of two Grid infrastructures: a research testbed based on Globus WS Grid services, and a production testbed based on Globus pre-WS Grid services, as part of the LCG middleware.

Eduardo Huedo, Rubén S. Montero, Ignacio M. Llorente

Web-Services Based Modelling/Optimisation for Engineering Design

Results from the DIstributed Problem SOlving (DIPSO) project are reported, which involves the implementation of a Grid-enabled Problem Solving Environment (PSE) to support conceptual design. This is a particularly important phase in engineering design, often resulting in significant savings in costs and effort at subsequent stages of design and development. Allowing a designer to explore the potential design space provides significant benefit in channelling the constraints of the problem domain into a suitable preliminary design. To achieve this, the PSE will enable the coupling of various computational components from different “Centers of Excellence”. A Web Services-based implementation is discussed. The system will support clients who have extensive knowledge of their design domain but little expertise in state-of-the-art search, exploration and optimisation techniques.

Ali Shaikh Ali, Omer F. Rana, Ian Parmee, Johnson Abraham, Mark Shackelford

Workflow Management System Based on Service Oriented Components for Grid Applications

To efficiently develop and deploy parallel and distributed Grid applications, we have developed the Workflow-based grId portal for problem Solving Environment (WISE). However, Grid technology has since been standardized around Web services for service-oriented architecture, and the workflow engine of WISE is insufficient to meet the requirements of a service-oriented environment. Therefore, we present a new workflow management system, called the Workflow management system based on Service-oriented components for Grid Applications (WSGA). It provides efficient execution of programs for computationally intensive problems using advanced patterns, dynamic resource allocation, and pattern-oriented resource allocation, and configures a Web service as an activity of a workflow in a Grid computing environment. In this paper, we propose the WSGA architecture design based on service-oriented components, together with functions for using Web services and increasing system performance. We also describe an implementation method and report a performance evaluation of the WSGA architecture.

Ju-Ho Choi, Yong-Won Kwon, So-Hyun Ryu, Chang-Sung Jeong

Grid Applications

Life Science Grid Middleware in a More Dynamic Environment

This paper proposes a model for integrating a higher-level Semantic Grid middleware with the Web Service Resource Framework (WSRF), extending the prototype presented in [1] and informed by issues identified in our early experiments with that prototype. WSRF defines a generic and open framework for modeling and accessing stateful resources using Web Services, together with Web Service Notification, which standardizes publish/subscribe notification for Web Services. In particular we focus on using WSRF to support data integration, workflow enactment and notification management in the leading EPSRC e-Science pilot project. We report on our experience from the implementation of our proposed model and argue that it converges with peer-to-peer technology in a promising way towards enabling Semantic Grid middleware in mobile ad-hoc network environments.

Milena Radenkovic, Bartosz Wietrzyk

A Grid-Aware Implementation for Providing Effective Feedback to On-Line Learning Groups

Constantly providing feedback to on-line learning teams is challenging, yet it is one of the latest and most attractive ways to influence the learning experience in a positive manner. The possibility of enhancing a learning group's participation by providing appropriate feedback is rapidly gaining popularity due to its great impact on group performance and outcomes. Indeed, storing parameters of interaction such as participation behaviour and giving constant feedback on these parameters to the group may influence the group's motivation and emotional state, resulting in an improvement of the collaboration. Furthermore, feeding back to the group the results of tracking the interaction data may enhance the learners' and the group's problem-solving abilities. In all cases, feedback implies constantly receiving information from the learners' actions stored in log files, since the history information shown is continuously updated. Therefore, in order to provide learners with effective feedback, it is necessary to process large and considerably complex event log files from group activity on a constant basis, which may require computational capacity beyond that of a single computer. To that end, in this paper we show how a Grid approach can considerably decrease the time needed to process group activity log files and thus allow group learners to receive selected feedback even in real time. Our approach is based on the master-worker paradigm and is implemented using Globus technology running on the PlanetLab platform. To test our application, we used event log files from the Basic Support for Collaborative Work (BSCW) system.

Santi Caballé, Claudi Paniagua, Fatos Xhafa, Thanasis Daradoumis
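A minimal sketch of the master-worker pattern applied to activity-log processing might look as follows, using Python threads in place of Globus-distributed workers; the log format and the counting task are assumptions made for illustration:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_events(chunk):
    """Worker: tally events per user in one slice of the activity log."""
    counts = Counter()
    for line in chunk:
        user, _event = line.split(",", 1)
        counts[user] += 1
    return counts

def aggregate(log_lines, n_workers=4):
    """Master: partition the log, farm the slices out, merge partial counts."""
    step = max(1, len(log_lines) // n_workers)
    chunks = [log_lines[i:i + step] for i in range(0, len(log_lines), step)]
    totals = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(count_events, chunks):
            totals.update(partial)
    return totals

log = ["alice,post", "bob,read", "alice,read", "alice,edit", "bob,post"]
print(aggregate(log))  # participation counts per user: alice 3, bob 2
```

In a real deployment each chunk would be a large log file dispatched to a remote Grid node, but the structure, partition, independent processing, and merge, is the same.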

Security and Ubiquitous Computing

Caching OGSI Grid Service Data to Allow Disconnected State Retrieval

The Open Grid Services Infrastructure (OGSI) defines a standard set of facilities which allow the creation of wide area distributed computing applications, i.e. applications which cross organisational and administrative boundaries. An earlier project demonstrated significant potential for OGSI to support mobile or remote sensors, which require the integration of devices which are wirelessly and therefore only intermittently connected to the fixed network. Further, there is significant potential for mobile clients, e.g. PDAs, to be used for data analysis. However, OGSI currently assumes the availability of a permanent network connection between client and service. This paper proposes the use of caching to provide improved access to the state of intermittently connected OGSI grid services. The paper also describes a prototype which has been implemented and tested to prove the viability of the approach.

Alastair Hampshire, Chris Greenhalgh

Shelter from the Storm: Building a Safe Archive in a Hostile World

Storing the data and configuration files related to scientific experiments is vital if those experiments are to remain reproducible, or if the data is to be shared easily. The presence of historical (observed) data is also important in order to assist in model evaluation and development. This paper describes the design and implementation process for a data archive, which was required for a coastal modelling project.

The construction of the archive is described in detail, from its design through to deployment and testing. As we will show, the archive has been designed to tolerate failures in its communications with external services, and also to ensure that no information is lost if the archive itself fails, i.e. upon restarting, the archive will still be in exactly the same state.

Jon MacLaren, Gabrielle Allen, Chirag Dekate, Dayong Huang, Andrei Hutanu, Chongjie Zhang

Event Broker Grids with Filtering, Aggregation, and Correlation for Wireless Sensor Data

A significant increase in real-world event-monitoring capability with wireless sensor networks has brought a new challenge to ubiquitous computing. Managing high-volume and faulty sensor data requires more sophisticated event filtering, aggregation and correlation over time and space in heterogeneous network environments. Event management becomes a multi-step operation from event sources to final subscribers, combining information collected by wireless devices into higher-level information or knowledge. At the same time, the subscribers' interests have to be efficiently propagated to event sources. We describe an event-broker-grid approach based on a service-oriented architecture to deal with this evolution, focusing on the coordination of the event filtering, aggregation and correlation functions residing in event broker grids. An experimental prototype in a simulation environment with the Active BAT system is presented.

Eiko Yoneki

Distributed Authentication in GRID5000

Between high-performance clusters and grids lies an intermediate infrastructure called the cluster grid, which corresponds to the interconnection of clusters through the Internet. Cluster grids are not dedicated to specific applications only, but should allow users to execute programs of different natures. This kind of architecture also imposes additional constraints, as the geographic extension raises availability and security issues. In this context, authentication is one of the keystones, since it provides access to the resources. Grid5000 is a French project based on a cluster grid topology. This article expounds and justifies the authentication system used in Grid5000. We first show the limits of classical approaches, namely local files and NIS, in such configurations. We then propose a scalable alternative based on the LDAP protocol that meets the needs of cluster grids in terms of availability, security and performance. Finally, among the various applications that can be executed on the Grid5000 platform, we present μgrid, a minimal middleware used for medical data processing.

Sebastien Varrette, Sebastien Georget, Johan Montagnat, Jean-Louis Roch, Franck Leprevost

Performance Enhancement in the Grid

A Load Balance Methodology for Highly Compute-Intensive Applications on Grids Based on Computational Modeling

Compute-intensive simulations are currently good candidates for being executed on distributed computers and Grids, in particular for applications with a large number of input data whose values change throughout the simulation time and where the communications are not a critical factor. Although the number of computations usually depends on the bulk of input data, there are applications in which the computational load depends on the particular values of some input data. We propose a general methodology to deal with the problem of improving load balance in these cases. It is divided into two main stages. The first one is an exhaustive study of the parallel code structure, using performance tools, with the aim of establishing a relationship between the values of the input data and the computational effort. The next stage uses this information and provides a mechanism to distribute the load of any particular simulating situation among the computational nodes. A load balancing strategy for the particular case of STEM-II, a compute-intensive application that simulates the behavior of pollutant factors in the air, has been developed, obtaining an important improvement in execution time.

D. R. Martínez, J. L. Albín, J. C. Cabaleiro, T. F. Pena, F. F. Rivera

Providing Autonomic Features to a Data Grid

Autonomic and Grid computing are complementary, in the sense that complex grid environments can take advantage of the features provided by autonomic computing, and, on the other hand, autonomic processes can be properly deployed using grid technology. Moreover, one of the most active fields in grid computing is the area of data grids, which focuses on data access. Our paper proposes a grid framework that provides autonomic characteristics in order to enhance the performance of data access by predicting the future behaviour of the corresponding I/O system. The use of the autonomic system is transparent to the user. This paper also presents a case study of such a system.

María S. Pérez, Alberto Sánchez, Ramiro Aparicio, Pilar Herrero, Manuel Salvadores

TCP Performance Enhancement Based on Virtual Receive Buffer with PID Control Mechanism

TCP is the only widely available protocol for reliable end-to-end congestion-controlled network communication, and it is thus the one used for almost all communications. Unfortunately, TCP was not designed for high-performance networking and computing, so research on obtaining good TCP throughput in such settings is actively in progress worldwide. In this paper, we propose a new scheme that allows a TCP system to achieve high throughput even with a small buffer. The receive buffer is nearly empty most of the time due to the characteristics of original TCP, but the amount of physical memory assigned to the buffer cannot simply be reduced, because TCP flow control would degrade performance with a reduced buffer. A TCP system applying our proposed scheme, however, can reduce the size of the physically assigned receive buffer without degrading TCP performance. We use a PID control mechanism to properly adjust the size of the virtual receive buffer (VRB). Finally, we compare the throughput of the proposed scheme with that of original TCP: the TCP using VRB obtains 46% higher throughput than the original. We also compare the amount of memory needed to achieve maximum throughput under the two schemes; this second comparison shows that the proposed TCP spends 43% less memory than the tuned original TCP for the same throughput.

Byungchul Park, Eui-Nam Huh, Hyunseung Choo, Yoo-Jung Kim
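The abstract does not give the controller details, but a textbook discrete PID loop of the kind it mentions can be sketched as follows. The gains, the 50% occupancy target, and the toy buffer model are illustrative assumptions, not the paper's design:

```python
class PIDController:
    """Minimal discrete PID controller (textbook form)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy control loop: grow/shrink a virtual buffer so that its occupancy
# tracks a 50% target (the "queued data" here is a made-up constant).
pid = PIDController(kp=0.6, ki=0.1, kd=0.05)
size_kb = 256.0
for _ in range(100):
    occupancy = min(1.0, 64.0 / size_kb)     # fraction of the buffer in use
    size_kb = max(32.0, size_kb + 100.0 * pid.update(occupancy - 0.5))
```

The controller output combines the current error (P), its accumulated history (I), and its rate of change (D), which is exactly the mechanism the paper applies to keep the VRB size matched to actual demand.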

Databases on the Grid

Computational Grid vs. Parallel Computer for Coarse-Grain Parallelization of Neural Networks Training

The development of a coarse-grain parallel algorithm for artificial neural network training with dynamic mapping onto the processors of a parallel computer system is considered in this paper. The parallelization of this algorithm on a computational grid operated under Globus middleware is compared with the results obtained on the parallel computer Origin 300. Experiments show better efficiency for the computational grid than for the parallel computer under an efficiency/price criterion.

Volodymyr Turchenko

Object-Oriented Wrapper for Relational Databases in the Data Grid Architecture

The paper presents a solution to the problem of wrapping relational databases into an object-oriented business model in the data grid architecture. The main problem with this kind of wrapper is how to utilize the native SQL query optimizer, which in the majority of RDBMSs is transparent to the users. In our solution we use the stack-based approach to query languages, its query language SBQL, updateable object-oriented virtual views, and the query modification technique. The architecture rewrites the front-end OO query to a semantically equivalent back-end query addressing the M0 object model, which is 1:1 compatible with the relational model. Then, in the resulting SBQL query, the wrapper looks for patterns that correspond to optimizable SQL queries. Such patterns are then substituted by dynamic SQL execute immediately statements. The method is illustrated by a sufficiently sophisticated example, and is currently being implemented within the prototype OO server ODRA devoted to Web and grid applications.

Kamil Kuliberda, Jacek Wislicki, Radoslaw Adamus, Kazimierz Subieta

Modeling Object Views in Distributed Query Processing on the Grid

This paper proposes a method for modeling views in Grid databases. Views are understood as independent data transformation services that may be integrated with other Database Grid Services. We show examples of graphical notation and semi-automated query construction. The solution is supported by a minimal metamodel, which also provides reflection capabilities for dynamic services orchestration.

Krzysztof Kaczmarski, Piotr Habela, Hanna Kozankiewicz, Kazimierz Subieta

Optimization of Distributed Queries in Grid Via Caching

Caching can greatly improve the performance of query processing in distributed databases. In this paper we show how this technique can be used in a grid architecture where data integration is implemented by means of updatable views. Views integrate data from heterogeneous sources and provide users with an integrated form of the data. The integration process is transparent, i.e. users need not be aware that the data are not located in one place. In data grids, caching can be used at different levels of the architecture; we focus on caching at the middleware layer, where the cache is stored in the database of the integrating unit. Cached results can then be used while answering queries from grid users, so there is no need to re-evaluate whole queries. In this way caching can greatly increase the performance of applications operating on the grid. We also present an example of how a query can be optimized by rewriting it to make use of cached results.

Piotr Cybula, Hanna Kozankiewicz, Krzysztof Stencel, Kazimierz Subieta
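A much-simplified picture of middleware-level query caching, far from the updatable-view and query-rewriting machinery the paper describes, is a cache keyed by a normalized form of the query text, with explicit invalidation when the sources change. All names and the normalization rule below are illustrative assumptions:

```python
import re

class QueryCache:
    """Middleware-level cache: normalize query text, reuse stored results,
    invalidate explicitly when the underlying sources change."""
    def __init__(self, evaluate):
        self.evaluate = evaluate   # callback that really runs the query
        self.store = {}

    @staticmethod
    def normalize(query):
        # Collapse whitespace and case so trivially equivalent queries hit.
        return re.sub(r"\s+", " ", query.strip()).lower()

    def run(self, query):
        key = self.normalize(query)
        if key not in self.store:
            self.store[key] = self.evaluate(query)
        return self.store[key]

    def invalidate(self):
        self.store.clear()

calls = []
def evaluate(query):          # stand-in for distributed evaluation on the grid
    calls.append(query)
    return ["row1", "row2"]

cache = QueryCache(evaluate)
cache.run("SELECT name FROM Employees")
cache.run("select name   from employees")   # equivalent query: cache hit
print(len(calls))  # → 1 (the second query never reached the grid)
```

The paper's approach is considerably richer, rewriting queries so that cached partial results can serve new, non-identical queries, but the basic saving is the same: an avoided round of distributed evaluation.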

Replication

Modelling the $\surd{{\rm N}}$ + ROWA Model Approach Inside the WS-ReplicationResource

This paper presents the $\surd{{\rm N}}$ + ROWA model being developed at the Universidad Politecnica de Madrid with the aim of replicating information in Grid environments, optimizing the number of messages to be exchanged in the process, and supporting the construction of Grid environments based on WSRF specifications. The model presented in this paper will be one of the pillars of a new Grid service (WS-ReplicationResource) that, in the near future, will provide Grid systems with a high-level resource replication service.

Manuel Salvadores, Pilar Herrero, María S. Pérez, Alberto Sanchez

Workshop on Inter-organizational Systems and Interoperability of Enterprise Software and Applications (MIOS+INTEROP)

MIOS+INTEROP 2005 PC Co-chairs’ Message

Modern enterprises face strong economic pressure to increase competitiveness, to operate on a global market, and to engage in alliances of several kinds. Agility has become the new guiding principle for enterprises. It means that enterprises must be able to easily collaborate with other enterprises and to expand, be it through participating in enterprise networks, through mergers or acquisitions, or through insourcing or outsourcing of services. In order to meet these economic requirements, enterprises rely increasingly on the benefits of modern information and communication technology (ICT). However, the appropriate knowledge to deploy this technology as needed, and in an effective and efficient way, is largely lacking, particularly knowledge regarding the collaboration of enterprises and the interoperability of their information systems.

Antonia Albani, Jan L. G. Dietz, Hervé Panetto, Monica Scannapieco

Service Modelling

Registering a Business Collaboration Model in Multiple Business Environments

Today, business registries are regarded as a means of finding services offered by a business partner. However, business registries may also serve as a means of searching for inter-organizational business process definitions that are relevant in one's own business environment. Thus, it is important to define in which environments an inter-organizational business process definition is valid. Furthermore, environment-specific adaptations of the business process definition may be registered. In this paper the business process definitions are based on UMM business collaboration models. We discuss two approaches: in the first, the binding of a model to business environments is specified within the model itself; in the second, it is defined in the registry meta-data.

Birgit Hofreiter, Christian Huemer

Component Oriented Design and Modeling of Cross-Enterprise Service Processes

Service processes are an important kind of cross-enterprise business process. However, they show particular properties which require new approaches to design and modeling. Therefore, in this paper a method for designing and modeling service processes is developed. It is component-oriented and uses so-called perspective elements as the granularity of the components. The service processes created by the method are both aligned to customer requirements and efficient in their operation through the use of standardized components.

Rainer Schmidt

Comparing the Impact of Service-Oriented and Object-Oriented Paradigms on the Structural Properties of Software

Service-Oriented Architecture (SOA) is a promising approach for developing enterprise applications. While the concept of SOA has been described in research and industry literature, the techniques for determining optimal granularity of services and encapsulating business logic in software are unclear. This paper explores this problem using a case study developed with two contrasting approaches to building enterprise applications that utilise services, where one of the approaches employs coarse-grained services developed based on the principles of Object-Orientation (OO), and another approach is based on embedding business rules and logic into executable BPEL scripts and constructing a system as a set of fine-grained services. The quantitative comparison based on a set of mature software engineering metrics showed that a system developed using the BPEL-based approach has a potentially higher structural complexity, but at the same time lower coupling between software modules compared to an OO approach. It was also shown that some of the existing software metrics are inapplicable to SOA, hence new metrics need to be developed.

Mikhail Perepletchikov, Caspar Ryan, Keith Frampton

The Impact of Software Development Strategies on Project and Structural Software Attributes in SOA

Service-Oriented Architecture (SOA) is a promising approach for developing integrated enterprise applications. Although the architectural aspects of SOA have been investigated in research and industry literature, the actual process of designing and implementing services in SOA is not well understood. The goal of this paper is to identify tasks needed for successful design and implementation of services, and investigate their effect on the project and structural software attributes in the context of SOA. This facilitates the specification of guidelines for decreasing the required development effort and capital cost of the SOA projects, and improving the structural software attributes of service implementations. The tasks are identified in the context of top-down, bottom-up and meet-in-the-middle software development strategies.

Mikhail Perepletchikov, Caspar Ryan, Zahir Tari

Service Choreography and Orchestration

A Hybrid Intermediation Architectural Approach for Integrating Cross-Organizational Services

Nowadays, workflow research has shifted from the fundamentals of workflow modelling and enactment towards improving the workflow modelling lifecycle and integrating workflow enactment engines with new enabling technologies for process invocation. These efforts, along with the trend towards workflow component reusability, aim at tackling the issues raised by the dynamic and distributed environment of the e-business domain. Other, fully distributed technologies, such as intelligent multi-agent systems, have proven able to meet some of the special requirements of conducting business over the Internet, but they still present credibility problems concerning the overall control of the processes. We propose a web-based "hybrid intermediation architecture" for integrating services by exploiting and combining the advantages of strictly centralized topologies that use workflow engines with those of fully distributed systems that use agent technologies.

Giannis Verginadis, Panagiotis Gouvas, Gregoris Mentzas

A Framework Supporting Dynamic Workflow Interoperation

When a workflow process spans multiple organizations, the subprocess task model is an efficient way of representing remote services of other systems. A subprocess task usually represents a single service in conventional workflows. However, if a subprocess task comprises multiple services, and the number of services and their execution flow cannot be decided until run time, conventional workflow design cannot handle such situations efficiently: all potentially reachable paths must be known at process build time, an assumption that does not always hold in real situations. In this paper, we propose a multi-subprocess-task-based framework for dynamic workflow interoperation. Within it, we develop the multi-subprocess task model to handle a subprocess composed of multiple services that are unknown at process build time. We also define and implement four components to support dynamic workflow interoperation: a workflow engine, an adapter, Service Interface Repositories (SIRs), and XML messages. The adapter and SIRs make a local WfMS transparent to the location and platform of interoperating WfMSs by encapsulating external subprocesses and superprocesses. When an example scenario is implemented and evaluated in the proposed framework, its advantages are evident in terms of automaticity, adaptability, and efficiency.

Jaeyong Shim, Myungjae Kwak, Dongsoo Han

A Text Mining Approach to Integrating Business Process Models and Governing Documents

As large companies build up their enterprise architecture solutions, they need to relate business process descriptions to lengthy and formally structured documents of corporate policies and standards. However, these documents are usually not specific to particular tasks or processes, and the user is left to read through a substantial amount of irrelevant text to find the few fragments that are relevant to them. In this paper, we describe a text mining approach to establishing links between business process model elements and relevant parts of governing documents at Statoil, one of Norway’s largest companies. The approach builds on standard IR techniques, gives us a ranked list of text fragments for each business process activity, and can easily be integrated with Statoil’s enterprise architecture solution. With these ranked lists at hand, users can easily find the most relevant sections to read before carrying out their activities.

Jon Espen Ingvaldsen, Jon Atle Gulla, Xiaomeng Su, Harald Rønneberg
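The ranking step described in the abstract rests on standard IR scoring. As a hedged illustration, one might rank governing-document fragments against an activity description with plain TF-IDF and cosine similarity; the policy fragments and query below are invented, and this sketch is not Statoil’s actual pipeline:

```python
import math
from collections import Counter

def tfidf_rank(query, fragments):
    """Rank text fragments against a query by TF-IDF cosine similarity."""
    docs = [f.lower().split() for f in fragments]
    n = len(docs)
    # Document frequency of each term, then smoothed inverse document frequency.
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(query.lower().split())
    scored = [(cosine(qv, vec(d)), f) for d, f in zip(docs, fragments)]
    return sorted(scored, key=lambda s: -s[0])

# Invented governing-document fragments and activity description:
fragments = [
    "Purchase orders above the approval limit require sign-off by the manager",
    "All personnel must complete annual safety training",
    "Invoices are matched against purchase orders before payment",
]
ranking = tfidf_rank("purchase orders approval", fragments)
```

Each business process activity would play the role of the query, and the ranked list lets the user read only the top fragments.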

A Process-Driven Inter-organizational Choreography Modeling System

We have been developing a process-driven e-business service integration (BSI) system through functional extensions of the ebXML technology. It targets process-driven e-business service integration markets, such as e-Logistics, e-SCM, e-Procurement, and e-Government, that require process-driven multi-party collaborations among a set of independent organizations. The system consists of three major components: the Choreography Modeler, the Runtime & Monitoring Client, and the EJB-based BSI Engine. This paper focuses on the choreography modeler, which provides the modeling functionality for ebXML-based choreography and orchestration among the organizations engaged in a process-driven multi-party collaboration. The modeler is fully operable in an EJB-based framework environment (J2EE, JBoss, and WebLogic) and has been applied to e-Logistics process automation and B2B choreography models of a postal service company. The paper mainly describes the implementation details of the modeler, focusing in particular on the modeling features for process-driven multi-party collaboration.

Kwang-Hoon Kim

A Petri Net Based Approach for Process Model Driven Deduction of BPEL Code

The management of collaborative business processes refers to the design, analysis, and execution of interrelated production, logistics and information processes, which are usually performed by different independent enterprises in order to produce and to deliver a specified range of goods or services. The effort to interconnect independently developed business process models and to map them to process-implementing software components is particularly high. The implementation of such collaborative inter-organizational business process models is assisted by so-called choreography languages that can be executed by software applications. In this paper, we present a Petri net based approach for process-model driven deduction of BPEL code. Our approach is based on a specific type of high-level Petri nets, so-called XML nets. We use XML nets both for modeling and coordinating business processes implemented as Web services and for deriving BPEL elements of the Web service based components. Our approach provides a seamless concept for modeling, analysis and execution of business processes.

Agnes Koschmider, Marco Mevius

From Inter-organizational Workflows to Process Execution: Generating BPEL from WS-CDL

The Web Service Choreography Description Language (WS-CDL) is a specification for describing multi-party collaboration based on Web Services from a global point of view. WS-CDL is designed to be used in conjunction with the Web Services Business Process Execution Language (WS-BPEL or BPEL). Up to now, work on conceptual mappings between the two languages has been missing. This paper closes this gap by showing how the BPEL process definitions of the parties involved in a choreography can be derived from the global WS-CDL model. We have implemented a prototype of the mappings as a proof of concept. The automatic transformation improves the quality of the software components interacting in the choreography, as advocated by the Model Driven Architecture concept.

Jan Mendling, Michael Hafner

PIT-P2M: ProjectIT Process and Project Metamodel

Within the constant evolution observed in the IT/IS area, new processes have emerged in response to new customer requirements and to new trends in the software engineering community, such as unified or agile processes. Despite the large set of available tools for process and project management, there is still a real gap between “process” and “project” management approaches and their respective tools. The effort spent on process customization could instead be used to assist team members in other tasks, such as project management tasks to control activities, work products and team members. In this paper, we describe a simplified SPEM-based metamodel for process specification and explain the motivation behind this proposal. Based on this metamodel, we also propose a metamodel for project definition and configuration. To conclude, we demonstrate that this metamodel is better adapted to process specification and can be applied to project definition.

Paula Ventura Martins, Alberto Rodrigues da Silva

Interoperability of Networked Enterprise Applications

Requirements for Secure Logging of Decentralized Cross-Organizational Workflow Executions

The actions performed by the parties involved in a decentralized cross-organizational workflow are controlled by several independent workflow engines. Due to the lack of centralized coordination control, auditing is required that supports reliable and secure detection of malicious actions performed by these parties. In this paper we identify several issues that have to be resolved for such a secure logging system. Further, security requirements for a decentralized data store are investigated and evaluated against existing decentralized data stores.

Andreas Wombacher, Roel Wieringa, Wim Jonker, Predrag Knežević, Stanislav Pokraev

Access Control Model for Inter-organizational Grid Virtual Organizations

The grid has emerged as a platform that makes it possible to put in place an inter-organizational shared space known as a Virtual Organization. The Virtual Organization (VO) encompasses users and resources supplied by the different partners for achieving the VO’s creation goal. Though many works offer solutions to manage a VO, the dynamic, on-the-fly creation of virtual organizations is still a challenge. Dynamic creation of VOs requires the automated generation of an access control policy to trace the VO’s boundaries, specify the different partners’ rights within it, and ensure its management during its lifetime. In this paper, we propose an OrBAC (Organization Based Access Control model) based Virtual Organization model which serves as a cornerstone in the automated VO creation process. The OrBAC framework specifies the users’ access permissions/prohibitions with respect to the VO resources, while its administration model AdOrBAC flexibly models the multi-stakeholder administration in the Grid.

B. Nasser, R. Laborde, A. Benzekri, F. Barrère, M. Kamel

Interoperability Supported by Enterprise Modelling

The application of enterprise modelling supports a common understanding of business processes within and across companies. To ensure correct cooperation between two or more entities, it is mandatory to build an appropriate model of them. This can considerably strengthen all the cross-interface activities between the entities. Enterprise models illustrate the organisational business aspects as a prerequisite for the successful technical integration of IT systems or their configurations. If an IT system is not accepted because its usefulness is not transparent to staff members, it quickly loses its value due to erroneous or incomplete input and insufficient maintenance, which ultimately results in investment losses.

The paper exemplifies the strengths, values, limitations and gaps of applying enterprise modelling to support interoperability between companies. It presents a proposal for a common enterprise-modelling framework, described in terms of the problems to be faced and a knowledge-based methodological approach to help solve them. A specific application demonstrates enterprise modelling and the synchronisation between models as prerequisites for the successful design of Virtual Enterprises.

Frank Walter Jaekel, Nicolas Perry, Cristina Campos, Kai Mertins, Ricardo Chalmeta

Using Ontologies for XML Data Cleaning

Real data is often affected by errors and inconsistencies. Many of these arise because schemas cannot represent a sufficiently wide range of constraints. Data cleaning is the process of identifying and possibly correcting data quality problems that affect the data. Cleaning data requires gathering knowledge about the domain to which the data refer. However, existing data cleaning techniques still access this knowledge as a fragmented collection of heterogeneous rules and ad hoc data transformations. Furthermore, data cleaning methodologies for an important class of data based on the semistructured XML data model have not yet been proposed. In this paper we introduce the OXC framework, which offers a methodology for XML data cleaning based on a uniform representation of domain knowledge through an ontology. We describe how to define XML-related data quality metrics based on our domain knowledge representation, and give definitions of various metrics related to the completeness data quality dimension.

Diego Milano, Monica Scannapieco, Tiziana Catarci

Applying Patterns for Improving Subcontracting Management

This paper studies the inter-organizational communication of strategic design information. The focus is on global software subcontracting, where communication problems are common. Software patterns, which have been recognized as a valuable tool in software development, are proposed as a means to facilitate the communication of design information in a subcontracting relationship. The position of patterns in subcontracting-related processes is studied, and the implications of introducing patterns into a software subcontracting relationship are analyzed. As a result, an evaluation of software patterns’ suitability as a means for efficient, systematic and explicit communication in managing the subcontracting relationship is presented.

Riikka Ahlgren, Jari Penttilä, Jouni Markkula

Evaluation of Strategic Supply Networks

Under changing market conditions, companies are increasingly focusing on their core competencies while outsourcing supporting processes to their business partners. The outsourcing of business processes results in a strong dependency between companies and their business partners, building so-called value networks. In order to perform well in such value networks, the selection of both direct and indirect suppliers is of major importance. Potential supply networks therefore need to be identified and evaluated in order to strategically select the appropriate supply network. The selection of adequate supply networks satisfying evaluation criteria defined by the OEM is the topic of this paper. It builds on preparatory work in the area of strategic supply network development, where the identification and dynamic modeling of strategic supply networks were elaborated, and focuses on the evaluation of strategic supply networks as the basis for supply network selection.

Antonia Albani, Nikolaus Müssigmann

Self Modelling Knowledge Networks

As far as the scope of knowledge management (KM) is concerned, the focus of attention is shifting towards inter-organisational aspects, resulting in new requirements for the KM process. This paper introduces the concept of self modelling knowledge networks, supporting KM in networks by means of dynamic self-configuration. Apart from introducing the concept, technical issues and design aspects of the implementation are discussed, and a component model for an inter-organisational knowledge management system is introduced.

Volker Derballa, Antonia Albani

Workshop on Object-Role Modeling (ORM)

ORM 2005 PC Co-chairs’ Message

Fact-Oriented Modeling is a conceptual approach to modeling and querying the information semantics of business domains in terms of the underlying facts of interest, where all facts and rules may be verbalized in language that is readily understandable by non-technical users of those business domains. Unlike Entity-Relationship (ER) modeling and UML class diagrams, fact-oriented modeling treats all facts as relationships (unary, binary, ternary etc.). How facts are grouped into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) is considered a lower-level implementation issue that is irrelevant to capturing the essential business semantics. Avoiding attributes in the base model enhances semantic stability and populatability, as well as facilitating natural verbalization. For information modeling, fact-oriented graphical notations are typically far more expressive than those provided by other notations. Fact-oriented textual languages are based on formal subsets of native languages, and so are easier for business people to understand than technical languages like OCL. Fact-oriented modeling includes procedures for mapping to attribute-based structures, so it may also be used as a front end to other approaches.

Though less well known than ER and object-oriented approaches, fact-oriented modeling has been used successfully in industry for over 30 years, and is taught in universities around the world. The fact-oriented modeling approach comprises a family of closely related “dialects”, the best known being Object-Role Modeling (ORM), the Natural Language Information Analysis Method (NIAM), and Fully Communication Oriented Information Modeling (FCO-IM). Though adopting a different graphical notation, the Object-oriented Systems Model (OSM) is a close relative, with its attribute-free philosophy.

Terry Halpin, Robert Meersman

Schema Management

Using Abstractions to Facilitate Management of Large ORM Models and Ontologies

With ever larger ORM models and ORM-represented ontologies, information management and its GUI representation become even more important. One useful mechanism is abstraction, which has received some attention in conceptual modelling and implementation, as well as in its foundational characteristics. Extant heuristics for ORM abstractions are examined and enriched with several foundational aspects of abstraction. These improvements are applicable to a wider range of types of representations, including conceptual models and ontologies, thereby not only alleviating the Database Comprehension Problem, but also facilitating conceptual model and ontology browsing.

C. Maria Keet

Modularization and Automatic Composition of Object-Role Modeling (ORM) Schemes

In this paper we present a framework and algorithm for the modularization and composition of ORM schemes. The main goals of modularity are to enable and increase reusability, maintainability, and distributed development of ORM schemes. Further, we enable effective browsing and management of such schemes through libraries of ORM schema modules. For automatic composition of modules, we present and implement a composition operator: all atomic concepts and their relationships (i.e. fact types), and all constraints across the composed modules, are combined to form one schema (called a modular schema).

Mustafa Jarrar

Modelling Context Information with ORM

Context-aware applications rely on implicit forms of input, such as sensor-derived data, in order to reduce the need for explicit input from users. They are especially relevant for mobile and pervasive computing environments, in which user attention is at a premium. To support the development of context-aware applications, techniques for modelling context information are required. These must address a unique combination of requirements, including the ability to model information supplied by both sensors and people, to represent imperfect information, and to capture context histories. As the field of context-aware computing is relatively new, mature solutions for context modelling do not exist, and researchers rely on information modelling solutions developed for other purposes. In our research, we have been using a variant of Object-Role Modeling (ORM) to model context. In this paper, we reflect on our experiences and outline some research challenges in this area.

Karen Henricksen, Jadwiga Indulska, Ted McFadden

Industry Perspectives

Using Object Role Modeling for Effective In-House Decision Support Systems

This is a practical application article that illustrates how Guidant Corporation, a medical device manufacturer of cardiac rhythm management (CRM) devices, utilizes Object Role Modeling (ORM). While some business environments allow only lip service to be paid to best practices, the cardiac rhythm management industry does not have room for error. These medical devices control the heart – lives depend on them. This article discusses Guidant’s use of ORM as a best practice to document the business data rules and establish them as the “single point of truth” across the spectrum of decision support system (DSS) activities.

Eric John Pierson, Necito dela Cruz

Requirements Engineering with ORM

The number of IT project overspends and failures suggests that many IT projects do not conform to requirements. Despite decades of development, the IT industry still seems to lack an effective method of ensuring that a project will be right first time. This paper outlines an ORM-based requirements engineering process that aims to reduce the number of IT project failures. The main deliverable of the process is a formal description of WHAT a system is required to do, without reference to HOW it is to be done. Data or process, which comes first? This paper answers this question by showing how to define processes by starting with an object-role model. To use the approach in this paper you will need the Object-Role Modeling tool embedded within the database function of Microsoft Visual Studio for Enterprise Architects 2003 or later, together with two referenced books [Halpin 01] and [Halpin 03].

Ken Evans

Beyond Data Modeling

Generating Applications from Object Role Models

We propose a generic strategy for generating Information Systems (IS) applications on the basis of an Object Role Model (ORM). This strategy regards an ORM as specifying both static and dynamic aspects of the IS application.

We implemented the strategy in a prototype tool, using state-of-the-art software technology. The tool generates IS applications with basic functionality.

We regard our strategy as a first investigation of a new way to generate IS applications. Many open and sometimes far reaching research questions arise from this first exploration.

Betsy Pepels, Rinus Plasmeijer

A Fact-Oriented Approach to Activity Modeling

In this paper we investigate the idea of using an ORM model as a starting point for deriving an activity model, essentially providing an activity view on the original ORM model. When producing an ORM model of an inherently active domain, the resulting ORM model can provide an appropriate base to start from. We illustrate this basic idea by means of a running example. Much work remains to be done, but the results so far look promising.

H. A. (Erik) Proper, S. J. B. A. Hoppenbrouwers, Th. P. van der Weide

Future Directions

ORM 2

Object-role Modeling (ORM) is a fact-oriented modeling approach for specifying, transforming, and querying information at a conceptual level. Unlike Entity-Relationship modeling and Unified Modeling Language class diagrams, fact-oriented modeling is attribute-free, treating all elementary facts as relationships. For information modeling, fact-oriented graphical notations are typically far more expressive than other notations. Introduced 30 years ago, ORM has evolved into closely related dialects, and is supported by industrial and academic tools. Industrial experience has identified ways to improve current ORM languages (graphical and textual) and associated tools. A project is now under way to provide tool support for a second generation ORM (called ORM 2), that has significant advances over current ORM technology. This paper provides an overview of, and motivation for, the enhancements introduced by ORM 2, and discusses an open-source ORM 2 tool under development.

Terry Halpin

A World Ontology Specification Language

A language is proposed for the specification of the ontology of a world. Contrary to current ontology languages, it includes the transition space of a world, in addition to its state space. For the sake of a clear and deep understanding of the difference between state space and transition space, two kinds of facts are distinguished: stata (things that are just the case) and facta (things that are brought about). The application of the language is demonstrated using a library as the example case.

Jan L. G. Dietz

Applications

Using ORM to Model Web Systems

In this paper, we describe how ORM is extended and combined with Concurrent Task Trees (CTT) to model the content as well as the functionality of a web system in the web design method WSDM. As WSDM uses an audience-driven design approach, the use of ORM is somewhat different from its use for data modeling in the context of databases. We discuss the differences, as well as the benefits of using ORM for our purpose of modeling web systems with an audience-driven approach.

Olga De Troyer, Sven Casteleyn, Peter Plessers

Object Role Modelling for Ontology Engineering in the DOGMA Framework

A recent evolution in the areas of artificial intelligence, database semantics and information systems is the advent of the Semantic Web, which requires software agents and web services to exchange meaningful and unambiguous messages. A prerequisite for this kind of interoperability is the usage of an ontology. Currently, not many ontology engineering methodologies exist. This paper describes some basic issues to be taken into account when using the ORM methodology for ontology engineering from the point of view of the DOGMA ontology framework.

Peter Spyns

Formal Underpinnings

Fact Calculus: Using ORM and Lisa-D to Reason About Domains

We propose to use ORM and Lisa-D as means to formally reason about domains. Conceptual rule languages such as Lisa-D, RIDL and ConQuer allow for the specification of rules in a semi-natural language format that can be more easily understood by domain experts than languages such as predicate calculus, Z or OCL. If one were indeed able to reason about properties of domains in terms of Lisa-D expressions, then this reasoning would likely be more accessible to people without a background in formal mathematics, such as “the average” domain expert. A potential application domain for such reasoning is the field of business rules. If we can reason about business rules formulated in a semi-natural language format, the formal equivalence of (sets of) business rules (i.e. various paraphrasings) can be discussed with domain experts in a language and a fashion that is familiar to them.

S. J. B. A. Hoppenbrouwers, H. A. (Erik) Proper, Th. P. van der Weide

Schema Equivalence as a Counting Problem

In this paper we introduce some terminology for comparing the expressiveness of conceptual data modeling techniques, such as ER, NIAM, PSM and ORM, that are finitely bounded by their underlying domains. Next we consider schema equivalence and discuss the effects of the sizes of the underlying domains. This leads to the introduction of the concept of finite equivalence, which may serve as a means to a better understanding of the fundamentals of modeling concepts (utility). We give some examples of finite equivalence and inequivalence in the context of ORM.

H. A. (Erik) Proper, Th. P. van der Weide

Ph.D. Student Symposium

PhDS 2005 PC Co-chairs’ Message

With this Symposium for PhD students associated with the “On The Move Federated Conferences”, we present the second edition of an event stimulating PhD students to summarise and present their results in an international forum. The Symposium supports PhD students in their research by offering a highly reputed international publication channel, namely the Springer LNCS proceedings of the OTM workshops, and by providing an opportunity to gain ample feedback from prominent professors. More specifically, students receive general advice on making the most of their research environment, on how to focus their objectives, on how to establish their methods, and on how to adequately present their results. These eminent professors thus help establish a dialogue of scientific interrogation, discussion, reflection and guidance between students and mentors. For taking up this challenging task, we would like to warmly thank our team of accompanying professors, namely:

Domenico Beneventano, DataBase Group Dipartimento di Ingegneria dell’ Informazione, Universitá di Modena e Reggio Emilia, Italy.

Jaime Delgado, Distributed Multimedia Applications Group, Universitat Pompeu Fabra, Spain.

Jan Dietz, Department of Information Systems, Technical University of Delft, The Netherlands.

Antonia Albani, Peter Spyns, Johannes Maria Zaha

Accelerating Distributed New Product Development by Exploiting Information and Communication Technology

Enterprises are increasingly under pressure through the globalization of markets, rapid advances in technology, and increasing customer expectations. In order to create and sustain a competitive advantage in this environment, adaptation of value-generating activities is required. One of these value-generating activities, and the focus of the Ph.D. thesis, is new product development (NPD). The duration of NPD is a central success factor, as it has direct implications for its profitability and for the strategic position of an enterprise. Efforts to accelerate NPD typically focus on increasing efficiency by exploiting potentials in the design of the product, the organizational structures, or the development process. The utilization of information and communication technology (ICT) for this purpose is dominated by support for the actual engineering activities, e.g. through computer-aided systems (CAx). ICT support for accelerating NPD-related processes, especially beyond the boundaries of a single enterprise, has so far received much less acceptance and utilization. The specific question to be answered by the Ph.D. thesis in this context is therefore how to accelerate distributed NPD by exploiting ICT. The research results will be used for the further development of existing software applications supporting NPD.

Darius Khodawandi

Towards QoS-Awareness of Context-Aware Mobile Applications and Services

In our current connected wireless world, mobile devices are able to use various networking facilities. Although this enables mobile users to communicate any time and any place, it may also be very intrusive. There is a strong need to manage the information stream a user receives on his/her mobile device. Context-awareness seems to be a promising way to manage this information stream and to provide the means to communicate at the right time in the right way.

Current context-aware applications benefit from the user context (e.g. location information); however, they do not consider the quality of service (QoS) offered by the various networks (i.e. only best-effort QoS is considered). The research discussed in this paper focuses on a QoS- and context-aware service infrastructure supporting the development of mobile applications in a heterogeneous network environment. We argue that the use of context information helps to better capture the user’s required QoS and improves the delivered QoS.

Katarzyna Wac

Supporting the Developers of Context-Aware Mobile Telemedicine Applications

Telemedicine, defined as providing healthcare and sharing medical knowledge over distance using telecommunication means, is a promising approach to improving and enhancing the healthcare provisioning process. However, only recently has technology evolved (i.e. the miniaturization of high-power mobile devices that can use high-bandwidth mobile communication mechanisms) to the point that feasible advanced telemedicine applications can be developed. Current telemedicine systems offer proprietary solutions that are used in specific disease domains. For the acceptance, rapid development and introduction of novel and advanced telemedicine applications, there is a need for architectural mechanisms that support developers in rapidly building such applications. The research discussed in this paper focuses on the development of such mechanisms.

Tom Broens

Multilingual Semantic Web Services

In this paper, an overview of the PhD thesis of the same name is presented. After an introduction to the subject and aim of the thesis, a number of research questions are raised. A brief overview of the state of the art in the domain is given, and some of the problems that arise with it are considered. Although the monolingual problem is quite general and recurrent in most ontology design tools, this thesis focuses on the approach used in DOGMA Studio. Some new terms and acronyms are introduced: “onternationalization”, “concepton” and IMMO. Finally, some work issues for the thesis are sketched and some conclusions are drawn, before ending with some considerations about further problems that will follow from the research.

Frédéric Hallot

Improving Information Retrieval Effectiveness by Using Domain Knowledge Stored in Ontologies

The huge number of available documents on the Web makes finding relevant ones a challenging task. The quality of results that traditional full-text search engines provide is still not optimal for many types of user queries. Especially the vagueness of natural languages, abstract concepts, semantic relations and temporal issues are handled inadequately by full-text search. Ontologies and semantic metadata can provide a solution for these problems. This work examines how ontologies can be optimally exploited during the information retrieval process, and proposes a general framework which is based on ontology-supported semantic metadata generation and ontology-based query expansion. The framework can handle imperfect ontologies and metadata by combining results of simple heuristics, instead of relying on a “perfect” ontology. This allows integrating results from traditional full-text engines, and thus supports a gradual transition from classical full-text search engines to ontology-based ones.

Gábor Nagypál
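Ontology-based query expansion of the kind described above can be sketched as follows. The toy ontology, its relation names, and the expansion depth are illustrative assumptions, not the framework proposed in the paper (which combines several heuristics over possibly imperfect ontologies):

```python
# Toy ontology: each concept maps to its synonyms and narrower (more
# specific) concepts. A real system would query an RDF/OWL store;
# this dict is a stand-in for illustration only.
ontology = {
    "vehicle": {"synonyms": ["conveyance"], "narrower": ["car", "truck", "bicycle"]},
    "car": {"synonyms": ["automobile"], "narrower": ["suv"]},
}

def expand(term, depth=1):
    """Expand a query term with its synonyms and, up to `depth` levels,
    its narrower concepts (and their synonyms)."""
    terms = {term}
    entry = ontology.get(term, {})
    terms.update(entry.get("synonyms", []))
    if depth > 0:
        for child in entry.get("narrower", []):
            terms |= expand(child, depth - 1)
    return terms

expanded = expand("vehicle", depth=1)
# A full-text query can then be built as an OR over the expanded terms,
# letting a classical engine match documents that never mention "vehicle".
```

Bounding the depth is one simple way to keep expansion from drifting too far from the user’s intent.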

Top-k Skyline: A Unified Approach

The WWW has become a huge repository of information. For almost any knowledge domain there may exist thousands of available sources and billions of data instances, and many of these sources may publish irrelevant data. User-preference approaches have been defined to retrieve relevant data based on similarity, relevance or preference criteria specified by the user. Although many declarative languages can express user preferences, considering this information during query optimization and evaluation remains an open problem. SQLf, Top-k and Skyline are three extensions of SQL for specifying user preferences. The first two filter irrelevant answers following a score-based paradigm, whereas the latter produces relevant non-dominated answers using an order-based paradigm. The main objective of our work is to propose a unified approach that combines the order-based and score-based paradigms. We propose physical operators for SQLf considering Skyline and Top-k features, whose properties will be considered during query optimization and evaluation. We describe a Hybrid-Naive operator for producing only the answers on the Pareto curve with the best score values. We have conducted initial experimental studies to compare the Hybrid operator, Skyline and SQLf.

Marlene Goncalves, María-Esther Vidal

Judicial Support Systems: Ideas for a Privacy Ontology-Based Case Analyzer

Nowadays, ontologies are applied as an integral part of many applications in several domains, especially in the world of law. An ontology-based judicial support system is believed to be a useful tool for supporting, for example, legal argumentation assistance and legal decision taking in court. The privacy case analyzer is considered one of the most interesting applications of ontology-based privacy judicial support systems. The efficiency of privacy case analyzers depends on several factors, such as how to tackle the problem of linking cases to legislation, how to apply the guidance of privacy principles, and how to improve the extraction of cases. This paper addresses these items and describes the research issues and challenges of ontology-based judicial support systems that will be investigated.

Yan Tang, Robert Meersman

IFIP WG 2.12 and WG 12.4 International Workshop on Web Semantics (SWWS)

SWWS 2005 PC Co-chairs’ Message

Welcome to the proceedings of the first IFIP WG 2.12 & WG 12.4 International Workshop on Web Semantics (SWWS’05). This book reflects the issues raised and presented during the SWWS workshop, which proved to be an interdisciplinary forum for subject matters involving the theory and practice of web semantics. A special session on Regulatory Ontologies was organized to allow researchers from different backgrounds (such as law, business, ontologies, artificial intelligence, philosophy, and lexical semantics) to meet and exchange ideas.

This first year, a total of 35 papers were submitted to SWWS. Each submission was reviewed by at least two experts. The papers were judged according to their originality, validity, significance to theory and practice, readability and organization, and relevance to the workshop topics and beyond. This resulted in the selection of 18 papers for presentation at the workshop and publication in these proceedings. We feel that these proceedings will inspire further research and foster a strong following. The Program Committee comprised: Aldo Gangemi, Amit Sheth, Masoud Nikravesh, Mihaela Ulieru, Mohand-Said Hacid, Mukesh Mohania, Mustafa Jarrar, Nicola Guarino, Peter Spyns, Pierre-Yves Schobbens, Qing Li, Radboud Winkels, Ramasamy Uthurusamy, Richard Benjamins, Rita Temmerman, Robert Meersman, Robert Tolksdorf, Said Tabet, Stefan Decker, Susan Urban, Tharam Dillon, Trevor Bench-Capon, Usama Fayyad, Valentina Tamma, Wil van der Aalst, York Sure, and Zahir Tari.

Tharam Dillon, Ling Feng, Mustafa Jarrar, Aldo Gangemi, Joost Breuker, Jos Lehmann, André Valente

Invited Papers on TRUST

Adding a Peer-to-Peer Trust Layer to Metadata Generators

In this paper we outline the architecture of a peer-to-peer Trust Layer that can be superimposed on metadata generators producing classifications, such as our ClassBuilder and BTExact’s iPHI tools. Different techniques for aggregating trust values are also discussed. Our ongoing experimentation is aimed at validating the role of a Trust Layer as a non-intrusive, peer-to-peer technique for improving the quality of automatically generated metadata.

Paolo Ceravolo, Ernesto Damiani, Marco Viviani

Building a Fuzzy Trust Network in Unsupervised Multi-agent Environments

In automated and unsupervised multi-agent environments, where agents act on behalf of their stakeholders, the measurement and computation of trust is a key building block upon which all business interaction scenarios rely. In environments where the individual and independent calculation of trustworthiness values for future negotiation partners is desired, flexible algorithms and models imitating human reasoning are crucial. This paper introduces a trust evaluation model that imitates human reasoning by using fuzzy logic concepts. Furthermore, post-interaction processes such as business interaction reviews and credibility adjustment are used to continuously build and refine an information repository for future trust evaluation processes. Fuzzy logic offers a mathematical approach that encompasses uncertainty and tolerates imprecise data; combined with our highly customizable model, it makes it possible to meet the security needs of different stakeholders.
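A minimal fuzzy-logic sketch of trust evaluation, under assumptions the abstract leaves open: direct experience and third-party reputation are each fuzzified through a "high trust" membership function and then blended with a stakeholder-tunable weight. The membership shape and the weighting rule are illustrative, not the authors' actual model.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trustworthiness(direct, reputation, w_direct=0.7):
    """Weighted blend of two fuzzy 'high trust' memberships, result in [0, 1]."""
    mu_direct = triangular(direct, 0.2, 1.0, 1.8)   # inputs expected in [0, 1]
    mu_rep = triangular(reputation, 0.2, 1.0, 1.8)
    return w_direct * mu_direct + (1 - w_direct) * mu_rep

print(round(trustworthiness(0.9, 0.5), 3))  # strong direct experience dominates
```

The weight `w_direct` is the customization hook: a cautious stakeholder can discount hearsay by raising it, which mirrors the paper's point that different stakeholders have different security needs.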

Stefan Schmidt, Robert Steele, Tharam Dillon, Elizabeth Chang

Regulatory Ontologies (WORM)

Building e-Laws Ontology: New Approach

The Semantic Web provides tools for expressing information in a machine-accessible form so that agents (human or software) can understand it. Ontologies are required to describe the semantics of the concepts and properties used in web documents; they are needed to describe products, services, processes and practices in any e-commerce application, and they play an essential role in recognizing the meaning of the information in web documents. This paper attempts to deploy these concepts in an e-law application. An e-laws ontology has been built using existing resources. It has been shown that extracting concepts is less difficult than building the relationships among them. A new algorithm has been proposed to reduce the number of relationships, so that the domain knowledge expert (i.e., a lawyer) can refine them.

Ahmad Kayed

Generation of Standardised Rights Expressions from Contracts: An Ontology Approach?

Distribution of multimedia copyrighted material is a hot topic for the music and entertainment industry. Piracy, peer-to-peer networks and portable devices make multimedia content easily transferable without respecting the associated rights and protection mechanisms. Standards and industry-led initiatives try to prevent this unauthorised usage by means of electronic protection and governance mechanisms. On the other hand, different organisations have been handling the related legal issues by means of paper contracts. Now, the question is: how can we relate electronic protection measures to the paper contracts behind them?

This paper presents an analysis of current contract clauses and an approach for generating standardised rights expressions, or licenses, from them in as automated a way as possible. A mapping between those contract clauses and MPEG-21 REL (Rights Expression Language), the most promising rights expression standard, is also proposed, including an initial relational database structure. An ontology-based approach to the problem is also outlined: if contract clauses could be expressed as part of an ontology, this would facilitate the automatic generation of licenses. Generic contract ontologies and specific intellectual property rights ontologies would be the starting point, together with the presented analysis.

Silvia Llorente, Jaime Delgado, Eva Rodríguez, Rubén Barrio, Isabella Longo, Franco Bixio

OPJK into PROTON: Legal Domain Ontology Integration into an Upper-Level Ontology

The SEKT Project aims at developing and exploiting the knowledge technologies which underlie next-generation knowledge management, connecting complementary know-how of key European centers in three areas: Ontology Management Technology, Knowledge Discovery and Human Language Technology. This paper describes the development of PROTON, an upper-level ontology developed by Ontotext, and of the Ontology of Professional Judicial Knowledge (OPJK), modeled by a team of legal experts from the Institute of Law and Technology (IDT-UAB) for the Iuriservice prototype (a web-based intelligent FAQ for Spanish judges on their first appointment, designed by iSOCO). The paper focuses on the work done towards the integration of the OPJK, built using a middle-out strategy, into the system and top modules of PROTON, illustrating the flexibility of this independent upper-level ontology.

Núria Casellas, Mercedes Blázquez, Atanas Kiryakov, Pompeu Casanovas, Marta Poblet, Richard Benjamins

Applications of Semantic Web I (SWWS)

Semantic Transformation of Web Services

Web services have become the predominant paradigm for the development of distributed software systems. Web services provide the means to modularize software in a way that functionality can be described, discovered and deployed in a platform-independent manner over a network (e.g., intranets, extranets and the Internet). The representation of web services in current industrial practice is predominantly syntactic in nature, lacking the fundamental semantic underpinnings required to fulfill the goals of the emerging Semantic Web. This paper proposes a framework aimed at (1) modeling the semantics of syntactically defined web services through a process of interpretation, (2) scoping the derived concepts within domain ontologies, and (3) harmonizing the semantic web services with the domain ontologies. The framework was validated through its application to web services developed for a large financial system. The worked example presented in this paper is extracted from the semantic modeling of these financial web services.

David Bell, Sergio de Cesare, Mark Lycett

Modeling Multi-party Web-Based Business Collaborations

To remain competitive, enterprises have to mesh their business processes with those of their customers, suppliers and business partners. Increasing collaboration concerns not only global multi-national enterprises, but also any organization together with its relationships to, and business processes with, its business partners. Standards and technologies permit business partners to exchange information, collaborate and carry out business transactions in a pervasive Web environment. There is, however, still very limited research activity on modeling the semantics underlying multi-party Web-based business collaboration. In this paper, we demonstrate how an in-house business process is gradually outsourced to third parties and analyze how task delegations cause commitments between multiple business parties. Finally, we provide process semantics for modeling multi-party Web-based collaborations.

Lai Xu, Sjaak Brinkkemper

A Framework for Task Retrieval in Task-Oriented Service Navigation System

Mobile access to the Internet is increasing drastically, raising the importance of information retrieval via mobile devices. We have developed a task-oriented service navigation system [6] that allows a user to find the desired mobile content from the viewpoint of the task that the user wants to perform. However, the user is still faced with the problem of selecting the most appropriate task from among a vast number of task candidates; this is difficult because mobile devices have several limitations, such as small displays and poor input methods. This paper tackles this issue by proposing a framework for retrieving only those tasks that suit the abstraction level of the user’s intention. If the user has settled on a specific object, the abstraction level is concrete, and tasks related to handling that specific object are selected; if not, tasks related to general objects are selected. Finally, we introduce two task retrieval applications that realize the proposed framework. By using this framework, we can reduce the number of retrieved tasks irrelevant to the user; simulations show that roughly 30% fewer tasks are displayed to the user as retrieval results.
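The abstraction-level matching described above can be sketched as a simple filter: if the user has named a specific object, only tasks about that object are retrieved; otherwise tasks about general object classes are returned. The task records and the matching rule are invented for illustration and are not the system's actual data model.

```python
# Hypothetical task catalogue; "level" marks a task's abstraction level.
TASKS = [
    {"name": "buy movie ticket", "object": "movie", "level": "specific"},
    {"name": "find things to do", "object": "entertainment", "level": "general"},
    {"name": "check movie showtimes", "object": "movie", "level": "specific"},
]

def retrieve_tasks(tasks, target_object=None):
    """Filter tasks to match the abstraction level of the user's intention."""
    if target_object is not None:   # concrete intention: a specific object
        return [t for t in tasks
                if t["level"] == "specific" and t["object"] == target_object]
    return [t for t in tasks if t["level"] == "general"]

print([t["name"] for t in retrieve_tasks(TASKS, target_object="movie")])
```

Filtering out tasks at the wrong abstraction level is what shrinks the candidate list on a small mobile display.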

Yusuke Fukazawa, Takefumi Naganuma, Kunihiro Fujii, Shoji Kurakake

Realizing Added Value with Semantic Web

The business case for the Semantic Web requires reports on its usefulness in real-life scenarios. In this study we introduce a framework, based on the concept of added value, for analysing potential application scenarios of the Semantic Web. Based on this analysis, we have formulated a hypothetical life cycle of Semantic Web technologies in corporate IT infrastructures. It identifies corporate knowledge management as the application area where a killer application is most likely to be developed. As a proof of concept, the design, deployment and evaluation of a skills management system is presented.

Dariusz Kłeczek, Tomasz Jaśkowski, Rafał Małanij, Edyta Rowińska, Marcin Wasilewski

Applications of Semantic Web II (SWWS)

Ontology-Based Searching Framework for Digital Shapes

Knowledge related to shape modelling is multi-faceted because of the complexity and heterogeneity of the involved resources and because different applications may cast different semantics on them. The field's further evolution now depends on how well research teams can communicate and share resources and knowledge. The field needs to be formalized in order to achieve a shared conceptualization accessible to the whole scientific community and, eventually, to ensure actual exploitation of its knowledge within the Semantic Web. In this context, the main objective of the Network of Excellence AIM@SHAPE is twofold: on the one hand, to devise tools that capture the implicit semantics of digital shapes; on the other, to encode and formalize the domain knowledge in context-dependent ontologies. The paper describes the first results towards developing an ontology for shape acquisition and reconstruction and its effective use in the Digital Shape Workbench, a searching framework for sharing resources (shapes, tools and publications) and their related knowledge.

Riccardo Albertoni, Laura Papaleo, Marios Pitikakis, Francesco Robbiano, Michela Spagnuolo, George Vasilakis

Ontology Metadata Vocabulary and Applications

Ontologies have seen enormous development and application in many domains in recent years, especially in the context of the next web generation, the Semantic Web. Besides the work of countless researchers across the world, industry has started developing ontologies to support its daily operative business. Currently, most ontologies exist in pure form without any additional information, e.g. authorship information such as that provided by Dublin Core for text documents. This shortcoming makes it difficult for academia and industry to identify, find and apply (in short, to reuse) ontologies effectively and efficiently. Our contribution consists of (i) a proposal for a metadata standard, the so-called Ontology Metadata Vocabulary (OMV), which is based on discussions in the EU IST thematic network of excellence Knowledge Web, and (ii) two complementary reference implementations which show the benefit of such a standard in decentralized and centralized scenarios, namely the Oyster P2P system and the Onthology metadata portal.

Jens Hartmann, Raúl Palma, York Sure, M. Carmen Suárez-Figueroa, Peter Haase, Asunción Gómez-Pérez, Rudi Studer

Ontological Foundation for Protein Data Models

In this paper we propose a Protein Ontology to integrate protein data and information from various protein data sources. The Protein Ontology provides the technical and scientific infrastructure and knowledge to allow the description and analysis of relationships between various proteins. It draws on relevant protein data sources such as PDB, SCOP, and OMIM, and describes: protein sequence and structure information, the protein folding process, cellular functions of proteins, molecular bindings internal and external to proteins, and constraints affecting the final protein conformation. We also created a database of 10 major prion proteins available in various protein data sources, based on the vocabulary provided by the Protein Ontology. Details about the Protein Ontology are available online at http://www.proteinontology.info/.

Amandeep S. Sidhu, Tharam S. Dillon, Elizabeth Chang

Modeling and Querying Techniques for Semantic Web (SWWS)

SWQL – A Query Language for Data Integration Based on OWL

The Web Ontology Language OWL has been advocated as a suitable model for semantic data integration. Data integration requires expressive means to map between heterogeneous OWL schemas. This paper introduces SWQL (Semantic Web Query Language), a strictly typed query language for OWL, and shows how it can be used for mapping between heterogeneous schemas. In contrast to existing RDF query languages which focus on selection and navigation, SWQL also supports construction and user-defined functions to allow for instantiating integrated global schemas in OWL.

Patrick Lehti, Peter Fankhauser

Modeling Views for Semantic Web Using eXtensible Semantic (XSemantic) Nets

The emergence of the Semantic Web (SW) and related technologies promises to make the web a meaningful experience. Yet high-level modeling, design and querying techniques prove to be a challenging task for organizations that hope to utilize the SW paradigm for their industrial applications, which still use traditional database techniques. To address this issue, in this paper we propose a view model for the SW (SW-View) to SW-enable traditional solutions. First we outline the view model, its properties and some modeling issues, followed by a discussion of modeling such views at the conceptual level. We also briefly discuss how this view model is utilized in the design and construction of materialized ontology views to support the extraction of sub-ontologies.

R. Rajugan, Elizabeth Chang, Ling Feng, Tharam S. Dillon

On the Cardinality of Schema Matching

In this paper we discuss aspects of cardinality constraints in schema matching. A new cardinality classification is proposed, emphasizing the challenges in schema matching that evolve from cardinality constraints. We also offer a new research direction for automating schema matching to manage cardinality constraints.

Avigdor Gal

Reputation Ontology for Reputation Systems

The growing development of web-based reputation systems in the 21st century will have a powerful social and economic impact on both business entities and individual customers, because it makes quality assessment of products and services transparent and thereby achieves customer assurance in distributed web-based reputation systems. Web-based reputation systems will be a foundation for web intelligence in the future. Trust and reputation help capture business intelligence by establishing customer trust relationships, learning consumer behavior, capturing market reaction to products and services, and disseminating customer feedback, buyers’ opinions and end-user recommendations. They also reveal dishonest services, unfair trading, biased assessment, discriminatory actions, fraudulent behaviors, and untrue advertising. The continuing development of these technologies will help improve professional business behavior, sales, and the reputation of sellers, providers, products and services. Given the importance of reputation, in this paper we propose an ontology for reputation. In the business world we can consider the reputation of a product, the reputation of a service, or the reputation of an agent. We propose an ontology for these entities that helps us unravel and conceptualize the components of the reputation of each of them.

Elizabeth Chang, Farookh Khadeer Hussain, Tharam Dillon

Ontologies (SWWS)

Translating XML Web Data into Ontologies

Translating XML data into ontologies is the problem of finding an instance of an ontology, given an XML document and a specification of the relationship between the XML schema and the ontology. A previous study [8] investigated the ad hoc approach used in XML data integration. In this paper, we consider translating an XML web document into an instance of an OWL-DL ontology in the Semantic Web. We use the semantic mapping discovered by our prototype tool [1] for the relationship between the XML schema and the ontology. In particular, we define the solution of the translation problem and develop an algorithm for computing a canonical solution, which enables the ontology to answer queries using data in the XML document.

Yuan An, John Mylopoulos

Self-tuning Personalized Information Retrieval in an Ontology-Based Framework

Reliability is a well-known concern in the field of personalization technologies. We propose extending an ontology-based retrieval system with semantic-based personalization techniques, upon which automatic mechanisms are devised that dynamically gauge the degree of personalization, so as to benefit from adaptivity while reducing the risk of obtrusiveness and loss of user control. On the basis of a common domain ontology KB, the personalization framework represents, captures and exploits user preferences to bias search results towards personal user interests. On top of this, the intensity of personalization is automatically increased or decreased according to an assessment of the imprecision contained in user requests and system responses before personalization is applied.
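A hedged sketch of the self-tuning idea: the more ambiguous the query (here crudely measured by how few terms it has), the more user preferences are allowed to bias the ranking. Both the ambiguity measure and the blending rule are illustrative assumptions, not the authors' actual formulas.

```python
def personalization_weight(query_terms, max_terms=5):
    """Short, vague queries get a high weight; specific ones a low weight."""
    specificity = min(len(query_terms), max_terms) / max_terms
    return 1.0 - specificity

def personalized_score(relevance, preference, alpha):
    """Blend plain retrieval relevance with a user-preference score."""
    return (1 - alpha) * relevance + alpha * preference

alpha = personalization_weight(["music"])   # one-term query is treated as vague
print(personalized_score(relevance=0.6, preference=0.9, alpha=alpha))
```

The point of gauging `alpha` automatically rather than fixing it is exactly the trade-off the abstract names: adaptivity when the request is imprecise, user control when it is not.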

Pablo Castells, Miriam Fernández, David Vallet, Phivos Mylonas, Yannis Avrithis

Detecting Ontology Change from Application Data Flows

In this paper we describe a clustering process that selects a set of typical instances from a document flow. These representatives are viewed as semi-structured descriptions of domain categories expressed in a standard Semantic Web format, such as OWL [15]. The resulting bottom-up ontology may be used to check and/or update existing domain ontologies used by the e-business infrastructure.

Paolo Ceravolo, Ernesto Damiani

Workshop on Semantic-based Geographical Information Systems (SeBGIS)

SeBGIS 2005 PC Co-chairs’ Message

Nowadays new applications call for enriching the semantics associated with geographical information in order to support a wide variety of tasks, including data integration, interoperability, knowledge reuse, knowledge acquisition, knowledge management, spatial reasoning and many others. Examples of such semantic issues are temporal and spatio-temporal data management, 3D manipulation, spatial granularity and multiple resolutions, multiple representations (providing different perspectives on the same information), vague and ambiguous geographic concepts, the relationship between geographic and physical concepts, and the identity of geographic objects through time.

At the same time, recent years have brought many developments that radically changed how we understand information processing. Data warehouses and OLAP systems have evolved into a fundamental approach for developing advanced decision support systems. This has led to improved data mining techniques that allow extracting semantics from raw data. Further, the success of the Internet has generated a paradigm shift in distributed information processing, leading to the area of the Semantic Web, in which semantics is the fundamental component for achieving communication among both humans and applications. At the same time, mobile and wireless computing have entered everyone's life through dedicated devices, leading to location-based services. Finally, Grid computing, a paradigm enabling applications to integrate computational and information resources managed by diverse organizations in widespread locations, pushes the frontier of global interoperability. The fact that all these recent developments are entering the geographic domain increases the importance of eliciting the semantics of geographical information.

Esteban Zimányi, Emmanuel Stefanakis

Measuring, Evaluating and Enriching Semantics

How to Enrich the Semantics of Geospatial Databases by Properly Expressing 3D Objects in a Conceptual Model

Geospatial conceptual data models represent semantic information about the real world that will be implemented in a spatial database. When linked to a repository, they offer a rich basis for formal ontologies. Several spatial extensions [5, 15, 17] have been proposed for data models and repositories in order to enrich the semantics of spatial objects, typically by specifying the geometry of objects in the schema and sometimes by adding geometric details in the repository. Considering the success of such 2D spatial extensions as well as the increased demand for 3D object management, we defined a 3D spatial extension based on the concept of PVL already used in Perceptory and elsewhere. This paper presents 3D concepts and a 3D PVL to help define the geometry of 3D objects in conceptual data models and repositories. Their originality stems from the fact that no similar solution yet exists for real-life projects. The enrichment of the meaning of 3D object geometries is discussed, as well as its impact on costs, delays and acquisition specifications.

Suzie Larrivée, Yvan Bédard, Jacynthe Pouliot

Evaluating Semantic Similarity Using GML in Geographic Information Systems

This paper proposes a method for evaluating the semantic similarity of GML elements (concepts). Due to the relevance of the Is-in relationship in the geographic context, it focuses on GML elements organized according to Part-of (meronymic) hierarchies. It also proposes the method's application to Part-of hierarchies, due to the semantics of the meronymic relationship within the geographic context, referred to as “place-area”. This semantics essentially concerns parts which are similar to and inseparable from the whole. A further contribution refers to the modeling of the Part-of hierarchy in GML. In particular, from the perspective of applying the information content approach to Part-of hierarchies, this paper proposes a method to represent and distinguish the concepts involved in the meronymic relationship within the element hierarchy.
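The "information content" approach mentioned above can be sketched in the style of Resnik's classic measure: concept probabilities come from corpus counts, IC(c) = -log p(c), and the similarity of two concepts is the IC of their most informative shared ancestor, here read over a Part-of hierarchy. The toy hierarchy and probabilities are assumptions for illustration, not the paper's data.

```python
import math

# Toy Part-of hierarchy of geographic concepts and assumed probabilities
# (a concept's probability includes that of everything it contains).
PARENT = {"lagoon": "coast", "beach": "coast", "coast": "region", "region": None}
PROB = {"lagoon": 0.05, "beach": 0.10, "coast": 0.30, "region": 1.0}

def ancestors(c):
    """The concept itself plus everything it is part of, up to the root."""
    chain = []
    while c is not None:
        chain.append(c)
        c = PARENT[c]
    return chain

def ic(c):
    """Information content: rarer concepts are more informative."""
    return -math.log(PROB[c])

def resnik_similarity(c1, c2):
    common = [a for a in ancestors(c1) if a in set(ancestors(c2))]
    return max(ic(a) for a in common)  # IC of the most informative shared ancestor

print(round(resnik_similarity("lagoon", "beach"), 3))
```

Because the root has probability 1 and IC 0, concepts that share only the root come out maximally dissimilar, which matches the intuition behind the measure.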

Fernando Ferri, Anna Formica, Patrizia Grifoni, Maurizio Rafanelli

Measuring Semantic Differences Between Conceptualisations: The Portuguese Water Bodies Case – Does Education Matter?

In 2003 Pires [9] published a study that compared the results of the study performed by David Mark and Barry Smith with those of a similar study applied to Portuguese subjects. The paper concluded that Mark and Smith's methodology of establishing ontologies from surveys of how users apply terminology is applicable for identifying conceptualization differences in GIS applications.

This paper is an extension of that work, presenting the results of the study in terms of university background. In response to a series of differently phrased elicitations, 533 subjects (university students from several parts of Portugal and several academic disciplines) were asked to give examples of geographical categories. From these responses we statistically counted the most frequently mentioned terms and related them to the subjects' university background.

The results were analysed in order to test the hypothesis: Students from different backgrounds have different conceptualizations of geographical categories due to their scholarly background. Our analysis refutes this hypothesis: students present the same examples for the presented categories and their disciplinary backgrounds cannot be shown to have an influence on the category choices.

Ontology has been conceived as a branch of metaphysics that studies the theory of objects and their relationships [3].

In this paper we aim to explore the relation between ontology and geographical information systems from the cognition perspective. We raise the question: does scholarly background influence geographic categorization? To answer this question we used a survey that studied a specific set of geographic concepts, water bodies. The main reason behind the choice of these specific entities is that Portugal is a country exposed to the Atlantic, where water has been considered an important element since the time of the Discoverers.

The survey is based on a similar approach taken in other parts of the world, such as England, Finland and others [11].

Paulo Pires, Marco Painho, Werner Kuhn

Schemata Integration

Spatio-temporal Schema Integration with Validation: A Practical Approach

We propose to enhance a schema integration process with a validation phase employing logic-based data models. In our methodology, we validate the source schemas against the data model; the inter-schema mappings are validated against the semantics of the data model and the syntax of the correspondence language. In this paper, we focus on how to employ a reasoning engine to validate spatio-temporal schemas and describe where the reasoning engine is plugged into our integration methodology. The validation phase distinguishes our integration methodology from other approaches. We shift the emphasis on automation from the a priori discovery to the a posteriori checking of the inter-schema mappings. By doing so, we take advantage of the expressive power of the common data model in the source schema description and inter-schema mapping definition.

A. Sotnykova, N. Cullot, C. Vangenot

Preserving Semantics When Transforming Conceptual Spatio-temporal Schemas

Conceptual models provide powerful constructs for representing the semantics of real-world application domains. However, much of this semantics may be lost when translating a conceptual schema into a logical or a physical schema, because of the limited expressive power of logical and physical models. In this paper we present a methodology for transforming conceptual schemas while preserving their semantics. This is realized with the help of integrity constraints that are automatically generated at the logical and physical levels. As a result, the semantics of an application domain is kept in the database, as opposed to keeping it in the applications accessing the database. We present this methodology using the MADS conceptual spatio-temporal model.

Esteban Zimányi, Mohammed Minout

Using Image Schemata to Represent Meaningful Spatial Configurations

Spatial configurations have a meaning to humans. For example, if I am standing on a square in front of a building, and this building has a door, then this means to me that the door leads into the building. This type of meaning can be nicely captured by image schemata, patterns in our mind that help us make sense of what we perceive. Spatial configurations can be structured taxonomically and mereologically by means of image schemata in a way that is believed to be close to human cognition. This paper focuses on a specific application domain, train stations, but also tries to generalise to other levels of scale and other types of spaces, showing benefits and limits.

Urs-Jakob Rüetschi, Sabine Timpf

Geovisualization and Spatial Semantics

Collaborative geoVisualization: Object-Field Representations with Semantic and Uncertainty Information

Techniques and issues for the characterisation of an object-field representation that includes notions of semantics and uncertainty are detailed. The purpose of this model is to allow users to capture objects in a field with internally variable levels of uncertainty, to visualize users' conceptualizations of those geographic domains, and to share their understanding with others using embedded semantics. Concepts from collaborative environments inform the development of this semantic-driven model, as does the importance of presenting all collaborators' analyses in a way that enables them to fully communicate their views and understandings about the objects and the field within which they lie. First, a conceptual background is provided which briefly addresses collaborative environments and the concepts behind an object-field representation. Second, the implementation of that model within a database is discussed. Finally, a LandCover example is presented to illustrate the applicability of the semantic model.

Vlasios Voudouris, Jo Wood, Peter F Fisher

Semantics of Collinearity Among Regions

Collinearity is a basic arrangement of regions in the plane. We investigate various possible meanings of collinearity for three regions and combine these concepts to obtain definitions for four or more regions. The aim of the paper is to support the formalization of projective properties for modelling geographic information and qualitative spatial reasoning. Exploring the semantics of collinearity will enable us to shed light on the elementary projective properties from which all the others can be inferred. Collinearity is also used to find a qualitative classification of the arrangement of many regions in the plane.
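For intuition only, here is a drastically simplified analogue: collinearity of three point-like regions (reduced to centroid points) tested with a cross product. The paper's actual semantics concerns extended regions, for which several weaker notions of being "in line" must be combined; this sketch covers only the degenerate point case.

```python
def collinear(p, q, r, eps=1e-9):
    """True if points p, q, r lie on one line (cross product near zero)."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(cross) <= eps

print(collinear((0, 0), (1, 1), (2, 2)))   # points on the diagonal
```

Replacing points by regions is precisely where the paper's work begins: the cross product no longer gives a yes/no answer, so qualitative definitions take over.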

Roland Billen, Eliseo Clementini

Automatic Acquisition of Fuzzy Footprints

Gazetteer services are an important component in a wide variety of systems, including geographic search engines and question answering systems. Unfortunately, the footprints provided by gazetteers are often limited to a bounding box or even a centroid. Moreover, for a lot of non–political regions, detailed footprints are nonexistent since these regions tend to have gradual, rather than crisp, boundaries. In this paper we propose an automatic method to approximate the footprints of crisp, as well as imprecise, regions using statements on the web as a starting point. Due to the vague nature of some of these statements, the resulting footprints are represented as fuzzy sets.

Steven Schockaert, Martine De Cock, Etienne E. Kerre
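The paper's fuzzy-set representation can be illustrated with a minimal sketch (the function name, grid-cell encoding, and weighting scheme below are our own illustrative assumptions, not the authors' method): each web statement votes for a grid cell with some confidence weight, and membership degrees are the accumulated weights normalized by the largest total.

```python
from collections import Counter

def fuzzy_footprint(samples):
    """Build a fuzzy footprint from (grid_cell, weight) samples.

    Each sample is a grid cell supported by a web statement together
    with a confidence weight; the membership degree of a cell is its
    total weight divided by the maximum total weight over all cells.
    """
    totals = Counter()
    for cell, weight in samples:
        totals[cell] += weight
    peak = max(totals.values())
    return {cell: total / peak for cell, total in totals.items()}

# Toy example: three cells mentioned with different confidences.
samples = [((0, 0), 1.0), ((0, 0), 1.0), ((0, 1), 1.0), ((1, 1), 0.5)]
footprint = fuzzy_footprint(samples)
# The most-cited cell (0, 0) receives full membership 1.0.
```

In this toy example, frequently mentioned cells get high membership degrees while rarely mentioned ones receive proportionally lower degrees, mirroring the gradual rather than crisp boundaries the abstract describes.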

The Double-Cross and the Generalization Concept as a Basis for Representing and Comparing Shapes of Polylines

Many shape recognition techniques have been presented in the literature, most of them from a quantitative perspective. Research has shown that qualitative reasoning better reflects the way humans deal with spatial reality. Current qualitative techniques are based on break points, which makes it difficult to compare analogous relative positions along polylines. The presented shape representation technique is a qualitative approach based on division points, resulting in shape matrices that form a shape data model and thus the basis for a cognitively relevant similarity measure for shape representation and shape comparison, both locally and globally.

Nico Van de Weghe, Guy De Tré, Bart Kuijpers, Philippe De Maeyer

Algorithms and Data Structures

Range and Nearest Neighbor Query Processing for Mobile Clients

Indexing techniques have been developed for wireless data broadcast environments in order to conserve the scarce power resources of mobile clients. However, the use of interleaved index segments in a broadcast cycle increases the average access latency for the clients. In this paper, we present a broadcast-based spatial query processing (BBS) algorithm for location-based services (LBS). In this algorithm, we simply sort the data objects based on their locations, and the server then broadcasts them sequentially to the mobile clients. The experimental results show that the proposed BBS scheme significantly reduces the access latency.

KwangJin Park, MoonBae Song, Ki-Sik Kong, Chong-Sun Hwang, Kwang-Sik Chung, SoonYoung Jung
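The broadcast idea can be sketched in one dimension (a simplification; the function names and scan strategy are our assumptions, not the paper's exact BBS protocol): the server sorts objects by location before broadcasting, so a client can answer a range query by listening only until the first object beyond its range arrives.

```python
def broadcast_order(objects):
    """Server side: sort (name, location) objects before broadcasting."""
    return sorted(objects, key=lambda obj: obj[1])

def range_query(broadcast, low, high):
    """Client side: scan the sorted broadcast stream and collect the
    objects falling inside [low, high], stopping early once locations
    exceed the upper bound."""
    hits = []
    for name, loc in broadcast:
        if loc > high:          # sorted order lets the client stop early
            break
        if loc >= low:
            hits.append(name)
    return hits

stream = broadcast_order([("b", 5.0), ("a", 1.0), ("c", 9.0)])
result = range_query(stream, 2.0, 8.0)   # only "b" lies in [2, 8]
```

Because the stream is sorted, the client can stop tuning in as soon as an object past the upper bound appears; this kind of early termination is what a location-sorted broadcast makes possible.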

An Efficient Trajectory Index Structure for Moving Objects in Location-Based Services

Because moving objects usually move on spatial networks, efficient trajectory index structures are required to achieve good retrieval performance on their trajectories. However, there has been little research on trajectory index structures for spatial networks, such as road networks. In this paper, we propose an efficient trajectory index structure for moving objects in location-based services (LBS). To this end, we design an access scheme for efficiently dealing with the trajectories of moving objects on road networks. In addition, we provide an insertion algorithm to store the initial information of moving-object trajectories and another to store their segment information. We also provide a retrieval algorithm to find the set of moving objects whose trajectories match the segments of a query trajectory. Finally, we show that our trajectory access scheme achieves about an order of magnitude better retrieval performance than the TB-tree.

Jae-Woo Chang, Jung-Ho Um, Wang-Chien Lee

Systems and Tools

MECOSIG Adapted to the Design of Distributed GIS

For more than ten years, MECOSIG has been used as a method for GIS design and implementation in various national and international projects carried out in our laboratory. Over a decade, the method has been progressively improved and extended without modifying its basic principles. However, the emergence of distributed GIS, which involves several organizations capable of playing various roles, requires a reappraisal of the methodology. New concerns are identified, and a collection of new tools must be deployed. Building on various recent research projects completed for public authorities in Belgium, this paper presents some significant adaptations of the original MECOSIG method to cope with a distributed GIS environment.

Fabien Pasquasy, François Laplanche, Jean-Christophe Sainte, Jean-Paul Donnay

The Emerge of Semantic Geoportals

Geoportals (geographic portals) are entry points on the Web where various geographic information resources can be easily discovered. They organize geospatial data and services through catalogs containing metadata records, which can be queried to give access to related resources. However, the current organization of geographic information in metadata catalogs is unable to capture the semantics of the data being described; therefore, users often miss geographical resources of interest when searching for geospatial data on the World Wide Web. In this paper, we present an innovative approach to the development of geoportals, based on the next generation of the Web, called the Semantic Web. This approach relies on the organization of geo-data at the semantic level through appropriate geographic ontologies, and on the exploitation of this organization through the user interface of the geoportal. To the best of our knowledge, this approach is the first to exploit the expressiveness of geo-ontologies in the context of geographic portals.

Athanasis Nikolaos, Kalabokidis Kostas, Vaitis Michail, Soulakellis Nikolaos

Ontology Assisted Decision Making – A Case Study in Trip Planning for Tourism

Traditional trip planning involves decisions made by tourists in order to explore an environment, such as a geographic area, usually without any prior knowledge of or experience with it. Contemporary technological development has not only facilitated human mobility but has also set the path for various applications that assist tourists with way-finding, event notification using location-based services, etc. Our approach explores how ontologies can assist tourists in planning their trip in a web-based environment. The methodology consists of building two separate ontologies, one for the user's profile and another for tourism information and data, in order to help visitors of an area plan their visit.

Eleni Tomai, Maria Spanaki, Poulicos Prastacos, Marinos Kavouras

Workshop on Ontologies, Semantics and E-Learning (WOSE)

WOSE 2005 PC Co-chairs’ Message

We are happy and proud to introduce the proceedings of this 2nd "Workshop on Ontologies, Semantics and E-learning". We use the term "ontologies" to refer to a definition, as precise and formal as possible, of the semantics of objects and their inter-relationships for a specific application domain.

It is interesting to note that the concept of ontologies has started to be applied in the context of learning technologies. Indeed, after some initial ontology-related work in the late '80s and early '90s, mainly in the context of so-called "Intelligent Tutoring Systems", the focus over the last 10 years has been more on the development of learning objects, repositories, and interoperability specifications than on the formal representation and analysis of their meaning.

Peter Spyns, Erik Duval, Aldo de Moor, Lora Aroyo

E-Learning and Ontologies

Towards the Integration of Performance Support and e-Learning: Context-Aware Product Support Systems

Traditionally performance support and e-Learning have been considered as two separate research fields. This paper integrates them by introducing the concept of context-aware product support systems, which utilizes the notions of context, information object and ontology. The context signifies whether learning or performing goals are pursued in different situations and defines the configuration of the domain knowledge accordingly. An information object is viewed as an enabler of a modular virtual documentation, advancing information reuse. The ontology formalizes the representation of the knowledge contained in the system, facilitates interoperability, and constitutes one of the main building blocks of context-aware product support systems. The prototype system developed illustrates the applicability of the approach.

Nikolaos Lagos, Rossitza M. Setchi, Stefan S. Dimov

Taking Advantage of LOM Semantics for Supporting Lesson Authoring

Learning Object Metadata (LOM) is an interoperable standard aimed at fostering the reuse of learning material for authoring lessons. Nevertheless, little work has been done on taking advantage of LOM semantics to facilitate the retrieval of learning material. This article suggests an original approach that uses the structure of a lesson to automatically generate LOM-semantic-based queries for retrieving learning material for that lesson, while the user continues to formulate easy-to-write queries without semantic specifications. The proposal consists of a four-component framework addressing the main issues of semantic-based document retrieval.

Olivier Motelet, Nelson A. Baloian

Repurposing Learning Object Components

This paper presents an ontology-based framework for repurposing learning object components. Unlike the usual practice where learning object components are assembled manually, the proposed framework enables on-the-fly access and repurposing of learning object components. The framework supports two processes: the decomposition of learning objects into their components as well as the automatic assembly of these components in real-world applications. For now, the framework supports slide presentations. As an application, we will present in this paper the integration of this functionality in MS PowerPoint.

Katrien Verbert, Jelena Jovanović, Dragan Gašević, Erik Duval

Ontology Technology for E-learning

Interoperable E-Learning Ontologies Using Model Correspondences

Although ontologies are a good basis for interoperation, existing e-Learning systems use different ontologies to describe their resources. Consequently, exchanging resources between systems, as well as searching for appropriate ones from several sources, is still a problem.

We propose the concept of correspondences in addition to metadata standards. Correspondences specify relationships between ontologies that can be used to bridge their heterogeneity. We show how we use them to build evolvable federated information systems on e-Learning resources in the World Wide Web. Besides this integration scenario, we also describe other interoperation scenarios where correspondences can be useful.

Susanne Busse

Towards Ontology-Guided Design of Learning Information Systems

Courseware increasingly consists of generic information and communication tools. These offer a plethora of functionalities, but their usefulness to a particular learning community is not easy to assess. The aim should be to develop comprehensive learning information systems tailored to the specific needs of a community. Design patterns are important instruments for capturing best practice design knowledge. Ontologies, in turn, can help to precisely capture and reason about these patterns. In this paper, we make the case for ontology-guided learning IS design, and sketch the ontological core of a potential design method. Such methods should enable communities to specify and access relevant best design practice patterns. Design knowledge can then be reused across communities, improving the quality of communication support provided, while preventing wheels from being reinvented.

Aldo de Moor

Learning to Generate an Ontology-Based Nursing Care Plan by Virtual Collaboration

We describe how a web-based collaboration system is used to generate a document by referring to an ontology within a specific subject area. We have chosen nursing science as the target subject area, as nurses collaboratively plan and apply necessary treatments to patients. The planning process involves arriving at a set of decisions and activities to perform according to the knowledge embedded in the nursing subject ontology. The nurses copiously record the patient conditions they observe, as well as the ensuing reasoning outcome in the form of diagnoses. A nursing care plan is a representative document generated during the application of nursing processes. The plan includes general patient information, medical history, one or more goals of the nursing care plan, nursing diagnoses, expected outcomes of the care, and possible nursing interventions. We are developing a collaborative nursing care plan generation system in which several nurses can record and update the collected factual information about the patients and then arrive at the most appropriate nursing diagnoses, outcomes, and interventions. The system is designed to double as a learning aid for the nurses by allowing them to observe what others do during the plan generation process. Eventually, the nurses are expected to share the same semantics of each ontology as they repeat the ontology-based decision making.

Woojin Paik, Eunmi Ham

Ontologies and Virtual Reality

Generating and Evaluating Triples for Modelling a Virtual Environment

Our purpose is to extract RDF-style triples from text corpora in an unsupervised way and to use them as preprocessed material for the construction of ontologies from scratch. We have worked on a corpus taken from Internet websites describing the megalithic ruin of Stonehenge. Using a shallow parser, we select functional relations, such as the syntactic structure subject-verb-object. The selection uses prepositional structures and frequency measures to retain the most relevant triples. The paper therefore stresses the choice of patterns and the filtering carried out to automatically discard all irrelevant structures. At the same time, we are experimenting with a method to objectively evaluate the automatically generated material.

Marie-Laure Reinberger, Peter Spyns
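As a deliberately naive illustration of subject-verb-object triple extraction (the paper uses a shallow parser plus prepositional patterns and frequency filtering, none of which is reproduced here; the verb list and function below are our own assumptions):

```python
VERBS = {"surrounds", "contains", "supports"}

def extract_triples(sentence):
    """Very naive subject-verb-object extraction: split the sentence
    on a known relation verb and keep the word groups on either side
    as subject and object."""
    words = sentence.lower().rstrip(".").split()
    for i, word in enumerate(words):
        if word in VERBS and 0 < i < len(words) - 1:
            subject = " ".join(words[:i])
            obj = " ".join(words[i + 1:])
            return (subject, word, obj)
    return None  # no known relation verb found

triple = extract_triples("The ditch surrounds the stone circle.")
# → ("the ditch", "surrounds", "the stone circle")
```

A real pipeline would replace the fixed verb list with parser output and rank candidate triples by corpus frequency, as the abstract describes, rather than accepting every match.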

An Ontology-Driven Approach for Modeling Behavior in Virtual Environments

Usually, ontologies are used to solve terminology problems or to allow automatic processing of information. They are also used to improve the development of software. One promising application area for ontologies is Virtual Reality (VR). Developing a VR application is very time-consuming and requires skilled people. Introducing ontologies into the development process can lower these barriers. We have developed an approach, VR-WISE, which uses ontologies to describe a Virtual Environment (VE) at a conceptual level. In this paper, we describe the Behavior Ontology, which defines the modeling concepts for object behavior. Such an ontology has several advantages: it improves intuitiveness, facilitates cross-platform VR development, smoothens integration with other ontologies, enhances the interoperability of VR applications, and allows for more intelligent systems.

Bram Pellens, Olga De Troyer, Wesley Bille, Frederic Kleinermann, Raul Romero

Backmatter
