
2006 | Book

On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops

OTM Confederated International Workshops and Posters, AWeSOMe, CAMS, COMINF, IS, KSinBIT, MIOS-CIAO, MONET, OnToContent, ORM, PerSys, OTM Academy Doctoral Consortium, RDDS, SWWS, and SeBGIS 2006, Montpellier, France, October 29 - November 3, 2006. Proceedings, Part II

Editors: Robert Meersman, Zahir Tari, Pilar Herrero

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Workshop on Ontology Content and Evaluation in Enterprise (OnToContent)

OnToContent 2006 PC Co-chairs’ Message

Welcome to the proceedings of the first international workshop on ontology content and evaluation in enterprise (OnToContent’06). This book reflects the issues raised and presented during the workshop, which gave particular attention to ontology content, especially ontologies for human resources and employment, and for healthcare and the life sciences.

Mustafa Jarrar, Claude Ostyn, Werner Ceusters, Andreas Persidis

Ontology Quality and Consensus

Unit Tests for Ontologies

In software engineering, the notion of unit testing was successfully introduced and applied. Unit tests are easily manageable tests for small parts of a program – single units. They have proved especially useful for catching unwanted changes and side effects during the maintenance of a program, and they grow with the evolution of the program.

Ontologies behave quite differently from program units. As there is no information hiding in ontology engineering, and thus no black-box components, the idea of unit testing for ontologies at first seems inapplicable. In this paper we motivate the need for unit testing, describe how the unit testing approach can be adapted to ontologies, and give use cases and examples.

Denny Vrandečić, Aldo Gangemi
Issues for Robust Consensus Building in P2P Networks

The need for semantic interoperability between ontologies in a peer-to-peer (P2P) environment is imperative, because participants in a P2P environment are, by definition, equal, autonomous, and distributed. For example, synthesizing concepts developed independently by different academic researchers, research labs, emergency service departments, hospitals, and pharmacies, to mention just a few, demands cooperation and collaboration among these independent peers. In this work we look at issues that enable us to build a robust semantic consensus to solve the interoperability problem among heterogeneous ontologies in P2P networks. To achieve a robust semantic consensus we focus on three key issues: (i) semantic mapping faults, (ii) consensus construction, and (iii) fault tolerance. All three issues are elaborated in this paper, initial steps to address them are described, and research directions for fault-tolerant semantic mapping are identified.

A. -R. Mawlood-Yunis, M. Weiss, N. Santoro
Ontology and Agent Based Model for Software Development Best Practices’ Integration in a Knowledge Management System

In this paper, we focus on the importance of managing the knowledge held by an organization’s human resources and gained through experience and practice. To this end, we propose a model for integrating software development best practices into a Knowledge Management System (KMS) of a Software Development Community of Practice (SDCoP). This model aims, on the one hand, to integrate the human, organizational, cultural, and technical dimensions of the Knowledge Management (KM) discipline and, on the other hand, to cover the whole KM process. In response to these needs, the proposed model, the subject of this paper, is founded on ontologies and intelligent-agent technologies.

Nahla Jlaiel, Mohamed Ben Ahmed
Extracting Ontological Relations of Korean Numeral Classifiers from Semi-structured Resources Using NLP Techniques

Many studies have focused on the fact that numeral classifiers give decisive clues for the semantic categorization of nouns. However, few studies have analyzed the ontological relationships of classifiers or the construction of a classifier ontology. In this paper, a semi-automatic method for extracting and representing the various ontological relations of Korean numeral classifiers is proposed. Shallow parsing and word-sense disambiguation were used to extract semantic relations from natural language texts and from wordnets.

Youngim Jung, Soonhee Hwang, Aesun Yoon, Hyuk-Chul Kwon

Ontology Construction and eHealth Ontologies

A Transfusion Ontology for Remote Assistance in Emergency Health Care (Position Paper)

The Transfusion Ontology is a simple task-based ontology developed in the emergency health care domain. On the assumption that ontologies are instruments for supporting the exchange of information among parties, the principles governing the design of this ontology were based mainly on identifying the messages to be exchanged among those parties. This paper shows how this simple design principle is able to guide the construction of a whole ontology.

Paolo Ceravolo, Ernesto Damiani, Cristiano Fugazza
Ontological Distance Measures for Information Visualisation on Conceptual Maps

Finding the right semantic distance to be used for information retrieval, classification, or text clustering using Natural Language Processing is a problem studied in several domains of computer science. We focus on measurements that are real distances, i.e. that satisfy all the properties of a distance. This paper presents one isa-distance measurement that may be applied to taxonomies. This distance, combined with a distance based on relations other than isa, may be a step towards a real semantic distance for ontologies. After presenting the purpose of this work and the position of our approach within the literature, we formally detail our isa-distance. It is extended to other relations and used to obtain an MDS projection of a musical ontology in an industrial project. The utility of such a distance in visualization, navigation, information retrieval, and ontology engineering is underlined.

Sylvie Ranwez, Vincent Ranwez, Jean Villerd, Michel Crampes
The Management and Integration of Biomedical Knowledge: Application in the Health-e-Child Project (Position Paper)

The Health-e-Child project aims to develop an integrated healthcare platform for European paediatrics. In order to achieve a comprehensive view of children’s health, a complex integration of biomedical data, information, and knowledge is necessary. Ontologies will be used to formally define this domain knowledge and will form the basis for the medical knowledge management system. This paper introduces an innovative methodology for the vertical integration of biomedical knowledge. This approach will be largely clinician-centered and will enable the definition of ontology fragments, connections between them (semantic bridges), and enriched ontology fragments (views). The strategy for the specification and capture of fragments, bridges, and views is outlined, with preliminary examples demonstrated in the collection of biomedical information from hospital databases, biomedical ontologies, and biomedical public databases.

E. Jimenez-Ruiz, R. Berlanga, I. Sanz, R. McClatchey, R. Danger, D. Manset, J. Paraire, A. Rios

Competence Ontologies

Ontology-Based Systems Dedicated to Human Resources Management: An Application in e-Recruitment

This paper presents the CommOn framework (Competency Management through Ontologies), which aims at developing operational Knowledge-Based Systems founded on ontologies and dedicated to the management of competencies. Based on two different models (implemented within specific tools developed with the Protégé-2000 framework), CommOn allows a knowledge engineer (i) to build competency reference systems related to particular domains such as Healthcare or Information and Telecommunication, (ii) to identify and formally represent competency profiles (related to a job seeker, a job offer, or a training offer), and (iii) to automatically match competency profiles. Developed in the context of Semantic Web technology, the CommOn framework permits the building of domain ontologies and knowledge bases represented in Semantic Web languages, and the development of Competency-Based Web Services dedicated to Human Resources Management. The use of CommOn is illustrated in the context of an e-recruitment project which aims at developing the first Macedonian web-based platform dedicated to the efficient networking of employment and training operators. However, CommOn is not limited to e-recruitment applications; it can also be used for other purposes such as staff development and deployment, job analysis, or economic evaluation.

Vladimir Radevski, Francky Trichet
Towards a Human Resource Development Ontology for Combining Competence Management and Technology-Enhanced Workplace Learning

Competencies, as abstractions of work-relevant human behaviour, have emerged as a promising concept for making human skills, knowledge, and abilities manageable and addressable. On the organizational level, competence management uses competencies for integrating the goal-oriented shaping of human assets into management practice. On the operational and technical level, technology-enhanced workplace learning uses competencies for fostering the learning activities of individual employees. It should be obvious that these two perspectives belong together, but in practice a common conceptualization of the domain is needed. In this paper, we present such a reference ontology, which builds on existing approaches and on experiences from two case studies.

Andreas Schmidt, Christine Kunzmann
The eCCO System: An eCompetence Management Tool Based on Semantic Networks

In response to requests for the standardisation of ICT job profiles from several national institutions, ICT company associations, and the Italian Ministry of Innovation and Technology, an ICT job-profiles model was defined. It is based on ontology principles to describe “Knowledge Objects”, “Skills”, “Competences”, and their relations. The model takes into account the various formalisations from national and company ICT job-profile frameworks and refers to the scientific literature on ontology and competence definitions. A software tool has been implemented on the basis of the model. It collects individual users’ job profiles and analyses the gaps against a set of ICT standard profiles. The underlying semantic network gives the system flexibility. Furthermore, the system enables the dynamic enrichment of the semantic network: users can upload new skills and suggest linkages between the new nodes, which the system administrator later evaluates and accepts.

Barbara Pernici, Paolo Locatelli, Clementina Marinoni
Competency Model in a Semantic Context: Meaningful Competencies (Position Paper)

In this paper, we will propose our ideas for a semantically ready competency model. The model will allow semantic enrichment on different levels, creating truly meaningful competencies. The aim of this model is to provide a flexible approach for (re)use, matching, interpretation, exchange and storage for competencies. Our competency model is based on the DOGMA ontology framework and the proposed IEEE standards RCD and SCRM. We will focus on the model itself and how semantics can be applied to it as these elements form the basis for any kind of processing on them.

Stijn Christiaens, Jan De Bo, Ruben Verlinden
Pruning Terminology Extracted from a Specialized Corpus for CV Ontology Acquisition

This paper presents an experimental study for extracting a terminology from a corpus made of Curriculum Vitae (CV). This terminology is to be used for ontology acquisition. The choice of the pruning rate of the terminology is crucial relative to the quality of the ontology acquired. In this paper, we investigate this pruning rate by using several evaluation measures (precision, recall, F-measure, and ROC curve).

Mathieu Roche, Yves Kodratoff

Workshop on Object-Role Modeling (ORM)

ORM 2006 PC Co-chairs’ Message

Following a successful workshop held in Cyprus in 2005, this is the second in a series of fact-oriented modeling workshops run in conjunction with the OTM conferences. Fact-oriented modeling is a conceptual approach to modeling and querying the information semantics of business domains in terms of the underlying facts of interest, where all facts and rules may be verbalized in language that is readily understandable by non-technical users of those business domains. Unlike Entity-Relationship (ER) modeling and UML class diagrams, fact-oriented modeling treats all facts as relationships (unary, binary, ternary, etc.). How facts are grouped into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) is considered a lower-level implementation issue that is irrelevant to capturing the essential business semantics. Avoiding attributes in the base model enhances semantic stability and populatability, as well as facilitating natural verbalization. For information modeling, fact-oriented graphical notations are typically far more expressive than those of other approaches. Fact-oriented textual languages are based on formal subsets of native languages, so they are easier for business people to understand than technical languages like OCL. Fact-oriented modeling includes procedures for mapping to attribute-based structures, so it may also be used to front-end other approaches.

Terry Halpin, Robert Meersman

Modeling Extensions

Part-Whole Relations in Object-Role Models

Representing parthood relations in ORM has received little attention, despite the added value of such semantics at the conceptual level. We introduce a high-level taxonomy of types of meronymic and mereological relations, use it to construct a decision procedure for determining which type of part-whole role is applicable, and incrementally add mandatory and uniqueness constraints. This enables the conceptual modeller to develop models that are closer to the real-world subject domain semantics, and hence to improve the quality of the software.

C. Maria Keet
Exploring Modelling Strategies in a Meta-modelling Context

We are concerned with a core aspect of the processes of obtaining conceptual models. We view such processes as information gathering dialogues, in which strategies may be followed (possibly, imposed) in order to achieve certain modelling goals. Many goals and strategies for modelling can be distinguished, but the current discussion concerns meta-model driven strategies, aiming to fulfil modelling goals or obligations that are the direct result of meta-model choices (i.e. the chosen modelling language). We provide a rule-based conceptual framework for capturing strategies for modelling, and give examples based on a simplified version of the Object Role Modelling (ORM) meta-model. We discuss strategy rules directly related to the meta-model, and additional procedural rules. We indicate how the strategies may be used to dynamically set a modelling agenda. Finally, we describe a generic conceptual structure for a strategy catalog.

P. van Bommel, S. J. B. A. Hoppenbrouwers, H. A. (Erik) Proper, T. P. van der Weide
Giving Meaning to Enterprise Architectures: Architecture Principles with ORM and ORC

Formalization of architecture principles by means of ORM and Object Role Calculus (ORC) is explored. After a discussion on reasons for formalizing such principles, and of the perceived relationship between principles and (business) rules, two exploratory example formalizations are presented and discussed. They concern architecture principles taken from The Open Group’s Architecture Framework (TOGAF). It is argued that when using ORM and ORC for formal modelling of architecture principles, the underlying logical principles of the techniques may lead to better insight into the rational structure of the principles. Thus, apart from achieving formalization, the quality of the principles as such can be improved.

P. van Bommel, S. J. B. A. Hoppenbrouwers, H. A. (Erik) Proper, Th. P. van der Weide

Data Warehousing

Using ORM-Based Models as a Foundation for a Data Quality Firewall in an Advanced Generation Data Warehouse

Data Warehouses typically represent data being integrated from multiple source systems. There are inherent data quality problems when data is being consolidated in terms of data semantics, master data integration, cross functional business rule conflicts, data cleansing, etc. This paper demonstrates how multiple Object Role Models were successfully used in establishing a data quality firewall architecture to define an Advanced Generation Data Warehouse. The ORM models realized the 100% Principle in ISO TR9007 Report on Conceptual Schemas, and were transformed into attribute-based models to generate SQL DBMS schemas. These were subsequently used in RDBMS code generation for a 100% automated implementation for the data quality firewall checks based on the described architecture. This same Data Quality Firewall approach has also been successfully used in implementing multiple web based applications, characteristically yielding 35-40% savings in development costs.

Baba Piprani
Evolution of a Dynamic Multidimensional Denormalization Meta Model Using Object Role Modeling

At Guidant, a Boston Scientific Company, systems that collect data in support of medical device clinical research trials must be capable of collecting large, dynamic sets of attributes that are often reused in later research activities. Their resultant design, based on conceptual analysis using Object Role Modeling (ORM), transforms each unique business fact into an instance in a highly normalized star schema structure with related dimensions. When it becomes necessary to generate focused denormalized reporting structures from this star schema, hereafter referred to as miniature data marts or simply “mini marts”, the dynamic nature of these source attributes can present a maintenance challenge. Using ORM, we propose a meta model that supports the definition, creation, and population of these denormalized reporting structures, sourced from a multidimensional fact table, that also leverages a hierarchical taxonomic classification of the subject domain.

Joe Hansen, Necito dela Cruz

Language and Tool Issues

Fact-Oriented Modeling from a Programming Language Designer’s Perspective

We investigate how achievements of programming languages research can be used for designing and extending fact oriented modeling languages. Our core contribution is that we show how extending fact oriented modeling languages with the single concept of algebraic data types leads to a natural and straightforward modeling of complex information structures like unnamed collection types and higher order types.

Betsy Pepels, Rinus Plasmeijer, H. A. (Erik) Proper
Automated Verbalization for ORM 2

In the analysis phase of information systems development, it is important to have the conceptual schema validated by the business domain expert, to ensure that the schema accurately models the relevant aspects of the business domain. An effective way to facilitate this validation is to verbalize the schema in language that is both unambiguous and easily understood by the domain expert, who may be non-technical. Such verbalization has long been a major aspect of the Object-Role Modeling (ORM) approach, and basic support for verbalization exists in some ORM tools. Second generation ORM (ORM 2) significantly extends the expressibility of ORM models (e.g. deontic modalities, role value constraints, etc.). This paper discusses the automated support for verbalization of ORM 2 models provided by NORMA (Neumont ORM Architect), an open-source software tool that facilitates entry, validation, and mapping of ORM 2 models. NORMA supports verbalization patterns that go well beyond previous verbalization work. The verbalization for individual elements in the core ORM model is generated using an XSLT transform applied to an XML file that succinctly identifies different verbalization patterns and describes how phrases are combined to produce a readable verbalization. This paper discusses the XML patterns used to describe ORM constraints and the tightly coupled facilities that enable end-users to easily adapt the verbalization phrases to cater for different domain experts and native languages.

Terry Halpin, Matthew Curland
T-Lex: A Role-Based Ontology Engineering Tool

In the DOGMA ontology engineering approach, ontology construction starts from a (possibly very large) uninterpreted base of elementary fact types called lexons, which are mined from linguistic descriptions (be it from existing schemas, a text corpus, or formulations by domain experts). An ontological commitment to such a “lexon base” means selecting/reusing from it a meaningful set of facts that approximates well the intended conceptualization, followed by the addition of a set of constraints, or rules, to this subset. The commitment process is inspired by the fact-based database modeling method NIAM/ORM2, which features recently updated, extensive graphical support. However, to encourage lexon reuse by ontology engineers, a more scalable way of visually browsing a large lexon base is important. Existing techniques for similar semantic networks focus rather on graphical distance between concepts and do not always consider the possibility that concepts might be (fact-)related to a large number of other concepts. In this paper we introduce an alternative approach to browsing large fact-based diagrams in general, which we apply to lexon base browsing and selection for building ontological commitments in particular. We show that specific characteristics of DOGMA, such as grouping by contexts and its “double articulation principle”, viz. the explicit separation between lexons and an application’s commitment to them, can increase the scalability of this approach. We illustrate with a real-world case study.

Damien Trog, Jan Vereecken, Stijn Christiaens, Pieter De Leenheer, Robert Meersman

Dynamic Rules and Processes

Modeling Dynamic Rules in ORM

This paper proposes an extension to the Object-Role Modeling approach to support formal declaration of dynamic rules. Dynamic rules differ from static rules by pertaining to properties of state transitions, rather than to the states themselves. In this paper, application of dynamic rules is restricted to so-called single-step transactions, with an old state (the input of the transaction) and a new state (the direct result of that transaction). Such restricted rules are easier to formulate (and enforce) than a constraint applying historically over all possible states. In our approach, dynamic rules specify an elementary transaction type indicating which kind of object or fact is being added, deleted or updated, and (optionally) pre-conditions relevant to the transaction, followed by a condition stating the properties of the new state, including the relation between the new state and the old state. These dynamic rules are formulated in a syntax designed to be easily validated by non-technical domain experts.

Herman Balsters, Andy Carver, Terry Halpin, Tony Morgan
Some Features of State Machines in ORM

ORM provides an excellent approach for information modeling, but to date has been limited mainly to descriptions of static information structures. This paper provides an outline of how ORM could be extended to add behavioral descriptions through the use of state machines. Most of the discussion is illustrated by an example of how a simple model could be extended in this way. Some suggestions are given for an outline process for adding state machine descriptions to ORM models and the developments required to integrate such descriptions into a comprehensive modeling environment.

Tony Morgan

The Modeling Process and Instructional Design

Data Modeling Patterns Using Fully Communication Oriented Information Modeling (FCO-IM)

Data modeling patterns is an emerging field of research in the data modeling area. Its aims are to create a body of knowledge to help understand data modeling problems better and to create better data models. Current data modeling patterns are generally domain-specific patterns (only applicable in a specific domain, e.g. a business situation) and with an Entity Relationship Modeling (ERM) way of thinking. This paper discusses data modeling patterns using the expressive power of Fully Communication Oriented Information Modeling (FCO-IM), a Dutch Fact Oriented Modeling (FOM) method. We also consider more abstract data modeling patterns – the generic patterns – and describe a few basic generic data modeling patterns in brief as well as a generic pattern found in content versioning problems.

Fazat Nur Azizah, Guido Bakema
Using Fact-Orientation for Instructional Design

In this paper we show how fact-orientation can be used as a knowledge structuring approach for verbalizable knowledge domains, e.g. knowledge that is contained in articles, text books, and instruction manuals, further referred to as ‘subject matter’. This article illustrates the application of the fact-oriented approach as a subject matter structuring tool for a small part of the sub-domains of operations management and marketing within the university subject of business administration. We also show that the fact-oriented modeling constructs allow us to structure knowledge on the first five levels of Bloom’s taxonomy of educational objectives, and we show how the fact-oriented approach complies with the 4C/ID model for instructional design. Moreover, we derive a ‘knowledge structure metrics’ model that can be empirically estimated and used to estimate the complexity of a subject matter.

Peter Bollen
Capturing Modeling Processes – Towards the MoDial Modeling Laboratory

This paper is part of an ongoing research effort to better understand the process of conceptual modeling. As part of this effort, we are currently developing a modeling laboratory named MoDial (Modeling Dialogues). The main contribution of this paper is a conceptual meta-model of that part of MoDial which aims to capture the elicitation aspects of the modeling process used in creating a model, rather than the model as such. The current meta-model is the result of a two-stage research process. The first stage involves theoretical input from the literature and earlier results. The second stage is concerned with (modest) empirical validation in terms of interviews with modeling experts.

S. J. B. A. (Stijn) Hoppenbrouwers, L. (Leonie) Lindeman, H. A. (Erik) Proper

Workshop on Pervasive Systems (PerSys)

PerSys 2006 PC Co-chairs’ Message

We welcome our workshop attendees to a very interesting collection of discussions. We received many papers and had to limit the acceptance rate to 40% of the papers submitted. The papers cover a range of topics, from Infrastructure, Service Discovery, Personalization, Environments, Security, and Privacy to Wireless and Sensor Networks, showing the scope of interests and activities represented by the PerSys participants. We would like to thank everyone who submitted a paper and believe you will all find the presentations and discussions stimulating and worthwhile. The reviewers of the papers have done a terrific job in helping authors prepare better final papers and deserve special thanks for completing the job promptly. Lastly, we extend sincere thanks to the organizers of the conference for the hard work they have put into providing us with the opportunity to gather, present, listen, and discuss our research.

Skevos Evripidou, Roy Campbell

Infrastructure for Pervasive Systems

Live and Heterogeneous Migration of Execution Environments

Application migration and heterogeneity are inherent issues of pervasive systems. Each implementation of a pervasive system must provide its own migration framework, which hides the heterogeneity of the different resources. This leads to the development of many frameworks that implement the same functionality. We propose a minimal execution environment, the micro virtual machine, that factors out the process migration implementation and offers heterogeneity, transparency, and performance. Systems implemented on top of this micro virtual machine, such as our own Java virtual machine, thereby automatically inherit process migration capabilities.

Nicolas Geoffray, Gaël Thomas, Bertil Folliot
A Comparative Study Between Soft System Bus and Traditional Middlewares

Although persistent availability is one of the basic requirements of middlewares for pervasive computing systems that provide "anytime anywhere" services, very little work has been done on this issue. The Soft System Bus (SSB) was proposed to provide "anytime anywhere" middleware platform support to large-scale reactive systems which run continuously and persistently. However, to date no SSB has been implemented. In this paper, we present a comparative study between the SSB and different types of traditional middlewares, e.g., synchronous Request/Reply, Message-Oriented, and Publish/Subscribe middlewares, in order to identify the implementation issues of an SSB. We show that although existing middlewares share some characteristics with an SSB, they lack some features which are unique and essential to an SSB. Finally, we present the implementation issues of an SSB that ensure persistent availability.

Mohammad Reza Selim, Takumi Endo, Yuichi Goto, Jingde Cheng

Service Discovery and Personalization

A Semantic Location Service for Pervasive Grids

Grid computing environments have recently been extended with pervasive computing characteristics, leading to a new paradigm, namely Pervasive Grid Computing. In particular, the QoS of existing Grid services is being augmented by means of location-awareness. This paper presents a location service that locates active mobile objects, such as Wi-Fi enabled devices and RFID-tagged entities, in a real Pervasive Grid. The key feature of the service is the use of ontologies and rules to define a uniform, unambiguous, and well-defined model for location information, independent of the particular positioning system. Moreover, the location service performs logic and reasoning mechanisms both to provide physical and semantic locations of mobile objects and to infer the finest granularity of location information when a mobile object is located by more than one positioning system. The service has been developed on top of the standard OGSA architecture.

Antonio Coronato, Giuseppe De Pietro, Massimo Esposito
A Pragmatic and Pervasive Methodology to Web Service Discovery

Service-oriented computing can be characterized as the motivating force behind interoperable web applications. Businesses that embark on service-oriented technology are inclined towards effective interoperation with other businesses. This interoperation can not only expand businesses’ markets but also allow the provision of quality services to citizens for carrying out their everyday transactions. However, for an effective utilization of service-oriented computing, a variety of other technologies must be involved, namely semantics, pragmatics, effective search techniques, intelligent agents, and pervasive services. In this paper we focus on one of the phases of service-oriented computing, web service discovery, and propose an effective pragmatic and pervasive methodology that will enable potential business partners to discover the web services/processes of their interest. We particularly emphasize the importance of pragmatics and its application during web service discovery, because only a pragmatic consideration can provide complete and unambiguous meaning to web service usage.

Electra Tamani, Paraskevas Evripidou
Mobile User Personalization with Dynamic Profiles: Time and Activity

Mobile clients represent a new and more demanding breed of users. Solutions designed for desktop users are often inadequate for this new breed. Personalization is one such solution. The moving user differs from the desktop user in that his handheld device is truly personal: it roams with the user and gives him access to information and services at any time, from anywhere. As the moving user is not bound to a fixed place or a given time period, factors such as time and current experience become increasingly important. His context and preferences are now a function of time and experience, and the goal of personalization is to match local services to these time-dependent preferences. In this paper we explore the importance of time and experience in personalization for the moving user and present a system that anticipates and compensates for the time-dependent shifting of user interests. A prototype system has been implemented, and our initial evaluation results indicate performance improvements over traditional personalization schemes of up to 173%.

Christoforos Panayiotou, George Samaras
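One simple way to realize the time-dependent preferences described above is an exponentially decaying interest profile. The sketch below is our own illustration, not the authors' system; names such as `TimeAwareProfile` and the half-life parameter are hypothetical. Recent activity outweighs stale interests, so the profile tracks shifting interests over time:

```python
import time

class TimeAwareProfile:
    """Illustrative sketch: user interest weights decay exponentially
    with time, so stale preferences fade and the profile follows the
    user's current activity."""

    def __init__(self, half_life_s=3600.0):
        self.half_life_s = half_life_s
        self.weights = {}   # interest -> (weight, last_update_timestamp)

    def _decayed(self, weight, last_ts, now):
        # Halve the weight for every half_life_s seconds of inactivity.
        return weight * 0.5 ** ((now - last_ts) / self.half_life_s)

    def record(self, interest, now=None, boost=1.0):
        """Register fresh activity: decay the old weight, then boost it."""
        now = time.time() if now is None else now
        w, ts = self.weights.get(interest, (0.0, now))
        self.weights[interest] = (self._decayed(w, ts, now) + boost, now)

    def score(self, interest, now=None):
        """Current (decayed) preference weight for ranking local services."""
        now = time.time() if now is None else now
        w, ts = self.weights.get(interest, (0.0, now))
        return self._decayed(w, ts, now)
```

A profile like this would let a matching engine rank nearby services by `score()` so that yesterday's interests no longer dominate today's recommendations.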

Pervasive Environments

Rethinking the Design of Enriched Environments

Most advances in pervasive computing focus strongly on technological issues (e.g. connectivity, portability, etc.); as technology becomes more complex and pervasive, design gains greater relevance. Inadequate design leads to unnatural interaction that may overload users, hampering the old aspiration of creating transparent artifacts. Transparency describes technology that allows users to focus their attention on their main activity goals instead of on the technology itself. Transparency is strongly related to the relevance of individuals’ goals, their knowledge, and the conventions they have learned as social beings. This paper aims to provide a framework for the design of augmented artifacts that exploit users’ knowledge about how things work in the world, at both the syntactic and the semantic level.

Felipe Aguilera, Andres Neyem, Rosa A. Alarcón, Luis A. Guerrero, Cesar A. Collazos
The eHomeConfigurator Tool Suite

While the costs of the electronic appliances enabling pervasive systems keep decreasing, the cost of individually developing the software that makes up eHome systems is one of the major obstacles to their large-scale adoption. By introducing a specification, configuration, and deployment process, the environment-specific development effort is reduced. We support this process with our tool suite, the eHomeConfigurator, which is introduced in this paper. It creates a configuration graph capable of describing dependencies and contexts of components in the eHome field. The tool suite is used to configure and deploy various eHome services in different home environments. Compared to the classical development process, the effort for setting up eHome systems is reduced significantly, opening up the possibility of decreasing the development costs for eHome systems.

Ulrich Norbisrath, Christof Mosler, Ibrahim Armac
Mobile Phone + RFID Technology = New Mobile Convergence Toward Ubiquitous Computing Environment

Radio Frequency Identification (RFID) is increasingly used to improve the efficiency of business processes by providing the capability of automatic identification and data capture, and it is expected to extend its significant potential to the improvement of daily life. On the other hand, the convergence of the telecommunication, media and information technology industries has generated massive change, not just in those industries but also in our daily lives, nowhere more apparent than in the mobile phone. This convergence trend for the mobile phone seems inevitable, and the integration of RFID technology and the mobile phone is regarded as one of the most promising ways to accelerate it toward the ubiquitous computing environment. In this paper, we discuss the benefits, business cases and system architectures emerging from the integration of RFID technology – especially the RFID reader – and the mobile phone with existing wireless network infrastructure, from business and software perspectives. Moreover, we present a reference implementation of the RFID-based cellular network architecture, incorporating a mobile phone mounted with an RFID reader, in order to evaluate the technical feasibility of this convergence.

Taesu Cheong, Marie Kim
Clicky: Input in Pervasive Systems

Pervasive computing systems are physically bounded spaces of physical and digital devices such as desktop computers, laptops, handheld devices, and sensors. These systems attempt to integrate the information space of computers and the physical space of the real world. With multiple devices and heterogeneous inputs, seamless interaction with these systems remains a challenge. We have developed an input system that overcomes many obstacles facing users in pervasive systems. We provide support for multiple users to control multiple mice on the same display. We also provide an interface using a video stream, allowing the user to click on devices shown in the video to control other devices in the room. Finally, we demonstrate experimentally that these techniques are useful in a pervasive space and do not impose a significant overhead in our system. Using our system, users are able to seamlessly interact with pervasive systems through a single, universal input.

Andrew Weiler, Jeffrey Naisbitt, Roy Campbell

Security and Privacy

A User-Centric Privacy Framework for Pervasive Environments

One distinctive feature of pervasive computing environments is the common need to gather and process context information about real persons. Unfortunately, this unavoidably affects persons’ privacy. Each time someone uses a cellular phone, a credit card, or surfs the web, he leaves a trace that is stored and processed. In a pervasive sensing environment, however, the amount of information collected is much larger than today, and it might also be used to reconstruct personal information with great accuracy. The question we address in this paper is how to control the dissemination and flow of personal data across organizational and personal boundaries, i.e., to potential addressees of privacy-relevant information. This paper presents the User-Centric Privacy Framework (UCPF). It aims at protecting a user’s privacy based on the enforcement of privacy preferences, expressed as a set of constraints over some set of context information. To achieve the goal of cross-boundary control, we introduce two novel abstractions, namely Transformations and Foreign Constraints, in order to extend the possibilities of a user to describe privacy protection criteria beyond the expressiveness usually found today. Transformations are understood as any process that the user may define over a specific piece of context. This is a main building block for obfuscating – or even plainly lying about – the context in question. Foreign Constraints are an important complementing extension because they allow for modeling conditions defined on external users that are not the tracked individual, but may influence the disclosure of personal data to third parties. We are confident that these two easy-to-use abstractions, together with the general privacy framework presented in this paper, constitute a strong contribution to the protection of personal privacy in pervasive computing environments.

Susana Alcalde Bagüés, Andreas Zeidler, Carlos Fernandez Valdivielso, Ignacio R. Matias
An XML-Based Security Architecture for Integrating Single Sign-On and Rule-Based Access Control in Mobile and Ubiquitous Web Environments

With the integration of mobile and Web applications, the number of services a typical mobile user can access has greatly increased. With such a variety of services, a user is frequently asked to provide his security information to a system. These repeated requests are one critical problem, as they cause frequent transmission of the user’s security information. Another serious problem is how an administrator controls the access requests of internal users who have already been authenticated. In order to establish an effective security scheme for integrated environments, Single Sign-On and access control also need to be integrated. In this paper, we propose an XML-based architecture integrating authentication and access control policy in an integrated environment that can be extended to ubiquitous environments. To provide flexibility, extensibility, and interoperability between the environments to be integrated, we have implemented an architecture based on SAML and XACML, which are standardized specifications. By specifying security policies in XML Schema and exchanging security information according to that schema, the proposed architecture offers the opportunity to build standardized schemes for authentication and authorization. Additionally, the proposed architecture makes it possible to establish a fine-grained access control scheme by specifying an XML element unit as a target to be protected.

Jongil Jeong, Dongil Shin, Dongkyoo Shin
Architecture Framework for Device Single Sign On in Personal Area Networks

This paper addresses the Single Sign-On (SSO) issue in Personal Area Networks (PANs) comprising heterogeneous handheld devices. Architectures for service SSO solutions at the enterprise level are already on the market, and some standards for such solutions exist. In this paper, however, we introduce the notion of device-level SSO. By device SSO, we refer to the process of logging on to one device and subsequently being authorized for other devices on a need-only basis, without the user being prompted for his credentials or requiring any further manual interaction. Device SSO secures the authentication process in a PAN and relieves users of the burden of handling and managing the credentials of each device in the PAN. While borrowing elements from enterprise-level SSO standards, our architecture has been custom-tailored to the characteristics and inherent features of a PAN environment. Client-server and peer-to-peer SSO schemes have been designed to fit both PAN star and mesh architectures. The proposed scheme is an application-layer solution that is independent of the device platform and the underlying radio link. A sample prototype application has been developed as a proof of concept; it runs on laptops and PDAs communicating over Bluetooth links.

Appadodharana Chandershekarapuram, Dimitrios Vogiatzis, Spyridon Vassilaras, Gregory S. Yovanof

Wireless and Sensor Networks

Applying the Effective Distance to Location-Dependent Data

In wireless mobile environments, we increasingly use data that depends on the location of mobile clients. However, requested geographical objects (GOs) are not distributed uniformly across all areas: more urbanized areas have greater population and greater GO density. Thus the results of queries may vary based on the perception of distance. We use urbanization as a criterion to analyze the density of GOs and propose the Effective Distance (ED) measurement, which is not a physical distance but a perceived distance that varies with the extent of urbanization. We demonstrate the efficiency of supporting location-dependent data on GOs with the proposed ED, and we investigate several membership functions to establish the ED based on the degree of urbanization. In our evaluation, we show that the z-shaped membership function can flexibly adjust the ED. We thus obtain improved performance in providing location-dependent data, because we can differentiate the ED for very densely clustered GOs in urbanized areas.

JeHyok Ryu, Uğur Çetintemel
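The z-shaped membership function mentioned above is a standard fuzzy-set primitive. The sketch below shows its usual piecewise definition, plus one hypothetical way to scale a physical distance by an urbanization degree; the paper's exact ED mapping and parameters may differ:

```python
def zmf(x, a, b):
    """Standard z-shaped fuzzy membership: 1 below a, 0 above b,
    with a smooth spline transition in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 1.0 - 2.0 * ((x - a) / (b - a)) ** 2
    return 2.0 * ((x - b) / (b - a)) ** 2

def effective_distance(physical_distance, urbanization, a=0.2, b=0.8):
    """Illustrative only: scale the perceived distance by the z-shaped
    curve over an urbanization degree in [0, 1]. The thresholds a and b
    are hypothetical tuning parameters, not values from the paper."""
    return physical_distance * zmf(urbanization, a, b)
```

The attraction of `zmf` here is that the two knots `a` and `b` give a directly tunable transition zone, which is what lets the ED be "flexibly adjusted" per degree of urbanization.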
Scalable Context Simulation for Mobile Applications

Mobility, and, implicitly, context-awareness, offer significant opportunities for service providers to augment and differentiate their respective services. Though this potential has long been acknowledged, the dynamic nature of a mobile user’s context can lead to various difficulties when engineering mobile, context-aware applications. Classic software engineering elements are well understood in the fixed computing domain; the mobile domain, however, introduces further degrees of difficulty into the process. The testing phase of the software engineering cycle is a particular case in point, as modelling the myriad of scenarios that mobile users may find themselves in is practically impossible. In this paper, we describe a scalable framework that can be used both for initial prototyping and for the final testing of mobile context-aware applications. In particular, we focus on scenarios where the application is in essence a distributed multi-agent system, comprising a suite of agents running on both mobile devices and on fixed nodes on a wireless network.

Y. Liu, M. J. O’Grady, G. M. P. O’Hare

Doctoral Consortium

OTM 2006 Academy Doctoral Consortium PC Co-chairs’ Message

The OTM Academy Doctoral Consortium is the third edition of an event at the “On The Move Federated Conferences” that provides a platform for researchers at the beginning of their career. Promising doctoral students are encouraged to showcase their work at an international forum, where they have an opportunity to gain feedback and guidance on future research directions from prominent professors.

Antonia Albani, Gábor Nagypál, Johannes Maria Zaha

Enterprise Systems

Multidimensional Effort Prediction for ERP System Implementation

The Ph.D. thesis builds upon the state of the art in effort prediction for ERP-implementation projects. While current approaches use the complexity of the technical system as the only indicator for estimation, a multidimensional key-ratio scorecard is developed to enhance the quality of effort prediction. Key ratios from the technical dimension are extended towards service-oriented architectures as the upcoming, dominant architectural ERP concept. Within the organizational dimension, competencies of the project team are evaluated and quantified as key ratios. Key ratios from the situational dimension are used to approximate the expected degree of employee cooperation. As fuzzy key ratios cannot be manually derived from business requirements, an IT-based tool is designed, prototypically realized and empirically evaluated for the retrieval of the key ratios defined in the scorecard.

Torben Hansen
A Decision-Making Oriented Model for Corporate Mobile Business Applications

Information and communication technology (ICT) has witnessed mobility as an important development over the last few years. So far, the discussion about mobile information systems has largely been techno-centric and supplier-oriented rather than business-centric and user-oriented. The lack of available methods for business-driven evaluation motivates the research question of effective decision-making support. This paper summarizes the PhD research effort aimed at constructing a decision support model to enhance the awareness and understanding of the impact of mobile applications. It presents a formal approach and lays out the underlying research methodology as well as the epistemological position. This contribution also highlights the recent and dynamic debate on suitable IS research standards, thereby inviting a discussion on appropriate artifact-based design research and the central question of validation required to successfully complete the research project.

Daniel Simonovich
Applying Architecture and Ontology to the Splitting and Allying of Enterprises: Problem Definition and Research Approach

Organizations increasingly split off parts and start cooperating with those parts, for instance in Shared Service Centers or by using in- or outsourcing. Our research aims to find the right spot and the way to split off those parts: “Application of which organization-construction rules to organizations with redundancy in processes and ICT leads to adequate splitting of enterprises”? Architecture and ontology play a role in the construction of any system. From organizational sciences we expect specific construction rules for splitting an enterprise, including criteria for “adequately” splitting an enterprise. We intend to find and test those construction rules by applying action research to real-life cases in which ontology and architecture are used.

Martin Op ’t Land

System Development

Profile-Based Online Data Delivery

This research is aimed at providing theoretically rigorous, flexible, efficient, and scalable methodologies for intelligent delivery of data in a dynamic and resource-constrained environment. Our proposed solution utilizes uniform client and server profilization for data delivery, and we describe the challenges in developing optimized hybrid data delivery schedules. We also present an approach that aims at constructing automatic adaptive policies for data delivery, using feedback to overcome various modeling errors.

Haggai Roitman
An Incremental Approach to Enhance the Accuracy of Internet Routing

The Internet is composed of a set of autonomous systems (ASes), each managed by an administrative authority. The Border Gateway Protocol (BGP) is the exterior routing protocol used to exchange network reachability information between the border routers of each autonomous network. BGP allows the ASes to apply policies when selecting the routes their traffic will take. Policies are based on business relationships and traffic engineering constraints. It is currently assumed that the exchanged reachability information is correct: in other words, the ASes that originate a network prefix are supposed to be authorized to advertise it, and the announced routing information is assumed to conform to the routing policies of the ASes. This assumption no longer holds. We review existing proposals aiming to solve Internet routing security issues and present our contributions. First, we propose a system able to detect and react to illegitimate advertisements. Then, we describe our current work, which focuses on the specification of a collaborative framework between ASes for cautiously selecting routes.

Ines Feki
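As a minimal illustration of detecting illegitimate advertisements (our own sketch, not the author's system), one can compare the origin AS of a BGP announcement against a registry of authorized origins per prefix; the data shapes and function name below are hypothetical:

```python
def detect_illegitimate_origin(announcement, authorized_origins):
    """Flag a BGP announcement whose origin AS is not registered as
    authorized for the announced prefix.

    announcement       -- dict with "prefix" and "as_path" (list of AS
                          numbers; the rightmost AS is the originator)
    authorized_origins -- dict mapping prefix -> set of authorized ASes
    Returns True if the announcement looks illegitimate."""
    prefix = announcement["prefix"]
    origin = announcement["as_path"][-1]   # rightmost AS originated it
    return origin not in authorized_origins.get(prefix, set())
```

A real system would of course need a trustworthy source for the authorization registry (the hard part the abstract alludes to), plus a reaction policy such as depreferencing or quarantining the suspicious route.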
Ontology-Based Change Management of Composite Services

As services in an SOA can be composed of lower-level services, dependencies exist between services. Changing a service can impact other services in the SOA, requiring many manual service management tasks. Our goal is to create a method for analyzing the change effects of service adaptations within an SOA. More specifically, we study: which services are affected when a certain service is adapted (dependencies), how the services are affected (effects), and what the best way is to deal with the adaptation (advice). We take an ontological approach to the problem and test this approach by building a prototype (tool).

Linda Terlouw
Research on Ontology-Driven Information Retrieval

An increasing number of recent information retrieval systems make use of ontologies to help the users clarify their information needs and come up with semantic representations of documents. A particular concern here is the integration of these semantic approaches with traditional search technology. The research presented in this paper examines how ontologies can be efficiently applied to large-scale search systems for the web. We describe how these systems can be enriched with adapted ontologies to provide both an in-depth understanding of the user’s needs as well as an easy integration with standard vector-space retrieval systems. The ontology concepts are adapted to the domain terminology by computing a feature vector for each concept. Later, the feature vectors are used to enrich a provided query. The whole retrieval system is under development as part of a larger Semantic Web standardization project for the Norwegian oil & gas sector.

Stein L. Tomassen

Workshop on Reliability in Decentralized Distributed Systems (RDDS)

RDDS 2006 PC Co-chairs’ Message

Middleware has become a popular technology for building distributed systems, from tiny sensor networks to large-scale peer-to-peer (P2P) networks. Support such as asynchronous and multipoint communication is well suited for constructing reactive distributed computing applications over wired and wireless network environments. While middleware infrastructures exhibit attractive features from an application development perspective (e.g., portability, interoperability, adaptability), they are often lacking in robustness and reliability.

Eiko Yoneki, Pascal Felber

P2P

Core Persistence in Peer-to-Peer Systems: Relating Size to Lifetime

Distributed systems are now both very large and highly dynamic. Peer-to-peer overlay networks have proved efficient at coping with this new reality, which traditional approaches can no longer accommodate. While the challenge of organizing peers in an overlay network has generated a lot of interest, leading to a large number of solutions, maintaining critical data in such a network remains an open issue. In this paper, we are interested in determining the portion of nodes, and the frequency, one has to probe, given the churn observed in the system, in order to achieve a given probability of maintaining the persistence of some critical data. More specifically, we provide a clear result relating the size and probing frequency of the probing set, along with its proof, as well as an analysis of how to leverage such information in a large-scale dynamic distributed system.

Vincent Gramoli, Anne-Marie Kermarrec, Achour Mostefaoui, Michel Raynal, Bruno Sericola
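A back-of-the-envelope version of the size-versus-churn relationship reads as follows. This is our own simplified independence model, not the paper's actual analysis or proof: we assume each probed node departs before the next probing round independently with a fixed probability.

```python
import math

def probe_set_size(churn_prob, target_persistence):
    """Smallest probing-set size k such that, if each probed node
    independently departs before the next round with probability
    churn_prob, at least one node still carries the critical data
    with probability >= target_persistence:
        1 - churn_prob**k >= target_persistence.
    (Simplified illustration; the paper's model is more refined.)"""
    if not 0.0 < churn_prob < 1.0:
        raise ValueError("churn_prob must be in (0, 1)")
    if not 0.0 < target_persistence < 1.0:
        raise ValueError("target_persistence must be in (0, 1)")
    k = math.log(1.0 - target_persistence) / math.log(churn_prob)
    return math.ceil(k)
```

The formula makes the trade-off visible: halving the inter-probe churn probability roughly halves the required probing-set size for the same persistence guarantee.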
Implementing Chord for HP2P Network

We propose a two-layer hybrid P2P network (HP2P), in which the upper layer is a structured Chord network and the lower layer is an unstructured flooding network. HP2P benefits from the advantages of both structured and unstructured networks and significantly improves network properties such as stability, scalability and lookup latency. In this paper, the Chord overlay algorithm is formalized; the data structure, node joining, node leaving, routing table stabilization and lookup services are introduced in detail. Furthermore, a caching mechanism is employed to accelerate lookup services. In particular, our analysis shows that the stability of the Chord overlay in an HP2P network is indeed enhanced.

Yang Cao, Zhenhua Duan, Jian-Jun Qi, Zhuo Peng, Ertao Lv
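Chord's core invariant — a key is served by its clockwise successor on the identifier ring — can be sketched without networking or finger tables as follows. This is a generic illustration of the Chord mapping, not the HP2P implementation; class and method names are ours:

```python
from bisect import bisect_left, insort

class ChordRing:
    """Minimal illustration of Chord's key-to-node mapping (no RPC,
    no finger tables, no stabilization): a key belongs to its clockwise
    successor on the identifier ring."""

    def __init__(self, bits=8):
        self.size = 2 ** bits
        self.nodes = []            # sorted node identifiers on the ring

    def join(self, node_id):
        insort(self.nodes, node_id % self.size)

    def leave(self, node_id):
        self.nodes.remove(node_id % self.size)

    def successor(self, key):
        """First node at or clockwise after the key, wrapping at 2**bits."""
        k = key % self.size
        i = bisect_left(self.nodes, k)
        return self.nodes[i % len(self.nodes)]   # wrap around the ring
```

In the full protocol, `successor` is resolved in O(log n) hops via finger tables, and stabilization repairs the ring after churn; the sketch only fixes the ownership rule those mechanisms maintain.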
A Generic Self-repair Approach for Overlays

Self-repair is a key area of functionality in overlay networks, especially as overlays become increasingly widely deployed and relied upon. Today’s common practice is for each overlay to implement its own self-repair mechanism. However, apart from leading to duplication of effort, this practice inhibits choice and flexibility in selecting among multiple self-repair mechanisms that make different deployment-specific trade-offs between dependability and overhead. In this paper, we present an approach in which overlay networks provide functional behaviour only, and rely for their self-repair on a generic self-repair service. Our previously-published work in this area focused on the distributed algorithms encapsulated within the self-repair service; in this paper we focus instead on API and integration issues. In particular, we show how overlay implementations can interact with our generic self-repair service using a small and simple API. We concretise the discussion by illustrating the use of this API from within an implementation of the popular Chord overlay. This involves minimal changes to the implementation while considerably increasing its available range of self-repair strategies.

Barry Porter, Geoff Coulson, François Taïani
Providing Reliable IPv6 Access Service Based on Overlay Network

During the transition from IPv4 to IPv6, it is necessary to provide IPv6 access service for IPv6 islands inside the native IPv4 network so that they can reach the native IPv6 network. Existing IPv4/IPv6 transition methods make the IPv6-relay gateways maintained by IPv6 service providers potential communication bottlenecks. In this paper, a new method, PS6, is presented to reduce the reliance on these relay gateways by shifting the burden to edge gateways on each IPv6 island. In this method, direct tunnels are set up between IPv6 islands, and an overlay network is maintained between the edge gateways of IPv6 islands to propagate information about tunnel end points. After describing the algorithm and the overlay network design, we analyze the scalability of the algorithm. Simulation results and theoretical analysis show that the proposed method is reliable and scalable.

Xiaoxiang Leng, Jun Bi, Miao Zhang

Distributed Algorithms

Adaptive Voting for Balancing Data Integrity with Availability

Data replication is a primary means to achieve fault tolerance in distributed systems. Data integrity is one of the correctness criteria of data-centric distributed systems. If data integrity needs to be strictly maintained even in the presence of network partitions, the system becomes (partially) unavailable since no potentially conflicting updates are allowed on replicas in different partitions. Availability can be enhanced if data integrity can be temporarily relaxed during degraded situations. Thus, data integrity can be balanced with availability.

In this paper, we contribute with a new replication protocol based on traditional quorum consensus (voting) that allows the configuration of this trade-off. The key idea of our Adaptive Voting protocol is to allow non-critical operations (that cannot violate critical constraints) even if no quorum exists. Since this might impose replica conflicts and data integrity violations, different reconciliation policies are needed to re-establish correctness at repair time. An availability analysis and an experimental evaluation show that Adaptive Voting provides better availability than traditional voting if (i) some data integrity constraints of the system are relaxable and (ii) reconciliation time is shorter than degradation time.

Johannes Osrael, Lorenz Froihofer, Matthias Gladt, Karl M. Goeschka
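The Adaptive Voting idea described in the abstract can be condensed into a small decision rule: operations that may violate critical integrity constraints still require a quorum, while non-critical operations proceed even in a minority partition and are reconciled later. The sketch below is our illustration with hypothetical function names, using a simple majority quorum:

```python
def has_quorum(acks, total_replicas):
    """Classic majority (voting) quorum: more than half the replicas."""
    return acks > total_replicas // 2

def allow_operation(acks, total_replicas, touches_critical_constraint):
    """Adaptive Voting decision rule (our sketch of the abstract's idea):
    - critical operations need a majority quorum, preserving strict
      data integrity even under network partitions;
    - non-critical operations are accepted by any reachable replica,
      trading temporary integrity relaxation for availability.
    Conflicts from the relaxed path must be reconciled at repair time."""
    if touches_critical_constraint:
        return has_quorum(acks, total_replicas)
    return acks >= 1   # at least one reachable replica
```

The availability gain comes precisely from the second branch: during a partition, the minority side stays writable for non-critical updates instead of blocking entirely.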
Efficient Epidemic Multicast in Heterogeneous Networks

The scalability and resilience of epidemic multicast, also called probabilistic or gossip-based multicast, rests on its symmetry: Each participant node contributes the same share of bandwidth thus spreading the load and allowing for redundancy. On the other hand, the symmetry of gossiping means that it does not avoid nodes or links with less capacity. Unfortunately, one cannot naively avoid such symmetry without also endangering scalability and resilience. In this paper we point out how to break out of this dilemma, by lazily deferring message transmission according to a configurable policy. An experimental proof-of-concept illustrates the approach.

José Pereira, Rui Oliveira, Luís Rodrigues
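A toy rendering of the lazy-deferral idea follows; this is our illustration, whereas the paper's deferral policies are configurable and more refined. The sketch pushes full payloads eagerly until a per-round capacity budget is spent, then falls back to id-only advertisements that slower peers can pull later, so low-capacity nodes are not forced to carry a symmetric share of traffic:

```python
import random

def gossip_round(node_msgs, peers, fanout, capacity, rng):
    """Capacity-aware ('lazy') gossip round for one node (toy sketch).

    Each message is gossiped to `fanout` random peers. Transmissions
    within the node's per-round `capacity` carry the full payload
    (eager push); the remainder are deferred as id-only advertisements
    (lazy push) that receivers may pull on demand.
    Returns (eager, lazy) lists of (peer, message) pairs."""
    eager, lazy = [], []
    budget = capacity
    for msg in node_msgs:
        for peer in rng.sample(peers, min(fanout, len(peers))):
            if budget > 0:
                eager.append((peer, msg))    # full payload now
                budget -= 1
            else:
                lazy.append((peer, msg))     # advertise id, defer payload
    return eager, lazy
```

Because every message is still announced to the full fanout, the epidemic's resilience properties are preserved; only the payload transfer is deferred where capacity is scarce.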
Decentralized Resources Management for Grid

Among all components of a grid or peer-to-peer application, resources management is unavoidable: new resources such as computational power or storage capacity must be quickly and efficiently integrated. This management can be achieved either in a fully centralized way (BOINC) or in a hierarchical way (Globus, DIET). The latter offers greater flexibility and scalability, but the counterpart is the difficulty of designing and deploying such a solution, particularly if the resources are volatile.

In this article, we combine random walks and a circulating word to derive a fully distributed solution to resources management. Random walks have proved their efficiency in distributed computing and are well suited to dynamic networks such as peer-to-peer or grid networks. There is no condition on node lifetimes, and we need only one application on each node.

Thibault Bernard, Alain Bui, Olivier Flauzac, Cyril Rabat
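The random-walk half of the approach can be sketched as follows (the circulating-word mechanism is omitted; the function name and the adjacency-dict layout are our own): a token moves to a uniformly chosen neighbour until it reaches a node offering the wanted resource or its step budget runs out.

```python
import random

def random_walk_search(overlay, start, wanted, max_steps, rng):
    """Random-walk resource discovery on an overlay (toy sketch).

    overlay -- dict: node -> {"resources": set, "neighbours": list}
    Returns the first visited node offering `wanted`, or None if the
    step budget is exhausted first."""
    node = start
    for _ in range(max_steps):
        if wanted in overlay[node]["resources"]:
            return node
        # Move the token to a uniformly chosen neighbour.
        node = rng.choice(overlay[node]["neighbours"])
    return None
```

The appeal for volatile networks is that the walk needs only local neighbour knowledge at each step, so churn that rewires the overlay does not invalidate any global state.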

Reliability Evaluation

Reliability Evaluation of Group Service Providers in Mobile Ad-Hoc Networks

A main challenge for applications in mobile ad-hoc networks (MANETs) is to survive link and node failures. One approach to overcoming the volatility of MANETs is to evaluate the reliability of communication peers, allowing one to choose the most reliable instances and to estimate the risk of failure before using a remote service. We present an approach that integrates the discovery and reliability evaluation of group service providers. A group service provider is a node providing a service which is used by a group of nodes simultaneously.

Joos-Hendrik Böse, Andreas Thaler
Traceability and Timeliness in Messaging Middleware

Distributed messaging middleware can provide few delivery guarantees. Nevertheless, applications need to monitor timely delivery of important messages, both at the time and retrospectively, such as for reasons of safety and audit in a healthcare application. Timely, decentralised notification enables out-of-band resolution of problems beyond the messaging system’s control. Instead of relying on applications to generate (and monitor) a proof of delivery message, which may itself be lost in transit, we instead propose a general-purpose service that provides local delivery logs, remote delivery confirmation, and automatic warning events about potentially undelivered messages, in a clean extension of the pub/sub paradigm. The service can also provide an audit trail across a chain of messages, by using message tokens to correlate the receipt and subsequent publication of otherwise apparently unrelated messages. This allows meaningful analysis of failures in a workflow, enabling reliable applications which recover gracefully from communication failures.

Brian Shand, Jem Rashbass
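The delivery-log-plus-token mechanism can be illustrated with a toy broker. Class and method names below are ours, not the paper's API: each publication yields a token for later correlation, confirmations are logged, and messages unconfirmed past a deadline can be reported for out-of-band resolution.

```python
class TraceableBroker:
    """Toy sketch of delivery tracking in messaging middleware:
    log every publication, record delivery confirmations, and warn
    about messages still unconfirmed after a deadline."""

    def __init__(self):
        self.log = {}        # token -> entry dict
        self._next = 0

    def publish(self, payload, now):
        """Log the message and hand back a token that can correlate the
        receipt with any subsequent publications in a workflow."""
        token = self._next
        self._next += 1
        self.log[token] = {"payload": payload, "published": now,
                           "confirmed": None}
        return token

    def confirm(self, token, now):
        """Record a remote delivery confirmation for this token."""
        self.log[token]["confirmed"] = now

    def undelivered(self, now, deadline):
        """Tokens published more than `deadline` ago and never confirmed
        -- candidates for automatic warning events."""
        return [t for t, e in self.log.items()
                if e["confirmed"] is None and now - e["published"] > deadline]
```

In the healthcare example from the abstract, `undelivered()` is what would trigger a timely, decentralised warning that a lab result or referral may never have arrived.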
Transaction Manager Failover: A Case Study Using JBOSS Application Server

This paper describes, for the case of Enterprise Java Bean components and JBoss application server, how replication for availability can be supported to tolerate application server/transaction manager failures. Replicating the state associated with the progression of a transaction (i.e., which phase of two-phase commit is enacted and the transactional resources involved) provides an opportunity to continue a transaction using a backup transaction manager if the transaction manager of the primary fails. Existing application servers do not support this functionality. The paper discusses the technical issues involved and shows how a solution can be engineered.

A. I. Kistijantoro, G. Morgan, S. K. Shrivastava

Workshop on Semantic-Based Geographical Information Systems (SeBGIS)

SeBGIS 2006 PC Co-chairs’ Message

Many societal applications, for example in domains such as health care, land use, disaster management, and environmental monitoring, increasingly rely on geographical information for their decision making. With the emergence of the World Wide Web, this information is typically located in multiple, distributed, diverse, and autonomously maintained systems. Therefore, strategic decision making in these societal applications relies on the ability to enrich the semantics associated with geographical information in order to support a wide variety of tasks, including data integration, interoperability, knowledge reuse, knowledge acquisition, knowledge management, spatial reasoning, and many others. While the research realized in the area of the Semantic Web provides a foundation for annotating information resources with machine-readable meaning, much work must still be done to elicit the semantics of geographical information. Spatial information has embraced many recent technological developments in which semantics plays a crucial role; data warehouses and OLAP systems, for instance, constitute a fundamental component of today’s decision-support systems.

Esteban Zimányi

GIS Integration and GRID

Matching Schemas for Geographical Information Systems Using Semantic Information

Integration and interoperability are basic requirements for geographic information systems (GIS). The web provides access to geographic data in several ways: on the one hand, web-based interactive GIS applications provide maps and routing information to end users; on the other hand, the data of some GIS can be accessed programmatically using a web service, thereby making the data available to other GIS applications. However, integrating data from various sources is a tedious task which requires, as a first step, a mapping between the involved schemas. Schema matching analyzes and identifies similarities between two schemas, but all approaches can only be semi-automatic, as human intervention is required to verify the result of a schema matching algorithm. In this paper, we present an approach that improves the matching result of existing solutions by using semantic information provided by the context of the geographic application. This reduces the effort of manually correcting the results, which has been validated in several application examples.
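To illustrate the general idea the abstract describes (this is a hedged sketch, not the authors' algorithm; the synonym groups and threshold below are invented), a plain string matcher can be boosted with term equivalences supplied by the geographic application context, so that attributes such as "street" and "road" match even though their names differ:

```python
from difflib import SequenceMatcher

def name_sim(a, b):
    """Purely syntactic similarity between two attribute names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def context_sim(a, b, synonyms):
    """synonyms: sets of terms the geographic context treats as equivalent."""
    a, b = a.lower(), b.lower()
    for group in synonyms:
        if a in group and b in group:
            return 1.0
    return 0.0

def match(schema1, schema2, synonyms, threshold=0.7):
    """Pair each attribute of schema1 with its best counterpart in schema2,
    keeping only pairs whose combined score passes the threshold."""
    pairs = []
    for s in schema1:
        score_of = lambda t: max(name_sim(s, t), context_sim(s, t, synonyms))
        best = max(schema2, key=score_of)
        score = score_of(best)
        if score >= threshold:
            pairs.append((s, best, round(score, 2)))
    return pairs

gis_synonyms = [{"street", "road"}, {"parcel", "lot"}]
print(match(["street", "parcel"], ["road", "lot"], gis_synonyms))
# [('street', 'road', 1.0), ('parcel', 'lot', 1.0)]
```

Without the context knowledge, "street" versus "road" scores near zero syntactically; with it, the pair is found. A human would still verify the proposed mapping, as the abstract notes.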

Christoph Quix, Lemonia Ragia, Linlin Cai, Tian Gan
Towards a Context Ontology for Geospatial Data Integration

Recently, geospatial data and Geographic Information Systems (GIS) have been increasingly used. As a result, the integration of geospatial data has become a crucial task for decision makers. Since GIS and geospatial databases are designed by different organizations using different representation models, and there are diverse levels of detail for the spatial features, achieving data integration is much more complex for geospatial databases. To help matters, context information may be employed to improve two fundamental aspects of geospatial data integration: (1) schema mapping generation and (2) query answering. However, a relevant issue when using context is how best to represent context information. Ontologies are an interesting approach to representing context, since they enable sharing and reusability and help reasoning. In this paper, we propose a context ontology to formally represent context in geospatial data integration. We also present an example where this context ontology is used to improve query processing.

Damires Souza, Ana Carolina Salgado, Patricia Tedesco
Spatial Data Access Patterns in Semantic Grid Environment

From the era of stove-piped Geographical Information Systems up to the interesting mash-ups involving Internet-based mapping services, the approaches to handling Geographical Information (GI) have changed significantly. This paper briefly concentrates on the distinctive features and current implementations of spatial data access patterns. Considering the information requirements of users in an Emergency Operations Center, the paper identifies the issues and challenges in handling GI in dynamically changing environments. Two new patterns based on an appropriate integration of Semantics and Grid technology are introduced that may satisfy the identified requirements. Necessary alterations to current practices for handling GI are discussed, with detailed considerations for realizing these patterns in open environments.

Vikram Sorathia, Anutosh Maitra

Spatial Data Warehouses

GeWOlap: A Web Based Spatial OLAP Proposal

Data warehouses and OLAP systems help to interactively analyze huge volumes of data. Spatial OLAP refers to the integration of spatial data in multidimensional applications at the physical, logical and conceptual levels. In order to include spatial information in the results of the decision-making process, we propose to define spatial measures as geographical objects in the multidimensional data model. This raises problems regarding aggregation operations and cube navigation, in both their semantic and implementation aspects. This paper presents GeWOlap, a web-based, integrated and extensible GIS-OLAP prototype able to support geographical measures. Our approach is illustrated by its application in a project for the CORILA consortium (Consortium for Coordination of Research Activities concerning the Venice Lagoon System).

Sandro Bimonte, Pascal Wehrle, Anne Tchounikine, Maryvonne Miquel
Affordable Web-Based Spatio-temporal Applications for Ad-Hoc Decisions

This paper outlines a framework to support the demand-driven analysis of spatio-temporal data. It will support decision making involving complex, multidimensional data and addresses the following challenges: (1) the demand-driven acquisition of context data, (2) the combination of context and user data, (3) the consideration of different aspects and levels of detail in the analysis, and (4) the storage and integration of analysis results for further use. The framework will provide web services based on open standards to populate and explore multidimensional spatio-temporal structures interactively. Online Analytical Processing (OLAP) concepts will serve as the model for storing and querying the data. Roaming services will be used to update contents coming from different Spatial Data Infrastructures. Semantic descriptions will allow switching analysis operations at different levels of detail. Research questions and challenges related to the underlying model and implementation aspects will be discussed.

Vera Hernández Ernst
Requirements Specification and Conceptual Modeling for Spatial Data Warehouses

The development of a spatial data warehouse (SDW) is a complex task which could be facilitated by a methodological framework. In particular, the requirements specification phase, being one of the earliest steps of system development, deserves attention, since it may entail significant problems if faulty or incomplete. However, the lack of a methodology for SDW design, and the presence of two actors in specifying data requirements, i.e., users and source systems, further complicate the development process. In this paper, we propose three different approaches for requirements specification that lead to the creation of conceptual schemas for SDW applications.

E. Malinowski, E. Zimányi

Spatio-temporal GIS

A Classification of Spatio-temporal Entities Based on Their Location in Space-Time

We present an axiomatic theory of spatio-temporal entities based on the primitives spatial-region, part-of, and is-an-instance-of. We provide a classification of spatio-temporal entities according to the number and kinds of regions at which they are located in space-time, and according to whether they instantiate or are instantiated at those regions. The focus on location and instantiation at a location as the central notions of this theory makes it particularly appropriate for serving as a foundational ontology for geography and geographic information science.

Thomas Bittner, Maureen Donnelly
Efficient Storage of Interactions Between Multiple Moving Point Objects

The quintessence of the Qualitative Trajectory Calculus – Double-Cross (QTC_C) is to describe the interaction between two moving objects adequately. Its naturalness has been studied before, both theoretically and by means of illustrative examples. Using QTC_C, this paper extends the fundamental approach to interactions between configurations of multiple moving objects. In order to optimally store and analyse trajectories of moving objects within QTC_C, a transformation from traditional quantitative information to QTC_C information is needed. This process is explained and illustrated by means of an example. It is shown that, once this transformation is done, QTC_C enables the storage, analysis and querying of real-world moving objects.
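Such a quantitative-to-qualitative transformation can be sketched for a single time step as follows. This is a hedged illustration only: sign conventions differ between QTC papers, and this follows one common reading of the double-cross idea rather than the authors' own transformation:

```python
def sign(x, eps=1e-9):
    """Qualitative sign: -1, 0 or +1, with a tolerance for 'stable'."""
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

def dist2(p, q):
    """Squared Euclidean distance between 2D points p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def cross(o, a, b):
    """z-component of (a-o) x (b-o): which side of line o->a point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def qtc_c(k0, k1, l0, l1):
    """Four-character QTC_C-style relation for one time step.
    k0, k1: positions of object k at t and t+1; l0, l1: the same for l.
    Characters: (k towards/away from l, l towards/away from k,
                 side of k's motion w.r.t. the line k->l,
                 side of l's motion w.r.t. the line l->k)."""
    c1 = sign(dist2(k1, l0) - dist2(k0, l0))  # - approaching, + receding
    c2 = sign(dist2(l1, k0) - dist2(l0, k0))
    c3 = sign(cross(k0, l0, k1))              # side of the reference line
    c4 = sign(cross(l0, k0, l1))
    return (c1, c2, c3, c4)

# Two objects moving straight towards each other along the x-axis:
print(qtc_c((0, 0), (1, 0), (4, 0), (3, 0)))  # (-1, -1, 0, 0)
```

Once every time step of a trajectory pair is reduced to such a tuple, only the sequence of qualitative relations needs to be stored and queried, which is the storage saving the abstract alludes to.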

Nico Van de Weghe, Frank Witlox, Anthony G. Cohn, Tijs Neutens, Philippe De Maeyer
Implementing Conceptual Spatio-temporal Schemas in Object-Relational DBMSs

Several spatio-temporal conceptual models have been proposed in the literature. Some of these models have associated CASE tools assisting the user from the creation of the conceptual schema to the generation of a physical schema for a target DBMS or GIS. However, such CASE tools only consider the translation of information structures (i.e. attributes). Since current DBMSs and GISs provide limited support for temporal and spatio-temporal information, when translating conceptual schemas it is necessary to automatically generate the functions and procedures that allow manipulating spatio-temporal information on the target platform. In this paper we describe how to realize this generation in object-relational DBMSs.

Esteban Zimányi, Mohammed Minout

Semantic Similarity

A Semantic Similarity Model for Mapping Between Evolving Geospatial Data Cubes

In a decision-making context, multidimensional geospatial databases are very important. They often represent data coming from heterogeneous and evolving sources. The evolution of multidimensional structures makes answering temporal queries difficult, even impossible, because of the lack of relationships between different versions of spatial cubes created at different times. This paper proposes a semantic similarity model, redefined from a model applied in the ontological field, to establish semantic relations between data cubes. The proposed model integrates several types of similarity components adapted to the different hierarchical levels of dimensions in multidimensional databases, and also integrates similarity between features of concepts. The model has been applied to a set of specifications from different inventories of the Montmorency Forest in Canada. Results show that the proposed model improves precision and recall compared to the original model. As future work, we suggest further investigation in order to integrate the proposed model into SOLAP tools.

Mohamed Bakillah, Mir Abolfazl Mostafavi, Yvan Bédard
Query Approximation by Semantic Similarity in GeoPQL

This paper proposes a method for query approximation in Geographical Information Systems. In our approach, queries are expressed in the Geographical Pictorial Query Language (GeoPQL), and query approximation is performed using WordNet, a lexical database for the English language available on the Internet. Due to the focus on the geographical context, we address WordNet partition taxonomies. If a concept contained in a query has no match in the database, the query is approximated using the immediate superconcepts and subconcepts in the WordNet taxonomy and their related degrees of similarity. Semantic similarity is evaluated using the information content approach, which shows a higher correlation with human judgment than traditional similarity measures.
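The information-content approach referred to above can be sketched as follows (a Resnik-style formulation: sim(c1, c2) is the information content -log p(a) of the most informative common ancestor a; the toy taxonomy and frequency counts below are invented purely for illustration):

```python
import math

# Hypothetical taxonomy (child -> parent) and corpus frequency counts.
parents = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
           "trout": "fish", "fish": "animal", "animal": None}
freq = {"cat": 10, "dog": 10, "trout": 10, "mammal": 5, "fish": 5, "animal": 1}

def ancestors(c):
    """The concept itself plus all its superconcepts up to the root."""
    out = set()
    while c is not None:
        out.add(c)
        c = parents[c]
    return out

def prob(c):
    # A concept's probability mass includes every occurrence of its
    # descendants, so more general concepts are more probable.
    total = sum(freq.values())
    return sum(freq[x] for x in freq if c in ancestors(x)) / total

def ic(c):
    """Information content: rarer (more specific) concepts carry more."""
    return -math.log(prob(c))

def resnik(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    return max(ic(a) for a in common)
```

Here resnik("cat", "dog") exceeds resnik("cat", "trout"): the former pair shares the informative subsumer "mammal", while the latter shares only the root "animal", whose information content is zero.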

Fernando Ferri, Anna Formica, Patrizia Grifoni, Maurizio Rafanelli
Sim-DL: Towards a Semantic Similarity Measurement Theory for the Description Logic $\mathcal ALCNR$ in Geographic Information Retrieval

Similarity measurement theories play an increasing role in GIScience and especially in information retrieval and integration. Existing feature and geometric models have proven useful in detecting close but not identical concepts and entities. However, until now none of these theories are able to handle the expressivity of description logics for various reasons and therefore are not applicable to the kind of ontologies usually developed for geographic information systems or the upcoming geospatial semantic web. To close the resulting gap between available similarity theories on the one side and existing ontologies on the other, this paper presents ongoing work to develop a context-aware similarity theory for concepts specified in expressive description logics such as $\mathcal ALCNR$.

Krzysztof Janowicz

Enhancing Usability

A Primer of Geographic Databases Based on Chorems

The goal of this paper is not to present research outcomes, but rather to present a new research plan on the use of chorems in geographic information systems. Created by R. Brunet, chorems are schematic representations of a territory. Presently, geographic decision makers are not totally satisfied by conventional cartography, essentially because they want to know where the problems are and what they are. Chorems thus appear as an interesting approach to unveil geographic problems, and so to help decision makers understand their territory, its structure and its evolution. After giving a definition and presenting some applications of chorems, we show how chorems can be discovered by spatial data mining, how they can help decision making, and also how chorem maps can be a novel visual entry point into a geographic database or data warehouse. Following Ben Shneiderman's approach, chorems can give an overview of the territory; then, by zooming and filtering, sub-chorem maps can be generated for smaller territories. Finally, a list of barriers to overcome is given as the main landmarks of a new research program to design a new kind of geographic information system or spatial decision support system.

Robert Laurini, Françoise Milleret-Raffort, Karla Lopez
A Service to Customize the Structure of a Geographic Dataset

This paper deals with the usability of vector geographic data structures. We define the usability of a geographic representation as its ability to fit the user's view, the user's application and the user's platform, plus its ability to be derived from a data producer's dataset and to be maintained in the future if needed. Specifying the structure of a usable representation and deriving the corresponding data require specific tools and expertise. We propose to assist users in this process by means of a Web-based system able to help them specify and perform a dataset restructuring process. The first step is to help users set their requirements; to achieve this goal, we propose a graphical interface for specifying the differences between an existing data structure and a target data structure. The second step is to help users turn these requirements into a transformation process applied to an existing dataset; we propose mechanisms to plan and execute this process and to check its result. The system relies on knowledge of existing data structures, platform grammar rules and typical application schemas. The first part of this paper analyses the notion of a data structure and how to specify it; the second part describes our proposal.

Sandrine Balley, Bénédicte Bucher, Thérèse Libourel
Using a Semantic Approach for a Cataloguing Service

Environmental applications (support for territorial diagnostics, monitoring of practices, integrated management, etc.) have strengthened the case for efforts to establish infrastructures for sharing and pooling georeferenced information. Within the framework of these initiatives, our work has led us to design and create a tool for cataloguing resources for environmental applications. This tool can be used to catalogue different types of resources (digital maps, vector layers, geographical databases, documents, etc.) using the ISO 19115 standard, and offers a search engine for these resources. The goal of this proposition is to improve the relevance of search engines by relying on semantic knowledge (thematic and spatial) of the domains concerned. In a first stage, the proposition consists of helping users in their search by offering mechanisms to expand or filter their queries. In a second stage, we use the results obtained and the underlying semantics for a global presentation of the results.

Paul Boisson, Stéphane Clerc, Jean-Christophe Desconnets, Thérèse Libourel

IFIP WG 2.12 and WG 12.4 International Workshop on Web Semantics (SWWS)

SWWS 2006 PC Co-chairs’ Message

Welcome to the proceedings of the second IFIP WG 2.12 & WG 12.4 International Workshop on Web Semantics (SWWS'06). These proceedings reflect the issues raised and presented during the SWWS workshop, which has proved to be an interdisciplinary forum for subject matters involving the theory and practice of web semantics. This year three special tracks were organized as part of the main workshop, namely: Security, Trust & Reputation Systems; Fuzzy Models and Systems; and Regulatory Ontologies. The focus of these special tracks is to allow researchers of different backgrounds (such as business, formal models, trust, security, law, ontologies, fuzzy sets, artificial intelligence and philosophy) to meet and exchange ideas.

Katia Sycara, Elizabeth Chang, Ernesto Damiani, Mustafa Jarrar, Tharam Dillon

Security, Risk and Privacy for the Semantic Web

Reputation Ontology for Reputation Systems

The growing development of web-based reputation systems in the 21st century will have a powerful social and economic impact on both business entities and individual customers, because it makes quality assessment of products and services transparent and thereby builds customer assurance in distributed web-based reputation systems. Web-based reputation systems will be a foundation for web intelligence in the future. Trust and reputation help capture business intelligence by establishing customer trust relationships, learning consumer behaviour, capturing market reaction to products and services, and disseminating customer feedback, buyers' opinions and end-user recommendations. They also reveal dishonest services, unfair trading, biased assessment, discriminatory actions, fraudulent behaviour and untrue advertising. The continuing development of these technologies will help improve professional business behaviour and the sales and reputation of sellers, providers, products and services. Given the importance of reputation, in this paper we propose an ontology for reputation. In the business world we can consider the reputation of a product, of a service, or of an agent. We propose an ontology for these entities that helps us unravel and conceptualize the components of the reputation of each of them.

Elizabeth Chang, Farookh Khadeer Hussain, Tharam Dillon
Rule-Based Access Control for Social Networks

Web-based social networks (WBSNs) are online communities where participants can establish relationships and share resources across the Web with other users. In recent years, several WBSNs have been adopting Semantic Web technologies, such as FOAF, for representing users' data and relationships, making it possible to exchange information across multiple WBSNs. Despite its advantages in terms of information diffusion, this has raised the need to give content owners more control over the distribution of their resources, which may be accessed by a community far wider than they expected.

In this paper, we present an access control model for WBSNs, where policies are expressed as constraints on the type, depth, and trust level of existing relationships. Relevant features of our model are the use of certificates for guaranteeing relationships' authenticity, and the client-side enforcement of access control according to a rule-based approach, where a subject requesting to access an object must demonstrate that it has the right to do so.

Barbara Carminati, Elena Ferrari, Andrea Perego
An OWL Copyright Ontology for Semantic Digital Rights Management

Digitalisation and the Internet have caused a revolution in content reproduction and distribution, with clear implications for copyright management. Many Digital Rights Management (DRM) efforts facilitate copyright management in closed domains, but they face great difficulties when forced to interoperate in an open domain like the World Wide Web. In order to facilitate interoperation and automation, DRM systems can be enriched with domain formalisations like the Copyright Ontology. This ontology is implemented using the Description Logic variant of the Web Ontology Language (OWL-DL). This approach facilitates the efficient checking of usages against licenses, which is reduced to description logic classification.

Roberto García, Rosa Gil
A Legal Ontology to Support Privacy Preservation in Location-Based Services

During the last years, many laws have been promulgated in diverse countries to protect citizens' privacy. This is due to the increase in privacy threats caused by the tendency to use information technologies in all scopes. Location-Based Services (LBS) are a setting in which this privacy can be harmed. Even though mechanisms exist to protect this right in LBS, these services have generally not been developed over regulatory norms, or have done so only partially or by interpreting those norms in a particular way. This situation could be a consequence of the lack of a common knowledge base representing the actual legislation in matters of privacy. In this paper an ontology of the main Spanish privacy norm is presented, as well as the method used to construct it. The ontology is specifically aimed at, and applied to, the preservation of privacy in LBS.

Hugo A. Mitre, Ana Isabel González-Tablas, Benjamín Ramos, Arturo Ribagorda
A Fuzzy Approach to Risk Based Decision Making

Decision making is a tough process. It involves dealing with a lot of uncertainty and projecting what the final outcome might be; depending on the projection of the uncertain outcome, a decision has to be taken. In a peer-to-peer financial interaction, in order to analyze the risk, the trusting agent has to consider the possible likelihood of failure of the interaction, and the possible consequences of failure for the resources it commits to the interaction, before concluding whether or not to interact with the probable trusted agent. Further, it might also have to choose an agent to interact with from a set of probable trusted agents. In this paper we propose a fuzzy risk-based decision making system that assists the trusting agent in easing its decision making process.

Omar Khadeer Hussain, Elizabeth Chang, Farookh Khadeer Hussain, Tharam S. Dillon

Semantic Web and Querying

Header Metadata Extraction from Semi-structured Documents Using Template Matching

With the recent proliferation of documents, automatic metadata extraction from documents has become an important task. In this paper, we propose a novel template-matching-based method for header metadata extraction from semi-structured documents stored as PDF. In our approach, templates are defined and the document is considered as strings with format. Templates are used to guide a finite state automaton (FSA) that extracts the header metadata of papers. The test results indicate that our approach can effectively extract metadata without any training cost and is applicable in some special situations. This approach can effectively assist automatic index creation in many fields such as digital libraries, information retrieval, and data mining.
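A minimal sketch of the template-matching idea follows (the paper's templates and automaton are more elaborate; the fields, font-size predicates and sample document below are entirely hypothetical). The document is treated as a list of formatted strings, and a template configures a small state machine that advances one state per extracted field:

```python
TEMPLATE = [  # (field, predicate on a formatted line) -- hypothetical template
    ("title",    lambda text, size: size >= 18),
    ("authors",  lambda text, size: 10 <= size < 18),
    ("abstract", lambda text, size: text.lower().startswith("abstract")),
]

def extract(lines, template=TEMPLATE):
    """lines: list of (text, font_size) pairs from a PDF text extractor.
    Walks the lines with a simple FSA: each state looks for the next
    template field and advances only when a line is accepted."""
    meta, state = {}, 0
    for text, size in lines:
        if state >= len(template):
            break
        field, accept = template[state]
        if accept(text, size):
            meta[field] = text
            state += 1
    return meta

doc = [("Unit Tests for Ontologies", 20),
       ("D. Vrandecic and A. Gangemi", 12),
       ("Abstract. In software engineering ...", 10)]
print(extract(doc))  # maps 'title', 'authors' and 'abstract' to the matched lines
```

Because the knowledge lives in the template rather than in learned weights, the approach needs no training data, which matches the "without any training cost" claim of the abstract.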

Zewu Huang, Hai Jin, Pingpeng Yuan, Zongfen Han
Query Terms Abstraction Layers

A problem with traditional information retrieval systems is that they typically retrieve information without an explicitly defined domain of interest to the user. Consequently, the system presents a lot of information that is of little relevance to the user. Ideally, the queries' real intentions should be exposed and reflected in the way the underlying retrieval machinery deals with them. In this paper we propose using abstraction layers to differentiate the query terms. We explain why we believe this differentiation of query terms is necessary, and the potential of this approach. The whole retrieval system is under development as part of a Semantic Web standardization project for the Norwegian oil and gas industry.

Stein L. Tomassen, Darijus Strasunskas
VQS – An Ontology-Based Query System for the SemanticLIFE Digital Memory Project

Ever increasing capacities of contemporary storage devices inspire the vision of accumulating (personal) information, without the need to delete old data, over a long time-span. Hence the target of the SemanticLIFE project is to create a Personal Information Management system for a human lifetime of data. One of the most important characteristics of the system is its dedication to retrieving information in a very efficient way. By addressing user demands regarding the reduction of ambiguities, our approach aims at a user-oriented and yet powerful system with satisfactory query performance. In this paper, we introduce the query system of SemanticLIFE, the Virtual Query System, which uses emerging Semantic Web technologies to fulfil users' requirements.

Hanh Huu Hoang, Amin Andjomshoaa, A. Min Tjoa

Ontologies

Software Design Process Ontology Development

The software design process has been widely followed and used to describe the logical organisation of software using different types of models. However, when it comes to remote communication about a software design, it is prone to miscommunication, misunderstanding or misinterpretation, especially with ambiguous terms or with people having different backgrounds and knowledge of the software design process. This motivates the use of a unified knowledge representation of the software design process, i.e. a software design process ontology, for communication and coordination. The knowledge representation introduced here, in the form of a software design process ontology, is based on a formal description of the software design process using the web ontology language OWL. Software design process knowledge is defined and captured in a formal and machine-processable fashion. This knowledge is then open and facilitates the sharing of software designs among software engineers. We discuss the development of the software design process ontology in this paper.

P. Wongthongtham, E. Chang, T. Dillon
Ontology Views: A Theoretical Perspective

Ontologies are the foundation of the Semantic Web (SW) and one of the keys to its success. Ontology views, in turn, hold the promise to: (a) provide a manageable portion of a larger ontology for localized applications and users, (b) enable the precise extraction of sub-ontologies of a larger ontology that commit to the main ontology, (c) enable localized customization and usage of a portion of a larger ontology, and (d) enable interoperability between large ontology bases and applications. It is therefore interesting to look at ontology views and their desired properties, as there exists no agreed-upon standard, methodology or formalism to specify, define and materialize ontology views. In this paper, we elaborate on our own research direction towards proposing a meaningful ontology view formalism and its associated semantics.

R. Rajugan, Elizabeth Chang, Tharam S. Dillon
OntoExtractor: A Fuzzy-Based Approach to Content and Structure-Based Metadata Extraction

This paper describes OntoExtractor, a tool for extracting metadata from heterogeneous sources of information, producing a "quick-and-dirty" hierarchy of knowledge. The tool is specifically tailored for the quick classification of semi-structured data. This feature makes OntoExtractor convenient for dealing with web-based data sources.

Paolo Ceravolo, Ernesto Damiani, Marcello Leida, Marco Viviani

Applications of Semantic Web

Towards Semantic Interoperability of Protein Data Sources

Several approaches for data interoperation identified by Karp have been implemented for biological databases. We extend Karp's approach to interoperation not only to protein databases but also to knowledge bases and other information sources. This paper outlines an algebra for protein data source composition based on our existing work on the Protein Ontology (PO). We consider the case of establishing correspondences between various protein data sources using semantic relationships over the conceptual framework of PO. We provide a specific set of relationships over the PO framework to cover the data semantics needed for integrating information from diverse protein data sources. These relationships help in defining a semantic query algebra for PO to efficiently reason over and query the instance store.

Amandeep S. Sidhu, Tharam S. Dillon, Elizabeth Chang
QP-T: Query Pattern-Based RDB-to-XML Translation

This paper proposes a new query pattern-based relational schema-to-XML schema translation (QP-T) algorithm to resolve the implicit referential integrity issue. Various translation methods have been introduced covering structural and/or semantic aspects. However, most conventional methods consider only the explicit referential integrities specified in the relational schema. This causes several problems, such as incorrect transformation and abnormal relational model transition. The QP-T algorithm analyzes query patterns and extracts implicit referential integrities from equi-joins between columns, based on the idea that columns related by an equi-join in a relational schema can have referential integrity. The most distinct contribution of the QP-T algorithm is to enhance the extraction of referential integrity information for translation. The algorithm therefore reflects not only explicit but also implicit referential integrities during RDB-to-XML translation; it also prevents incorrect conversion of XML documents and increases translation accuracy.

Jinhyung Kim, Dongwon Jeong, Yixin Jing, Doo-Kwon Baik

Concepts for the Semantic Web

A Study on the Use of Multicast Protocol Traffic Overload for QCBT

Multicast requires reliability in peer-to-peer or multiple-to-multiple communication services, and this demand for reliability becomes an increasingly important factor in managing the whole network. A multicast communication method is a way for a transmitter to deliver multicast data to every registered member of the transmitter's group; in general, it can be classified into the traditional and the reliable communication methods. The traditional method is very fast in connection but its quality of service is poor; in contrast, the reliable method provides good quality of service but its speed is somewhat poor. To address these shortcomings, this paper proposes a multicast communication method using QCBT (Quality of Service Core Based Tree). A fair and practical bandwidth is used for data packet transmission along with the use of QCBT. The bandwidth and data processing capability filter the data transmitted through a QCBT router and upgrade multimedia data packets more effectively, so that recipients at various levels receive the effective data packets. Based on these facts, the study implements and evaluates, in a simulation, the efficiency of a router able to transmit the fair bandwidth from a QCBT router.

Won-Hyuck Choi, Young-Ho Song
Semantic Granularity for the Semantic Web

In this paper we describe a framework for the application of semantic granularities to the Semantic Web. Given a data source and an ontology formalizing qualities which describe the source, we define a dynamic granularity system for the navigation of the repository according to different levels of detail, i.e., granularities. Semantic granularities summarize the degree of informativeness of the qualities, taking into account both the individuals populating the repository, which concur in the definition of the implicit semantics, and the ontology schema, which gives the formal semantics. The method adapts and extends to ontologies existing natural language processing techniques for topic generalization.

Riccardo Albertoni, Elena Camossi, Monica De Martino, Franca Giannini, Marina Monti
Maximum Rooted Spanning Trees for the Web

This paper focuses on finding maximum rooted spanning trees (MRSTs) for structured web search including hop constraints. We describe the meaning of structured web search and develop two binary integer linear programming models to find the best MRST. New methods for measuring the relevance among web objects are devised and used for structured web search. Some case studies are performed with real web sites and results are reported.
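The paper formulates the hop-constrained MRST problem exactly as binary integer linear programs; purely as a hedged illustration of what such a tree is, the following greedy Prim-style heuristic (not the authors' method, and not guaranteed optimal) repeatedly attaches the non-tree node reachable by the heaviest relevance edge from a tree node whose depth is still below the hop limit:

```python
def greedy_mrst(nodes, weight, root, max_hops):
    """nodes: web objects; weight: dict mapping (u, v) pairs to a relevance
    score; root: the page the tree is rooted at; max_hops: depth bound."""
    depth = {root: 0}   # hop count of each node already in the tree
    tree = []
    while len(depth) < len(nodes):
        best = None
        for u in list(depth):
            if depth[u] >= max_hops:       # u may not grow further branches
                continue
            for v in nodes:
                if v in depth:
                    continue
                w = weight.get((u, v), weight.get((v, u)))
                if w is not None and (best is None or w > best[0]):
                    best = (w, u, v)
        if best is None:   # remaining nodes unreachable within the hop limit
            break
        w, u, v = best
        depth[v] = depth[u] + 1
        tree.append((u, v, w))
    return tree

# Hypothetical site graph with invented relevance scores:
site = ["home", "products", "blog", "faq"]
rel = {("home", "products"): 5, ("home", "blog"): 3,
       ("products", "faq"): 4, ("blog", "faq"): 9}
print(greedy_mrst(site, rel, "home", 2))
# [('home', 'products', 5), ('products', 'faq', 4), ('home', 'blog', 3)]
```

Note how the hop constraint changes the result: the heaviest edge ("blog", "faq") with weight 9 cannot be used once "faq" already sits at depth 2, which is exactly the kind of trade-off the exact ILP models are designed to resolve optimally.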

Wookey Lee, Seungkil Lim

Workshop on Context Aware Mobile Systems (CAMS)

CAMS 2006 PC Co-chairs’ Message

Context awareness is increasingly forming one of the key strategies for delivering effective information services in mobile contexts. The limited screen displays of many mobile devices mean that content must be carefully selected to match the user's needs and expectations, and context provides one powerful means of performing such tailoring. Context-aware mobile systems will almost certainly become ubiquitous – already in the United Kingdom affordable 'smartphones' include GPS location support. With this hardware comes the opportunity for 'onboard' applications to use location data to provide new services – until recently such systems could only be created with complex and expensive components. Furthermore, the current 'mode' of the phone (e.g. silent, meeting, outdoors), the contents of the built-in calendar, etc. can all be used to provide a rich context for the user's immediate environment.

Annika Hinze, George Buchanan

Models of Context

An Investigation into a Universal Context Model to Support Context-Aware Applications

If a mobile device is to offer rich context-aware behaviour it must have a good knowledge of the world around us. This paper explores the concept of a universal context model, able to represent any form of context information and therefore to act as an enabler of the full spectrum of context-aware applications. It explores how such a model may accurately represent – as far as practically possible – the multitude of different objects we encounter in our surrounding environment and their many states and interrelationships. Three key propositions are that the context model should be of an object-oriented nature, that location is most appropriately and flexibly represented as a relationship between two objects rather than being considered as a special type of object in itself, and finally, that objects may be coupled with observer-dependent validity rules that determine whether the object is visible within the model.

Jason Pascoe, Helena Rodrigues, César Ariza
A System for Context-Dependent User Modeling

We present a system for learning and utilizing context-dependent user models. The user models attempt to capture the interests of a user and link the interests to the situation of the user. The models are used for making recommendations to applications and services on what might interest the user in her current situation. In the design process we have analyzed several mock-ups of new mobile, context-aware services and applications. The mock-ups spanned rather diverse domains, which helped us to ensure that the system is applicable to a wide range of tasks, such as modality recommendations (e.g., switching to speech output when driving a car), service category recommendations (e.g., journey planners at a bus stop), and recommendations of group members (e.g., people with whom to share a car). The structure of the presented system is highly modular. First of all, this ensures that the algorithms that are used to build the user models can be easily replaced. Secondly, the modularity makes it easier to evaluate how well different algorithms perform in different domains. The current implementation of the system supports rule based reasoning and tree augmented naïve Bayesian classifiers (TAN). The system consists of three components, each of which has been implemented as a web service. The entire system has been deployed and is in use in the EU IST project MobiLife. In this paper, we detail the components that are part of the system and introduce the interactions between the components. In addition, we briefly discuss the quality of the recommendations that our system produces.
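The abstract mentions rule-based reasoning and tree augmented naive Bayesian classifiers (TAN) for linking interests to situations. As a simpler illustration of the same idea only, the sketch below uses a plain naive Bayes counter (TAN additionally models dependencies between context features); the class, feature names, and training data are all hypothetical, not part of the MobiLife system.

```python
from collections import Counter, defaultdict

class ContextRecommender:
    """Toy context-dependent user model: learns P(action | context features)
    from observed (context, action) pairs and recommends the action with the
    highest smoothed naive Bayes score for the current context."""

    def __init__(self):
        self.action_counts = Counter()
        # (feature, action) -> Counter of observed feature values
        self.feature_counts = defaultdict(Counter)

    def observe(self, context, action):
        self.action_counts[action] += 1
        for feat, value in context.items():
            self.feature_counts[(feat, action)][value] += 1

    def recommend(self, context):
        total = sum(self.action_counts.values())
        best, best_score = None, 0.0
        for action, n in self.action_counts.items():
            score = n / total  # prior P(action)
            for feat, value in context.items():
                c = self.feature_counts[(feat, action)]
                score *= (c[value] + 1) / (n + len(c) + 1)  # Laplace smoothing
            if score > best_score:
                best, best_score = action, score
        return best

rec = ContextRecommender()
rec.observe({"location": "bus_stop"}, "journey_planner")
rec.observe({"location": "car"}, "speech_output")
```

After a few such observations, `rec.recommend({"location": "car"})` would favour switching to speech output, mirroring the modality-recommendation example in the abstract.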

Petteri Nurmi, Alfons Salden, Sian Lun Lau, Jukka Suomela, Michael Sutterer, Jean Millerat, Miquel Martin, Eemil Lagerspetz, Remco Poortinga
Granular Context in Collaborative Mobile Environments

Our research targets collaborative environments with focus on mobility and teams. Teams comprise a number of people working on multiple projects and activities simultaneously. As mobile and wireless technology advances people are no longer bound to their offices. Team members are able to collaborate while on the move. Sharing context information thus becomes a vital part of collaborative environments. However, challenges such as heterogeneous devices, connectivity, and bandwidth arise due to the dynamic nature of distributed, mobile teams. We present a methodology for context modeling and employ a framework that reduces costs such as computing information and usage of network resources by transferring context at relevant levels of detail. At the same time, robustness of the system is improved by dealing with uncertain context information. Our framework is implemented on an OSGi container platform using Web services for communication means.

Christoph Dorn, Daniel Schall, Schahram Dustdar

Service Models

Context-Aware Services for Physical Hypermedia Applications

In this paper we present an approach for designing and deploying context-aware services in the context of physical hypermedia applications, those applications in which mobile users explore real and digital objects using the hypermedia paradigm. We show how to adapt the objects' response to the user's navigation context by changing the role these objects play in the user's travel. We first motivate our research with a simple example and survey some related work; next we introduce the concept of travel object and show that physical objects might assume the roles of different types of travel objects. We then present an architectural approach for context-aware services and describe its evolution into a software substrate for physical hypermedia services. We conclude by indicating some further work we are pursuing.

Gustavo Rossi, Silvia Gordillo, Cecilia Challiol, Andrés Fortier
QoS-Predictions Service: Infrastructural Support for Proactive QoS- and Context-Aware Mobile Services (Position Paper)

Today’s mobile data applications aspire to deliver services to a user anywhere – anytime, while fulfilling his Quality of Service (QoS) requirements. However, the success of the service delivery heavily relies on the QoS offered by the underlying networks. As the services operate in a heterogeneous networking environment, we argue that generic information about the networks’ offered QoS may enable an anyhow mobile service delivery based on an intelligent (proactive) selection of ‘any’ network available in the user’s context (location and time).

Towards this direction, we develop a QoS-predictions service provider, which includes functionality for the acquisition of generic offered-QoS information and which, via multidimensional processing and history-based reasoning, will provide predictions of the expected offered QoS in a reliable and timely manner.

We acquire the generic QoS information from distributed mobile services’ components quantitatively (actively and passively) measuring the application-level QoS, while the reasoning is based on statistical data mining and pattern recognition techniques.

Katarzyna Wac, Aart van Halteren, Dimitri Konstantas

Data Handling

LiteMap: An Ontology Mapping Approach for Mobile Agents’ Context-Awareness

Mobile agents’ applications have to operate within environments whose execution conditions change continuously and are not easily predictable. They have to adapt dynamically to changes in their context resulting from others’ activities and resource variation. To be aware of their execution context, mobile agents require a formal and structured model of the context and a reasoning process for detecting suspect situations. In this work, we use formal ontologies to model the agents’ execution context as well as its composing elements. After each agent migration, a reasoning process is carried out upon these ontologies to detect meaningful environmental changes. This reasoning is based on Semantic Web techniques for mapping ontologies. The output of the mapping process is a set of semantic relations among the ontologies’ concepts that the agent will use to trigger a reconfiguration of its structure according to some adaptation policies.

Haïfa Zargayouna, Nejla Amara
Compressing GPS Data on Mobile Devices

In context-aware mobile systems, data on past user behaviour or use of a device can give critical information. The scale of this data may be large, and it must be quickly searched and retrieved. Compression is a powerful tool for both storing and indexing data. For text documents, powerful algorithms using structured storage achieve high compression and rapid search and retrieval. Byte-stream techniques provide higher compression, but lack indexing and have slow retrieval.

Location is a common form of context frequently used in research prototypes of tourist guide systems, location-aware searching and adaptive hypermedia. In this paper, we present an exploration of record-based compression of Global Positioning System (GPS) data that reveals significant technical limitations on what can be achieved on mobile devices, and a discussion of the benefits of different compression techniques on GPS data.
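The abstract does not spell out a compression scheme; a classic record-based approach for GPS fixes is fixed-point delta encoding, where each coordinate is stored as the difference from the previous fix in zigzag varint form. The sketch below illustrates that general idea only (it is not the paper's method; names and the 1e-5-degree scale are assumptions).

```python
def encode_track(points, scale=100000):
    """Delta-encode a GPS track. Coordinates become fixed-point integers
    (1e-5 degree ~ 1 m resolution); only the difference from the previous
    fix is stored as a zigzag-encoded varint, so slowly moving tracks cost
    one or two bytes per coordinate instead of an 8-byte double."""
    out = bytearray()
    prev_lat = prev_lon = 0
    for lat, lon in points:
        ilat, ilon = round(lat * scale), round(lon * scale)
        for delta in (ilat - prev_lat, ilon - prev_lon):
            z = (delta << 1) ^ (delta >> 63)  # zigzag: move sign bit to LSB
            while z >= 0x80:                  # varint: 7 bits per byte
                out.append((z & 0x7F) | 0x80)
                z >>= 7
            out.append(z)
        prev_lat, prev_lon = ilat, ilon
    return bytes(out)

def decode_track(data, scale=100000):
    points, vals, shift, acc = [], [], 0, 0
    for byte in data:
        acc |= (byte & 0x7F) << shift
        shift += 7
        if byte < 0x80:                       # last byte of this varint
            vals.append((acc >> 1) ^ -(acc & 1))  # undo zigzag
            acc, shift = 0, 0
    lat = lon = 0
    for i in range(0, len(vals), 2):
        lat += vals[i]
        lon += vals[i + 1]
        points.append((lat / scale, lon / scale))
    return points
```

Because each record is decoded in sequence, retrieval from an arbitrary offset needs extra index points, which is exactly the storage/indexing trade-off the paper discusses.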

Ryan Lever, Annika Hinze, George Buchanan
Seamless Service Adaptation in Context-Aware Middleware for U-HealthCare

In a ubiquitous computing environment, intelligent context-aware services should adapt seamlessly to the context-aware middleware, which requires semantic matching between service and middleware. To achieve this matching, an ontology is used for context-aware service adaptation. If the service description includes a service profile, service inputs and outputs, service grounding, etc., the middleware can understand the description by applying inference. This paper proposes a seamless service adaptation agent for context-aware middleware in a wearable computing environment.

Eun Jung Ko, Hyung Jik Lee, Jeun Woo Lee

Users and Uses

Using Laddering and Association Techniques to Develop a User-Friendly Mobile (City) Application

In this paper, we present an ongoing project on the concept of interaction among people via a mobile (city) application. The focus of our research lies on its usability and sociability, i.e., on all aspects that are related to community forming and development in a city context. In order to analyze them, we have developed a method that combines two techniques that are not common in usability studies: the association technique and the laddering method. We have called this combination the ‘connotation chain’ technique. In this paper, we will first introduce this technique showing its advantages and disadvantages in a usability study. Then, we will present the results of applying this technique to our specific objective and, finally, we will discuss its implications for the further development of the interface of this application.

Greet Jans, Licia Calvi
A Situation-Aware Mobile System to Support Fire Brigades in Emergency Situations

In a firefighter emergency mission it is essential for the members of a fire brigade to get an intelligent and reliable overview of the complete situation, presented according to the role of each member. In this paper we report on the design and development of a system to support a fire brigade on site with a set of mobile services that offers a role-based focus+context user interface. It provides the required overview of the emergency situation according to the user's task and context, while life-saving information is emphasized. The implementation of a context-rule-based decision module enhances the visualization of required information. Interaction with the user interface is designed for use in the wild, which in this case comes down to providing a "fat finger" interface that allows firemen to interact with it on site with their gloves on.

Kris Luyten, Frederik Winters, Karin Coninx, Dries Naudts, Ingrid Moerman
Context-Based e-Learning Composition and Adaptation

To be effective, a learning process must be adapted to the learner’s context. Such a context should be described at least from pedagogical, technological and learning perspectives. Current e-Learning approaches either fail to provide learning experiences within rich contexts, thus hampering the learning process, or provide extremely contextualized content that is highly coupled with context information, barring their reuse in some other context. In this paper we decouple context as much as possible from content so that the latter can be reused and adapted to context changes. This approach extends the LOM standard by enriching content context, thereby allowing e-Learning platforms to dynamically compose, reuse and adapt educative content provided by third parties (Learning Objects). Three context models are presented together with a multiagent-based e-Learning platform that composes and adapts extended Learning Objects according to learner’s context changes.

Maria G. Abarca, Rosa A. Alarcon, Rodrigo Barria, David Fuller

Filtering and Control

Context-Driven Data Filtering: A Methodology

The goal of this paper is the introduction of a methodology for designing context-driven data selection, that is, the possibility to tailor the available, usually too rich, data to be held on portable mobile devices according to context. First of all, we introduce the concept of context and its model, a data structure that expresses knowledge about the user, the environment and the possible scenarios. We then focus on the proposed methodology for selecting, by means of such information, the relevant data to be made available on a user device. An application of the proposed methodology is the selection of data of interest for portable devices, where computation, memory, power and connectivity resources are limited, and thus tailoring the available data according to context is a mandatory task.
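The outcome of such a methodology, once the relevant data have been identified, is a simple filter: keep only the records whose context tags match the user's current situation. The sketch below is a toy illustration of that end result only (not the paper's design method; record fields and context dimensions are hypothetical).

```python
def filter_for_device(records, context):
    """Context-driven data selection: keep only the records whose tags are
    compatible with the current context, so a resource-limited device stores
    a tailored subset of the full data source."""
    def relevant(record):
        # A record with no value for a context dimension applies everywhere.
        return all(record.get(dim) in (None, value)
                   for dim, value in context.items())
    return [r for r in records if relevant(r)]

records = [
    {"name": "metro map", "city": "Paris"},
    {"name": "tide tables", "city": "Brest"},
    {"name": "emergency numbers"},  # untagged: relevant in any city
]
subset = filter_for_device(records, {"city": "Paris"})
```

With the context `{"city": "Paris"}`, only the metro map and the untagged emergency numbers would be copied to the device.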

Cristiana Bolchini, Elisa Quintarelli
A Contextual Attribute-Based Access Control Model

The emergence of ubiquitous mobile devices, such as MP3 players, cellular phones, PDAs, and laptops, has sparked the growth of rich, mobile applications. Moreover, these applications are increasingly “aware” of the user and her surrounding environment. Dynamic mobile environments are generating new requirements – such as allowing users to access real-time, customized services on-demand and with no prior registration – that are not currently addressed by existing approaches to authorization. We investigate using contextual information present in the user’s operating environment, such as a user’s location, for defining an authorization policy. More precisely, we have defined an access control model that uses contextual attributes to capture the dynamic properties of a mobile environment, including attributes associated with users, objects, transactions, and the environment. Our Contextual Attribute-Based Access Control model lends itself more naturally to a mobile environment where subjects and objects are dynamic. Our authorization model promotes the adoption of many revolutionary mobile applications by allowing for the specification of flexible access control policies.
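The core of an attribute-based model of this kind is that an authorization decision evaluates predicates over user, object, and environment attributes rather than a static identity list. The following is a minimal sketch of that general idea, not the paper's exact model; all class names, attributes, and the sample policy are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: dict         # e.g. {"role": "customer"}
    resource: dict     # e.g. {"type": "payment"}
    environment: dict  # e.g. {"location": "store_42", "hour": 14}

@dataclass
class Rule:
    """A rule permits a request when every attribute predicate holds."""
    conditions: list   # callables Request -> bool

    def permits(self, req):
        return all(cond(req) for cond in self.conditions)

def authorize(rules, req):
    """Attribute-based check: grant access if any rule permits the request."""
    return any(rule.permits(req) for rule in rules)

# Sample contextual policy: payments allowed only from an in-store
# location during opening hours -- no prior registration of the user.
in_store = Rule(conditions=[
    lambda r: r.resource["type"] == "payment",
    lambda r: r.environment["location"].startswith("store_"),
    lambda r: 9 <= r.environment["hour"] < 21,
])
```

Because the environment attributes are re-evaluated on every request, the same user is granted or denied access as her location and the time of day change, which is the dynamic behaviour the model targets.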

Michael J. Covington, Manoj R. Sastry

Erratum

Erratum
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops
Editors
Robert Meersman
Zahir Tari
Pilar Herrero
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48276-5
Print ISBN
978-3-540-48273-4
DOI
https://doi.org/10.1007/11915072