
2004 | Book

On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE

OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2004, Agia Napa, Cyprus, October 25-29, 2004, Proceedings, Part II

Edited by: Robert Meersman, Zahir Tari

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

A special mention for 2004 is in order for the new Doctoral Symposium Workshop, where three young postdoc researchers organized an original setup and formula to bring PhD students together and allow them to submit their research proposals for selection. A limited number of the submissions and their approaches were independently evaluated by a panel of senior experts at the conference, and presented by the students in front of a wider audience. These students also got free access to all other parts of the OTM program, and only paid a heavily discounted fee for the Doctoral Symposium itself. (In fact their attendance was largely sponsored by the other participants!) If evaluated as successful, it is the intention of the General Chairs to expand this model in future editions of the OTM conferences and so draw in an audience of young researchers to the OnTheMove forum. All three main conferences and the associated workshops share the distributed aspects of modern computing systems, and the resulting application pull created by the Internet and the so-called Semantic Web. For DOA 2004, the primary emphasis stayed on the distributed object infrastructure; for ODBASE 2004, it was the knowledge bases and methods required for enabling the use of formal semantics; and for CoopIS 2004 the main topic was the interaction of such technologies and methods with management issues, such as occur in networked organizations. These subject areas naturally overlap, and many submissions in fact also treat envisaged mutual impacts among them.

Table of Contents

Frontmatter

Advanced Information Systems

Security Management Through Overloading Views

The model of overloading views is a facility allowing the programmer to separate some kinds of crosscutting concerns that occur during the design, implementation and maintenance of database applications. In this paper we show how it can be used to manage data security. The model is based on updateable object views built within the stack-based approach to object-oriented query languages. After inserting an overloading view on top of a given population of objects, all references to the objects come via the view. Thus the view can implement additional security semantics independently of the object implementation. Views allow one to add such new semantics to all the operations (retrieve, insert, update, delete) that can be performed on the objects. In our model, overloading views are named encapsulated database entities that can be dynamically inserted, modified or deleted. Because virtual objects delivered by an overloading view are not distinguishable from stored objects, the overloading views model allows one to form a chain of views, where each next view adds new semantics (a specific concern) to the semantics introduced by the object implementation and previous views. In this way any new security requirement can be implemented independently of other requirements.

Radosław Adamus, Kazimierz Subieta
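The chain-of-views idea can be illustrated with a toy Python sketch. This is not the paper's SBQL-based mechanism; all class and attribute names here are invented. Each view wraps the layer below and may override an operation to add its own concern, so the topmost view is used exactly like a stored object:

```python
# Hypothetical sketch of a chain of overloading views: each view delegates
# to the layer below it and may add semantics (here, a security concern)
# without touching the object implementation.

class StoredObject:
    """The underlying object implementation."""
    def __init__(self, data):
        self._data = dict(data)
    def retrieve(self, attr):
        return self._data[attr]
    def update(self, attr, value):
        self._data[attr] = value

class OverloadingView:
    """Base view: delegates every operation to the layer underneath."""
    def __init__(self, below):
        self.below = below
    def retrieve(self, attr):
        return self.below.retrieve(attr)
    def update(self, attr, value):
        self.below.update(attr, value)

class SecurityView(OverloadingView):
    """Adds an access-control concern on top of whatever lies below."""
    def __init__(self, below, hidden_attrs):
        super().__init__(below)
        self.hidden = set(hidden_attrs)
    def retrieve(self, attr):
        if attr in self.hidden:
            raise PermissionError(f"access to {attr!r} denied")
        return super().retrieve(attr)

emp = StoredObject({"name": "Kim", "salary": 50000})
# The caller cannot tell a view from a stored object, so views stack freely.
view = SecurityView(OverloadingView(emp), hidden_attrs=["salary"])
print(view.retrieve("name"))  # delegated through the chain
```

Because each layer exposes the same interface, further concerns (auditing, filtering) could be added as additional links in the chain without changing any existing code.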
Paradigms for Decentralized Social Filtering Exploiting Trust Network Structure

Recommender systems, notably collaborative and hybrid information filtering approaches, vitally depend on neighborhood formation, i.e., selecting small subsets of the most relevant peers from which to receive personal product recommendations. However, common similarity-based neighborhood formation techniques imply various drawbacks, rendering the conception of decentralized recommender systems virtually impossible. We advocate trust metrics and trust-driven neighborhood formation as an appropriate surrogate, and outline various additional benefits of harnessing trust networks for recommendation generation purposes. Moreover, we present an implementation of one such trust-based recommender and perform empirical analysis to underpin its fitness when coupled with an intelligent, content-based filter.

Cai-Nicolas Ziegler, Georg Lausen
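A minimal sketch of trust-driven neighborhood formation, under assumptions of my own: a peer's neighborhood is the k most trusted nodes reachable in its trust network, with trust decaying multiplicatively along paths. The decay model and the example graph are illustrative, not the trust metric used in the paper:

```python
# Toy trust-driven neighborhood formation: propagate trust outward from a
# source peer and keep the k best-trusted reachable peers as neighbors.

def trust_neighborhood(graph, source, k):
    """graph: {node: {neighbor: direct_trust in [0, 1]}}."""
    best = {source: 1.0}          # best known trust score per node
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for nb, t in graph.get(node, {}).items():
                score = best[node] * t   # trust decays along the path
                if score > best.get(nb, 0.0):
                    best[nb] = score
                    nxt.append(nb)       # improved: re-expand from here
        frontier = nxt
    ranked = sorted((n for n in best if n != source),
                    key=lambda n: -best[n])
    return ranked[:k]

web = {"a": {"b": 0.9, "c": 0.4}, "b": {"d": 0.8}, "c": {"d": 0.9}}
print(trust_neighborhood(web, "a", 2))  # ['b', 'd']
```

Unlike similarity-based neighborhood formation, this only needs each peer's locally stored trust statements, which is what makes decentralized deployment plausible.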
A Necessary Condition for Semantic Interoperability in the Large

With new standards like RDF or OWL paving the way for the much-anticipated Semantic Web, a new breed of large-scale semantic systems is about to appear. Even if research on semantic reconciliation methods is abundant, it is not clear how interoperable very large-scale semantic systems can be. This paper represents a first effort toward formally analyzing semantic interoperability in the large: by adapting a recent graph-theoretic framework, we examine the dynamics of large-scale semantic systems and derive a necessary condition for fostering global semantic interoperability.

Philippe Cudré-Mauroux, Karl Aberer

Information Mining

Mining the Meaningful Compound Terms from Materialized Faceted Taxonomies

A materialized faceted taxonomy is an information source where the objects of interest are indexed according to a faceted taxonomy. This paper shows how, from a materialized faceted taxonomy, we can mine an expression of the Compound Term Composition Algebra that specifies exactly those compound terms that have non-empty interpretation. The mined expressions can be used for compactly encoding (and subsequently reusing) the domain knowledge that is stored in existing materialized faceted taxonomies. Furthermore, expression mining is crucial for reorganizing taxonomy-based sources which were not initially designed according to a clear faceted approach (like the directories of Google and Yahoo!), so as to give them a semantically clear and compact faceted structure. We analyze this problem and give an analytical description of all algorithms needed for expression mining.

Yannis Tzitzikas, Anastasia Analyti
Heuristic Strategies for Inclusion Dependency Discovery

Inclusion dependencies (INDs) between databases are assertions of subset relationships between sets of attributes (dimensions) in two relations. Such dependencies are useful for a number of purposes related to information integration, such as database similarity discovery and foreign key discovery. An exhaustive approach to discovering INDs between two relations suffers from the dimensionality curse, since the number of potential mappings of size k between the attributes of two relations is exponential in k. Levelwise (Apriori-like) approaches to discovery do not scale for this reason beyond a k of 8 to 10. Approaches modeling the similarity space as a hypergraph (with the hyperedges of the graph representing sets of related attributes) are promising, but also do not scale very well. This paper discusses approaches to scaling discovery algorithms for INDs. The major obstacle to scalability is the exponentially growing size of the data structure representing potential INDs. Therefore, the focus of our solution is on heuristic techniques that reduce the number of IND candidates considered by the algorithm. Despite the use of heuristics, the accuracy of the results is good for real-world data. Experiments are presented assessing the quality of the discovery results versus the runtime savings. We conclude that the heuristic approach is useful and improves scalability significantly. It is particularly applicable for relations that have attributes with few distinct values.

Andreas Koeller, Elke A. Rundensteiner
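The building block that levelwise IND-discovery algorithms combine into larger candidates is the unary inclusion check, which can be sketched in a few lines of Python. The relations and attribute names below are invented for illustration; the paper's contribution lies in heuristically pruning the exponential candidate space, not in this basic test:

```python
# Naive unary inclusion-dependency check: R[A] ⊆ S[B] holds iff every value
# of attribute A in relation R also appears as a value of attribute B in S.

def satisfies_ind(lhs_rows, lhs_attr, rhs_rows, rhs_attr):
    """True iff lhs_rows[lhs_attr] ⊆ rhs_rows[rhs_attr] as value sets."""
    rhs_values = {row[rhs_attr] for row in rhs_rows}
    return all(row[lhs_attr] in rhs_values for row in lhs_rows)

orders = [{"cust_id": 1}, {"cust_id": 2}]
customers = [{"id": 1}, {"id": 2}, {"id": 3}]

# orders.cust_id ⊆ customers.id suggests a foreign key candidate.
print(satisfies_ind(orders, "cust_id", customers, "id"))  # True
```

For k-ary INDs the same test runs over tuples of attributes, and the number of candidate attribute mappings grows exponentially in k, which is exactly why the heuristic pruning discussed in the paper is needed.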
Taming the Unstructured: Creating Structured Content from Partially Labeled Schematic Text Sequences

Numerous data sources such as classified ads in online newspapers, electronic product catalogs and postal addresses are rife with unstructured text content. Typically such content is characterized by attribute value sequences having a common schema. In addition, each sequence is unstructured free text without any separators between the attribute values. Hidden Markov Models (HMMs) have been used for creating structured content from such text sequences by identifying and extracting attribute values occurring in them. Extant approaches to creating "structured content from text sequences" based on HMMs use either completely labeled or completely unlabeled training data. The HMMs resulting from these two dominant approaches present contrasting trade-offs with respect to labeling effort and the recall/precision of the extracted attribute values. In this paper we propose an HMM-based algorithm that uses partially labeled training data for creating structured content from text sequences. By exploiting the observation that partially labeled sequences give rise to independent subsequences, we compose the HMMs corresponding to these subsequences to create structured content from the complete sequence. An interesting aspect of our approach is that it gives rise to a family of HMMs spanning the trade-off spectrum. We present experimental evidence of the effectiveness of our algorithm on real-life data sets and demonstrate that it is indeed possible to bootstrap structured content creation from schematic text data sources using HMMs that require limited labeling effort, without compromising on the recall/precision performance metrics.

Saikat Mukherjee, I. V. Ramakrishnan

Querying

A Global-to-Local Rewriting Querying Mechanism Using Semantic Mapping for XML Schema Integration

We have proposed a methodology for integrating local XML Schemas in [12]. An integrated global schema forms the basis for querying a set of local XML documents. In this paper, we discuss various strategies for rewriting a global query over the global schema into sub-queries over the local schemas. The sub-queries are validated by their respective local schemas and evaluated over the local XML documents. This requires the identification and use of mapping rules and relationships between the local schemas.

Kalpdrum Passi, Eric Chaudhry, Sanjay Madria, Sourav Bhowmick
Querying Articulated Sources

In this study we address the problem of answering queries over information sources storing objects which are indexed by terms arranged in a taxonomy. We examine query languages of different expressivity and sources with different kinds of taxonomies. In the simplest kind, the taxonomy includes just term-to-term subsumption links. This case is used as a basis for further developments, in which we consider taxonomies consisting of term-to-query links. An algorithm for query evaluation is presented for this kind of taxonomy, and it is shown that the addition of negation to the query language leads to intractability. Finally, query-to-query taxonomies are considered.

Carlo Meghini, Yannis Tzitzikas
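For the simplest case above (term-to-term subsumption links), query answering can be sketched as computing the transitive closure of subsumption and collecting the objects indexed under any subsumed term. The taxonomy and index below are invented examples, and the term-to-query and query-to-query cases of the paper need considerably more machinery:

```python
# Query answering over a term-to-term taxonomy: an object indexed under a
# narrow term also answers queries for every broader term.

def narrower(taxonomy, term):
    """All terms subsumed by `term` (reflexive-transitive closure).

    taxonomy: {broader_term: [narrower_term, ...]}
    """
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(taxonomy.get(t, []))
    return seen

def answer(index, taxonomy, term):
    """Objects indexed under `term` or any term it subsumes."""
    return {obj for t in narrower(taxonomy, term)
            for obj in index.get(t, set())}

taxonomy = {"vehicle": ["car", "bike"], "car": ["suv"]}
index = {"suv": {"o1"}, "bike": {"o2"}, "boat": {"o3"}}
print(sorted(answer(index, taxonomy, "vehicle")))  # ['o1', 'o2']
```

Conjunctive queries then reduce to intersecting such answer sets per query term; it is the addition of negation that, per the paper, makes evaluation intractable.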
Learning Classifiers from Semantically Heterogeneous Data

Semantically heterogeneous and distributed data sources are quite common in several application domains such as bioinformatics and security informatics. In such a setting, each data source has an associated ontology. Different users or applications need to be able to query such data sources for statistics of interest (e.g., statistics needed to learn a predictive model from data). Because no single ontology meets the needs of all applications or users in every context, or for that matter, even a single user in different contexts, there is a need for principled approaches to acquiring statistics from semantically heterogeneous data. In this paper, we introduce ontology-extended data sources and define a user perspective consisting of an ontology and a set of interoperation constraints between data source ontologies and the user ontology. We show how these constraints can be used to derive mappings from source ontologies to the user ontology. We observe that most of the learning algorithms use only certain statistics computed from data in the process of generating the hypothesis that they output. We show how the ontology mappings can be used to answer statistical queries needed by algorithms for learning classifiers from data viewed from a certain user perspective.

Doina Caragea, Jyotishman Pathak, Vasant G. Honavar

Ontology Processing

A General Method for Pruning OWL Ontologies

In the past, most ontologies have been developed essentially from scratch, but in the last decade several research projects have appeared that use large ontologies to create new ontologies in a semiautomatic (or assisted) way. When using a large ontology to create a more specific one, a key aspect is to delete, as automatically as possible, the elements of the large ontology that are irrelevant for the specific domain. This activity is commonly performed by a pruning method. There are several approaches for pruning ontologies, and they differ in the kind of ontology that they prune and the way the relevant concepts are selected and identified. This paper adapts an existing pruning method to OWL ontologies, and extends it to deal with the instances of the ontology to prune. Furthermore, different ways of selecting relevant concepts are studied. The method has been implemented. We illustrate the method by applying it to a case study that prunes a spatial ontology based on the Cyc ontology.

Jordi Conesa, Antoni Olivé
Finding Compromises Between Local and Global Ontology Querying in Multiagent Systems

As ontological knowledge becomes more and more important in agent-based systems, its handling becomes crucial for successful applications. In the context of agent-based applications, we propose a hybrid approach, in which part of the ontology is handled locally, using a "client component", and the rest of the ontological knowledge is handled by an "ontology agent", which is accessed by the other agents in the system through their client component. In this sort of "caching" scheme, the most frequent ontological queries tend to remain stored locally. We propose specific methods for representing, storing, querying and translating ontologies for effective use in the context of the "JITIK" system, which is a multiagent system for knowledge and information distribution. We also report a working prototype implementing our proposal, and discuss some performance figures.

Hector Ceballos, Ramon Brena
Aligning Ontologies and Evaluating Concept Similarities

An innate characteristic of the development of ontologies is that they are often created by independent groups of experts, which generates the necessity of merging and aligning ontologies covering overlapping domains. However, a central issue in the merging process is the evaluation of the differences between two ontologies, viz. the establishment of a similarity measure between their concepts. Many algorithms and tools have been proposed for merging ontologies, but the majority of them disregard the structural properties of the source ontologies, focusing mostly on syntactic analysis. This article focuses on the alignment of ontologies through Formal Concept Analysis, a data analysis technique founded on lattice theory, and on the use of similarity measures to identify cross-ontology related concepts.

Kleber Xavier Sampaio de Souza, Joseph Davis
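In the spirit of lattice-based alignment, concepts from two ontologies can be compared by the attribute sets (intents) they share rather than by their names. The Jaccard coefficient and the example concepts below are illustrative choices of mine, not the measure defined in the paper:

```python
# Toy cross-ontology concept similarity: compare concepts by the overlap of
# the attributes they are annotated with, as in FCA-style intents.

def jaccard(a, b):
    """Jaccard coefficient |a ∩ b| / |a ∪ b| over two attribute sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Intents of one concept from each source ontology (invented examples):
concept_a = {"has_wheels", "motorized", "carries_people"}  # "automobile"
concept_b = {"has_wheels", "motorized", "carries_cargo"}   # "truck"
print(round(jaccard(concept_a, concept_b), 2))  # 0.5
```

Comparing intents instead of labels is what lets an alignment procedure relate concepts that are named differently across ontologies but describe overlapping sets of properties.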

Multimedia

EMMA – A Query Algebra for Enhanced Multimedia Meta Objects

Enhanced Multimedia Meta Objects (EMMOs) are a novel approach to multimedia content modeling, combining media, semantic relationships between those media, as well as functionality on the media, such as rendering, into tradeable knowledge-enriched units of multimedia content. For the processing of EMMOs and the knowledge they contain, suitable querying facilities are required. In this paper, we present EMMA, an expressive query algebra that is adequate and complete with regard to the EMMO model. EMMA offers a rich set of formally-defined, orthogonal query operators that give access to all aspects of EMMOs, enable query optimization, and allow the representation of elementary ontology knowledge within queries. Thereby, EMMA provides a sound and adequate foundation for the realization of powerful EMMO querying facilities.

Sonja Zillner, Utz Westermann, Werner Winiwarter
Ontology for Nature-Scene Image Retrieval

This paper presents a framework for building an ontology to provide semantic interpretations of image contents. The novelty of this framework comes from building an MPEG-7 ontology for semantic representations of multimedia contents, and from integrating such an ontology into an image retrieval system to enable fast, efficient image query and retrieval. The prototype system demonstrated the feasibility of embedding such an ontology into an image retrieval system. Its main objective has been achieved by retrieving nature-scene images using human-readable keywords. Based on the experimental results, we believe that using our 'bridging' technique, high-level non-machine-readable human concepts can be seamlessly mapped to low-level machine-processable data. This helps to improve the efficiency of our CBIR system compared to conventional methods.

Song Liu, Liang-Tien Chia, Syin Chan

Semantic Web Services

Comparing Approaches for Semantic Service Description and Matchmaking

Matching descriptions of user requirements against descriptions of service capabilities is crucial for the discovery of appropriate services for a given task. To improve the precision of approaches that consider only syntactical aspects of matchmaking (e.g. UDDI), several approaches for semantic matchmaking have been proposed. We compare two approaches with respect to their potential for matchmaking between semantic descriptions of geoinformation services. The State-based Approach uses the Web Ontology Language and the Rule Markup Language to describe inputs, outputs, preconditions and effects. In the Algebraic Approach, abstract data types are specified to capture domain knowledge; the specific data types used in a service model refer to these shared concepts. In order to make the specifications executable and to enable matchmaking, a functional programming language (Haskell) is used in this approach. For a scenario from the domain of disaster management, both approaches are tested for one specific type of match.

Sven Schade, Arnd Sahlmann, Michael Lutz, Florian Probst, Werner Kuhn
On Managing Changes in the Ontology-Based E-government

The increasing complexity of E-Government services demands a correspondingly larger management effort. Today, many system management tasks are often performed manually. This can be time-consuming and error-prone. Moreover, it requires a growing number of highly skilled personnel, making E-Government systems costly. In this paper, we show how the usage of semantic technologies for describing E-Government services can improve the management of changes. We have extended our previous work on ontology evolution in order to take into account the specificities of ontologies that are used for the description of E-Government services. Even though we use the E-Government domain as an example, the approach is general enough to be applied in other domains.

Ljiljana Stojanovic, Andreas Abecker, Nenad Stojanovic, Rudi Studer

XML Processing

CLP(Flex): Constraint Logic Programming Applied to XML Processing

In this paper we present an implementation of a constraint solving module, CLP(Flex), for dealing with unification in an equality theory for terms with flexible arity function symbols. Then we present an application of CLP(Flex) to XML-processing where XML documents are abstracted by terms with flexible arity symbols. This gives a highly declarative model for XML processing yielding a substantial degree of flexibility in programming.

Jorge Coelho, Mário Florido
VSM: Mapping XML Document to Relations with Constraint

In this paper, a new efficient approach named virtual schema mapping (VSM) is presented. It is a formalized and automated approach to mapping XML documents into relations. With this approach, the functional dependencies and constraints in the relational schemata are preserved, and these schemata satisfy 3NF or BCNF at the same time. Finally, comprehensive experiments are conducted to assess all the techniques in question.

Zhongming Han, Shoujian Yu, Jiajin Le

Distributed Objects and Applications (DOA) 2004 International Conference

DOA 2004 International Conference (Distributed Objects and Applications) PC Co-Chairs’ Message

Distributed objects and applications have been an important element of research and industrial computing for over 20 years. Early research on RPC systems, asynchronous messaging, specialized distributed programming languages, and component architectures led to the industrial-strength distributed objects platforms such as CORBA, DCOM, and J2EE that became commonplace over the past decade. Continued research and evolution in these areas, along with the explosive growth of the Internet and World Wide Web, have now carried us into areas such as peer-to-peer computing, mobile applications, model-driven architecture, distributed real-time and embedded systems, grid computing, and web services. Distributed objects are not only today’s workhorse for mission-critical high-performance enterprise computing systems, but they also continue to serve as a research springboard into new areas of innovation.

Vinny Cahill, Steve Vinoski, Werner Vogels

Keynote

Cooperative Artefacts

Cooperative artefacts are physical objects, commonly associated with purposes other than computing, but instrumented with embedded computing, wireless communication, and sensors and actuators. Thus augmented, physical objects can monitor their state, share observations with other artefacts, and collectively model their situation and react to changes in the world. This enables software processes to be tightly coupled with physical activity, and to be embedded “where the action is”. This talk will discuss a conceptual framework for cooperative artefacts, present experience with the Smart-Its hardware/software toolkit for augmentation of artefacts, and consider the specific challenge of embedding spatial awareness in common artefacts.

Hans Gellersen

Performance

Performance Evaluation of JXTA Rendezvous

Project JXTA is the first peer-to-peer application development infrastructure, consisting of standard protocols and multi-language implementations. A JXTA peer network is a complex overlay, constructed on top of the physical network, with its own identification and routing scheme. JXTA networks depend on the performance of rendezvous peers, whose main role is to facilitate search and discovery of the peer group resources. This paper presents an evaluation of the performance and scalability properties of JXTA rendezvous peers and the JXTA Rendezvous Network. The rendezvous peer performance is analyzed with respect to the peer group size, query rate, advertisement cache size, peer group structure and other variables. The results indicate significant improvements in the current rendezvous implementation and provide insight into the relationship between different search models in JXTA. This study identifies the performance and scalability issues of the JXTA rendezvous and reveals the effects of deploying multiple rendezvous in peer groups using distributed search models.

Emir Halepovic, Ralph Deters, Bernard Traversat
CORBA Components Collocation Optimization Enhanced with Local ORB-Like Services Support

Some current implementations of the CORBA Component Model (CCM) suffer from unreasonable communication overhead when components are in the same address space and in the same container. Alternative approaches have introduced mechanisms for direct local communication of such components. In these approaches, collocated components do not communicate through the ORB and therefore cannot use ORB services such as events, naming and transactions locally, unless these are programmed explicitly. This paper presents a new approach for communication of collocated components with local ORB-like services support. A unit inside each container is responsible for handling communication between components within or outside the container. Local requests are passed to the local components without ORB involvement. Whether a request is local or remote is determined from the IOR of the called component, which has been logged by the relevant special unit upon the creation of the component in its container. Implementation results of our approach show a considerable reduction of local communication overheads.

Mohsen Sharifi, Adel Torkaman Rahmani, Vahid Rafe, Hossein Momeni
Late Demarshalling: A Technique for Efficient Multi-language Middleware for Embedded Systems

A major goal of middleware is to allow seamless software integration across programming languages. CORBA, for example, supports multiple languages by specifying communication standards and language-specific bindings. Although this approach works well for desktop systems, it is problematic for embedded systems, where strict memory limits discourage multiple middleware implementations. A common memory-efficient alternative allows sharing of middleware by exposing functionality through language-specific wrappers; for instance, middleware may be implemented in C++ but exposed to Java through the Java Native Interface (JNI). Like most language wrappers, however, JNI degrades performance, especially with aggregate data types. We present "late demarshalling": a fast, memory-efficient technique for multi-language middleware. By passing arguments at the middleware message level as a packed stream and unpacking them after crossing the language boundary, we obtain both high performance and reduced memory footprint. We provide a proof-of-concept implementation for Java and C++ with measurements showing improved performance and footprint.

Gunar Schirner, Trevor Harmon, Raymond Klefstad
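The core idea of passing arguments as one packed stream and unpacking them only after the boundary crossing can be sketched with Python's standard `struct` module. The message layout and field names are invented; the paper's actual implementation targets Java/C++ via JNI:

```python
# Illustrative "late demarshalling": instead of converting each argument
# individually through a wrapper layer, marshal all arguments into one
# contiguous byte buffer, cross the language boundary once, and unpack
# ("demarshal") on the far side.
import struct

# Assumed message layout: network byte order, one uint32 request id
# followed by two float32 arguments.
FMT = "!Iff"

def marshal(request_id, x, y):
    """Caller side: pack all arguments into a single buffer."""
    return struct.pack(FMT, request_id, x, y)

def demarshal(buf):
    """Callee side: unpack only after the (single) boundary crossing."""
    return struct.unpack(FMT, buf)

buf = marshal(7, 1.5, 2.5)       # one cheap hand-off instead of per-field calls
print(demarshal(buf))            # (7, 1.5, 2.5)
```

The payoff in the JNI setting is that only one opaque buffer crosses the costly language boundary, rather than one wrapper call per field of an aggregate type.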

Quality of Service

Implementing QoS Aware Component-Based Applications

By QoS (Quality of Service), we often refer to a set of quality requirements on the collective behavior of one or more objects. These requirements enable the provision of better service to certain data flows: to tune the "parameters" that support the quality requirements, the developer can either increase the priority of one data flow or limit the priority of another. Nowadays, the use of contracts for software components is a novel way of guaranteeing their quality. It is a rather cumbersome task to design components that comply with contracts, because different problem dimensions or quality aspects have to be addressed at the same time. In this paper, we employ a simple methodology to show how we can design and develop QoS components based on Aspect-Oriented Programming and Model Transformation. We use a Tele-Medicine framework to show how we can embed a set of QoS contracts into the final product. We implement two such contracts that support QoS in communication and teleconferencing. We describe all the steps of the analysis, design and implementation in order to demonstrate the advantages of using this novel way of weaving quality contracts into QoS applications.

Avraam Chimaris, George A. Papadopoulos
A Framework for QoS-Aware Model Transformation, Using a Pattern-Based Approach

A current trend in software engineering is the changing of software development from being code-centric to becoming model-centric. This entails many challenges. Traceability between models at different abstraction levels must be managed. Mechanisms for model transformation and code generation must be in place, and these must be able to produce the desired results in terms of derived models and code. A main consideration in this respect is obviously to produce something that provides the expected functionality; another key aspect is to deliver models and code that specify systems that will adhere to the required quality of the provided services. Thus, specification and consideration of quality of service (QoS) when deriving system models are significant. In this paper we describe an approach where QoS aspects are considered when performing model transformations. The approach is pattern-based and uses UML 2.0 [1] as the basis for modeling. For specification of QoS, the current submission of the UML profile for modeling QoS [2] is used as the baseline. The transformation specification is aligned with currently available results from the ongoing standardization process of MOF QVT [3][4]. The framework provides mechanisms and techniques for considering QoS throughout a model-driven development process. A key proposal of the approach is to gradually resolve QoS requirements when performing model transformations. The paper also describes a QoS-aware execution platform for resolving QoS requirements at run-time.

Arnor Solberg, Jon Oldevik, Jan Øyvind Aagedal
Component-Based Dynamic QoS Adaptations in Distributed Real-Time and Embedded Systems

Large-scale distributed real-time and embedded (DRE) applications are complex entities that are often composed of different subsystems and have stringent Quality of Service (QoS) requirements. These subsystems are often developed separately by different developers, increasingly using commercial off-the-shelf (COTS) middleware. Subsequently, these subsystems need to be integrated, configured to communicate with each other, and distributed. However, there is currently no standard way of supporting these requirements in existing COTS middleware. While recently emerging component-based middleware provides standardized support for packaging, assembling, and deploying, there is no standard way to provision the QoS required by DRE applications. We have previously introduced a QoS encapsulation model, qoskets, as part of our QuO middleware framework that can dynamically adapt to resource constraints. In this paper we introduce the implementation of these QoS behaviors as components that can be assembled with other application components. The task of ensuring QoS then becomes an assembly issue. To do so, we have componentized our QuO technology instead of integrating QuO into the middleware as a service. To date, we have demonstrated our approach to QoS provisioning in MICO, CIAO, and Boeing's Prism component middleware. We present experimental results to evaluate the overhead incurred by these QoS provisioning components in the context of CIAO CCM. We use a simulated Unmanned Aerial Vehicle (UAV) application as an illustrative DRE application for the demonstration of QoS adaptations using qosket components.

Praveen K. Sharma, Joseph P. Loyall, George T. Heineman, Richard E. Schantz, Richard Shapiro, Gary Duzan

Adaptation

Dynamic Adaptation of Data Distribution Policies in a Shared Data Space System

Increasing demands for interconnectivity, adaptivity and flexibility are leading to distributed component-based systems (DCBS) where components may dynamically join and leave a system at run-time. Our research is aimed at the development of an architecture for DCBS middleware such that the extra-functional properties of resulting systems can be easily tailored to different requirements. To this end, we proposed an architecture based on the shared data space paradigm. This architecture provides a suite of distribution strategies [16] supporting different application usage patterns. We showed that using different distribution strategies for different usage patterns improved overall performance [17]. As is the case with other middleware for DCBS, the configuration of the selected distribution policies was fixed before run-time. Consequently, these systems cannot adapt to changes in usage patterns that may be due to the joining or leaving of components in the system. In this paper, we propose a mechanism for the dynamic adaptation of distribution policies to the evolving behaviour of applications. This architecture improves over existing architectures for distributed shared data spaces by providing a mechanism for self-management. We experimentally demonstrate the benefits that may be gained by dynamic adaptation of distribution policies.

Giovanni Russello, Michel Chaudron, Maarten van Steen
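The core mechanism the abstract describes (swapping a data space's distribution policy at run time in response to observed usage) can be sketched as follows. All names here are illustrative assumptions, not the paper's actual API, and the paper's policy suite is far richer than the two policies shown.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of run-time switching of distribution policies in a
// shared data space (illustrative names, not the paper's API).
public class AdaptiveDataSpace {
    public enum Policy {
        KEEP_LOCAL(false),   // suited to mostly-local usage patterns
        REPLICATE(true);     // suited to read-heavy remote usage patterns
        final boolean replicateOnWrite;
        Policy(boolean r) { this.replicateOnWrite = r; }
    }

    private Policy policy;
    private final Map<String, Object> local = new HashMap<>();
    private int replicatedWrites = 0;

    public AdaptiveDataSpace(Policy initial) { this.policy = initial; }

    // Self-management hook: invoked when monitored usage patterns change.
    public void adaptPolicy(Policy next) { this.policy = next; }

    public void write(String key, Object value) {
        local.put(key, value);
        if (policy.replicateOnWrite) {
            replicatedWrites++;   // a real system would push to peer nodes here
        }
    }

    public Object read(String key) { return local.get(key); }
    public int replicatedWrites() { return replicatedWrites; }
}
```

The point of the sketch is that writes issued after `adaptPolicy` follow the new strategy without any reconfiguration of the components using the space.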
TRAP/J: Transparent Generation of Adaptable Java Programs

This paper describes TRAP/J, a software tool that enables new adaptable behavior to be added to existing Java applications transparently (that is, without modifying the application source code and without extending the JVM). The generation process combines behavioral reflection and aspect-oriented programming to achieve this goal. Specifically, TRAP/J enables the developer to select, at compile time, a subset of classes in the existing program that are to be adaptable at run time. TRAP/J then generates specific aspects and reflective classes associated with the selected classes, producing an adapt-ready program. As the program executes, new behavior can be introduced via interfaces to the adaptable classes. A case study is presented in which TRAP/J is used to introduce adaptive behavior to an existing audio-streaming application, enabling it to operate effectively in a lossy wireless network by detecting and responding to changing network conditions.

S. Masoud Sadjadi, Philip K. McKinley, Betty H. C. Cheng, R. E. Kurt Stirewalt
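The indirection that TRAP/J generates automatically can be illustrated by hand: the adapt-ready class delegates through an interface, so new behavior can be introduced while the program runs. This is a sketch of the general idea only; the names are hypothetical and TRAP/J's actual generated aspects and reflective classes differ.

```java
// Hand-written sketch of an "adapt-ready" class: behavior is reached
// through an interface so it can be replaced at run time.
public class AdaptReadyStreamer {
    public interface SendBehavior { String send(String packet); }

    // Default behavior compiled into the original application.
    private SendBehavior behavior = packet -> "plain:" + packet;

    // Interface through which new behavior is introduced at run time,
    // e.g. adding forward error correction on a lossy wireless link.
    public void replaceBehavior(SendBehavior b) { this.behavior = b; }

    public String send(String packet) { return behavior.send(packet); }
}
```

In the paper's case study, the analogous swap replaces plain audio transmission with a loss-tolerant variant when network conditions degrade.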
Application Adaptation Through Transparent and Portable Object Mobility in Java

This paper describes MobJeX, an adaptive Java based application framework that uses a combination of pre-processing and runtime support to provide transparent object mobility (including AWT and Swing user interface components) between workstations, PDAs and smartphones. Emphasis is placed on the mobility subsystem (MS), a mobile object transport mechanism providing a high level of transparency and portability from the perspective of the system and the developer. The MS is compared to its most similar predecessor FarGo, demonstrating the advantages of the MS in terms of transparency and portability. Furthermore, a series of laboratory tests are conducted in order to quantify the runtime performance of the MS and two other systems, FarGo and Voyager.

Caspar Ryan, Christopher Westhorpe
An Infrastructure for Development of Dynamically Adaptable Distributed Components

Dynamic adaptation has become an essential feature in distributed applications, mainly because current technology enables complex tasks to be performed by computers in application domains unsuited for service interruption. This paper presents an infrastructure that uses an interpreted language to provide simple but powerful features that enable coarse and fine-grained adaptations in component-based systems, using the CORBA Component Model (CCM) as a basis. To extend the static nature of CCM, we propose dynamic containers, which enable development of dynamically adaptable components that admit changes to component structure and implementation. The extended set of mechanisms for component manipulation can be used to create adaptation abstractions that simplify the programmer’s task. In this paper, we present a tool that provides support for the protocols and roles abstractions, which allow programmers to adapt running applications, establishing new interactions among their components.

Renato Maia, Renato Cerqueira, Noemi Rodriguez

Mobility

satin: A Component Model for Mobile Self Organisation

We have recently witnessed a growing interest in self organising systems, both in research and in practice. These systems re-organise in response to new or changing conditions in the environment. The need for self organisation is often found in mobile applications; these applications are typically hosted in resource-constrained environments and may have to dynamically reorganise in response to changes in user needs, to heterogeneity and connectivity challenges, and to changes in the execution context and physical environment. We argue that physically mobile applications benefit from the use of self organisation primitives. We show that a component model that incorporates code mobility primitives assists in building self organising mobile systems. We present satin, a lightweight component model, which represents a mobile system as a set of interoperable local components. The model supports reconfiguration by offering code migration services. We discuss an implementation of the satin middleware, based on the component model, and evaluate our work by adapting existing open source software as satin components and by building and testing a system that manages the dynamic update of components on mobile hosts.

Stefanos Zachariadis, Cecilia Mascolo, Wolfgang Emmerich
Caching Components for Disconnection Management in Mobile Environments

With the evolution of wireless communications, mobile hand-held devices such as personal digital assistants and mobile phones are becoming an alternative to classical wired computing. However, mobile computers suffer from several limitations such as their display size, CPU speed, memory size, battery power, and wireless link bandwidth. In addition, service continuity in mobile environments raises the problem of data availability during disconnections. In this paper, we present an efficient cache management approach for component-based services. We illustrate our ideas by designing and implementing a cache management service for CORBA components on the DOMINT platform. We propose deployment and replacement policies based on several meta-data of application components. A novel aspect is the service-oriented approach: a service is seen as a logical composition of components cooperating to perform one functionality of the application. Dependencies between services and between components are modelled in a hierarchical dependency graph.

Nabil Kouici, Denis Conan, Guy Bernard
SPREE: Object Prefetching for Mobile Computers

Mobile platforms combined with large databases promise new opportunities for mobile applications. However, mobile computing devices may experience frequent communication loss while in the field. In order to support database applications, mobile platforms are required to cache portions of the available data, which can speed access over slow communication channels and mitigate communication disruptions. We present a new prefetching technique for databases in mobile environments based on program analysis. SPREE generates maps of a client program’s use of structured data to be used by our prefetching runtime system. We apply SPREE in the context of mobile programming for object-structured databases, demonstrating an effective way to prefetch/hoard over unreliable networks, with speedups of up to 80% over other techniques.

Kristian Kvilekval, Ambuj Singh
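The general idea of analysis-driven prefetching can be sketched with a map from an object to the objects its code is likely to reach; when a root is fetched, the transitive closure over the map is requested in the same round trip. The class, method names, and map contents below are illustrative assumptions, not SPREE's actual representation.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of prefetching driven by a statically derived access map: when a
// root object is fetched, the objects the program is likely to reach from
// it are scheduled for the same round trip (illustrative names only).
public class Prefetcher {
    private final Map<String, List<String>> accessMap;

    public Prefetcher(Map<String, List<String>> accessMap) {
        this.accessMap = accessMap;
    }

    // Transitive closure of likely accesses starting from 'root'.
    public Set<String> prefetchSet(String root) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(root);
        while (!work.isEmpty()) {
            String current = work.pop();
            if (!result.add(current)) continue;   // already scheduled
            for (String next : accessMap.getOrDefault(current, List.of())) {
                work.push(next);
            }
        }
        return result;
    }
}
```

Batching the whole set into one request is what mitigates slow or disrupted links: one round trip replaces a chain of dependent fetches.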
Class Splitting as a Method to Reduce Migration Overhead of Mobile Agents

Mobile agents were introduced as a new design paradigm for distributed systems to reduce network traffic compared to client-server based approaches, simply by moving code close to the data instead of moving large amounts of data to the client. Although this thesis has been proved in many application scenarios, it was also shown that the performance of mobile agents suffers from overly simple migration strategies in many other scenarios. This has led to the development of a new migration protocol, named Kalong, which provides fine-grained transmission of code and data instead of viewing a mobile agent as a single transmission unit. In this paper we report on the first results of applying Kalong to improve the performance of mobile agents by splitting their code. These results show that this technique can significantly reduce the number of bytes which have to be transferred.

Steffen Kern, Peter Braun, Christian Fensch, Wilhelm Rossak

Replication

Eager Replication for Stateful J2EE Servers

Replication has been widely used in J2EE servers for reliability and scalability. There are two properties which are important for a stateful J2EE application server. Firstly, the state of the server and the state of the backend databases should always be consistent. Secondly, each request from a client should be executed exactly once. In this paper, we propose a replication algorithm that provides both properties. We use passive replication where a primary server executes a request, and all state changed within the application server by this request is sent to the backup replicas at the end of the execution. An agreement protocol guarantees the consistency between the state of all replicas and the database. A client side communication stub automatically resubmits requests in case of failures, and unnecessary resubmissions are detected by the server replicas. We have implemented the algorithm and integrated it into the JBoss application server. A performance study using the ECPerf benchmark shows the feasibility of our approach.

Huaigu Wu, Bettina Kemme, Vance Maverick
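The exactly-once pattern the abstract describes (a client stub that resubmits on failure, and replicas that detect and absorb the duplicates) is generic enough to sketch. The class and method names below are hypothetical; the paper's actual protocol additionally coordinates backups and the database.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the generic exactly-once pattern: the client stub retries with
// a stable request id, and the replica detects the duplicate and returns
// the cached reply instead of re-executing (illustrative names only).
public class ExactlyOnce {
    public static class Replica {
        private final Map<String, String> replies = new HashMap<>();
        private int executions = 0;

        public String invoke(String requestId, String payload) {
            String cached = replies.get(requestId);
            if (cached != null) return cached;   // duplicate resubmission
            executions++;                        // the real work happens once
            String reply = "done:" + payload;
            replies.put(requestId, reply);
            return reply;
        }

        public int executions() { return executions; }
    }

    public static class ClientStub {
        private final Replica replica;
        private int seq = 0;

        public ClientStub(Replica r) { this.replica = r; }

        public String call(String payload) {
            String id = "req-" + (seq++);
            replica.invoke(id, payload);         // reply lost in a failure...
            return replica.invoke(id, payload);  // ...so the stub resubmits
        }
    }
}
```

The stable request id is what turns a blind retry into an exactly-once invocation: the replica's reply cache makes the second `invoke` idempotent.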
Active Replication in CORBA: Standards, Protocols, and Implementation Framework

This paper presents a proposal for integrating in a single CORBA middleware platform two important OMG specifications: FT-CORBA, which provides fault-tolerance support for CORBA objects, and UMIOP, an unreliable multicast protocol devised for CORBA middleware. The integration model defines middleware support which supplies the basis for a large spectrum of group communication properties for distributed objects. Our propositions create a framework for active replication that is implemented using only OMG standards. Algorithms for the reliable and atomic multicasts needed for this replication technique are presented using theoretical concepts to express their main features. Finally, our FT-CORBA and UMIOP integration is compared to related experiences described in the literature and to other active replication protocols.

Alysson Neves Bessani, Joni da Silva Fraga, Lau Cheuk Lung, Eduardo Adílio Pelinson Alchieri
A Framework for Prototyping J2EE Replication Algorithms

In application server systems, such as J2EE, replication is an essential strategy for reliability and efficiency. Many J2EE implementations, both commercial and open-source, provide some replication support. However, the range of possible strategies is wide, and the choice of the best one, depending on the expected application profile, remains an open research question. To support research in this area, we introduce a framework for prototyping J2EE replication algorithms. In effect, it divides replication code into two layers: the framework itself, which is common to all replication algorithms, and a specific replication algorithm, which is “plugged in” to the framework. The division is defined by an API. The framework simplifies development in two ways. First, it keeps much of the complexity of modifying a J2EE implementation within the framework layer, which is implemented only once. Second, through the API, the replication algorithm sees a highly abstracted view of the components in the server. This frees the designer to concentrate on the important issues that are specific to a replication algorithm, such as communication. We have implemented the framework by extending an open-source J2EE server. Compared to an unmodified server, the framework adds a performance cost of about 22%. Thus, it is quite practical for the initial development and evaluation of replication algorithms. Several algorithms have already been implemented within the framework.

Özalp Babaoğlu, Alberto Bartoli, Vance Maverick, Simon Patarin, Jakša Vučković, Huaigu Wu

Scalability

A Distributed and Parallel Component Architecture for Stream-Oriented Applications

This paper introduces ThreadMill – a distributed and parallel component architecture for applications that process large volumes of streamed (time-sequenced) data, as is the case, e.g., in speech and gesture recognition applications. Many stream-oriented applications offer ample opportunity for enhanced performance via concurrent execution, exploiting a wide variety of parallel paradigms, such as task, data and pipeline parallelism. ThreadMill addresses the challenges of development and evolution of parallel and distributed applications in this domain by offering a modeling formalism, a programming framework and a runtime infrastructure. Component development and reuse, and application evolution, are facilitated by the isolation of communication, concurrency, and synchronization concerns promoted by ThreadMill. A direct consequence of the novel mechanisms introduced by ThreadMill is that applications composed of reusable components can be re-targeted, unchanged, and made to run efficiently on a variety of execution environments, ranging, e.g., from a single machine with a single processor, to a cluster of heterogeneous computational nodes, to certain classes of supercomputers. Experimental results show an eight-fold speedup when using ten nodes of an AlphaServer DS20 cluster running a proof-of-concept 2D video-based tracker for the hands and face of American Sign Language signers.

P. Barthelmess, C. A. Ellis
An Architecture for Dynamic Scalable Self-Managed Persistent Objects

This paper presents a middleware architecture and a generic orchestrating protocol for implementing persistent object operations for large-scale dynamic systems in a self-managing manner. In particular, the proposed solution is fully distributed, allows dynamic changes in the environment, and nodes are assumed to be aware neither of the size of the system nor of its entire composition. The architecture includes two modules and three services. The modules are expected to be instantiated and executed among relatively small sets of nodes in the context of a single multi-object operation (an operation that spans multiple persistent objects) and can therefore be implemented using known classical distributed computing approaches. On the other hand, services are long-lived abstractions that may involve all nodes and should be implemented using known peer-to-peer techniques. The main contribution of the paper is to provide, for the first time, an architecture that brings together several seemingly distinct research areas, namely distributed consensus, group membership, notification services (publish/subscribe), scalable conflict detection (or locking), and scalable persistent storage. All these components are orchestrated together in order to obtain (strong) consistency on an a priori unsafe system. This paper also promotes the use of oracles as a design principle in implementing the respective components of the architecture. Specifically, each of the modules and services is further decomposed into a “benign” part and an “oracle” part, which are specified in a functional manner. This makes the principles of our proposed solution independent of specific implementations and environment assumptions (e.g., it does not depend on any specific distributed hash tables or specific network timing assumptions).
The contribution of this paper is therefore largely conceptual, as it focuses on defining the right architectural abstractions and on their orchestration, rather than on the actual mechanisms that implement each of its components.

Emmanuelle Anceaume, Roy Friedman, Maria Gradinariu, Matthieu Roy
GRIDKIT: Pluggable Overlay Networks for Grid Computing

A ‘second generation’ approach to the provision of Grid middleware is now emerging which is built on service-oriented architecture and web services standards and technologies. However, advanced Grid applications have significant demands that are not addressed by present-day web services platforms. As one prime example, current platforms do not support the rich diversity of communication ‘interaction types’ that are demanded by advanced applications (e.g. publish-subscribe, media streaming, peer-to-peer interaction). In the paper we describe the Gridkit middleware which augments the basic service-oriented architecture to address this particular deficiency. We particularly focus on the communications infrastructure support required to support multiple interaction types in a unified, principled and extensible manner—which we present in terms of the novel concept of pluggable overlay networks.

Paul Grace, Geoff Coulson, Gordon Blair, Laurent Mathy, Wai Kit Yeung, Wei Cai, David Duce, Chris Cooper

Components

Enabling Rapid Feature Deployment on Embedded Platforms with JeCOM Bridge

A new class of embedded devices is emerging that has a mixture of traditional firmware (written in C/C++) with an embedded virtual machine (e.g., Java). For these devices, the main part of the application is usually written in C/C++ for efficiency and extensible features can be added on the virtual machine (even after product shipment). These late bound features need access to the C/C++ code and may in fact replace or extend functionality that was originally deployed in ROM. This paper describes the JeCOM bridge that dramatically simplifies development and deployment of such add-on features for the embedded devices and allows the features to be added without requiring the firmware to be reburned or reflashed. After being dynamically loaded onto the device’s Java virtual machine, the JeCOM bridge facilitates transparent bi-directional communication between the Java application and the underlying firmware. Our bridging approach focuses on embedded applications development and deployment, and makes several significant advances over traditional Java Native Interface or other fixed stub/skeleton COM/CORBA/RMI approaches. In particular, we address object discovery, object lifecycle management, and memory management for parameter passing. While the paper focuses on the specific elements and experiences with an HP proprietary infrastructure, the techniques developed are applicable to a wide range of mixed language and mixed distributed object-based systems.

Jun Li, Keith Moore
Checking Asynchronously Communicating Components Using Symbolic Transition Systems

Explicit behavioural interface description languages (BIDLs, protocols) are now recognized as a mandatory feature of component languages in order to address component reuse, coordination, adaptation and verification issues. Such protocol languages often deal with synchronous communication. However, in the context of distributed systems, components communicating asynchronously through mailboxes are much more relevant. In this paper, we advocate the use of Symbolic Transition Systems as a protocol language which can also deal with this kind of communication. We then present how this generic formalism, specialized with different mailbox protocols, may be used to address verification issues related to component mailboxes.

Olivier Maréchal, Pascal Poizat, Jean-Claude Royer
Configuring Real-Time Aspects in Component Middleware

This paper makes two contributions to the study of configuring real-time aspects into quality of service (QoS)-enabled component middleware for distributed real-time and embedded (DRE) systems. First, it compares and contrasts the integration of real-time aspects into DRE systems using conventional QoS-enabled distributed object computing (DOC) middleware versus QoS-enabled component middleware. Second, it presents experiments that evaluate real-time aspects configured in The ACE ORB (TAO) versus in the Component-Integrated ACE ORB (CIAO). Our results show that QoS-enabled component middleware can offer real-time performance that is comparable to DOC middleware, while giving greater flexibility to compose and configure key DRE system aspects.

Nanbor Wang, Chris Gill, Douglas C. Schmidt, Venkita Subramonian

Events and Groups

Programming Abstractions for Content-Based Publish/Subscribe in Object-Oriented Languages

Asynchronous event-based communication facilitates loose coupling and eases the integration of autonomous, heterogeneous components into complex systems. Many middleware platforms for event-based communication follow the publish/subscribe paradigm. Despite the usefulness of such systems, their programming support is currently limited. Usually, publish/subscribe systems only exhibit low-level programming abstractions to application developers. In this paper we investigate programming abstractions for content-based publish/subscribe middleware in object-oriented languages, how they can be integrated in applications, and their implications on middleware implementation. We focus on the definition of filters and their implementation, the handling of notifications and meta-data, and programming support for composite events. We have implemented the presented approach for our content-based publish/subscribe middleware Rebeca.

Andreas Ulbrich, Gero Mühl, Torben Weis, Kurt Geihs
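The kind of typed, content-based filter abstraction argued for above can be expressed in plain Java with predicates over notification objects. This is an illustration of the idea only, not Rebeca's actual API, and it omits the distribution, meta-data, and composite-event support the paper covers.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of content-based publish/subscribe with typed filters:
// subscriptions are predicates over notification content, and a published
// notification is delivered to every subscription whose filter matches.
public class ContentBroker<T> {
    private final Map<Predicate<T>, List<T>> subscriptions = new LinkedHashMap<>();

    // Subscribe with a content filter; matching notifications are
    // delivered into the returned inbox.
    public List<T> subscribe(Predicate<T> filter) {
        List<T> inbox = new ArrayList<>();
        subscriptions.put(filter, inbox);
        return inbox;
    }

    public void publish(T notification) {
        for (Map.Entry<Predicate<T>, List<T>> e : subscriptions.entrySet()) {
            if (e.getKey().test(notification)) e.getValue().add(notification);
        }
    }
}
```

With a notification class for, say, stock quotes, a subscriber would write something like `broker.subscribe(q -> q.symbol.equals("ACME") && q.price > 10.0)` instead of assembling low-level filter expressions by hand, which is precisely the gap in programming support the abstract points to.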
A Practical Comparison Between the TAO Real-Time Event Service and the Maestro/Ensemble Group Communication System

In this paper we present the results of a practical experience in evaluating two message-passing middleware platforms for developing distributed applications, i.e. the ACE/TAO Real Time Event Channel (RTEC) and the Maestro/Ensemble group communication toolkit (M/E). In particular, we compare their functionality and performance in a simple yet meaningful deployment configuration. The functional comparison points out the different characteristics of the two systems. In particular, M/E simplifies the coding of applications with strong requirements in terms of group membership tracking and ordered message delivery guarantees, while RTEC provides users with unreliable message delivery between loosely coupled processes. The performance comparison shows that, under stress, M/E sacrifices throughput stability to enforce reliable and ordered message delivery, while RTEC offers a more stable throughput of unordered messages, sacrificing message delivery reliability under heavy load. In normal operating conditions, the two systems perform similarly.

Carlo Marchetti, Paolo Papa, Stefano Cimmino, Leonardo Querzoni, Roberto Baldoni, Emanuela Barbi
Evaluation of a Group Communication Middleware for Clustered J2EE Application Servers

Clusters have become the de facto platform for scaling J2EE application servers. Each tier of the server uses group communication to maintain consistency between replicated nodes. JGroups is the most commonly used Java middleware for group communication in open-source J2EE implementations. No evaluation has yet been made of the scalability of this middleware and its impact on application server scalability. We present an evaluation of JGroups performance and scalability in the context of clustered J2EE application servers. We evaluate the JGroups configuration used by popular software such as the Tomcat JSP server or the JBoss J2EE server. We benchmark JGroups with different network technologies, protocol stacks and cluster sizes. We show, using the JGroups default stack, that group communication performance over UDP/IP depends on the switch’s capability to handle multicast packets; Fast Ethernet can give better results than Gigabit Ethernet. We experiment with configurations using TCP/IP and show that current J2EE application server clusters of up to 16 nodes (the largest configuration we tested) can scale much better with this protocol. We attribute the superiority of TCP/IP-based group communications over UDP/IP multicast to better flow control management and better usage of the network switches available in cluster environments. Finally, we discuss architectural improvements for better modularity and resource usage of JGroups channels.

Takoua Abdellatif, Emmanuel Cecchet, Renaud Lachaize

Ubiquity and Web

A Mobile Agent Infrastructure for QoS Negotiation of Adaptive Distributed Applications

QoS-aware distributed applications, such as certain multimedia and ubiquitous computing applications, can benefit greatly from the provision of QoS guarantees from the underlying system and middleware infrastructure. They must avoid execution glitches that affect the user’s perception of the application output. Most research in QoS support for distributed systems focuses on three aspects of QoS management: admission control, resource reservation, and scheduling. However, in highly dynamic distributed environments, effective means for QoS negotiation and re-negotiation are also essential. We believe that mobile agents, due to their inherent flexibility and agility, can play an important role in this scenario, especially during the application adaptation process. We designed a mobile-agent-based infrastructure that provides services such as resource monitoring, QoS brokering, and QoS enforcement. Furthermore, our infrastructure offers a powerful mechanism for QoS negotiation. In this paper, we describe the architecture and prototype implementation of this infrastructure. First, we discuss the motivations and related work. We then present the architectural design and discuss implementation issues concerning the infrastructure prototype. Finally, we introduce a sample application called ReflectorAglet – a QoS-aware adaptive audio reflector – and present preliminary experimental results.

Roberto Speicys Cardoso, Fabio Kon
Model-Driven Dependability Analysis of Web Services

This paper focuses on the development of a principled methodology for the dependability analysis of composite Web services. The first step of the methodology involves a UML representation for the architecture specification of composite Web services. The proposed representation is built upon BPEL and introduces necessary extensions to support the second step of the methodology, which comprises the specification of properties, characterizing the failure behavior of the elements that constitute the composite Web services. The automated mapping of this extended UML model to Block Diagrams and Markov models is introduced as the third step of the methodology. A comparative analysis of the aforementioned dependability analysis techniques in terms of precision and complexity is also performed.

Apostolos Zarras, Panos Vassiliadis, Valérie Issarny
Dynamic Access Control for Ubiquitous Environments

Current ubiquitous computing environments provide many kinds of information. This information may be accessed by different users under varying conditions depending on various contexts (e.g. location). These ubiquitous computing environments also impose new requirements on security. The ability for users to access their information in a secure and transparent manner, while adapting to the changing contexts of the spaces they operate in, is highly desirable in these environments. This paper presents a domain-based approach to access control in distributed environments with mobile, distributed objects and nodes. We utilize a slightly different notion of an object’s “view”, linking its context to the state information available to it for access control purposes. In this work, we tackle the problem of hiding sensitive information in insecure environments by providing objects in the system a view of their state information, and subsequently managing this view. Combining access control requirements and multilevel security with the mobile and contextual requirements of active objects allows us to re-evaluate security considerations for mobile objects. We present a middleware-based architecture for providing access control in such an environment, along with view-sensitive mechanisms for protecting resources while both objects and hosts are mobile. We also examine some issues with delegation of rights in these environments. Performance issues in supporting these solutions are discussed, together with an initial prototype implementation and accompanying results.

Jehan Wickramasuriya, Nalini Venkatasubramanian
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE
edited by
Robert Meersman
Zahir Tari
Copyright Year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-30469-2
Print ISBN
978-3-540-23662-7
DOI
https://doi.org/10.1007/b102176