About This Book

According to François Bancilhon and Won Kim [SIGMOD Record, Vol. 19, No. 4, December 1990], object-oriented databases started in around 1983. Twenty-seven years later, this publication contains the proceedings of the Third International Conference on Object-Oriented Databases (ICOODB 2010). Two questions arise from this: why only the third, and what is of interest in the field of object-oriented databases in 2010?

The first question is easy. In the 1980s and 1990s there were a number of conferences supporting the community: the International Workshops on Persistent Object Systems started by Malcolm Atkinson and Ron Morrison, the EDBT series, and the International Workshop on Database Programming Languages. These database-oriented conferences complemented other OO conferences, including OOPSLA and ECOOP, but towards the end of the last century they dwindled in popularity and eventually died out. In 2008 the First International Conference on Object Databases was held in Berlin. In 2009 the second ICOODB conference was held at ETH in Zurich as a scientific peer-reviewed conference.

What is particular about ICOODB is that the conference series was established to address the needs of both industry and researchers who had an interest in object databases, in innovative ways to bring objects and databases together, and in alternatives/extensions to relational databases. The first conference set the mould for those to follow: a combination of theory and practice, with one day focusing on the theory of object databases and the second focusing on their practical use and implementation.




Search Computing: Challenges and Directions

Search Computing (SeCo) is a project funded by the European Research Council (ERC). It focuses on building answers to complex search queries such as “Where can I attend an interesting conference in my field close to a sunny beach?” by interacting with a constellation of cooperating search services, using ranking and joining of results as the dominant factors for service composition. SeCo started in November 2008 and will last five years. This paper gives a general introduction to the Search Computing approach and then focuses on its query optimization and execution engine, the aspect of the project most closely related to “objects and databases” technologies.
Stefano Ceri, Daniele Braga, Francesco Corcoglioniti, Michael Grossniklaus, Salvatore Vadacca
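The joining and ranking of results from cooperating services described in the abstract above can be illustrated with a minimal sketch. All service names, fields and scores here are hypothetical, and the naive nested-loop join stands in for the far more sophisticated rank-join strategies of the SeCo engine:

```python
# Minimal sketch of joining two ranked service result streams,
# in the spirit of SeCo-style service composition (hypothetical data).

def rank_join(conferences, beaches, k=2):
    """Join conference and beach results by city; rank by combined score."""
    joined = []
    for conf in conferences:
        for beach in beaches:
            if conf["city"] == beach["city"]:
                score = conf["relevance"] + beach["sunniness"]
                joined.append({"conf": conf["name"],
                               "beach": beach["name"],
                               "score": score})
    # Return the top-k combinations by descending combined score.
    return sorted(joined, key=lambda r: -r["score"])[:k]

conferences = [
    {"name": "ICOODB", "city": "Nice", "relevance": 0.9},
    {"name": "DBConf", "city": "Oslo", "relevance": 0.8},
]
beaches = [
    {"name": "Promenade", "city": "Nice", "sunniness": 0.95},
    {"name": "Huk", "city": "Oslo", "sunniness": 0.4},
]

top = rank_join(conferences, beaches)
print(top[0]["conf"], top[0]["beach"])
```

A real engine would avoid materializing the full join, pulling only as many ranked results from each service as the top-k answer requires.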

Searching the Web of Objects

We present a pragmatic approach to searching the Web of Objects, that is, a Web where entities such as people or places are recognized and exploited. We outline a search architecture in which information extraction and semantic technologies play key roles. This architecture has to cope with incompleteness as well as noise to expand the capabilities of current search engines. The main open research problems are related to recognizing the entities in a query and to ranking objects. We illustrate some of these ideas through features and demos that are already available.
Ricardo Baeza-Yates

Unifying Remote Data, Remote Procedures, and Web Services

Most large-scale applications integrate remote services and/or transactional databases. Yet building software that efficiently invokes distributed services or accesses relational databases is still quite difficult. Existing approaches to these problems are based on the Remote Procedure Call (RPC), Object-Relational Mapping (ORM), or Web Services (WS). RPCs have been generalized to support distributed object systems. ORM tools generally support a form of query sublanguage for efficient object selection, but it is not well integrated with the host language. Web Services may seem to be a step backwards, yet document-oriented services and REST are gaining popularity. The last 20 years have produced a long litany of technologies based on these concepts, including ODBC, CORBA, DCE, DCOM, RMI, DAO, OLEDB, SQLJ, JDBC, EJB, JDO, Hibernate, XML-RPC, WSDL, Axis and LINQ. Even with these technologies, complex design patterns for service facades and/or bulk data transfers must be followed to optimize communication between client and server or between client and database, leading to programs that are difficult to modify and maintain.
While significant progress has been made, there is no widely accepted solution, or even agreement about what the solution should look like. In this talk I present a new unified approach to the invocation of distributed services and to data access. The solution involves a novel control flow construct that partitions a program block into remote and local computations, while efficiently managing the communication between them. The solution does not require proxies, an embedded query language, or construction/decoding of service requests. The end result is a natural unified interface to distributed services and data, which can be added to any programming language.
William R. Cook
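The communication-batching idea behind such a construct can be sketched as follows. The `RemoteBatch` class is purely illustrative and hypothetical; Cook's construct works at the language level, shipping the remote part of a block to the server rather than exposing an explicit batching object:

```python
# Sketch of batching remote operations into a single round trip,
# illustrating the remote/local partitioning idea (hypothetical API).

class RemoteBatch:
    """Queue remote calls locally; flush them in one simulated round trip."""
    def __init__(self):
        self.queue = []
        self.round_trips = 0

    def call(self, op, arg):
        # Queued locally; no communication happens yet.
        self.queue.append((op, arg))

    def flush(self):
        self.round_trips += 1                          # one network exchange
        results = [op(arg) for op, arg in self.queue]  # "server-side" work
        self.queue.clear()
        return results

batch = RemoteBatch()
for n in [1, 2, 3]:
    batch.call(lambda x: x * x, n)   # three logical remote calls
squares = batch.flush()              # a single round trip serves all three
print(squares, batch.round_trips)
```

Without batching, the loop above would cost one round trip per call; the point of partitioning a block into remote and local parts is to make this optimization automatic.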

Keynote Panel “New and Old Data Stores”

The world of data management is changing. The linkage to service platforms, operation within scalable (cloud) platforms, object-relational bindings, NoSQL databases, and new approaches to concurrency control are all becoming hot topics both in academia and industry.
The term NoSQL attempts to label the growing number of non-relational, distributed data stores that often do not attempt to provide the ACID properties, the key attributes of classic relational database systems. Such “new data stores” differ from classic relational databases: they may not require fixed table schemas, usually avoid join operations, and typically scale horizontally.
The panel discusses the pros and cons of new data stores with respect to classical relational databases.
Ulf Michael Widenius, Michael Keith, Patrick Linskey, Robert Greene, Leon Guzenda, Peter Neubauer

Regular Papers

Revisiting Schema Evolution in Object Databases in Support of Agile Development

Based on a real-world case study in agile development, we examine issues of schema evolution in state-of-the-art object databases. In particular, we show how traditional problems and solutions discussed in the research literature do not match the requirements of modern agile development practices. To highlight these discrepancies, we present the approach to agile schema evolution taken in the case study and then focus on the aspects of backward/forward compatibility and object structures. In each case, we discuss the impact on managing software evolution and present approaches to dealing with these in practice.
Tilmann Zäschke, Moira C. Norrie
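The backward/forward compatibility concerns raised in the abstract above can be made concrete with a small sketch. The schema, field names and defaults are hypothetical and not taken from the paper's case study:

```python
# Minimal sketch of backward/forward-compatible object loading during
# schema evolution (hypothetical schema and defaults).

SCHEMA_V2 = {"name": None, "email": ""}   # current fields with defaults

def load(stored: dict) -> dict:
    """Map a stored object onto the current schema.

    Backward compatibility: fields added since the object was written
    are filled with defaults. Forward compatibility: unknown fields
    written by a newer application version are silently ignored.
    """
    return {field: stored.get(field, default)
            for field, default in SCHEMA_V2.items()}

old_object = {"name": "Ada"}                                  # schema v1
new_object = {"name": "Bob", "email": "b@x", "phone": "123"}  # newer writer

print(load(old_object))   # missing "email" filled with its default
print(load(new_object))   # unknown "phone" dropped
```

In an agile setting this style of lenient loading lets old and new application versions share one database while the schema is still in flux.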

The Case for Object Databases in Cloud Data Management

With the emergence of cloud computing, new data management requirements have surfaced. Currently, these challenges are studied exclusively in the setting of relational databases. We believe that there exist strong indicators that the full potential of cloud computing data management can only be leveraged by exploiting object database technologies. Object databases are a popular choice for analytical data management applications which are predicted to profit most from cloud computing. Furthermore, objects and relationships might be useful units to model and implement data partitions, while, at the same time, helping to reduce join processing. Finally, the service-oriented view taken by cloud computing is in its nature a close match to object models. In this position paper, we examine the challenges of cloud computing data management and show opportunities for object database technologies based on these requirements.
Michael Grossniklaus

Query Optimization by Result Caching in the Stack-Based Approach

We present a new approach to optimization of query languages using cached results of previously evaluated queries. It is based on the stack-based approach (SBA) and object-oriented query language SBQL. SBA assumes description of semantics in the form of abstract implementation of query/programming language constructs. Pragmatic universality of SBQL and its precise, formal operational semantics make it possible to investigate various crucial issues related to this kind of optimization. Two main issues are: organization of the cache enabling fast retrieval of cached queries and development of fast methods to recognize consistency of queries and incremental altering of cached query results after database updates. This paper is focused on the first issue concerning optimal, fast and transparent utilization of the result cache, involving methods of query normalization enabling higher reuse of cached queries with preservation of original query semantics and decomposition of complex queries into smaller ones. We present experimental results of the optimization that demonstrate the effectiveness of our technique.
Piotr Cybula, Kazimierz Subieta
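The role of query normalization in cache reuse, as described in the abstract above, can be sketched as follows. The normalization rules here (whitespace, case, sorting commutative conjuncts) are a deliberately tiny hypothetical subset; the paper's SBQL-specific normalizations are far richer:

```python
# Sketch of a query result cache keyed by a normalized query form, so
# that semantically equal query variants reuse one cached result.

def normalize(query: str) -> str:
    """Canonicalize whitespace/case and sort commutative AND-conjuncts."""
    q = " ".join(query.lower().split())
    if " where " in q:
        head, cond = q.split(" where ", 1)
        conjuncts = sorted(c.strip() for c in cond.split(" and "))
        q = head + " where " + " and ".join(conjuncts)
    return q

class ResultCache:
    def __init__(self):
        self.store = {}
        self.hits = 0

    def evaluate(self, query, evaluator):
        key = normalize(query)
        if key in self.store:
            self.hits += 1
            return self.store[key]
        result = evaluator(query)   # real evaluation only on a cache miss
        self.store[key] = result
        return result

cache = ResultCache()
run = lambda q: ["emp1"]            # stand-in for the actual query engine
cache.evaluate("Emp where age > 30 and dept = 'IT'", run)
r = cache.evaluate("emp  WHERE dept = 'IT' AND age > 30", run)
print(r, cache.hits)                # the reordered variant hits the cache
```

The hard problems the paper addresses, keeping cached results consistent under database updates and maintaining them incrementally, are outside this sketch.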

A Flexible Object Model and Algebra for Uniform Access to Object Databases

In contrast to their relational counterparts, object databases are more heterogeneous in terms of their architecture, data model and functionality. To this day, this heterogeneity poses substantial difficulties when it comes to benchmarking object databases or making them interoperate. While standardisation proposals have been made in the past, they have had limited impact, as neither industry nor research has fully adopted them. We believe that one reason for this lack of adoption is that these standards were too restrictive and thus not capable of dealing with the heterogeneity of object databases. In this paper, we propose a uniform interface for access to object databases that is based on a flexible object model and algebra.
Michael Grossniklaus, Alexandre de Spindler, Christoph Zimmerli, Moira C. Norrie

Data Model Driven Implementation of Web Cooperation Systems with Tricia

We present the data modeling concepts of Tricia, an open-source Java platform used to implement enterprise web information systems as well as social software solutions including wikis, blogs, file shares and social networks. Tricia follows a data model driven approach to system implementation where substantial parts of the application semantics are captured by domain-specific models (data model, access control model and interaction model). In this paper we give an overview of the Tricia architecture and development process and present the concepts of its data model: plugins, entities, properties, roles, mixins, validators and change listeners are motivated and described using UML class diagrams and concrete examples from Tricia projects. We highlight the benefits of this data modeling framework for application developers (expressiveness, modularity, reuse, separation of concerns) and show its impact on user-related services (content authoring, integrity checking, link management, queries and search, access control, tagging, versioning, schema evolution and multilingualism). This provides the basis for a comparison with other model-based approaches to web information systems.
Thomas Büchner, Florian Matthes, Christian Neubert

iBLOB: Complex Object Management in Databases through Intelligent Binary Large Objects

New emerging applications including genomic, multimedia, and geo-spatial technologies have necessitated the handling of complex application objects that are highly structured, large, and of variable length. Currently, such objects are handled using filesystem formats like HDF and NetCDF as well as the XML and BLOB data types in databases. However, some of these approaches are very application specific and do not provide proper levels of data abstraction for the users. Others do not support random updates or cannot manage large volumes of structured data and provide their associated operations. In this paper, we propose a novel two-step solution to manage and query application objects within databases. First, we present a generalized conceptual framework to capture and validate the structure of application objects by means of a type structure specification. Second, we introduce a novel data type called Intelligent Binary Large Object (iBLOB) that leverages the traditional BLOB type in databases, preserves the structure of application objects, and provides smart query and update capabilities. The iBLOB framework generates a type structure specific application programming interface (API) that allows applications to easily access the components of complex application objects. This greatly simplifies the implementation of new type systems inside traditional DBMSs.
Tao Chen, Arif Khan, Markus Schneider, Ganesh Viswanathan
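The core iBLOB idea, storing a structured object as one binary blob plus an index so that individual components can be read without parsing the whole blob, can be sketched as follows. The byte layout and component names are hypothetical; the paper's iBLOB additionally supports nesting and in-place updates:

```python
# Sketch of structured access into a single binary blob: an index of
# (name, offset, length) entries precedes the packed component payloads.
import struct

def pack(components: dict) -> bytes:
    """Serialize named components behind an index header."""
    index, payload = [], b""
    for name, data in components.items():
        index.append((name, len(payload), len(data)))
        payload += data
    header = ";".join(f"{n}:{o}:{l}" for n, o, l in index).encode()
    # 4-byte header length, then the index, then the raw payloads.
    return struct.pack("!I", len(header)) + header + payload

def read_component(blob: bytes, name: str) -> bytes:
    """Fetch one component via the index, touching only its byte range."""
    (hlen,) = struct.unpack("!I", blob[:4])
    header = blob[4:4 + hlen].decode()
    base = 4 + hlen
    for entry in header.split(";"):
        n, off, ln = entry.split(":")
        if n == name:
            return blob[base + int(off): base + int(off) + int(ln)]
    raise KeyError(name)

blob = pack({"metadata": b"genome-v1", "sequence": b"ACGTACGT"})
print(read_component(blob, "sequence"))
```

Contrast this with a plain BLOB, where extracting one component would require the application to fetch and decode the entire object.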

Object-Oriented Constraints for XML Schema

This paper presents an object-oriented representation of the core structural and constraint-related features of XML Schema. The structural features are represented within the limitations of object-oriented type systems including particles (elements and groups) and type hierarchies (simple and complex types and type derivations). The applicability of the developed representation is demonstrated through a collection of complex object-oriented queries. The main novelty is that features of XML Schema that are not expressible in object-oriented type systems such as range constraints, keys and referential integrity, and type derivation by restriction are specified in an object-oriented assertion language Spec#. An assertion language overcomes major problems in the object-oriented/XML mismatch. It allows specification of schema integrity constraints and transactions that are required to preserve those constraints. Most importantly, Spec# technology comes with automatic static verification of code with respect to the specified constraints. This technology is applied in the paper to transaction verification.
Suad Alagić, Philip A. Bernstein, Ruchi Jairath

Solving ORM by MAGIC: MApping GeneratIon and Composition

Object-relational mapping (ORM) technologies have been proposed as a solution for the impedance mismatch problem between object-oriented applications and relational databases. Existing approaches use special-purpose mapping languages or are tightly integrated with the programming language. In this paper, we present MAGIC, an approach using bidirectional query and update views, based on a generic metamodel and a generic mapping language. The mapping language is based on second-order tuple-generating dependencies and allows arbitrary restructuring between the application model and the database schema. Due to the genericity of our approach, the core part including mapping generation and mapping composition is independent of the modeling languages being employed. We show the formal basis of MAGIC and how queries including aggregation can be defined using an easy-to-use query API. The scalability of our approach is shown in the evaluation using the TPC benchmark.
David Kensche, Christoph Quix, Xiang Li, Sandra Geisler

Closing Schemas in Object-Relational Databases

Schema closure is a property that guarantees that no schema component has external references, that is, references to components that are not included in the schema. In the context of object-relational databases, schema closure implies that types, tables and views do not have references to components that are not included in the schema. To achieve schema closure, two basic approaches known as enlargement closure and reduction closure are proposed in this work. Enlargement closure includes in the schema every referenced component. Reduction closure, on the other hand, is based on the transformation of the components that have external references, eliminating these references to fulfill schema closure. Both closure approaches are described, together with the algorithms that carry out the closure in each of them. These algorithms generate and incorporate the needed components, whether types or views, to reach schema closure, thereby making the definition of external schemas easier. Finally, to illustrate the proposed concepts, we explain how to carry out schema closure in SQL:2008.
Manuel Torres, José Samos, Eladio Garví
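The enlargement variant described above amounts to a transitive closure over the schema's reference graph. A minimal sketch, with a hypothetical schema graph (the reduction variant, which rewrites components to drop external references, is not shown):

```python
# Sketch of enlargement closure: every externally referenced component
# is pulled into the schema transitively (hypothetical schema graph).

REFERENCES = {                      # component -> components it references
    "EmployeeView": ["Employee"],
    "Employee": ["Department", "Address"],
    "Department": ["Employee"],     # cyclic references are handled too
    "Address": [],
}

def enlargement_closure(schema: set) -> set:
    """Add referenced components until no external references remain."""
    closed = set(schema)
    worklist = list(schema)
    while worklist:
        comp = worklist.pop()
        for ref in REFERENCES.get(comp, []):
            if ref not in closed:
                closed.add(ref)
                worklist.append(ref)
    return closed

print(sorted(enlargement_closure({"EmployeeView"})))
```

Starting from a single view, the closure drags in every type it transitively depends on, which is exactly what makes the resulting external schema self-contained.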

A Comparative Study of the Features and Performance of ORM Tools in a .NET Environment

Object-Relational Mapping (ORM) tools are increasingly important in the process of information systems development, but their level of use is still lower than expected, considering all the benefits they offer. In this paper, we present a comparative analysis of the two most widely used ORM tools in the .NET programming environment. The features, usage and performance of Microsoft Entity Framework and NHibernate were analyzed and compared from a software development point of view. Various query mechanisms were described and tested against a conventional SQL query approach as a benchmark. The results of our experiments show that the widely accepted opinion that ORM introduces a translation overhead to all persistence operations does not hold for modern ORM tools in the .NET environment. Therefore, at the end of this paper we discuss some reasons for the insufficiently widespread application of ORM technology.
Stevica Cvetković, Dragan Janković

