
2003 | Book

Advances in Databases and Information Systems

7th East European Conference, ADBIS 2003, Dresden, Germany, September 3-6, 2003. Proceedings

Edited by: Leonid Kalinichenko, Rainer Manthey, Bernhard Thalheim, Uwe Wloka

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume contains 29 submitted and 2 invited papers presented at the 7th East-European Conference on Advances in Databases and Information Systems (ADBIS 2003), which took place in Dresden, Germany, September 3–6, 2003. An international program committee of 42 members from 24 countries selected these contributions from 86 submissions. Eight additional contributions were selected as short papers and have been published in a separate volume of local proceedings by the organizing institution. For the first time, ADBIS also included an industrial program consisting of nine submitted presentations by representatives of commercial companies active in the database market.

ADBIS 2003 was the tenth scientific event taking place under the acronym ADBIS, and thus marks a first “jubilee” for this young, but by now well-established, series of conferences. ADBIS was founded by the Moscow ACM SIGMOD Chapter in 1993. In 1994–1996 ADBIS was held in the form of workshops organized by the Moscow ACM SIGMOD Chapter in collaboration with the Russian Foundation for Basic Research. In this period, a number of international guests were invited to Moscow every year in order to improve and consolidate contacts between the national research community in Russia and the research community worldwide. International program committees for ADBIS were established in 1995, contributing to the quality of the selection process and thus of the conference contributions in general. In 1996, following discussions with Dr.

Table of Contents

Frontmatter

Invited Lectures

Semantic Web Services: The Future of Integration!

The hype around current Web Service technology and standards proposals is mind-boggling. Promises are made about the ease, speed, reliability and pervasiveness of Web Service technology, all centered on the three basic standard proposals SOAP [12], WSDL [15] and UDDI [13]. Web Service technology is praised as the integration technology of tomorrow. Addressing the integration space, more proposals are stacked onto the three basic ones, covering composition (like BPEL4WS [1], ebXML [7], BPML [2]), transaction support (like WS-C [14] and WS-T [16]), security, and many, many more, creating the unavoidable future Web Service technology jungle [4].

Christoph Bussler
Bioinformatics Databases: State of the Art and Research Perspectives

Bioinformatics or computational biology, i.e. the application of mathematical and computer science methods to solving problems in molecular biology that require large scale data, computation, and analysis, is a research area currently receiving considerable attention. Databases play an essential role in molecular biology and consequently in bioinformatics. Molecular biology data are often relatively cheap to produce, leading to a proliferation of databases: the number of bioinformatics databases accessible worldwide probably lies between 500 and 1,000. Not only molecular biology data, but also molecular biology literature and literature references are stored in databases. Bioinformatics databases are often very large (e.g. the sequence database GenBank contains more than 4 × 10⁶ nucleotide sequences) and in general grow rapidly (e.g. about 8000 abstracts are added every month to the literature database PubMed). Bioinformatics databases are heterogeneous in their data, in their data modeling paradigms, in their management systems, and in the data analysis tools they support. Furthermore, bioinformatics databases are often implemented, queried, updated, and managed using methods rarely applied to other databases. This presentation aims at introducing current bioinformatics databases, stressing the aspects in which they depart from conventional databases. A more detailed survey can be found in [1], upon which this presentation is built.

François Bry, Peer Kröger

Compositional Development

Multirepresentation in Ontologies

The objective of this paper is to define an ontology language to support multiple representations of ontologies. In our research, we focus on logic-based ontology languages; in fact, we consider only languages that are based on description logics (DLs). First, we propose a sub-language of DL as an ontology language. We then achieve multiple representations of ontological concepts by extending this sub-language with the stamping mechanism proposed in the context of multiple representations of spatial databases. The proposed language should offer a modest solution to the problem of multirepresentation ontologies.

Djamal Benslimane, Christelle Vangenot, Catherine Roussey, Ahmed Arara
Extension of Compositional Information Systems Development for the Web Services Platform

The use of Web services on the World Wide Web is expanding rapidly to make applications interoperable in information systems (IS). Web services, providing interfaces to information and software components, are convenient entities for producing compositions that themselves appear as Web services. At the same time, most large-scale enterprise solutions deployed today combine different technologies to compose many diverse applications. An approach for compositional information systems development in a multi-technological framework including Web service components is discussed. This paper proposes to extend the SYNTHESIS method for compositional information systems development (CISD) to the world of Web services. The CISD method is intended for the correct composition of existing components that are semantically interoperable in the context of a specific application. Originally, the CISD method was developed for object-oriented platforms (like CORBA, RMI, J2EE). In the CISD, an ontological model and a canonical object model (the SYNTHESIS language) are used for the unified representation of the new application (specification of requirements) and of the pre-existing components. Discovery of components relevant to the application and production of their compositions are provided in the frame of the domain ontology and the canonical object model. To apply the CISD method to Web services, a mapping of WSDL specifications into the canonical model is required. The basic steps of the approach for compositional information system development applying Web services are demonstrated.

Dmitry Briukhov, Leonid Kalinichenko, Iliya Tyurin
Domain Based Identification and Modelling of Business Component Applications

This paper presents a process for the design of domain-specific business component applications to promote the use of component-based software technologies. Starting from the observation that the direct mapping of business processes to business components often leads to inadequate results, a methodology for identifying and refining business components based on the functional decomposition of an application domain is presented. The usability of the resulting process is shown with the example of a concept for strategic supply chain development, which extends the traditional frame of reference in strategic sourcing from a supplier-centric view to a supply-chain scope, including the dynamic modelling of strategic supply chains.

Antonia Albani, Alexander Keiblinger, Klaus Turowski, Christian Winnewisser

Advanced Query Processing

An Optimal Divide-Conquer Algorithm for 2D Skyline Queries

Skyline query processing is fundamental to many applications, including multi-criteria decision making. In this paper, we present an optimal algorithm for computing the skyline in two-dimensional space. The algorithm is progressive in nature and adopts the divide-and-conquer paradigm. It can be shown that our algorithm achieves the minimum I/O cost and is more efficient and scalable than the existing techniques. The experimental results demonstrate that our algorithm greatly improves on the performance of the existing techniques.

Hai-Xin Lu, Yi Luo, Xuemin Lin
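
The optimal divide-and-conquer algorithm itself is given in the paper, not in this abstract. As background, the sketch below is a minimal in-memory version of the classic O(n log n) approach to the 2D case that such algorithms refine (assuming smaller is better in both dimensions): sort by one dimension, then keep each point whose second coordinate improves on the best seen so far.

    # Baseline 2D skyline (minimization in both dimensions): sort by x,
    # then sweep once keeping points whose y improves on the best so far.
    # This is the classic in-memory approach, not the paper's progressive,
    # external divide-and-conquer algorithm.

    def skyline_2d(points):
        """Return the skyline of 2D points, where smaller is better in both dims."""
        best_y = float("inf")
        result = []
        for x, y in sorted(points):          # ascending x; ties broken by y
            if y < best_y:                   # not dominated by any earlier point
                result.append((x, y))
                best_y = y
        return result

    print(skyline_2d([(1, 9), (2, 4), (3, 7), (4, 3), (6, 6)]))
    # -> [(1, 9), (2, 4), (4, 3)]
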
Hierarchical Data Cube for Range Queries and Dynamic Updates

A range query is a very popular and important operation on the data cubes of data warehouses. It performs an aggregation operation (e.g., SUM) over all selected cells of an OLAP data cube, where the selection is specified by providing ranges of values for the dimensions. Several techniques for range sum queries on data cubes have been introduced recently. However, they can incur update costs in the order of the size of the data cube. Since the size of a data cube is exponential in the number of its dimensions, updating the entire data cube is very costly and not realistic. To solve this problem, an effective hierarchical data cube (HDC for short) is provided in this paper. The analytical and experimental results show that HDC is superior to other cube storage structures for both dynamic updates and range queries.

Jianzhong Li, Hong Gao
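
The abstract alludes to techniques whose update cost is on the order of the cube's size. The classic prefix-sum cube illustrates this trade-off; it is sketched below as a baseline (this is not the paper's HDC): range sums need only four lookups via inclusion-exclusion, but a single cell update may touch nearly every prefix cell.

    # Classic prefix-sum cube baseline: P[i][j] stores the sum of A[0..i][0..j].
    # Range sums cost O(1) lookups, but one cell update can touch almost every
    # prefix cell -- the update-cost problem that hierarchical structures address.

    def build_prefix(A):
        rows, cols = len(A), len(A[0])
        P = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                P[i][j] = (A[i][j]
                           + (P[i-1][j] if i else 0)
                           + (P[i][j-1] if j else 0)
                           - (P[i-1][j-1] if i and j else 0))
        return P

    def range_sum(P, r1, c1, r2, c2):
        """SUM over cells [r1..r2] x [c1..c2], inclusive."""
        total = P[r2][c2]
        if r1: total -= P[r1-1][c2]
        if c1: total -= P[r2][c1-1]
        if r1 and c1: total += P[r1-1][c1-1]
        return total

    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    P = build_prefix(A)
    print(range_sum(P, 1, 1, 2, 2))  # 5+6+8+9 = 28
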
Evaluation of Common Counting Method for Concurrent Data Mining Queries

Data mining queries are often submitted to the data mining system concurrently. The data mining system should take advantage of the overlapping of the mined datasets. In this paper we focus on frequent itemset mining, and we discuss and experimentally evaluate an implementation of the Common Counting method on top of the Apriori algorithm. The general idea of Common Counting is to reduce the number of times the common parts of the source datasets are scanned during the processing of a set of frequent pattern queries.

Marek Wojciechowski, Maciej Zakrzewicz
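
A minimal sketch of the Common Counting idea described above, under illustrative names: the candidate supports of several Apriori-style queries are updated during a single scan of a shared data partition, instead of scanning that partition once per query.

    # One scan of a shared partition serves the candidate counting of all
    # queries that mine it (names and data layout are illustrative).

    from collections import defaultdict

    def count_candidates(partition, queries):
        """partition: iterable of transactions (sets of items);
        queries: dict query_id -> set of candidate itemsets (frozensets)."""
        support = defaultdict(int)            # (query_id, itemset) -> count
        for transaction in partition:         # ONE scan serves all queries
            for qid, candidates in queries.items():
                for itemset in candidates:
                    if itemset <= transaction:
                        support[(qid, itemset)] += 1
        return support

    partition = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
    queries = {"q1": {frozenset({"a", "c"})}, "q2": {frozenset({"b"})}}
    print(dict(count_candidates(partition, queries)))
    # {('q1', frozenset({'a', 'c'})): 2, ('q2', frozenset({'b'})): 2}
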

Transactions

taDOM: A Tailored Synchronization Concept with Tunable Lock Granularity for the DOM API

Storing, querying, and updating XML documents in multi-user environments requires data processing guarded by a transactional context to assure the well-known ACID properties, particularly with regard to the isolation of concurrent transactions. In this paper, we introduce the taDOM tree, an extended data model which organizes both attribute nodes and node values in a new way and allows fine-grained lock acquisition for XML documents. On this basis, we design a tailored lock concept using a combination of node locks, navigation locks, and logical locks in order to synchronize concurrent accesses to XML documents via the DOM API. Our synchronization concept supports user-driven tunable lock granularity and lock escalation to reduce the frequency of lock requests, both aiming at enhanced transaction throughput. The taDOM tree and the related lock modes are therefore adjusted to the specific properties of the DOM API.

Michael P. Haustein, Theo Härder
Client-Side Dynamic Preprocessing of Transactions

In a client-server relational database system, the response time and server throughput can be improved by outsourcing workload to clients. As an extension of client-side caching techniques, we propose to preprocess database transactions at the client side. A client operates on secondary data and supports only a low degree of isolation. The main objective is to provide a framework where the amount of preprocessing at clients is variable and adapts dynamically at run time. Thereby, the overall goal is to maximize the system's performance, e.g. response time and throughput. We make use of a two-phase transaction protocol that verifies and, if necessary, reprocesses client computations. Using execution statistics, we show how the amount of preprocessing can be partially predicted for each client. In an experiment we show the correspondence between the amount of preprocessing, the update frequency, and the response time.

Steffen Jurk, Mattis Neiling

Retrieval from the Web

Using Common Schemas for Information Extraction from Heterogeneous Web Catalogs

The Web has become the world’s largest information source. Unfortunately, the main success factor of the Web, the inherent principle of distribution and autonomy of its participants, is also its main problem. When trying to make this information machine processable, common structures and semantics have to be identified. The goal of information extraction (IE) is exactly this: to transform text into a structured format. In this paper, we present a novel approach to information extraction developed as part of the XI3 project. Central to our approach is the assumption that we can obtain a better understanding of a text fragment if we consider its integration into higher-level concepts by exploiting text fragments from different parts of a source. In addition to previous approaches, we offer higher expressiveness of the extraction schema and an advanced method to deal with ambiguous texts. Our approach provides a way to use one extraction schema for multiple sources.

Richard Vlach, Wassili Kazakos
UCYMICRA: Distributed Indexing of the Web Using Migrating Crawlers

Due to the tremendous growth rate and the high change frequency of Web documents, maintaining an up-to-date index for searching purposes (search engines) is becoming a challenge. The traditional crawling methods are no longer able to catch up with the constantly updating and growing Web. Realizing the problem, in this paper we suggest an alternative distributed crawling method using mobile agents. Our goal is a scalable crawling scheme that minimizes network utilization, keeps up with document changes, employs time realization, and is easily upgradeable.

Odysseas Papapetrou, Stavros Papastavrou, George Samaras

Indexing Techniques

Revisiting M-Tree Building Principles

The M-tree is a dynamic data structure designed to index metric datasets. In this paper we introduce two dynamic techniques for building the M-tree. The first incorporates multi-way object insertion, while the second exploits the generalized slim-down algorithm. Using these techniques, or even a combination of them, significantly increases the querying performance of the M-tree. We also present comparative experimental results on large datasets showing that the new techniques outperform by far even the static bulk-loading algorithm.

Tomáš Skopal, Jaroslav Pokorný, Michal Krátký, Václav Snášel
Compressing Large Signature Trees

In this paper we present a new compression scheme for signature tree structures. Beyond the reduction of storage space, compression attains significant savings in terms of query processing. The latter issue is of critical importance when considering large collections of set-valued data, e.g., in object-relational databases, where signature tree structures find important applications. The proposed scheme works on a per-node basis, reorganizing node entries according to their similarity, which results in sparse bit vectors that can be drastically compressed. Experimental results illustrate the efficiency gains due to the proposed scheme, especially for interesting real-world cases like market-basket data or Web-server logs.

Maria Kontaki, Yannis Manolopoulos, Alexandros Nanopoulos
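
The abstract attributes the compression gains to reorganizing node entries so that sparse bit vectors emerge. As a generic illustration of why that pays off (the paper's actual per-node scheme is richer), a simple run-length encoder collapses long 0-runs of a bit vector into single (bit, length) pairs:

    # Run-length encoding of a bit vector: clustering similar signatures
    # produces long uniform runs, which collapse to a few pairs.

    def rle(bits):
        runs, prev, length = [], bits[0], 0
        for b in bits:
            if b == prev:
                length += 1
            else:
                runs.append((prev, length))
                prev, length = b, 1
        runs.append((prev, length))
        return runs

    print(rle("0000011000000001"))  # [('0', 5), ('1', 2), ('0', 8), ('1', 1)]
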

Active Databases and Workflows

A Conceptual Graphs Approach for Business Rules Modeling

Business knowledge is the major asset of business organizations; it empowers them not only to survive, but also to make proactive decisions in rapidly changing business situations. Therefore, mapping business knowledge to a computable form is a primary task for information systems research. Structuring business knowledge using the business rules approach is convenient for both business representatives and system analysts. However, the question of a modeling language is still open, and different approaches are discussed in the literature. In this paper, conceptual graphs are considered as one of the suitable modeling languages. It is proposed to add a new element, the rule base, to the conceptual graphs element knowledge base for explicit business rules modeling, in order to satisfy an important requirement of business rules systems: business rules must be addressed explicitly. Different business rules classification schemas are discussed with regard to the proposed business rule metamodel, and it is shown how to map business rules, regardless of their type, to the uniform format.

Irma Valatkaite, Olegas Vasilecas
SnoopIB: Interval-Based Event Specification and Detection for Active Databases

Snoop is an event specification language developed for expressing primitive and composite events that are part of Event-Condition-Action (or ECA) rules. In Snoop, an event was defined to be an instantaneous, atomic (happens completely or not at all) occurrence of interest, and the time of occurrence of the last event in an event expression was used as the time of occurrence for the entire event expression. This detection-based semantics does not recognize multiple compositions of some operators, especially Sequence, in the intended way. In order to recognize all event operators, in all contexts, in the intended way, the operator semantics needs to include the start time as well as the end time of an event expression (i.e., interval-based semantics). In this paper, we formalize SnoopIB (Snoop Interval-Based), defining the occurrence of Snoop event operators and expressions using interval-based semantics. The algorithms for the detection of events using interval-based semantics introduce some challenges, as not all the events are known (especially their starting points).

Raman Adaikkalavan, Sharma Chakravarthy
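
A simplified illustration of interval-based semantics for the Sequence operator discussed above (not the paper's full operator algebra or its event consumption contexts): each occurrence carries a start and end time, E1;E2 fires only when E1's interval ends strictly before E2's begins, and the composite occurrence spans from E1's start to E2's end.

    # Interval-based sequence detection (E1 ; E2), a minimal sketch.

    def sequence(occurrences_e1, occurrences_e2):
        """Each occurrence is a (start, end) pair; returns composite intervals."""
        return [(s1, e2)
                for (s1, e1) in occurrences_e1
                for (s2, e2) in occurrences_e2
                if e1 < s2]                   # E1 completes before E2 starts

    e1 = [(1, 3), (4, 8)]
    e2 = [(5, 6), (9, 10)]
    print(sequence(e1, e2))  # [(1, 6), (1, 10), (4, 10)]
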
Reasoning on Workflow Executions

This paper presents a new formalism for modelling workflow schemes which combines a control flow graph representation with simple (i.e., stratified), yet powerful DATALOG rules to express complex properties and constraints on executions. Both the graph representation and the DATALOG rules are mapped into a single program in DATALOGev!, a recent extension of DATALOG for handling events. This mapping enables the designer to simulate the actual behavior of the modelled scheme by fixing an initial state and an execution scenario (i.e., a sequence of executions for the same workflow) and querying the state after such executions. As the scenario includes a certain amount of non-determinism, the designer may also verify under which conditions a given (desirable or undesirable) goal can eventually be achieved.

Gianluigi Greco, Antonella Guzzo, Domenico Saccà

Complex Value Storage

Optimization of Storage Structures of Complex Types in Object-Relational Database Systems

Modern relational DBMS use more and more object-relational features to store complex objects with nested structures and collection-valued attributes, thus evolving towards object-relational database management systems. This paper presents results of the project “Object-Relational Database Features and Extensions: Model and Physical Aspects” of the Jena Database Group. It introduces an approach to optimizing the physical representation of complex types with respect to the actual workload, based mainly on two concepts. First, different variants of the physical representation of complex objects can be described and controlled by a new Physical Representation Definition Language (PRDL). Second, a method based on workload capturing is suggested that allows the need for physical restructuring to be detected, alternative storage structures to be evaluated with respect to better performance and lower execution costs, and well-founded improvement estimations to be obtained.

Steffen Skatulla, Stefan Dorendorf
Hierarchical Bitmap Index: An Efficient and Scalable Indexing Technique for Set-Valued Attributes

Set-valued attributes are convenient for modelling complex objects occurring in the real world. Currently available database systems support the storage of set-valued attributes in relational tables but contain no primitives to query them efficiently. Queries involving set-valued attributes either perform full scans of the source data or make multiple passes over single-value indexes to reduce the number of retrieved tuples. Existing techniques for indexing set-valued attributes (e.g., inverted files, signature indexes or RD-trees) are not efficient enough to support fast access to set-valued data in very large databases. In this paper we present the hierarchical bitmap index, a novel technique for indexing set-valued attributes. Our index permits indexing sets of arbitrary length, and its performance is not affected by the size of the indexed domain. The hierarchical bitmap index efficiently supports different classes of queries, including subset, superset and similarity queries. Our experiments show that the hierarchical bitmap index significantly outperforms other set indexing techniques.

Mikołaj Morzy, Tadeusz Morzy, Alexandros Nanopoulos, Yannis Manolopoulos
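
A flat-bitmap baseline conveys the core encoding that the hierarchical bitmap index builds on (the hierarchy itself, which keeps long sparse vectors compact, is not modelled here): each set becomes a bitmask over the item domain, and subset or superset queries become bitwise tests.

    # Flat bitmap encoding of set-valued attributes; subset/superset
    # queries reduce to AND/OR tests against the query mask.

    def encode(item_set, domain):
        mask = 0
        for item in item_set:
            mask |= 1 << domain[item]
        return mask

    domain = {item: i for i, item in enumerate("abcdef")}
    rows = {1: {"a", "b"}, 2: {"a", "b", "e"}, 3: {"b", "c"}}
    masks = {rid: encode(s, domain) for rid, s in rows.items()}

    query = encode({"a", "b"}, domain)
    supersets = [rid for rid, m in masks.items() if m & query == query]
    subsets   = [rid for rid, m in masks.items() if m | query == query]
    print(supersets, subsets)   # [1, 2] [1]
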

Data Mining

Efficient Monitoring of Patterns in Data Mining Environments

In this article, we introduce a general framework for monitoring patterns and detecting interesting changes without continuously mining the data. Using our approach, the effort spent on data mining can be drastically reduced while the knowledge extracted from the data is kept up to date. Our methodology is based on a temporal representation for patterns, in which both the content and the statistics of a pattern are modeled. We divide the KDD process into two phases. In the first phase, data from the first period is mined and interesting rules and patterns are identified. In the second phase, using the data from subsequent periods, statistics of these rules are extracted in order to decide whether or not they still hold. We applied this technique in a case study on mining mail log data. Our results show that a minimal set of patterns reflecting the invariant properties of the dataset can be identified, and that interesting changes to the population can be recognized indirectly by monitoring a subset of the patterns found in the first phase.

Steffan Baron, Myra Spiliopoulou, Oliver Günther
FCBI: An Efficient User-Friendly Classifier Using Fuzzy Implication Table

In the past few years, exhaustive search methods under the name of association rule mining have been widely used in the field of classification. However, such methods usually produce too many crisp if-then rules and are not an efficient way to represent the knowledge, especially in real-life data mining applications. In this paper, we propose a novel associative classification method called FCBI, i.e., Fuzzy Classification Based on Implication. This method partitions the original data set into a fuzzy table without discretization of continuous attributes; rule generation is performed in the relational database system using a fuzzy implication table. The unique features of this method include its high training speed and simplicity of implementation. Experimental results show that the classification rules generated are meaningful and explainable.

Chen Zheng, Li Chen
Dynamic Integration of Classifiers in the Space of Principal Components

Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the integration procedure in the ensemble properly utilizes the ensemble diversity. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique of dynamic integration, in which local accuracy estimates are calculated for each base classifier of an ensemble in the neighborhood of a new instance to be processed. Generally, the whole space of original features is used to find the neighborhood of a new instance for the local accuracy estimates in dynamic integration. In this paper, we propose to use feature extraction in order to cope with the curse of dimensionality in the dynamic integration of classifiers. We consider classical principal component analysis and two eigenvector-based supervised feature extraction methods that take class information into account. Experimental results show that, on some data sets, the use of FEDIC leads to significantly higher ensemble accuracies than the use of plain dynamic integration in the space of original features. As a rule, FEDIC outperforms plain dynamic integration on data sets on which dynamic integration works (i.e., it outperforms static integration) and the considered feature extraction techniques are able to successfully extract relevant features.

Alexey Tsymbal, Mykola Pechenizkiy, Seppo Puuronen, David W. Patterson
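
A condensed, illustrative sketch of the combination described above, not the authors' FEDIC implementation: instances are projected onto the top principal components, and each base classifier is then weighted by its accuracy on the training neighbors of the instance being classified.

    # PCA projection + local-accuracy-based dynamic selection (illustrative).

    import numpy as np

    def pca_project(X, n_components):
        Xc = X - X.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
        return Xc @ top, X.mean(axis=0), top

    def dynamic_predict(x, X_train, y_train, preds_train, classifiers, k=5):
        """preds_train[c][i]: label classifier c gave training instance i."""
        Z, mean, top = pca_project(X_train, n_components=2)
        z = (x - mean) @ top
        neighbors = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
        # local accuracy of each classifier in the neighborhood of x
        weights = [np.mean(preds_train[c][neighbors] == y_train[neighbors])
                   for c in range(len(classifiers))]
        best = int(np.argmax(weights))   # dynamic selection variant
        return classifiers[best](x)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5)); y = (X[:, 0] > 0).astype(int)
    clfs = [lambda x: int(x[0] > 0), lambda x: int(x[1] > 0)]
    preds = np.array([[c(xi) for xi in X] for c in clfs])
    print(dynamic_predict(X[0], X, y, preds, clfs))
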

Formal Query Semantics

Formal Semantics of the ODMG 3.0 Object Query Language

A formal semantics of OQL is defined in terms of an object algebra developed by the author, covering operations such as quantification, mapping, selection, unnesting and partitioning, and it is shown in multiple examples that OQL queries can be easily expressed by means of this algebra. As a result, an OQL query can be mechanically translated into the corresponding object algebra expression, which can be further optimized and executed.

Alexandre Zamulin

Spatial Aspects of IS

Similar Sub-trajectory Retrieval for Moving Objects in Spatio-temporal Databases

Moving objects’ trajectories play an important role in efficient retrieval in spatio-temporal databases. In this paper, we propose a spatio-temporal representation scheme for modeling the trajectories of moving objects. Our scheme effectively describes not only the single trajectory of one moving object but also the multiple trajectories of two or more moving objects. For measuring the similarity between two trajectories, we propose a new k-warping distance algorithm, which enhances the existing time warping distance algorithm by permitting up to k replications of an arbitrary motion of a query trajectory. Our k-warping distance algorithm provides approximate matching between two trajectories as well as exact matching. Based on the k-warping distance, we also present a similarity measure scheme for both single and multiple trajectories in spatio-temporal databases. Finally, we show experimentally that our similarity measure scheme based on the k-warping distance outperforms Li’s scheme (no warping) and Shan’s scheme (infinite warping) in terms of precision and recall.

Choon-Bo Shim, Jae-Woo Chang
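
For reference, the standard time-warping distance that the k-warping algorithm enhances can be computed with a simple dynamic program. The unbounded replication in the recurrence below is exactly what k-warping caps at k replications per motion; that cap is not modelled in this sketch.

    # Standard time-warping (DTW) distance between two 1D motion sequences.

    def time_warping_distance(s, t, dist=lambda a, b: abs(a - b)):
        n, m = len(s), len(t)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = dist(s[i-1], t[j-1]) + min(D[i-1][j],    # replicate t[j-1]
                                                     D[i][j-1],    # replicate s[i-1]
                                                     D[i-1][j-1])  # advance both
        return D[n][m]

    print(time_warping_distance([1, 2, 3, 4], [1, 3, 4]))  # 1.0
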
Distance Join Queries of Multiple Inputs in Spatial Databases

Let a tuple of n objects obeying a query graph (QG) be called an n-tuple. The “D_distance-value” of this n-tuple is the value of a linear function of the distances of the n objects that make it up, according to the edges of the QG. This paper addresses the problem of finding the K n-tuples between n spatial datasets that have the smallest D_distance-values, the so-called K-Multi-Way Distance Join Query (K-MWDJQ), where each dataset is indexed by an R-tree-based structure. This query can be viewed as an extension of the K-Closest-Pairs Query (K-CPQ) [4] to n inputs. In addition, a recursive non-incremental branch-and-bound algorithm following a depth-first search, processing all inputs synchronously without producing any intermediate result, is proposed. Enhanced pruning techniques are applied to the nodes of the n R-trees in order to reduce the total response time of the query, and a global LRU buffer is used to reduce the number of disk accesses. Finally, an experimental study of the proposed algorithm using real spatial datasets is presented.

Antonio Corral, Yannis Manolopoulos, Yannis Theodoridis, Michael Vassilakopoulos

XML Processing

Rule-Based Generation of XML DTDs from UML Class Diagrams

We present an approach for automatically extracting an XML document structure from a conceptual data model that describes the content of a document. We use UML class diagrams as the conceptual model, which can be represented in XML syntax (XMI). The algorithm we present in the paper is implemented as a set of rules that transform the UML class diagram into an adequate document type definition (DTD). The generation of the DTD from the semantic model corresponds to logical XML database design, with the DTD as the database schema description. Therefore, we consider many semantic issues, such as how to deal with relationships and how to express them in a DTD in order to minimize the loss of semantics. Since our algorithm is based on XSLT stylesheets, its transformation rules can be modified in a very flexible manner in order to accommodate different mapping strategies and requirements.

Thomas Kudrass, Tobias Krumbein
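
The paper's transformation rules are XSLT stylesheets operating on XMI; the hand-rolled, hypothetical sketch below only conveys the rule-based flavour of such a mapping: each class becomes an ELEMENT whose content model lists its attributes, and each attribute becomes a #PCDATA leaf.

    # Toy class-model-to-DTD mapping (illustrative, not the paper's rules).

    classes = {
        "Book":   ["title", "year"],
        "Author": ["name"],
    }

    def to_dtd(classes):
        lines = []
        for cls, attrs in classes.items():
            lines.append(f"<!ELEMENT {cls} ({', '.join(attrs)})>")
            lines.extend(f"<!ELEMENT {attr} (#PCDATA)>" for attr in attrs)
        return "\n".join(lines)

    print(to_dtd(classes))
    # <!ELEMENT Book (title, year)>  ... one declaration per class and attribute
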
More Functional Dependencies for XML

In this paper, we present a new approach towards functional dependencies in XML documents based on homomorphisms between XML data trees and XML schema graphs. While this approach allows us to capture functional dependencies similar to those recently studied by Arenas/Libkin and by Lee/Ling/Low, it also gives rise to a further class of functional dependencies in XML documents. We address some essential differences between the two classes of functional dependencies under discussion resulting in different expressiveness and different inference rules. Examples demonstrate that both classes of functional dependencies appear quite naturally in practice and, thus, should be taken into consideration when designing XML documents.

Sven Hartmann, Sebastian Link

Multimedia Data Management

Towards Collaborative Video Authoring

Video post-production went digital a long time ago, yet the issue of collaborative video authoring has not been seriously investigated so far. In this paper we tackle this problem from a transactional point of view, which allows us to ensure consistent exchange and sharing of information between authors by taking application semantics into account. Our main contribution is the development of a new data model called concurrent video, especially intended for cooperative authoring environments. We demonstrate that the presented model provides efficient means of organizing and manipulating video data, while enabling direct use of merging mechanisms, which constitute a formal basis for collaborative scenarios. Moreover, since the proposed approach is largely media-independent, we argue that the results of our work are applicable to other types of stream data as well.

Boris Novikov, Oleg Proskurnin

Information Integration

Updatable XML Views

XML views can be used in Web applications to resolve incompatibilities among heterogeneous XML sources. They make it possible to reduce the amount of data that a user has to deal with and to customize an XML source. We consider virtual updatable views for a query language addressing a native XML database. The novelty of the presented mechanism is the inclusion of information about the intents of updates in view definitions. This information takes the form of procedures that overload generic view updating operations. The mechanism requires the integration of queries with imperative (procedural) statements and with procedures. This integration is possible within the Stack-Based Approach to query languages, which is based on classical concepts of programming languages such as the environment stack and the naming/scoping/binding paradigm. In the paper, we present the view mechanism, describing its syntax and semantics and discussing examples that illustrate its possible applications.

Hanna Kozankiewicz, Jacek Leszczylowski, Kazimierz Subieta

Query Containment

Testing Containment of XPath Expressions in Order to Reduce the Data Transfer to Mobile Clients

In mobile client-server applications which access a server-side XML database, XPath expressions play a central role in querying for XML fragments. Whenever the mobile client can use a locally stored previous query result to answer a new query instead of accessing the server-side database, this can significantly reduce the data transfer from the server to the client. In order to check whether or not a previous query result can be reused for a new XPath query, we present a containment test for two XPath queries which combines two steps. First, we use the DTD to check whether all paths selected by one XPath expression are also selected by the other XPath expression. Then we right-shuffle the predicate filters of both expressions and start a subsumption test on them.

Stefan Böttcher, Rita Steinmetz
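
As a toy illustration of the direction of such a containment test (far weaker than the paper's DTD-aware check with predicate right-shuffling), consider linear, child-axis-only XPath expressions with "*" wildcards and no "//" or predicates: one path contains another iff they have the same length and each step matches.

    # Containment for linear, child-axis-only XPath expressions with "*".

    def contains(view, query):
        """True iff the `view` path selects every path the `query` path selects."""
        v_steps = view.strip("/").split("/")
        q_steps = query.strip("/").split("/")
        if len(v_steps) != len(q_steps):      # child-only paths: lengths must agree
            return False
        return all(v == "*" or v == q for v, q in zip(v_steps, q_steps))

    print(contains("/library/*/title", "/library/book/title"))   # True
    print(contains("/library/book/title", "/library/*/title"))   # False
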
Query Containment with Negated IDB Predicates

We present a method that checks query containment for queries with negated IDB predicates. Existing methods either deal only with restricted cases of negation or do not actually check containment but uniform containment, which is a sufficient but not necessary condition for containment. Additionally, our queries may also contain equality, inequality and order comparisons. The generality of our approach allows our method to deal straightforwardly with query containment under constraints. Our method is sound and complete both for success and for failure, and we characterize the databases for which these properties hold. We also state the class of queries that can be decided by our method.

Carles Farré, Ernest Teniente, Toni Urpí
Backmatter
Metadata
Title
Advances in Databases and Information Systems
Edited by
Leonid Kalinichenko
Rainer Manthey
Bernhard Thalheim
Uwe Wloka
Copyright year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-39403-7
Print ISBN
978-3-540-20047-5
DOI
https://doi.org/10.1007/b12032