
2003 | Book

Journal on Data Semantics I

Edited by: Stefano Spaccapietra, Sal March, Karl Aberer

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the first volume of the first journal in the new LNCS Journal Subline, the Journal on Data Semantics. Publishing a journal in a book series might come as a surprise to customers, readers, and librarians, thus we would like to provide some background information and our motivation for introducing this new LNCS subline.

As a consequence of the very tight interaction between the Lecture Notes in Computer Science series and the international computer science research and development community, we receive quite a few proposals for new archive journals. From the successful launch of workshops or conferences and publication of their proceedings in the LNCS series, it might seem like a natural step to approach the publisher about launching a journal once this specific field has gained a certain level of maturity and stability. Each year we receive about a dozen such proposals and even more informal inquiries.

Like other publishers, it has been our experience that launching a new journal and making it a long-term success is a hard job nowadays, due to a generally difficult market situation, and library budget restrictions in particular. Because many of the proceedings in LNCS, and especially many of the LNCS postproceedings, apply the same strict reviewing and selection criteria as established journals, we started discussing with proposers of new journals the alternative of devoting a few volumes in LNCS to their field, instead of going through the painful Sisyphean adventure of establishing a new journal on its own.

Table of Contents

Frontmatter
Formal Reasoning Techniques for Goal Models
Abstract
Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.
Paolo Giorgini, John Mylopoulos, Eleonora Nicchiarelli, Roberto Sebastiani
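
A minimal sketch of the qualitative propagation idea described in the abstract above: labels flow bottom-up through an AND/OR goal graph, with AND decompositions taking the weakest subgoal label and OR decompositions the strongest. The three-valued scale and all names here are illustrative assumptions; the paper's axiomatization is richer (it also tracks denial evidence and partial contribution links).

    # Toy qualitative label propagation over an AND/OR goal graph.
    # Labels are ordered: full satisfaction > partial > none.
    # AND-decomposed goals take the minimum of their subgoals' labels;
    # OR-decomposed goals take the maximum. A simplification of the
    # paper's axiomatization, which also handles denial evidence and
    # contribution links.

    SCALE = {"none": 0, "partial": 1, "full": 2}
    NAMES = {0: "none", 1: "partial", 2: "full"}

    def propagate(goals, leaf_labels):
        """goals: {goal: ("AND"|"OR", [subgoals])}; leaves carry initial labels."""
        memo = dict(leaf_labels)

        def label(g):
            if g in memo:
                return memo[g]
            op, subs = goals[g]
            values = [SCALE[label(s)] for s in subs]
            memo[g] = NAMES[min(values) if op == "AND" else max(values)]
            return memo[g]

        return {g: label(g) for g in goals}

    # Example: a root goal AND-decomposed into two subgoals, one of which
    # is OR-decomposed into alternative tasks.
    goals = {"root": ("AND", ["g1", "g2"]), "g2": ("OR", ["t1", "t2"])}
    print(propagate(goals, {"g1": "full", "t1": "partial", "t2": "none"}))
    # -> {'root': 'partial', 'g2': 'partial'}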
Attribute-Based Semantic Reconciliation of Multiple Data Sources
Abstract
The main challenge in information integration is reconciling data semantics. Semantic reconciliation usually follows one of two approaches. Schema-based approaches assume instances belong to types and use schema information to reconcile types before reconciling attributes and relationships of these entity types. Attribute-based approaches use attribute names and non-semantic information, such as ranges of attribute values, to seek similarities in attributes across sources. We suggest an attribute-based approach that uses semantic information. Two principles guide our method. First, reconciliation does not require that instances belong to specific types. Instead, sources can be reconciled by analyzing properties. Second, properties that appear different may be manifestations of the same higher-level property, enabling the treatment of inter-source attributes, with different names and value ranges, as specializations of a more general attribute. We discuss the approach, analyze its potential advantages, suggest a formalization, demonstrate its use with examples, and suggest directions for further research.
Jeffrey Parsons, Yair Wand
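
A toy illustration of the second principle above, under simplifying assumptions: two differently named attributes with different value ranges are proposed as specializations of a common higher-level property when their ranges overlap after unit normalization. The generalization target and the conversions are hand-written here, whereas the paper grounds such correspondences in semantic information about the properties.

    # Toy attribute reconciliation: attributes from two sources are treated
    # as specializations of a shared higher-level property when their value
    # ranges overlap under a normalizing conversion (assumed monotone).

    def overlap(range_a, range_b):
        (lo_a, hi_a), (lo_b, hi_b) = range_a, range_b
        return max(lo_a, lo_b) <= min(hi_a, hi_b)

    # Source attributes: name -> (value range, conversion to a common scale)
    source1 = {"price_usd": ((10.0, 500.0), lambda v: v)}
    source2 = {"price_cents": ((1000.0, 50000.0), lambda v: v / 100.0)}

    def reconcile(attrs1, attrs2):
        """Pair attributes whose normalized value ranges overlap."""
        matches = []
        for n1, ((lo1, hi1), f1) in attrs1.items():
            for n2, ((lo2, hi2), f2) in attrs2.items():
                if overlap((f1(lo1), f1(hi1)), (f2(lo2), f2(hi2))):
                    matches.append((n1, n2, "generalized as: price"))
        return matches

    print(reconcile(source1, source2))
    # -> [('price_usd', 'price_cents', 'generalized as: price')]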
Data Quality in Web Information Systems
Abstract
Evaluation of data quality in web information systems provides support for a correct interpretation of the contents of web pages. Data quality dimensions proposed in the literature need to be adapted and extended to represent the characteristics of data in web pages, and in particular their dynamic aspects. The present paper proposes and discusses a model and a methodological framework to support data quality in web information systems. The design and a prototype implementation of a software module to publish quality information are also described.
Barbara Pernici, Monica Scannapieco
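
The sketch below shows one way the quality-publishing idea could look in code: each published data element carries a quality record, including a time-derived currency score that reflects the dynamic nature of web data. The dimension names and the linear decay are assumptions made for illustration, not the paper's model.

    # Toy quality wrapper for web-published data: each element carries
    # quality metadata, including a currency dimension derived from the
    # time of last update.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class QualityRecord:
        accuracy: float         # 0..1, estimated correctness
        completeness: float     # 0..1, fraction of required fields present
        last_updated: datetime  # used to derive currency

        def currency(self, max_age: timedelta) -> float:
            """1.0 when fresh, decaying linearly to 0.0 at max_age."""
            age = datetime.now() - self.last_updated
            return max(0.0, 1.0 - age / max_age)

    @dataclass
    class PublishedDatum:
        value: object
        quality: QualityRecord

    stock = PublishedDatum(
        "EUR 12.40",
        QualityRecord(0.95, 1.0, datetime.now() - timedelta(hours=2)))
    print(stock.quality.currency(max_age=timedelta(hours=24)))  # roughly 0.92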
Reasoning about Anonymous Resources and Meta Statements on the Semantic Web
Abstract
Anonymous resources and meta statements are two of the more interesting features of RDF — an emerging standard for representing semantic information on the Web. Ironically, when RDF was standardized by W3C over three years ago [24], it came without a semantics. There is now growing understanding that a Semantic Web language without a semantics is an oxymoron, and a number of efforts are directed towards giving RDF a precise semantics [17,15,11]. In this paper we propose a simple semantics for anonymous resources and meta statements in F-logic [23] – a frame-based logic language, which is a popular formalism for representing and reasoning about semantic information on the Web [29,14,16,13,12].
The choice of F-logic (over RDF) as a basis for our semantics is motivated by the fact that F-logic provides a comprehensive solution for the problem of integrating frames, rules, inheritance, and deduction, and it has been shown to provide an effective inference service for RDF [13,28].
Guizhen Yang, Michael Kifer
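
For readers unfamiliar with the two constructs, the following plain-Python triples show an anonymous resource (a blank node, _:b1) and a meta statement expressed via standard RDF reification. This only illustrates the constructs themselves; the paper's contribution is giving them a precise semantics by translation into F-logic.

    # The two RDF constructs at issue, shown as plain triples. An anonymous
    # resource is a blank node (_:b1) with no URI; a meta statement is a
    # statement about a statement, expressed via standard RDF reification.

    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

    triples = [
        # Anonymous resource: someone (unnamed) authored the document.
        ("doc1", "author", "_:b1"),
        ("_:b1", "name", '"J. Doe"'),

        # Meta statement: reify ("doc1", "author", _:b1) as _:stmt,
        # then assert who claims it.
        ("_:stmt", RDF + "type", RDF + "Statement"),
        ("_:stmt", RDF + "subject", "doc1"),
        ("_:stmt", RDF + "predicate", "author"),
        ("_:stmt", RDF + "object", "_:b1"),
        ("_:stmt", "claimedBy", "catalog_service"),
    ]

    # Query: find all triples whose subject is a reified statement.
    meta = [t for t in triples
            if any(s == t[0] and p == RDF + "type" and o == RDF + "Statement"
                   for (s, p, o) in triples)]
    print(len(meta))  # 5 triples involve the reified statement _:stmt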
IF-Map: An Ontology-Mapping Method Based on Information-Flow Theory
Abstract
To address the need to share knowledge within and across organisational boundaries, researchers in both academia and industry have, over the last decade, advocated the use of ontologies as a means of providing a shared understanding of common domains. But the generalised use of large distributed environments such as the World Wide Web brought a proliferation of many different ontologies, even for the same or similar domains, setting forth a new need for sharing: that of sharing ontologies. Moreover, if visions such as the Semantic Web are ever to become a reality, it will be necessary to provide as much automated support as possible for the task of mapping different ontologies. Although many efforts in ontology mapping have already been carried out, we have noticed that few of them rest on strong theoretical grounds and principled methodologies; furthermore, many are based only on syntactic criteria. In this paper we present a theory and method for automated ontology mapping based on channel theory, a mathematical theory of semantic information flow. We successfully applied our method to a large-scale scenario involving the mapping of several different ontologies of computer-science departments from various UK universities.
Yannis Kalfoglou, Marco Schorlemmer
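
A drastically simplified sketch of the information-flow intuition: treat each ontology as a classification of a shared set of instances into concepts, and propose a mapping between concepts that classify those instances identically. IF-Map itself builds mappings from the formal machinery of channel theory (infomorphisms between classifications); the code below only conveys the instance-based intuition, with invented data.

    # Toy information-flow-style ontology mapping: two ontologies are
    # viewed as classifications of a shared set of instances into
    # concepts. Concepts are mapped when they classify the instances
    # identically.

    onto_a = {  # concept -> set of instances it classifies
        "Lecturer":  {"alice", "bob"},
        "Professor": {"carol"},
    }
    onto_b = {
        "AcademicStaff": {"alice", "bob", "carol"},
        "TeachingStaff": {"alice", "bob"},
    }

    def map_concepts(a, b):
        """Propose concept pairs whose instance sets coincide."""
        return [(ca, cb) for ca, ia in a.items()
                         for cb, ib in b.items() if ia == ib]

    print(map_concepts(onto_a, onto_b))
    # -> [('Lecturer', 'TeachingStaff')]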
OntoEdit: Multifaceted Inferencing for Ontology Engineering
Abstract
Ontologies now play an important role in many knowledge-intensive applications, for which they provide a source of precisely defined terms. The terms are used for concise communication across people and applications. Tools such as ontology editors facilitate the creation and maintenance of ontologies. OntoEdit is an ontology editor that has been developed with five main objectives in mind:
1. Ease of use.
2. Methodology-guided development of ontologies.
3. Ontology development with the help of inferencing.
4. Development of ontology axioms.
5. Extensibility through a plug-in structure.
York Sure, Juergen Angele, Steffen Staab
Distributed Description Logics: Assimilating Information from Peer Sources
Abstract
Due to the availability on the Internet of a wide variety of sources of information on related topics, the problem of providing seamless, integrated access to such sources has become (again) a major research challenge. Although this problem has been studied for several decades, there is a need for a more refined approach in those cases where the original sources maintain their own independent view of the world. In particular, we investigate those situations where there may not be a simple one-to-one mapping between the individuals in the domains of the various Information Sources.
Since Description Logics have already served successfully in information integration and as ontology languages, we extend this formalism with the ability to handle complex mappings between domains, through the use of so-called “bridge rules”. Among other things, we investigate how bridge rules can be exploited to deduce new information, especially subsumption relationships between concepts in local information sources.
Alex Borgida, Luciano Serafini
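
A small sketch of how bridge rules can propagate subsumption across ontologies, in the spirit of distributed description logics: given an onto-rule linking 1:A to 2:G, an into-rule linking 1:B to 2:H, and a local entailment in ontology 1 that A is subsumed by B, one may conclude in ontology 2 that G is subsumed by H. The concept names are invented, and this is only the propagation step, not a DL reasoner.

    # Toy subsumption propagation across ontologies via bridge rules.
    # An onto-rule (1:A ->= 2:G) says G is covered by the image of A;
    # an into-rule (1:B ->< 2:H) says the image of B falls inside H.
    # If A is subsumed by B locally in ontology 1, then G is subsumed
    # by H in ontology 2.

    local_subsumptions_1 = {("Sedan", "Car"), ("Car", "Vehicle")}
    onto_rules = [("Sedan", "Berline")]     # 1:Sedan   ->=  2:Berline
    into_rules = [("Vehicle", "Vehicule")]  # 1:Vehicle ->< 2:Vehicule

    def closure(pairs):
        """Transitive closure of a subsumption relation."""
        result, changed = set(pairs), True
        while changed:
            changed = False
            for (a, b) in list(result):
                for (c, d) in list(result):
                    if b == c and (a, d) not in result:
                        result.add((a, d))
                        changed = True
        return result

    subs1 = closure(local_subsumptions_1)

    derived_2 = [(g, h) for (a, g) in onto_rules
                        for (b, h) in into_rules
                        if a == b or (a, b) in subs1]
    print(derived_2)  # -> [('Berline', 'Vehicule')]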
On Using Conceptual Data Modeling for Ontology Engineering
Abstract
This paper tackles two main disparities between conceptual data schemes and ontologies, which should be taken into account when (re)using conceptual data modeling techniques for building ontologies. First, conceptual schemes are intended for use during design phases and not at application run-time, while ontologies are typically used and accessed at run-time. To handle this difference, we define a conceptual markup language (ORM-ML) that makes it possible to represent ORM conceptual diagrams in an open, textual syntax, so that ORM schemes can be shared, exchanged, and processed at run-time by autonomous applications. Second, unlike ontologies, which are supposed to hold application-independent domain knowledge, conceptual schemes are developed for the use of particular enterprise applications, i.e. “in-house” usage. Hence, we present an ontology-engineering framework that enables the reuse of conceptual modeling approaches in modeling and representing ontologies. In this approach we prevent application-specific knowledge from entering or being mixed with domain knowledge. Finally, we present DogmaModeler: an ontology-engineering tool that implements the ideas presented in this paper.
Mustafa Jarrar, Jan Demey, Robert Meersman
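
To make the run-time availability point concrete, the sketch below serializes a single ORM binary fact type into a small XML document that an application could exchange and process. The element names are hypothetical; they do not reproduce the actual ORM-ML schema.

    # Toy textual serialization of an ORM binary fact type, in the spirit
    # of ORM-ML: a conceptual diagram becomes a processable document
    # available at application run-time. Element names are hypothetical.

    import xml.etree.ElementTree as ET

    def fact_type(name, role1, obj1, role2, obj2):
        ft = ET.Element("FactType", {"name": name})
        for role, obj in ((role1, obj1), (role2, obj2)):
            r = ET.SubElement(ft, "Role", {"name": role})
            ET.SubElement(r, "ObjectType", {"name": obj})
        return ft

    doc = ET.Element("ORMSchema")
    doc.append(fact_type("WorksFor", "employs", "Company",
                         "works_for", "Person"))
    print(ET.tostring(doc, encoding="unicode"))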
The DaQuinCIS Broker: Querying Data and Their Quality in Cooperative Information Systems
Abstract
In cooperative information systems, the quality of data exchanged and provided by different data sources is extremely important. A lack of attention to data quality can allow low-quality data to spread across the cooperative system. At the same time, quality can be improved by comparing data, correcting them, and disseminating high-quality data. In this paper, a framework and a related architecture for managing data quality in cooperative information systems are proposed, as developed in the context of the DaQuinCIS research project. The focus then turns (i) to an XML-based model for data and quality data, and (ii) to the design of a broker that selects the best available data from different sources; the broker also supports the improvement of data based on feedback to data sources. The broker is the basic component of the DaQuinCIS architecture.
Massimo Mecella, Monica Scannapieco, Antonino Virgillito, Roberto Baldoni, Tiziana Catarci, Carlo Batini
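
A toy version of the broker's selection step: candidate values for the same data item arrive from several cooperating sources, each tagged with quality scores; the broker picks the best candidate and records which sources should receive improvement feedback. The weighted-sum scoring is an illustrative assumption, not the DaQuinCIS algorithm.

    # Toy quality broker: select the best candidate value for a data item
    # from several sources and note which sources to notify for
    # improvement. The weights and dimensions are illustrative.

    WEIGHTS = {"accuracy": 0.5, "completeness": 0.3, "currency": 0.2}

    def score(quality):
        return sum(WEIGHTS[d] * quality[d] for d in WEIGHTS)

    candidates = {
        "source_A": {"value": "Via Roma 1", "accuracy": 0.9,
                     "completeness": 1.0, "currency": 0.8},
        "source_B": {"value": "V. Roma", "accuracy": 0.6,
                     "completeness": 0.5, "currency": 0.9},
    }

    def broker_select(cands):
        best = max(cands, key=lambda s: score(cands[s]))
        feedback = [s for s in cands if s != best]  # sources to notify
        return cands[best]["value"], best, feedback

    print(broker_select(candidates))
    # -> ('Via Roma 1', 'source_A', ['source_B'])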
Backmatter
Metadata

Title
Journal on Data Semantics I
Edited by
Stefano Spaccapietra
Sal March
Karl Aberer
Copyright year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-39733-5
Print ISBN
978-3-540-20407-7
DOI
https://doi.org/10.1007/b14200