2006 | Book

The Semantic Web: Research and Applications

3rd European Semantic Web Conference, ESWC 2006, Budva, Montenegro, June 11-14, 2006, Proceedings

Edited by: York Sure, John Domingue

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Talks

Where Does It Break? or: Why the Semantic Web Is Not Just “Research as Usual”

Work on the Semantic Web is all too often phrased as a technological challenge: how to improve the precision of search engines, how to personalise web-sites, how to integrate weakly-structured data-sources, etc. This suggests that we will be able to realise the Semantic Web by merely applying (and at most refining) the results that are already available from many branches of Computer Science. I will argue in this talk that instead of (just) a technological challenge, the Semantic Web forces us to rethink the foundations of many subfields of Computer Science. This is certainly true for my own field (Knowledge Representation), where the challenge of the Semantic Web continues to break many often silently held and shared assumptions underlying decades of research. With some caution, I claim that this is also true for other fields, such as Machine Learning, Natural Language Processing, Databases, and others. For each of these fields, I will try to identify silently held assumptions which are no longer true on the Semantic Web, prompting a radical rethink of many past results from these fields.

Frank van Harmelen
Toward Large-Scale Shallow Semantics for Higher-Quality NLP

Building on the successes of the past decade’s work on statistical methods, there are signs that continued quality improvement for QA, summarization, information extraction, and possibly even machine translation requires more elaborate and possibly even (shallow) semantic representations of text meaning. But how can one define a large-scale shallow semantic representation system and contents adequate for NLP applications, and how can one create the corpus of shallow semantic representation structures that would be required to train machine learning algorithms? This talk addresses the components required (including a symbol definition ontology and a corpus of (shallow) meaning representations) and the resources and methods one needs to build them (including existing ontologies, human annotation procedures, and a verification methodology). To illustrate these aspects, several existing and recent projects and applicable resources are described, and a research programme for the near future is outlined. Should NLP be willing to face this challenge, we may in the not-too-distant future find ourselves working with a whole new order of knowledge, namely (shallow) semantics, and doing so in increasing collaboration (after a 40-year separation) with specialists from the Knowledge Representation and reasoning community.

Eduard Hovy
Usability and the Semantic Web

In addition to its technical implications, the semantic web vision gives rise to some challenges concerning usability and interface design. What difficulties can arise when persons with little or no relevant training try to (a) formulate knowledge (e.g., with ontology editors or annotation tools) in such a way that it can be exploited by semantic web technologies; or (b) leverage semantic information while querying or browsing? What strategies have been applied in an effort to overcome these difficulties, and what are the main open issues that remain? This talk will address these questions, referring to examples and results from a variety of research efforts, including the project SemIPort, which concerns semantic methods and tools for information portals, and Halo 2, in which tools have been developed and evaluated that enable scientists to formalize and query college-level scientific knowledge.

Anthony Jameson

Ontology Alignment

Matching Hierarchical Classifications with Attributes

Hierarchical Classifications with Attributes are tree-like structures used for organizing/classifying data. Due to the exponential growth and distribution of information across the network, and to the fact that such information is usually clustered by means of structures of this kind, there is nowadays an increasing interest in finding techniques to define mappings among such structures. In this paper, we propose a new algorithm for discovering mappings across hierarchical classifications, which treats the matching problem as a problem of deducing relations between sets of logical terms representing the meaning of hierarchical classification nodes.

L. Serafini, S. Zanobini, S. Sceffer, P. Bouquet
An Iterative Algorithm for Ontology Mapping Capable of Using Training Data

We present a new iterative algorithm for ontology mapping where we combine standard string distance metrics with a structural similarity measure that is based on a vector representation. After all pairwise similarities between concepts have been calculated we apply well-known graph algorithms to obtain an optimal matching. Our algorithm is also capable of using existing mappings to a third ontology as training data to improve accuracy. We compare the performance of our algorithm with the performance of other alignment algorithms and show that our algorithm can compete well against the current state-of-the-art.

Andreas Heß
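The first stage of such a mapping algorithm, scoring concept pairs by name similarity and then extracting a one-to-one matching, can be sketched as follows. This is a minimal illustration only: it uses Python's stdlib difflib as a stand-in for the paper's string distance metrics, and a greedy extraction instead of the optimal graph matching the paper applies; the function names are hypothetical.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Normalized string similarity in [0, 1]; a stand-in for the
    # combined string distance metrics used in the paper.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_concepts(onto1, onto2, threshold=0.6):
    # Score every concept-name pair, then greedily pick the best
    # one-to-one matching above the threshold.  (The paper instead
    # applies well-known graph algorithms for an optimal assignment.)
    pairs = sorted(
        ((name_similarity(c1, c2), c1, c2) for c1 in onto1 for c2 in onto2),
        reverse=True,
    )
    used1, used2, mapping = set(), set(), {}
    for score, c1, c2 in pairs:
        if score < threshold:
            break
        if c1 not in used1 and c2 not in used2:
            mapping[c1] = c2
            used1.add(c1)
            used2.add(c2)
    return mapping
```

Calling `match_concepts(["Person", "Publication"], ["People", "Paper", "Publications"])` pairs "Publication" with "Publications"; the structural similarity measure and the use of mappings to a third ontology as training data are beyond this sketch.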
Community-Driven Ontology Matching

We extend the notion of ontology matching to community-driven ontology matching. Primarily, the idea is to enable Web communities to establish and reuse ontology mappings in order to achieve, within those communities, an adequate and timely domain representation, facilitated knowledge exchange, etc. Secondarily, the matching community is provided with a new practice: public alignment reuse. Specifically, we present an approach to the construction of a community-driven ontology matching system and discuss its implementation. An analysis of the system usage indicates that our strategy is promising. In particular, the results obtained justify the feasibility and usefulness of community-driven acquisition and sharing of ontology mappings.

Anna V. Zhdanova, Pavel Shvaiko
Reconciling Concepts and Relations in Heterogeneous Ontologies

In the extensive usage of ontologies envisaged by the Semantic Web there is a compelling need for expressing mappings between the components of heterogeneous ontologies. These mappings take many different forms and involve the different components of ontologies. State-of-the-art languages for ontology mapping make it possible to express semantic relations between homogeneous components of different ontologies: they allow mapping concepts into concepts, individuals into individuals, and properties into properties. Many real cases, however, highlight the necessity of establishing semantic relations between heterogeneous components, for example mapping a concept into a relation or vice versa. To support the interoperability of ontologies we therefore need to enrich mapping languages with constructs for the representation of heterogeneous mappings. In this paper, we propose an extension of Distributed Description Logics (DDL) that allows for the representation of mappings between concepts and relations. We provide a semantics for the proposed language and show its main logical properties.

Chiara Ghidini, Luciano Serafini
Empirical Merging of Ontologies — A Proposal of Universal Uncertainty Representation Framework

The significance of uncertainty representation has become obvious in the Semantic Web community recently. This paper presents our research on uncertainty handling in automatically created ontologies. A new framework for uncertain information processing is proposed. The research is related to OLE (Ontology LEarning) — a project aimed at bottom-up generation and merging of domain-specific ontologies. Formal systems that underlie the uncertainty representation are briefly introduced. We then discuss the universal internal format of uncertain conceptual structures in OLE and offer a utilisation example. The proposed format serves as a basis for empirical improvement of initial knowledge acquisition methods as well as for general explicit inference tasks.

Vít Nováček, Pavel Smrž

Ontology Engineering

Encoding Classifications into Lightweight Ontologies

Classifications have been used for centuries with the goal of cataloguing and searching large sets of objects. In the early days it was mainly books; lately it has also become Web pages, pictures and any kind of electronic information item. Classifications describe their contents using natural language labels, which has proved very effective in manual classification. However, natural language labels show their limitations when one tries to automate the process, as they make it very hard to reason about classifications and their contents. In this paper we introduce the novel notion of Formal Classification, a graph structure whose labels are written in a propositional concept language. Formal Classifications turn out to be a form of lightweight ontology. This, in turn, allows us to reason about them, to associate to each node a normal-form formula which univocally describes its contents, and to reduce document classification to reasoning about subsumption.

Fausto Giunchiglia, Maurizio Marchese, Ilya Zaihrayeu
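The core idea of reducing classification to subsumption can be illustrated with a deliberately toy sketch, in which a label is approximated by the conjunction (set) of its words and subsumption becomes set inclusion. This is only an illustration of the reduction, not the paper's method, which uses proper linguistic analysis and a propositional concept language with logical reasoning; the function names are hypothetical.

```python
def label_to_concept(label):
    # Toy "propositional concept": a label becomes the conjunction
    # (set) of its lower-cased words.  The real approach translates
    # labels into formulas of a propositional concept language.
    return frozenset(label.lower().split())

def subsumes(general, specific):
    # A conjunction of atoms subsumes another iff its atoms are a
    # subset of the other's (every constraint is also present there).
    return general <= specific
```

Under this reading, the node labelled "music" subsumes the node labelled "classical music", so a document classified under the latter also belongs under the former.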
A Method to Convert Thesauri to SKOS

Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications and to ensure the quality and utility of the conversion a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by W3C’s Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.

Mark van Assem, Véronique Malaisé, Alistair Miles, Guus Schreiber
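The SKOS side of such a conversion can be sketched with a minimal generator that maps the classic thesaurus relations BT (broader term) and RT (related term) onto skos:broader and skos:related. This is an assumption-laden sketch, not the authors' method: the example base URI, input dictionary shape, and function name are all hypothetical, and the paper's structured method covers far more (analysis, mapping decisions, quality checks).

```python
def thesaurus_to_skos(terms, base="http://example.org/thesaurus/"):
    # terms: {term_id: {"label": str, "broader": [ids], "related": [ids]}}
    # Returns Turtle using the SKOS vocabulary: each term becomes a
    # skos:Concept with a prefLabel; BT -> skos:broader, RT -> skos:related.
    out = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."]
    for tid, t in terms.items():
        props = [f'skos:prefLabel "{t["label"]}"@en']
        props += [f"skos:broader <{base}{b}>" for b in t.get("broader", [])]
        props += [f"skos:related <{base}{r}>" for r in t.get("related", [])]
        out.append(f"<{base}{tid}> a skos:Concept ;\n    "
                   + " ;\n    ".join(props) + " .")
    return "\n".join(out)
```

For instance, a "Cats BT Animals" entry yields a skos:Concept for cats with a skos:broader link to the animals concept.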
Ontology Engineering Revisited: An Iterative Case Study

Existing mature ontology engineering approaches are based on some basic assumptions that are often violated in practice, in particular on the Semantic Web. Ontologies often need to be built in a decentralized way, ontologies must be given to a community in a way such that individuals have partial autonomy over them, and ontologies have a life cycle that involves an iteration back and forth between construction/modification and use. While recently there have been some initial proposals to consider these issues, they lack the appropriate rigor of mature approaches, i.e., they lack the depth of methodological description that makes a methodology usable, and they lack a proof of concept by a long-lived case study. In this paper, we revisit mature and new ontology engineering methodologies. We provide an elaborate methodology that takes decentralization, partial autonomy and iteration into account, and we demonstrate its proof of concept in a real-world cross-organizational case study.

Christoph Tempich, H. Sofia Pinto, Steffen Staab

Ontology Evaluation

Towards a Complete OWL Ontology Benchmark

Aiming to build a complete benchmark for better evaluation of existing ontology systems, we extend the well-known Lehigh University Benchmark in terms of inference and scalability testing. The extended benchmark, named University Ontology Benchmark (UOBM), includes both OWL Lite and OWL DL ontologies covering a complete set of OWL Lite and DL constructs, respectively. We also add necessary properties to construct effective instance links and improve instance generation methods to make the scalability testing more convincing. Several well-known ontology systems are evaluated on the extended benchmark and detailed discussions on both existing ontology systems and future benchmark development are presented.

Li Ma, Yang Yang, Zhaoming Qiu, Guotong Xie, Yue Pan, Shengping Liu
Modelling Ontology Evaluation and Validation

We present a comprehensive approach to ontology evaluation and validation, which have become a crucial problem for the development of semantic technologies. Existing evaluation methods are integrated into one single framework by means of a formal model. This model consists, firstly, of a meta-ontology called O2, which characterises ontologies as semiotic objects. Based on O2 and an analysis of existing methodologies, we identify three main types of measures for evaluation: structural measures, which are typical of ontologies represented as graphs; functional measures, which are related to the intended use of an ontology and of its components; and usability-profiling measures, which depend on the level of annotation of the considered ontology. The meta-ontology is then complemented with an ontology of ontology validation called oQual, which provides the means to devise the best set of criteria for choosing an ontology over others in the context of a given project. Finally, we provide a small example of how to apply oQual-derived criteria to a validation case.

Aldo Gangemi, Carola Catenacci, Massimiliano Ciaramita, Jos Lehmann
Benchmark Suites for Improving the RDF(S) Importers and Exporters of Ontology Development Tools

Interoperability is the ability of two or more systems to interchange information and to use the information that has been interchanged. Nowadays, interoperability between ontology development tools is low. Therefore, to assess and improve this interoperability, we propose to perform a benchmarking of the interoperability of ontology development tools using RDF(S) as the interchange language. This paper presents, on the one hand, the interoperability benchmarking that is currently in progress in Knowledge Web and, on the other, the benchmark suites defined and used in this benchmarking.

Raúl García-Castro, Asunción Gómez-Pérez

Ontology Evolution

Repairing Unsatisfiable Concepts in OWL Ontologies

In this paper, we investigate the problem of repairing unsatisfiable concepts in an OWL ontology in detail, keeping in mind the user perspective as much as possible. We focus on various aspects of the repair process – improving the explanation support to help the user understand the cause of error better, exploring various strategies to rank erroneous axioms (with motivating use cases for each strategy), automatically generating repair plans that can be customized easily, and suggesting appropriate axiom edits where possible to the user. Based on the techniques described, we present a preliminary version of an interactive ontology repair tool and demonstrate its applicability in practice.

Aditya Kalyanpur, Bijan Parsia, Evren Sirin, Bernardo Cuenca-Grau
Winnowing Ontologies Based on Application Use

The requirements of specific applications and services are often overestimated when ontologies are reused or built. This sometimes results in many ontologies being too large for their intended purposes. It is not uncommon that when applications and services are deployed over an ontology, only a few parts of the ontology are queried and used. Identifying which parts of an ontology are being used could be helpful to winnow the ontology, i.e., simplify or shrink the ontology to a smaller, more fit-for-purpose size. Some approaches to handle this problem have already been suggested in the literature. However, none of that work showed how ontology-based applications can be used in the ontology-resizing process, or how they might be affected by it. This paper presents a study on the use of the AKT Reference Ontology by a number of applications and services, and investigates the possibility of relying on this usage information to winnow that ontology.

Harith Alani, Stephen Harris, Ben O’Neil
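One simple winnowing policy implied by usage data, keep every class that applications actually reference plus its ancestors so the hierarchy stays connected, can be sketched as follows. This is a minimal sketch under assumptions (a single-parent hierarchy encoded as a dict, a hypothetical `winnow` function), not the study's actual procedure.

```python
def winnow(parents, used):
    # parents: {cls: parent_or_None} encoding a single-parent class
    # hierarchy; used: classes referenced by applications/queries.
    # Keep each used class and walk up to the root, so the winnowed
    # ontology remains a connected hierarchy.
    keep = set()
    for cls in used:
        while cls is not None and cls not in keep:
            keep.add(cls)
            cls = parents.get(cls)
    return keep
```

For example, if only "Person" is ever queried in a hierarchy Person < Agent < Thing, the winnowed ontology retains exactly those three classes.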
Resolving Inconsistencies in Evolving Ontologies

Changing a consistent ontology may turn the ontology into an inconsistent state. It is the task of an approach supporting ontology evolution to ensure an ontology evolves from one consistent state into another consistent state. In this paper, we focus on checking consistency of OWL DL ontologies. While existing reasoners allow detecting inconsistencies, determining why the ontology is inconsistent and offering solutions for these inconsistencies is far from trivial. We therefore propose an algorithm to select the axioms from an ontology causing the inconsistency, as well as a set of rules that ontology engineers can use to resolve the detected inconsistency.

Peter Plessers, Olga De Troyer

Ontology Learning

Automatic Extraction of Hierarchical Relations from Text

Automatic extraction of semantic relationships between entity instances in an ontology is useful for attaching richer semantic metadata to documents. In this paper we propose an SVM-based approach to hierarchical relation extraction, using features derived automatically from a number of GATE-based open-source language processing tools. In comparison to previous work, we use several new features including part-of-speech tags, entity subtype, entity class, entity role, semantic representation of sentences and WordNet synonym sets. The impact of the features on the performance is investigated, as is the impact of the relation classification hierarchy. The results show there is a trade-off among these factors for relation extraction and that features containing more information, such as the semantic ones, can improve the performance of the ontological relation extraction task.

Ting Wang, Yaoyong Li, Kalina Bontcheva, Hamish Cunningham, Ji Wang
An Infrastructure for Acquiring High Quality Semantic Metadata

Because metadata that underlies semantic web applications is gathered from distributed and heterogeneous data sources, it is important to ensure its quality (i.e., reduce duplicates, spelling errors, ambiguities). However, current infrastructures that acquire and integrate semantic data have only marginally addressed the issue of metadata quality. In this paper we present our metadata acquisition infrastructure, ASDI, which pays special attention to ensuring that high quality metadata is derived. Central to the architecture of ASDI is a verification engine that relies on several semantic web tools to check the quality of the derived data. We tested our prototype in the context of building a semantic web portal for our lab, KMi. An experimental evaluation comparing the automatically extracted data against manual annotations indicates that the verification engine enhances the quality of the extracted semantic metadata.

Yuangui Lei, Marta Sabou, Vanessa Lopez, Jianhan Zhu, Victoria Uren, Enrico Motta
Extracting Instances of Relations from Web Documents Using Redundancy

In this document we describe our approach to a specific subtask of ontology population, the extraction of instances of relations. We present a generic approach with which we are able to extract information from documents on the Web. The method exploits redundancy of information to compensate for loss of precision caused by the use of domain independent extraction methods. In this paper, we present the general approach and describe our implementation for a specific relation instance extraction task in the art domain. For this task, we describe experiments, discuss evaluation measures and present the results.

Viktor de Boer, Maarten van Someren, Bob J. Wielinga

Rules and Reasoning

Toward Multi-viewpoint Reasoning with OWL Ontologies

Despite their advertisement as task-independent representations, the reuse of ontologies in different contexts is difficult. An explanation for this is that when developing an ontology, a choice is made with respect to what aspects of the world are relevant. In this paper we deal with the problem of reusing ontologies in a context where only parts of the originally encoded aspects are relevant. We propose the notion of a viewpoint on an ontology in terms of a subset of the complete representation vocabulary that is relevant in a certain context. We present an approach of implementing different viewpoints in terms of an approximate subsumption operator that only cares about a subset of the vocabulary. We discuss the formal properties of subsumption with respect to a subset of the vocabulary and show how these properties can be used to efficiently compute different viewpoints on the basis of maximal sub-vocabularies that support subsumption between concept pairs.

Heiner Stuckenschmidt
Effective Integration of Declarative Rules with External Evaluations for Semantic-Web Reasoning

Towards providing a suitable tool for building the Rule Layer of the Semantic Web, hex-programs have been introduced as a special kind of logic programs featuring capabilities for higher-order reasoning, interfacing with external sources of computation, and default negation. Their semantics is based on the notion of answer sets, providing transparent interoperability with the Ontology Layer of the Semantic Web and full declarativity. In this paper, we identify classes of hex-programs feasible for implementation yet keeping the desirable advantages of the full language. A general method for combining and evaluating sub-programs belonging to arbitrary classes is introduced, thus enlarging the variety of programs whose execution is practicable. Implementation activity on the current prototype is also reported.

Thomas Eiter, Giovambattista Ianni, Roman Schindlauer, Hans Tompits
Variable-Strength Conditional Preferences for Ranking Objects in Ontologies

We introduce conditional preference bases as a means for ranking objects in ontologies. Conditional preference bases consist of a description logic knowledge base and a finite set of variable-strength conditional preferences. They are inspired by Goldszmidt and Pearl’s approach to default reasoning from conditional knowledge bases in System Z+. We define a notion of consistency for conditional preference bases, and show how consistent conditional preference bases can be used for ranking objects in ontologies. We also provide algorithms for computing the rankings. To give evidence of the usefulness of this approach in practice, we describe an application in the area of literature search.

Thomas Lukasiewicz, Jörg Schellhase
A Metamodel and UML Profile for Rule-Extended OWL DL Ontologies

In this paper we present a MOF compliant metamodel and UML profile for the Semantic Web Rule Language (SWRL) that integrates with our previous work on a metamodel and UML profile for OWL DL. Based on this metamodel and profile, UML tools can be used for visual modeling of rule-extended ontologies.

Saartje Brockmans, Peter Haase, Pascal Hitzler, Rudi Studer
Visual Ontology Cleaning: Cognitive Principles and Applicability

In this paper we connect two research areas: Qualitative Spatial Reasoning and visual reasoning on ontologies. We discuss the logical limitations of the mereotopological approach to visual ontology cleaning, from the point of view of its formal support. The analysis is based on three different spatial interpretations of the concepts of an ontology.

Joaquín Borrego-Díaz, Antonia M. Chávez-González
Rules with Contextually Scoped Negation

Knowledge representation formalisms used on the Semantic Web adhere to a strict open world assumption. Therefore, nonmonotonic reasoning techniques are often viewed with scepticism. Especially negation as failure, which intuitively adopts a closed world view, is often claimed to be unsuitable for the Web, where knowledge is notoriously incomplete. Nonetheless, it was suggested in the ongoing discussions around rules extensions for languages like RDF(S) or OWL to allow at least restricted forms of negation as failure, as long as negation has an explicitly defined, finite scope. Yet clear definitions of such “scoped negation” as well as formal semantics thereof are missing. We propose logic programs with contexts and scoped negation and discuss two possible semantics with desirable properties. We also argue that this class of logic programs can be viewed as a rule extension to a subset of RDF(S).

Axel Polleres, Cristina Feier, Andreas Harth

Searching and Querying

Beagle++: Semantically Enhanced Searching and Ranking on the Desktop

Existing desktop search applications, trying to keep up with the rapidly increasing storage capacities of our hard disks, offer an incomplete solution for information retrieval. In this paper we describe our Beagle++ desktop search prototype, which enhances conventional full-text search with semantics and ranking modules. This prototype extracts and stores activity-based metadata explicitly as RDF annotations. Our main contributions are extensions we integrate into the Beagle desktop search infrastructure to exploit this additional contextual information for searching and ranking the resources on the desktop. Contextual information plus ranking brings desktop search much closer to the performance of web search engines. Initially disconnected sets of resources on the desktop are connected by our contextual metadata, and PageRank-derived algorithms allow us to rank these resources appropriately. First experiments investigating the precision and recall quality of our search prototype show encouraging improvements over standard search.

Paul-Alexandru Chirita, Stefania Costache, Wolfgang Nejdl, Raluca Paiu
RDFBroker: A Signature-Based High-Performance RDF Store

Many approaches for RDF stores exist, most of them using very straightforward techniques to store triples in a database, or mapping RDF Schema classes to database tables. In this paper we propose an RDF store that uses a natural mapping of RDF resources to database tables that does not rely on RDF Schema, but constructs a schema based on the occurring signatures, where a signature is the set of properties used on a resource. This technique can therefore be used for arbitrary RDF data, i.e., RDF Schema or any other schema/ontology language on top of RDF is not required. Our approach can be used for both in-memory and on-disk relational database-based RDF store implementations.

A first prototype has been implemented and already shows a significant performance increase compared to other freely available (in-memory) RDF stores.

Michael Sintek, Malte Kiesel
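The signature idea can be sketched in a few lines: compute each resource's property set and group resources with identical signatures into one "table". This sketch simplifies real RDF (it assumes single-valued properties and in-memory dicts rather than database tables) and the function name is hypothetical, but it shows how a schema emerges from the data alone, without RDF Schema.

```python
from collections import defaultdict

def signature_tables(triples):
    # triples: iterable of (subject, property, object).
    # The signature of a resource is the set of properties used on it;
    # resources sharing a signature share a table (one row per resource).
    # Simplification: each property is assumed single-valued here.
    rows = defaultdict(dict)           # subject -> {property: object}
    for s, p, o in triples:
        rows[s][p] = o
    tables = defaultdict(dict)         # signature -> {subject: row}
    for s, row in rows.items():
        tables[frozenset(row)][s] = row
    return tables
```

Two resources described with exactly {type, name} end up as two rows of the same table, while a resource with only {type} gets its own narrower table.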
Towards Distributed Information Retrieval in the Semantic Web: Query Reformulation Using the oMAP Framework

This paper introduces a general methodology for performing distributed search in the Semantic Web. We propose to define this task as a three-step process, namely resource selection, query reformulation/ontology alignment and rank aggregation/data fusion. For the second problem, we have implemented oMAP, a formal framework for automatically aligning OWL ontologies. In oMAP, different components are combined for finding suitable mapping candidates (together with their weights), and the set of rules with maximum matching probability is selected. Among these components, traditional terminology-based classifiers, machine learning-based classifiers and a new classifier using the structure and the semantics of the OWL ontologies are proposed. oMAP has been evaluated on international test sets.

Umberto Straccia, Raphaël Troncy
PowerAqua: Fishing the Semantic Web

The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems which can make use of such semantic information to provide precise, formally derived answers to questions. At the same time the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe the design of a QA system, PowerAqua, designed to exploit semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query, translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.

Vanessa Lopez, Enrico Motta, Victoria Uren
Information Retrieval in Folksonomies: Search and Ranking

Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. The reason for their immediate success is the fact that no specific skills are needed for participating. At the moment, however, the information retrieval support is limited. We present a formal model and a new search algorithm for folksonomies, called FolkRank, that exploits the structure of the folksonomy. The proposed algorithm is also applied to find communities within the folksonomy and is used to structure search results. All findings are demonstrated on a large-scale dataset.

Andreas Hotho, Robert Jäschke, Christoph Schmitz, Gerd Stumme
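The flavour of such structure-exploiting ranking can be sketched as a PageRank-style weight spreading over the graph induced by (user, tag, resource) assignments. This is a much-simplified illustration under stated assumptions (an undirected co-occurrence graph, a single biased run, a hypothetical function name); FolkRank itself works on the full tripartite hypergraph and derives a topic-specific rank from the difference between a preference-biased and an unbiased run.

```python
from collections import defaultdict

def folksonomy_rank(assignments, preference=None, d=0.7, iters=50):
    # assignments: iterable of (user, tag, resource) triples.
    # Build an undirected co-occurrence graph over users, tags and
    # resources, then iterate w <- d*A*w + (1-d)*p with an optional
    # preference vector p biasing the ranking toward chosen nodes.
    adj = defaultdict(set)
    for u, t, r in assignments:
        for a, b in ((u, t), (t, r), (u, r)):
            adj[a].add(b)
            adj[b].add(a)
    nodes = list(adj)
    w = {n: 1.0 / len(nodes) for n in nodes}
    if preference:
        total = sum(preference.values()) or 1.0
        p = {n: preference.get(n, 0.0) / total for n in nodes}
    else:
        p = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        w = {n: d * sum(w[m] / len(adj[m]) for m in adj[n]) + (1 - d) * p[n]
             for n in nodes}
    return w
```

A resource tagged by more users and tags accumulates more weight than a sparsely connected one, which is the intuition behind ranking search results by folksonomy structure.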

Semantic Annotation

DEMO – Design Environment for Metadata Ontologies

Efficient knowledge sharing and reuse—a prerequisite for the realization of the Semantic Web vision—is currently impeded by the lack of standards for documenting and annotating ontologies with metadata information. We argue that the availability of metadata is a fundamental dimension of ontology reusability. Metadata information provides a basis for ontology developers to evaluate and adapt existing Semantic Web ontologies in new application settings, and fosters the development of support tools such as ontology repositories. However, in order for the metadata information to represent real added value to ontology users, it is equally important to achieve a common agreement on the terms used to describe ontologies, and to provide an appropriate technology infrastructure in the form of tools able to create, manage and distribute this information. In this paper we present DEMO, a framework for the development and deployment of ontology metadata. Besides OMV, the proposed core vocabulary for ontology metadata, the framework comprises an inventory of methods to collaboratively extend OMV in accordance with the requirements of an emerging community of industrial and academic users, and tools for metadata management.

Jens Hartmann, Elena Paslaru Bontas, Raúl Palma, Asunción Gómez-Pérez
An Environment for Semi-automatic Annotation of Ontological Knowledge with Linguistic Content

Both the multilingual aspects which characterize the (Semantic) Web and the demand for more easy-to-share forms of knowledge representation, being equally accessible by humans and machines, push the need for a more “linguistically aware” approach to ontology development. Ontologies should thus express knowledge by associating formal content with explicative linguistic expressions, possibly in different languages. By adopting such an approach, the intended meaning of concepts and roles becomes more clearly expressed for humans, thus facilitating (among others) reuse of existing knowledge, while automatic content mediation between autonomous information sources gets far more chances than otherwise. In past work we introduced OntoLing [7], a Protégé plug-in offering a modular and scalable framework for performing manual annotation of ontological data with information from different, heterogeneous linguistic resources. We present now an improved version of OntoLing, which supports the user with automatic suggestions for enriching ontologies with linguistic content. Different specific linguistic enrichment problems are discussed and we show how they have been tackled considering both algorithmic aspects and profiling of user interaction inside the OntoLing framework.

Maria Teresa Pazienza, Armando Stellato
Turning the Mouse into a Semantic Device: The seMouse Experience

The desktop is not foreign to the semantic wave that is percolating through broad areas of computing. This work reports on our experience of turning the mouse into a semantic device. The mouse is configured with an ontology, and from then on, this ontology is used to annotate the distinct desktop resources. The ontology plays the role of a clipboard which can be transparently accessed by file editors to either export (i.e. annotation) or import (i.e. authoring) metadata. Traditional desktop operations are re-interpreted and framed by this ontology: copy&paste becomes annotation&authoring, and folder digging becomes property traversal. Being editor-independent, the mouse brings portability and maintainability in the face of the myriad of formats and editors which characterize current desktops. This paper reports on the functionality, implementation, and user evaluation of this “semantic mouse”.

Jon Iturrioz, Sergio F. Anzuola, Oscar Díaz
Managing Information Quality in e-Science Using Semantic Web Technology

We outline a framework for managing information quality (IQ) in e-Science, using ontologies, semantic annotation of resources, and data bindings. Scientists define the quality characteristics that are of importance in their particular domain by extending an OWL DL IQ ontology, which classifies and organises these domain-specific quality characteristics within an overall quality management framework. RDF is used to annotate data resources, with reference to IQ indicators defined in the ontology. Data bindings — again defined in RDF — are used to represent mappings between data elements (e.g. defined in XML Schemas) and the IQ ontology. As a practical illustration of our approach, we present a case study from the domain of proteomics.

Alun Preece, Binling Jin, Edoardo Pignotti, Paolo Missier, Suzanne Embury, David Stead, Al Brown
Annotated RDF

There are numerous extensions of RDF that support temporal reasoning, reasoning about pedigree, reasoning about uncertainty, and so on. In this paper, we present Annotated RDF (or aRDF for short), in which RDF triples are annotated by members of a partially ordered set (with bottom element) that can be selected in any way desired by the user. We present a formal declarative semantics (model theory) for annotated RDF and develop algorithms to check consistency of aRDF theories and to answer queries to aRDF theories. We show that annotated RDF captures versions of all the forms of reasoning mentioned above within a single unified framework. We develop a prototype aRDF implementation and show that our algorithms work very fast indeed – in fact, in just a matter of seconds for theories with over 100,000 nodes.

Octavian Udrea, Diego Reforgiato Recupero, V. S. Subrahmanian
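
The core aRDF idea, RDF triples annotated with elements of a user-chosen partially ordered set, can be sketched in a few lines. The following is an illustrative sketch only; the three-element annotation order and all names are invented here, not the authors' implementation:

```python
from dataclasses import dataclass

# Hypothetical annotation domain: a partially ordered set
# {bot, low, high} with bot <= low <= high (bot is the bottom element).
ORDER = {("bot", "bot"), ("bot", "low"), ("bot", "high"),
         ("low", "low"), ("low", "high"), ("high", "high")}

def leq(a, b):
    """Partial-order comparison on annotation values."""
    return (a, b) in ORDER

@dataclass(frozen=True)
class ATriple:
    """An RDF triple carrying an annotation from the chosen poset."""
    s: str
    p: str
    o: str
    ann: str

def query(store, s, p, o, at_least="bot"):
    """Return triples matching the pattern (None = wildcard) whose
    annotation dominates `at_least` in the partial order."""
    return [t for t in store
            if (s is None or t.s == s)
            and (p is None or t.p == p)
            and (o is None or t.o == o)
            and leq(at_least, t.ann)]

store = [ATriple("alice", "knows", "bob", "high"),
         ATriple("alice", "knows", "carol", "low")]
# Ask only for highly-annotated 'knows' facts about alice:
hits = query(store, "alice", "knows", None, at_least="high")
```

Swapping in a different poset (e.g. time intervals or trust levels) changes the flavour of reasoning without changing the query machinery, which is the unifying point the abstract makes.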
A Multilingual/Multimedia Lexicon Model for Ontologies

Ontology development is mostly directed at the representation of domain knowledge and much less at the representation of textual or image-based symbols for this knowledge, i.e., the multilingual and multimedia lexicon. To allow for automatic multilingual and multimedia knowledge markup, a richer representation of text and image features is needed. At present, such information is mostly missing or represented only in a very impoverished way. In this paper we propose an RDF/S-based lexicon model, which in itself is an ontology that allows for the integrated representation of domain knowledge and corresponding multilingual and multimedia features.

Paul Buitelaar, Michael Sintek, Malte Kiesel

Semantic Web Mining and Personalisation

Semantic Network Analysis of Ontologies

A key argument for modeling knowledge in ontologies is the easy reuse and re-engineering of that knowledge. However, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years, and has resulted in the vibrant research area of Social Network Analysis (SNA). While social network structures currently receive high attention in the Semantic Web community, there are only very few SNA applications, and virtually none for analyzing the structure of ontologies.

We illustrate the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise on the edge between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality, through betweenness centrality, to the more complex eigenvector centrality, we illustrate the insights these measures provide on two ontologies which differ in purpose, scope, and size.

Bettina Hoser, Andreas Hotho, Robert Jäschke, Christoph Schmitz, Gerd Stumme
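
As a flavour of the simplest measure mentioned above, degree centrality on an ontology viewed as a graph can be computed directly. The toy concept graph below is hypothetical and the code is only a sketch, not the authors' tooling:

```python
from collections import defaultdict

# Hypothetical toy ontology: concepts linked by subclass/property edges,
# treated as an undirected graph for the purpose of centrality.
edges = [("Thing", "Agent"), ("Thing", "Document"),
         ("Agent", "Person"), ("Agent", "Organization"),
         ("Person", "Document")]

def degree_centrality(edges):
    """Normalised degree centrality: degree divided by (n - 1)."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(deg)
    return {v: d / (n - 1) for v, d in deg.items()}

# Rank concepts by centrality; the top entry is a candidate "core" concept.
ranking = sorted(degree_centrality(edges).items(),
                 key=lambda kv: -kv[1])
```

Betweenness and eigenvector centrality refine this picture by accounting for shortest paths and for the centrality of a node's neighbours, respectively, which is why the paper walks through all three.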
Content Aggregation on Knowledge Bases Using Graph Clustering

Recently, research projects such as PADLR and SWAP have developed tools like Edutella and Bibster, which are targeted at establishing peer-to-peer knowledge management (P2PKM) systems. In such a system, it is necessary to provide brief semantic descriptions of peers, so that routing algorithms or matchmaking processes can make decisions about which communities peers should belong to, or to which peers a given query should be forwarded.

This paper provides a graph clustering technique on knowledge bases for that purpose. Using this clustering, we can show that our strategy requires up to 58% fewer queries than the baselines to yield full recall in a bibliographic P2PKM scenario.

Christoph Schmitz, Andreas Hotho, Robert Jäschke, Gerd Stumme
Dynamic Assembly of Personalized Learning Content on the Semantic Web

This paper presents an ontology-based approach for automatic decomposition of learning objects (LOs) into reusable content units, and dynamic reassembly of such units into personalized learning content. To test our approach we developed TANGRAM, an integrated learning environment for the domain of Intelligent Information Systems. Relying on a number of ontologies, TANGRAM allows decomposition of LOs into smaller content units, which can be later assembled into new LOs personalized to the user’s domain knowledge, preferences, and learning styles. The focus of the presentation is on the ontologies themselves, in the context of user modeling and personalization. Furthermore, the paper presents the algorithm we apply to dynamically assemble content units into personalized learning content. We also discuss our experiences with dynamic content generation and point out directions for future work.

Jelena Jovanović, Dragan Gašević, Vladan Devedžić
Interactive Ontology-Based User Knowledge Acquisition: A Case Study

On the Semantic Web, personalization technologies are needed to deal with user diversity. Our research aims at maximising the automation of the acquisition of user knowledge, thus providing an effective solution for multi-faceted user modeling. This paper presents an approach to eliciting a user’s conceptualization by engaging the user in an ontology-driven dialog. This is implemented as an OWL-based, domain-independent diagnostic agent. We show the deployment of the agent in a use case for personalized management of learning content, which has been evaluated in three user studies. Currently, the system is being deployed in a cultural heritage domain for personalized recommendation of museum resources.

Lora Aroyo, Ronald Denaux, Vania Dimitrova, Michael Pye

Semantic Web Services

Matching Semantic Service Descriptions with Local Closed-World Reasoning

Semantic Web Services were developed with the goal of automating the integration of business processes on the Web. The main idea is to express the functionality of the services explicitly, using semantic annotations. Such annotations can, for example, be used for service discovery—the task of locating a service capable of fulfilling a business request. In this paper, we present a framework for annotating Web Services using description logics (DLs), a family of knowledge representation formalisms widely used in the Semantic Web. We show how to realise service discovery by matching semantic service descriptions, applying DL inferencing. Building on our previous work, we identify problems that occur in the matchmaking process due to the open-world assumption when handling incomplete service descriptions. We propose to use autoepistemic extensions to DLs (ADLs) to overcome these problems. ADLs allow for non-monotonic reasoning and for querying DL knowledge bases under a local closed-world assumption. We investigate the use of epistemic operators of ADLs in service descriptions, and show how they affect DL inferences in the context of semantic matchmaking.

Stephan Grimm, Boris Motik, Chris Preist
The Web Service Modeling Language WSML: An Overview

The Web Service Modeling Language (WSML) is a language for the specification of different aspects of Semantic Web Services. It provides a formal language for the Web Service Modeling Ontology WSMO which is based on well-known logical formalisms, specifying one coherent language framework for the semantic description of Web Services, starting from the intersection of Datalog and the Description Logic $\mathcal{SHIQ}$. This core language is extended in the directions of Description Logics and Logic Programming in a principled manner with strict layering. WSML distinguishes between conceptual and logical modeling in order to support users who are not familiar with formal logic, while not restricting the expressive power of the language for the expert user. IRIs play a central role in WSML as identifiers. Furthermore, WSML defines XML and RDF serializations for inter-operation over the Semantic Web.

Jos de Bruijn, Holger Lausen, Axel Polleres, Dieter Fensel
On the Semantics of Functional Descriptions of Web Services

Functional descriptions are a central pillar of Semantic Web services. Disregarding details on how to invoke and consume the service, they provide a black-box description for determining the usability of a Web service for some request or usage scenario with respect to the provided functionality. The creation of sophisticated semantic matchmaking techniques, as well as proofs of their correctness, requires clear and unambiguous semantics of functional descriptions. As existing description frameworks like OWL-S and WSMO fall short in this respect, this paper presents so-called Abstract State Spaces as a rich and language-independent model of Web services and the world they act in. This allows giving a precise mathematical definition of the concept of a Web service and of the semantics of functional descriptions. Finally, we demonstrate the benefit of applying such a model by means of a concrete use case: the semantic analysis of functional descriptions, which allows us to detect certain (un)desired semantic properties of functional descriptions. As a side effect, semantic analysis based on our formal model allows us to gain a formal understanding of, and insight into, the matching of functional descriptions during Web service discovery.

Uwe Keller, Holger Lausen, Michael Stollberg
A Minimalist Approach to Semantic Annotations for Web Processes Compositions

In this paper we propose a new approach to the automated composition of distributed processes described as semantic web services. Current approaches, such as those based on OWL-S and WSMO, in spite of their expressive power, are hard to use in practice. Indeed, they require comprehensive and usually large ontological descriptions of the processes, and rather complex (and often inefficient) reasoning mechanisms. In our approach, we reduce the usage of ontological descriptions of processes to a minimum, so that we can perform limited, but efficient and useful, semantic reasoning for composing web services. The key idea is to keep the procedural and the ontological descriptions separate, and to link them through semantic annotations. We define the formal framework, and propose a technique that exploits simple reasoning mechanisms at the ontological level, integrated with effective reasoning mechanisms devised for procedural descriptions of web services.

Marco Pistore, Luca Spalazzi, Paolo Traverso
Protocol Mediation for Adaptation in Semantic Web Services

Protocol mediation enables interaction between communicating parties where there is a shared conceptual model of the intent and purpose of the communication, and where the mechanics of the communication interaction vary. The communicating partners use different protocols to achieve the same or similar ends. We present a description-driven approach to protocol mediation which provides a more malleable approach to the integration of web services than the current rigid ‘plug-and-socket’ approach offered by description technologies such as WSDL. It enables the substitution of one service provider with another even though they use different interaction protocols. Our approach is centred on the identification of common domain-specific, protocol-independent communicative acts; the description of abstract protocols which constrain the sequencing of communicative acts; and the description of concrete protocols that describe the mechanisms by which the client of a web service interface can utter and perceive communicative acts.

Stuart K. Williams, Steven A. Battle, Javier Esplugas Cuadrado
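
The notion of an abstract protocol constraining the sequencing of communicative acts can be pictured as a small state machine; a concrete protocol would then map wire-level messages onto these acts. The act names and transitions below are invented for illustration and do not come from the paper:

```python
# Hypothetical abstract protocol for a quoting interaction, expressed as
# a transition table: (current state, communicative act) -> next state.
ABSTRACT_PROTOCOL = {
    ("start", "request-quote"): "quoted",
    ("quoted", "accept-quote"): "agreed",
    ("quoted", "reject-quote"): "start",
}

def run(acts, state="start"):
    """Check that a sequence of communicative acts is admissible
    under the abstract protocol, returning the final state."""
    for act in acts:
        key = (state, act)
        if key not in ABSTRACT_PROTOCOL:
            raise ValueError(f"act {act!r} not allowed in state {state!r}")
        state = ABSTRACT_PROTOCOL[key]
    return state

# A client rejects one quote, asks again, and accepts the second:
final = run(["request-quote", "reject-quote",
             "request-quote", "accept-quote"])
```

Two providers with different concrete message formats can both be used behind this abstract layer, which is the substitution property the abstract describes.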

Semantic Wiki and Blogging

Ideas and Improvements for Semantic Wikis

We present an architecture for combining wikis containing hypertext with ontologies containing formal, structured information. A web-based ontology editor that supports collaborative work through versioning, transactions and management of simultaneous modifications is used for ontology evolution. In wiki pages, ontology information can be used to render dynamic content and answer user queries. Furthermore, query templates are introduced that simplify the use of queries for inexperienced users. The architecture allows easy integration with existing ontology frameworks and wiki engines. The usefulness of the approach is demonstrated by a prototypical implementation as well as a small case study.

Jochen Fischer, Zeno Gantner, Steffen Rendle, Manuel Stritt, Lars Schmidt-Thieme
WikiFactory: An Ontology-Based Application for Creating Domain-Oriented Wikis

Wikis play a leading role among web publishing environments, being collaborative tools used for fast and easy writing and sharing of content. Although powerful and widely used, wikis do not support users in the aided generation of content specific to a given domain; they still require manual, time-consuming and error-prone interventions. Semantic portals, on the other hand, support users in browsing, searching and managing content related to a given domain by exploiting ontologies. In this paper we propose a specific application of web ontologies to wikis: exploiting an ontological description of a domain in order to deploy a customized wiki for that specific domain. We describe the design of an ontology-based framework, named WikiFactory, that helps users automatically generate a complex and complete wiki website related to a specific area of interest with little effort. To show the applicability of our framework, we present a case study that describes the main WikiFactory capabilities in constructing the wiki website for a university Computer Science department.

Angelo Di Iorio, Valentina Presutti, Fabio Vitali
Using Semantics to Enhance the Blogging Experience

Blogging, as a subset of the web as a whole, can benefit greatly from the addition of semantic metadata. The result — which we will call Semantic Blogging — provides improved capabilities with respect to search, connectivity and browsing compared to current blogging technology. Moreover, Semantic Blogging will allow new ways of convenient data exchange between the actors within the blogosphere — blog authors and blog users alike. This paper identifies structural and content-related metadata as the kinds of semantic metadata which are relevant in the domain of blogging. We present in detail the nature of these two kinds of metadata, and discuss an implementation for creating such metadata in a convenient and unobtrusive way for the user, how to publish it on the web, and how to best make use of it from the point of view of a blog consumer.

Knud Möller, Uldis Bojārs, John G. Breslin

Trust and Policies

WSTO: A Classification-Based Ontology for Managing Trust in Semantic Web Services

The aim of this paper is to provide a general ontology that allows the specification of trust requirements in the Semantic Web Services environment. Both client and Web Service can semantically describe their trust policies in two directions: first, each can expose their own guarantees to the environment, such as security certification, execution parameters, etc.; secondly, each can declare their trust preferences about other communication partners, by selecting (or creating) ‘trust match criteria’. A reasoning module can evaluate trust promises and chosen criteria, in order to select a set of Web Services that fit all trust requirements. We see the trust-based selection problem of Semantic Web Services as a classification task. The class of selected Semantic Web Services (SWSs) will represent the set of all SWSs that fit both client and Web Service exposed trust requirements. We strongly believe that trust perception changes in different contexts, and strictly depends on the goal that the requester would like to achieve. For this reason, in our ontology we emphasize the first-class entities “goal”, “Web Service” and “user”, and the relations occurring among them. Our approach implies a centralized trust-based broker, i.e. an agent able to reason on trust requirements and to mediate between goal and Web Service semantic descriptions. We adopt IRS-III as our prototypical trust-based broker.

Stefania Galizia
Semantic Web Policies – A Discussion of Requirements and Research Issues

Policies are pervasive in web applications. They play crucial roles in enhancing the security, privacy and usability of distributed services. There has been extensive research in the area, including within the Semantic Web community, but several obstacles still prevent policy frameworks from widespread adoption and real-world application. This paper discusses important requirements and open research issues in this context, focusing on policies in general and their integration into trust management frameworks, as well as on approaches to increase system cooperation, usability and user-awareness of policy issues.

P. A. Bonatti, C. Duma, N. Fuchs, W. Nejdl, D. Olmedilla, J. Peer, N. Shahmehri
Backmatter
Metadata
Title
The Semantic Web: Research and Applications
Edited by
York Sure
John Domingue
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-34545-9
Print ISBN
978-3-540-34544-2
DOI
https://doi.org/10.1007/11762256
