
About This Book

This volume contains papers from the technical program of the 6th European Semantic Web Conference (ESWC 2009), held from May 31 to June 4, 2009, in Heraklion, Greece. ESWC 2009 presented the latest results in research and applications of Semantic Web technologies. In addition to the technical research track, ESWC 2009 featured a tutorial program, a PhD symposium, a system demo track, a poster track, a number of collocated workshops, and, for the first time in the series, a Semantic Web in-use track exploring the benefits of applying Semantic Web technology in real-life applications and contexts. The technical research paper track received over 250 submissions. The review process was organized using a two-tiered system, where each submission was reviewed by at least three members of the Program Committee. Vice Program Committee Chairs organized a discussion between reviewers, collected additional reviews when necessary, and provided a metareview for each submission. During a physical Program Committee meeting, the Vice Program Committee Chairs together with the Program Chairs selected 45 research papers to be presented at the conference.

Table of Contents

Frontmatter

Invited Talks

Tonight’s Dessert: Semantic Web Layer Cakes

Tim Berners-Lee’s “Semantic Web Layer Cake” is one of the most used figures in our field. It has changed numerous times over the years, as new technologies have come along and as the field has changed. Despite this, it appears to be incomplete: much of what is happening in the field is not included. In this talk, we take a look at the layer cake as it has changed over time and see what it can tell us about current and future Semantic Web work.

James A. Hendler

Discovering and Building Semantic Models of Web Sources

Achieving widespread use of the Semantic Web depends on having a critical mass of Web data available with semantic annotations. Since there are a huge number of sources available today without any such annotations, the challenge is how to find and build semantic models for these sources. In this talk I will describe an integrated end-to-end approach that automatically discovers information-producing web sources, invokes and extracts the data from these sources, builds semantic models of the sources, and validates the results by comparing the data produced by the source with the model of the source. These techniques are implemented in a system called DEIMOS, which integrates a diverse set of technologies to completely automate this task. DEIMOS starts with a “seed” source and finds other similar sources online using data from a social networking web site. Next the system learns how to invoke these sources through experimentation and then extracts data from these sources with automatic wrapping techniques. Finally, DEIMOS learns a semantic model of a source, which identifies the semantic types of the data produced by a source as well as the function that maps the inputs to the outputs. I will describe the challenges in integrating the component technologies into a unified approach to discovering, extracting and modeling new online sources. I will also present an evaluation of the integrated system on three different domains to demonstrate that it can automatically discover and model new Web sources.

Craig A. Knoblock

Video Semantics and the Sensor Web

The most widespread way in which content-based access to video information is supported is through a combination of video metadata (date, time, format, etc.) and user-generated description (user tags, ratings, reviews, etc.). This has had widespread usage and is the basis for navigation through video archives in systems such as YouTube, Open Video and the Internet Archive. However, there are limitations to this approach, such as vocabulary issues and authentication across the users who annotate content.

Alan F. Smeaton

Keys, Money and Mobile Phone

Across cultures, genders and generations, the mobile phone has become one of the most essential objects in people’s everyday lives. In this talk we will go through the evidence on how, next to money and keys, the mobile phone is the third most essential item carried around by almost everybody, nearly all the time. By looking at the demands and needs of today’s mobile users, we discuss how Semantic technologies can add value to phones as well as to mobile services.

Walking through selected projects of DOCOMO Euro-Labs, we advocate that the success of future mobile services will largely depend on their ability to maximize their value in varying contexts. Contextual Intelligence in devices, mobile applications and service platforms will be needed to manage different mobile terminals, personalize content and services, or narrow down possibly very large sets of applicable services in a given situation. With this vision of Contextual Intelligence in mind, we are exploiting technologies from the Semantic Web for the mobile domain, for instance to extend location-based services with knowledge and reasoning about places, people and things.

Matthias Wagner

Research Track

Applications

Querying Trust in RDF Data with tSPARQL

Today a large amount of RDF data is published on the Web. However, the openness of the Web and the ease to combine RDF data from different sources creates new challenges. The Web of data is missing a uniform way to assess and to query the trustworthiness of information. In this paper we present tSPARQL, a trust-aware extension to SPARQL. Two additional keywords enable users to describe trust requirements and to query the trustworthiness of RDF data. Hence, tSPARQL allows adding trust to RDF-based applications in an easy manner. As the foundation we propose a trust model that associates RDF statements with trust values and we extend the SPARQL semantics to access these trust values in tSPARQL. Furthermore, we discuss opportunities to optimize the execution of tSPARQL queries.

Olaf Hartig
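The core idea of the abstract above, querying triples subject to a trust requirement, can be sketched in plain Python. This is a hedged illustration only: the triples, trust values, and the `min_trust` parameter are invented for the example and do not reproduce tSPARQL's actual keywords or semantics.

```python
# Triples carry a trust value in [0, 1]; a query keeps only results whose
# source statements meet a required trust threshold.
trusted_triples = [
    # (subject, predicate, object, trust)
    ("ex:alice", "foaf:knows", "ex:bob", 0.9),
    ("ex:alice", "foaf:knows", "ex:carol", 0.4),
    ("ex:bob", "foaf:name", "Bob", 0.95),
]

def query(triples, predicate, min_trust):
    """Return (subject, object, trust) bindings matching `predicate`
    whose trust value is at least `min_trust`."""
    return [(s, o, t) for (s, p, o, t) in triples
            if p == predicate and t >= min_trust]

results = query(trusted_triples, "foaf:knows", min_trust=0.5)
print(results)  # only the high-trust foaf:knows statement survives
```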

RadSem: Semantic Annotation and Retrieval for Medical Images

We present a tool for semantic medical image annotation and retrieval. It leverages the MEDICO ontology, which covers formal background information from various biomedical ontologies such as the Foundational Model of Anatomy (FMA), terminologies like ICD-10 and RadLex, and various aspects of clinical procedures. This ontology is used during several steps of annotation and retrieval: (1) We developed an ontology-driven metadata extractor for the medical image format DICOM. Its output contains, e.g., person name, age, image acquisition parameters, body region, etc. (2) The output from (1) is used to simplify the manual annotation by providing intuitive visualizations and to provide a preselected subset of annotation concepts. Furthermore, the extracted metadata is linked together with anatomical annotations and clinical findings to generate a unified view of a patient’s medical history. (3) On the search side we perform query expansion based on the structure of the medical ontologies. (4) Our ontology for clinical data management allows us to link and combine patients, medical images and annotations together in a comprehensive result list. (5) The medical annotations are further extended by links to external sources like Wikipedia to provide additional information.

Manuel Möller, Sven Regel, Michael Sintek
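Step (3) above, query expansion over an ontology's structure, can be sketched as expanding a search term to all of its transitive subclasses. The tiny anatomy taxonomy below is invented for illustration; RadSem's actual expansion operates over FMA/RadLex structures.

```python
# A toy subclass hierarchy standing in for a medical ontology.
subclasses = {
    "BodyPart": ["Limb", "Organ"],
    "Limb": ["Arm", "Leg"],
    "Organ": ["Heart", "Lung"],
}

def expand(term, taxonomy):
    """Expand a query term to itself plus all transitive subclasses."""
    result = [term]
    for child in taxonomy.get(term, []):
        result.extend(expand(child, taxonomy))
    return result

print(expand("Limb", subclasses))  # ['Limb', 'Arm', 'Leg']
```

A search for "Limb" would thus also retrieve images annotated with the narrower concepts "Arm" and "Leg".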

Semanta – Semantic Email Made Easy

In this paper we present Semanta – a fully implemented system supporting Semantic Email Processes, integrated into the existing technical landscape and using existing email transport technology. By applying Speech Act Theory, knowledge about these processes can be made explicit, enabling machines to support email users with correctly interpreting, handling and keeping track of email messages, visualizing email threads and workflows, and extracting tasks and appointments from email messages. Whereas complex theoretical models and semantics are hidden beneath a simple user interface, the enabled functionalities are clear for the users to see and take advantage of. The system’s evaluation showed that our experiment with Semanta was successful and that semantic technology can be applied as an extra layer on top of existing technology, thus bringing its benefits into everyday computer use.

Simon Scerri, Brian Davis, Siegfried Handschuh, Manfred Hauswirth

The Sile Model — A Semantic File System Infrastructure for the Desktop

With the increasing storage capacity of personal computing devices, the problems of information overload and information fragmentation become apparent on users’ desktops. For the Web, semantic technologies aim at solving this problem by adding a machine-interpretable information layer on top of existing resources, and it has been shown that the application of these technologies to desktop environments is helpful for end users. Certain characteristics of the Semantic Web architecture that are commonly accepted in the Web context, however, are not desirable for desktops; e.g., incomplete information, broken links, or disruption of content and annotations. To overcome these limitations, we propose the sile model, an intermediate data model that combines attributes of the Semantic Web and file systems. This model is intended to be the conceptual foundation of the Semantic Desktop, and to serve as underlying infrastructure on which applications and further services, e.g., virtual file systems, can be built. In this paper, we present the sile model, discuss Semantic Web vocabularies that can be used in the context of this model to annotate desktop data, and analyze the performance of typical operations on a virtual file system implementation that is based on this model.

Bernhard Schandl, Bernhard Haslhofer

Evaluation and Benchmarking

Who the Heck Is the Father of Bob?

A Survey of the OWL Reasoning Infrastructure for Expressive Real-World Applications

Finding the optimal selection of an OWL reasoner and service interface for a specific ontology-based application is challenging. Over time it has become more and more difficult to match application requirements with service offerings from available reasoning engines, in particular with recent optimizations for certain reasoning services and new reasoning algorithms for different fragments of OWL. This work is motivated by real-world experiences and reports about interesting findings in the course of developing an ontology-based application. Benchmarking outcomes of several reasoning engines are discussed – especially with respect to accompanying sound and completeness tests. We compare the performance of various service and communication protocols in different computing environments. Hereby, it becomes apparent that these largely underrated components may have an enormous impact on the overall performance.

Marko Luther, Thorsten Liebig, Sebastian Böhm, Olaf Noppens

Benchmarking Fulltext Search Performance of RDF Stores

More and more applications use the RDF framework as their data model and RDF stores to index and retrieve their data. Many of these applications require both structured queries as well as fulltext search. SPARQL addresses the first requirement in a standardized way, while fulltext search is provided by store-specific implementations. RDF benchmarks enable developers to compare structured query performance of different stores, but for fulltext search on RDF data no such benchmarks and comparisons exist so far. In this paper, we extend the LUBM benchmark with synthetic scalable fulltext data and corresponding queries for fulltext-related query performance evaluation. Based on the extended benchmark, we provide a detailed comparison of fulltext search features and performance of the most widely used RDF stores. Results show interesting RDF store insights for basic fulltext queries (classic IR queries) as well as hybrid queries (structured and fulltext queries). Our results are not only valuable for selecting the right RDF store for specific applications, but also reveal the need for performance improvements for certain kinds of queries.

Enrico Minack, Wolf Siberski, Wolfgang Nejdl
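The "hybrid query" notion from the abstract above, a structured triple pattern combined with a keyword condition on literal values, can be illustrated minimally. The data is invented for the example; real RDF stores delegate the keyword part to dedicated fulltext indexes.

```python
# A structured pattern (fixed predicate) combined with a fulltext-style
# keyword filter over the object literals.
triples = [
    ("ex:p1", "dc:title", "Benchmarking RDF stores"),
    ("ex:p2", "dc:title", "Ontology alignment survey"),
    ("ex:p1", "dc:creator", "ex:minack"),
]

def hybrid_query(data, predicate, keyword):
    """Return subjects of triples with the given predicate whose
    object literal contains the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [s for (s, p, o) in data if p == predicate and kw in o.lower()]

print(hybrid_query(triples, "dc:title", "rdf"))  # ['ex:p1']
```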

A Heuristics Framework for Semantic Subscription Processing

The increasing adoption of semantic web technology in application scenarios with frequently changing data has imposed new requirements on the underlying tools. Reasoning algorithms need to be optimized for the processing of dynamic knowledge bases and semantic frameworks have to provide novel mechanisms for detecting changes of knowledge. Today, the latter is mostly realized by implementing simple polling mechanisms. However, this implies client-side post-processing of the received results, causes high response times and limits the overall throughput of the system. In this paper, we present a heuristics framework for realizing a subscription mechanism for dynamic knowledge bases. By analyzing similarities between published information and resulting notifications, heuristics can be employed to “guess” subsequent notifications. As testing the correctness of guessed notifications can be implemented efficiently, notifications can be delivered to the subscribers in an earlier processing phase and the system throughput can be increased. We experimentally evaluate our approach based on a concrete application scenario.

Martin Murth, Eva Kühn

Ontologies and Natural Language

Towards Linguistically Grounded Ontologies

In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text, and natural language generation from ontologies, currently available data models are not sufficient, as they only allow atomic terms without linguistic grounding or structure to be associated with ontology elements. Towards realizing a more expressive model for associating linguistic information with ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF, which has recently been approved as an ISO standard.

Paul Buitelaar, Philipp Cimiano, Peter Haase, Michael Sintek

Frame Detection over the Semantic Web

In the past, research in ontology learning from text has mainly focused on entity recognition, taxonomy induction and relation extraction. In this work we approach a challenging research issue: detecting semantic frames from texts and using them to encode web ontologies. We exploit a new-generation Natural Language Processing technology for frame detection, and we enrich the frames acquired so far with argument restrictions provided by a super-sense tagger and domain specializations. The results are encoded according to a Linguistic MetaModel, which allows a complete translation of lexical resources and data acquired from text, enabling custom transformations of the enriched frames into modular ontology components.

Bonaventura Coppola, Aldo Gangemi, Alfio Gliozzo, Davide Picca, Valentina Presutti

Word Sense Disambiguation for XML Structure Feature Generation

A common limit of most existing methods that manage XML structure information is that they do not handle the semantic meanings that might be associated with the markup tags. In this paper, we study how to map structure information available from XML elements into semantically related concepts in order to support the generation of XML semantic features of XML structural type. For this purpose, we define an unsupervised word sense disambiguation method to select the most appropriate meaning for each element contextually to its respective XML path. The proposed approach exploits conceptual relations provided by a lexical ontology such as WordNet and employs different notions of sense relatedness. Experiments with data from various application domains are discussed, showing that our approach can be effectively used to generate structural semantic features.

Andrea Tagarelli, Mario Longo, Sergio Greco

Ontology Alignment

Improving Ontology Matching Using Meta-level Learning

Despite serious research efforts, automatic ontology matching still suffers from severe problems with respect to the quality of matching results. Existing matching systems trade off precision and recall and have their specific strengths and weaknesses. This leads to problems when the right matcher for a given task has to be selected. In this paper, we present a method for improving matching results by not choosing a specific matcher but applying machine learning techniques on an ensemble of matchers. Hereby we learn rules for the correctness of a correspondence based on the output of different matchers and additional information about the nature of the elements to be matched, thus compensating for the weaknesses of an individual matcher. We show that our method always performs significantly better than the median of the matchers used and in most cases outperforms the best matcher with an optimal threshold for a given pair of ontologies. As a side product of our experiments, we discovered that the majority vote is a simple but powerful heuristic for combining matchers that almost reaches the quality of our learning results.

Kai Eckert, Christian Meilicke, Heiner Stuckenschmidt
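The majority-vote baseline mentioned at the end of the abstract above can be sketched directly: each matcher proposes a set of correspondences, and a correspondence is accepted if a strict majority of matchers propose it. The matcher outputs below are invented examples.

```python
from collections import Counter

def majority_vote(matcher_outputs):
    """Accept a correspondence iff a strict majority of matchers propose it."""
    counts = Counter(c for output in matcher_outputs for c in output)
    threshold = len(matcher_outputs) / 2
    return {c for c, n in counts.items() if n > threshold}

# Three hypothetical matchers with partially overlapping proposals.
m1 = {("o1:Person", "o2:Human"), ("o1:City", "o2:Town")}
m2 = {("o1:Person", "o2:Human")}
m3 = {("o1:Person", "o2:Human"), ("o1:Car", "o2:Auto")}

print(majority_vote([m1, m2, m3]))  # {('o1:Person', 'o2:Human')}
```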

Ontology Integration Using Mappings: Towards Getting the Right Logical Consequences

We propose a general method and novel algorithmic techniques to facilitate the integration of independently developed ontologies using mappings. Our method and techniques aim at helping users understand and evaluate the semantic consequences of the integration, as well as to detect and fix potential errors. We also present ContentMap, a system that implements our approach, and a preliminary evaluation which suggests that our approach is both useful and feasible in practice.

Ernesto Jiménez-Ruiz, Bernardo Cuenca Grau, Ian Horrocks, Rafael Berlanga

Using Partial Reference Alignments to Align Ontologies

In different areas ontologies have been developed and many of these ontologies contain overlapping information. Often we would therefore want to be able to use multiple ontologies. To obtain good results, we need to find the relationships between terms in the different ontologies, i.e. we need to align them.

Currently, there already exist a number of ontology alignment systems. In these systems an alignment is computed from scratch. However, recently, some situations have occurred where a partial reference alignment is available, i.e. some of the correct mappings between terms are given or have been obtained. In this paper we investigate whether and how a partial reference alignment can be used in ontology alignment. We use partial reference alignments to partition ontologies, to compute similarities between terms and to filter mapping suggestions. We test the approaches on previously developed gold standards and discuss the results.

Patrick Lambrix, Qiang Liu
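One of the uses of a partial reference alignment (PRA) named above, filtering mapping suggestions, can be sketched as follows: a suggestion is discarded when the PRA already maps its source term to a different target. The mappings below are invented for illustration and simplify the paper's actual filtering strategies.

```python
def filter_suggestions(suggestions, pra):
    """Drop suggestions whose source term the PRA already maps
    to a different target term."""
    known = dict(pra)
    return [(s, t) for (s, t) in suggestions
            if s not in known or known[s] == t]

# A known-correct mapping and two automatically generated suggestions.
pra = [("o1:Heart", "o2:Heart")]
suggestions = [("o1:Heart", "o2:Muscle"), ("o1:Lung", "o2:Lung")]

print(filter_suggestions(suggestions, pra))  # [('o1:Lung', 'o2:Lung')]
```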

Semantic Matching Using the UMLS

Traditional ontology alignment techniques enable equivalence relationships to be established between concepts in two ontologies with some confidence value. With semantic matching, however, it is possible to identify not only equivalence (≡) relationships between concepts, but also less general ($\sqsubseteq$) and more general ($\sqsupseteq$) relationships. This is beneficial since more expressive relationships can be discovered between ontologies, thus helping us to resolve heterogeneity between differing semantic representations at a finer level of granularity. This work concerns the application of semantic matching to the medical domain. We have extended the SMatch algorithm to function in the medical domain with the use of the UMLS metathesaurus as the background resource, hence removing its previous reliance on WordNet, which does not cover the medical domain in a satisfactory manner. We describe the steps required to extend the SMatch algorithm to the medical domain for use with UMLS. We test the accuracy of our approach on subsets of the FMA and MeSH ontologies, with both precision and recall showing the accuracy and coverage of different versions of our algorithm on each dataset.

Jetendr Shamdasani, Tamás Hauer, Peter Bloodsworth, Andrew Branson, Mohammed Odeh, Richard McClatchey
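The three match relations discussed above can be illustrated over a toy is-a hierarchy standing in for a background resource such as UMLS: comparing a concept against the ancestor set of another yields equivalence, less general, or more general. The hierarchy below is invented, not UMLS content.

```python
# Each concept maps to its direct parent in a toy is-a hierarchy.
parents = {"Myocardium": "Heart", "Heart": "Organ", "Lung": "Organ"}

def ancestors(term):
    """Collect all transitive superclasses of a term."""
    out = set()
    while term in parents:
        term = parents[term]
        out.add(term)
    return out

def relation(a, b):
    if a == b:
        return "equivalent"    # ≡
    if b in ancestors(a):
        return "less general"  # ⊑ : a is subsumed by b
    if a in ancestors(b):
        return "more general"  # ⊒ : a subsumes b
    return "unrelated"

print(relation("Myocardium", "Organ"))  # less general
print(relation("Organ", "Lung"))        # more general
```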

Ontology Engineering

Embedding Knowledge Patterns into OWL

We describe the design and use of the Ontology Pre-Processor Language (OPPL) as a means of embedding the use of Knowledge Patterns in OWL ontologies. We illustrate the specification of patterns in OPPL and discuss the advantages of its adoption by Ontology Engineers with respect to ontology generation, transformation, and maintainability. The consequence of the declarative specification of patterns will be their unambiguous description inside an ontology in OWL. Thus, OPPL enables an ontology engineer to work at the level of the pattern, rather than of the raw OWL axioms. Moreover, patterns can be analysed rigorously, so that the repercussions of their reuse can be better understood by ontology engineers and tools implementers. Thus the delivery of patterns with OPPL can provide a means of addressing the opacity and sustainability of OWL ontologies.

Luigi Iannone, Alan Rector, Robert Stevens

A Core Ontology of Knowledge Acquisition

Semantic descriptions of knowledge acquisition (KA) tools and resources enable machine reasoning about KA systems and can be used to automate the discovery and composition of KA services, thereby increasing interoperability among systems and reducing system design and maintenance costs. Whilst there are a few general-purpose ontologies available that could be combined for describing knowledge acquisition, albeit at an inadequate abstraction level, there is as yet no KA ontology based on Semantic Web technologies available. In this paper, we present OAK, a well-founded, modular, extensible and multimedia-aware ontology of knowledge acquisition which extends existing foundational and core Semantic Web ontologies. We start by using a KA tool development scenario to illustrate the complexity of the problem, and identify a number of requirements for OAK. After we present the ontology in detail, we evaluate it with respect to the identified requirements.

José Iria

ONTOCOM Revisited: Towards Accurate Cost Predictions for Ontology Development Projects

Reliable methods to assess the costs and benefits of ontologies are an important instrument to demonstrate the tangible business value of semantic technologies within enterprises, as an argument to encourage their wide-scale adoption. The economic aspects of ontologies have been investigated in our previous work. With ONTOCOM we proposed a cost estimation model for ontologies and ontology development projects. This paper revisits this model and presents its latest achievements. We report on a comprehensive calibration of ONTOCOM based on a considerably larger data set of 148 ontology development projects. The calibration used a combination of statistical methods, ranging from preliminary data analysis to regression and Bayes analysis, and resulted in a significant improvement of the prediction quality of up to 50%. In addition, the availability of a representative data set allowed us to identify meaningful directions for customizing the generic cost model along particular types of ontologies and ontology-like structures, such as those specific to the emerging Web 3.0. Last but not least, we developed a software tool that allows ontology development project managers to easily use, adapt and systematically calibrate the model, thus facilitating its adoption in real-world projects.

Elena Simperl, Igor O. Popov, Tobias Bürger

Query Processing

Ranking Approximate Answers to Semantic Web Queries

We consider the problem of a user querying semistructured data such as RDF without knowing its structure. In these circumstances, it is helpful if the querying system can perform an approximate matching of the user’s query to the data and can rank the answers in terms of how closely they match the original query. Our approximate matching framework allows us to incorporate standard notions of approximation such as edit distance as well as certain RDFS inference rules, thereby capturing semantic as well as syntactic approximations. The query language we adopt comprises conjunctions of regular path queries, thus including extensions proposed for SPARQL to allow for querying paths using regular expressions. We provide an incremental query evaluation algorithm which runs in polynomial time and returns answers to the user in ranked order.

Carlos A. Hurtado, Alexandra Poulovassilis, Peter T. Wood
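The ranking idea described above can be sketched with one of the standard approximation notions the abstract mentions, edit distance: candidate answers are ordered by the distance between the query's label path and each answer's path, so exact matches come first. The paths are invented examples; the paper's framework also incorporates RDFS inference rules, which this sketch omits.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance over sequences of edge labels,
    computed by dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

query_path = ["worksFor", "locatedIn"]
answers = {
    "ans1": ["worksFor", "locatedIn"],           # exact match
    "ans2": ["worksFor", "basedIn"],             # one substitution away
    "ans3": ["memberOf", "partOf", "locatedIn"], # further away
}
ranked = sorted(answers, key=lambda a: edit_distance(query_path, answers[a]))
print(ranked)  # ['ans1', 'ans2', 'ans3']
```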

Tempus Fugit

Towards an Ontology Update Language

Ontologies are used to formally describe domains of interest. As domains change over time, the ontologies have to be updated accordingly. We advocate the introduction of an Ontology Update Language that captures frequent domain changes and hence facilitates regular updates to be made in ontologies. We thoroughly discuss the general design choices for defining such a language and a corresponding update framework. Moreover, we propose a concrete language proposal based on SPARQL Update and provide a reference implementation of the framework.

Uta Lösch, Sebastian Rudolph, Denny Vrandečić, Rudi Studer

Representing, Querying and Transforming Social Networks with RDF/SPARQL

As

social networks

are becoming ubiquitous on the Web, the Semantic Web goals indicate that it is critical to have a standard model allowing exchange, interoperability, transformation, and querying of social network data.

In this paper we show that RDF/SPARQL meet this

desiderata

. Building on developments of

social network analysis

,

graph databases

and

Semantic Web

, we present a social networks data model based on RDF, and a query and transformation language based on SPARQL meeting the above requirements. We study its expressive power and complexity showing that it behaves well, and present an illustrative prototype.

Mauro San Martín, Claudio Gutierrez

Applied Temporal RDF: Efficient Temporal Querying of RDF Data with SPARQL

Many applications operate on time-sensitive data. Some of these data are only valid for certain intervals (e.g., job assignments, versions of software code); others describe temporal events that happened at certain points in time (e.g., a person’s birthday). Until recently, the only way to incorporate time into Semantic Web models was as a datatype property. Temporal RDF, however, considers time as an additional dimension in data, preserving the semantics of time.

In this paper we present a syntax and storage format based on named graphs to express temporal RDF. Given the restriction to preexisting RDF syntax, our approach can perform any temporal query using standard SPARQL syntax only. For convenience, we introduce a shorthand format called τ-SPARQL for temporal queries and show how τ-SPARQL queries can be translated to standard SPARQL. Additionally, we show that, depending on the underlying data’s nature, the temporal RDF approach vastly reduces the number of triples by eliminating redundancies, resulting in increased performance for processing and querying. Last but not least, we introduce a new indexing method that can significantly reduce the time needed to execute time point queries (e.g., what happened on January 1st).

Jonas Tappolet, Abraham Bernstein
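The named-graph idea above can be modeled minimally: each graph holds triples valid over an interval, and a time-point query returns triples from graphs whose interval covers the query point. The data and interval encoding are invented for illustration; τ-SPARQL's actual syntax and storage format differ.

```python
named_graphs = {
    # graph id: (valid_from, valid_to, triples) — years, inclusive
    "g1": (2005, 2007, [("ex:ann", "ex:worksFor", "ex:acme")]),
    "g2": (2008, 2010, [("ex:ann", "ex:worksFor", "ex:globex")]),
}

def at_time(graphs, point):
    """Return all triples whose validity interval covers `point`."""
    return [t for (start, end, triples) in graphs.values()
            if start <= point <= end for t in triples]

print(at_time(named_graphs, 2009))
# [('ex:ann', 'ex:worksFor', 'ex:globex')]
```

Note how the interval encoding avoids repeating the statement once per year of validity, which is the redundancy reduction the abstract alludes to.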

Reasoning

ReduCE: A Reduced Coulomb Energy Network Method for Approximate Classification

In order to overcome the limitations of purely deductive approaches to the tasks of classification and retrieval from ontologies, inductive (instance-based) methods have been proposed as an efficient and noise-tolerant alternative. In this paper we propose an original method based on non-parametric learning: the Reduced Coulomb Energy (RCE) Network. The method requires a limited training effort but turns out to be very effective during the classification phase. Casting retrieval as the problem of assessing the class-membership of individuals w.r.t. the query concepts, we propose an extension of a classification algorithm using RCE networks based on an entropic similarity measure for OWL. Experimentally, we show that the performance of the resulting inductive classifier is comparable with that of a standard reasoner and often more efficient than other inductive approaches. Moreover, we show that new knowledge (not logically derivable) is induced and the likelihood of the answers may be provided.

Nicola Fanizzi, Claudia d’Amato, Floriana Esposito

Hybrid Reasoning with Forest Logic Programs

Open Answer Set Programming (OASP) is an attractive framework for integrating ontologies and rules. Although several decidable fragments of OASP have been identified, few reasoning procedures exist. In this paper, we provide a sound, complete, and terminating algorithm for satisfiability checking w.r.t. forest logic programs, a fragment where rules have a tree shape and allow for inequality atoms and constants. We further introduce f-hybrid knowledge bases, a hybrid framework where $\mathcal{SHOQ}$ knowledge bases and forest logic programs co-exist, and we show that reasoning with such knowledge bases can be reduced to reasoning with forest logic programs only. We note that f-hybrid knowledge bases do not require the usual (weakly) DL-safety of the rule component, thus providing a genuine alternative approach to hybrid reasoning.

Cristina Feier, Stijn Heymans

SIM-DLA: A Novel Semantic Similarity Measure for Description Logics Reducing Inter-concept to Inter-instance Similarity

While semantic similarity plays a crucial role in human categorization and reasoning, computational similarity measures have also been applied to fields such as semantics-based information retrieval or ontology engineering. Several measures have been developed to compare concepts specified in various description logics. In most cases, these measures are either structural or require a populated ontology. Structural measures fail with an increasing expressivity of the used description logic, while several ontologies, e.g., geographic feature type ontologies, are not populated at all. In this paper, we present an approach to reduce inter-concept to inter-instance similarity and thereby avoid the canonization problem of structural measures. The novel approach, called SIM-DL$_A$, reuses existing similarity functions such as co-occurrence or network measures from our previous SIM-DL measure. The required instances for comparison are derived from the completion tree of a slightly modified DL tableau algorithm as used for satisfiability checking. Instead of trying to find one (clash-free) model, the new algorithm generates a set of proxy individuals used for comparison. The paper presents the algorithm, alignment matrix, and similarity functions, as well as a detailed example.

Krzysztof Janowicz, Marc Wilkes

Decidability of $\mathcal{SHI}$ with Transitive Closure of Roles

This paper investigates a Description Logic, namely $\mathcal{SHI}_+$, which extends $\mathcal{SHI}$ by adding transitive closure of roles. The resulting logic $\mathcal{SHI}_+$ allows transitive closure of roles to occur not only in concept inclusion axioms but also in role inclusion axioms. We show that $\mathcal{SHI}_+$ is decidable by devising a terminating, sound and complete algorithm for deciding satisfiability of concepts in $\mathcal{SHI}_+$ with respect to a set of concept and role inclusion axioms.

Chan Le Duc

FO(ID) as an Extension of DL with Rules

There are many interesting Knowledge Representation questions surrounding rule languages for the Semantic Web. The most basic one is of course: which kind of rules should be used and how do they integrate with existing Description Logics? Similar questions have already been addressed in the field of Logic Programming, where one particular answer has been provided by the language of FO(ID). FO(ID) is an extension of first-order logic with a rule-based representation for inductive definitions. By offering a general integration of first-order logic and Logic Programs, it also induces a particular way of extending Description Logics with rules. The goal of this paper is to investigate this integration and discover whether there are interesting extensions of DL with rules that can be arrived at by imposing appropriate restrictions on the highly expressive FO(ID).

Joost Vennekens, Marc Denecker

A Tableau Algorithm for Handling Inconsistency in OWL

On the Semantic Web, knowledge sources often contain inconsistency because they change constantly and reflect different viewpoints. As is well known, OWL, being based on description logic, lacks the ability to tolerate inconsistent or incomplete data. Handling inconsistency in OWL has therefore become an increasingly important research topic. In this paper, we present a paraconsistent OWL, called quasi-classical OWL, that handles inconsistency while preserving important inference rules such as modus tollens, modus ponens, and disjunctive syllogism. We propose a terminating, sound and complete tableau algorithm to implement paraconsistent reasoning in quasi-classical OWL. In comparison with other approaches to handling inconsistency in OWL, ours enhances reasoning by integrating paraconsistent reasoning with these classical inference rules.

Xiaowang Zhang, Guohui Xiao, Zuoquan Lin

Search and Identities

How to Trace and Revise Identities

The Entity Name System (ENS) is a service aiming at providing globally unique URIs for all kinds of real-world entities such as persons, locations and products, based on descriptions of such entities. Because entity descriptions available to the ENS for deciding on entity identity—Do two entity descriptions refer to the same real-world entity?—are changing over time, the system has to revise its past decisions: One entity has been given two different URIs or two entities have been attributed the same URI. The question we have to investigate in this context is then: How do we propagate entity decision revisions to the clients which make use of the URIs provided by the ENS?

In this paper we propose a solution which relies on labelling the IDs with additional history information. These labels allow clients to locally detect deprecated URIs they are using and also merge IDs referring to the same real-world entity without needing to consult the ENS. Making update requests to the ENS only for the IDs detected as deprecated considerably reduces the number of update requests, at the cost of a decrease in uniqueness quality. We investigate how much the number of update requests decreases using ID history labelling, as well as how this impacts the uniqueness of the IDs on the client. For the experiments we use both artificially generated entity revision histories as well as a real case study based on the revision history of the Dutch and Simple English Wikipedia.

Julien Gaugaz, Jakub Zakrzewski, Gianluca Demartini, Wolfgang Nejdl
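The idea of history-labelled IDs can be sketched in a few lines (an illustration of the principle, not the paper's actual labelling scheme): each ID carries a list of revision events, so a client can detect a deprecated URI and resolve its replacement locally, without consulting the ENS. The `LabelledId` class, event strings, and example IDs are all hypothetical.

```python
class LabelledId:
    """An entity ID plus a history label: a list of (version, event)
    entries appended by the ENS on every revision decision."""
    def __init__(self, uri, history):
        self.uri = uri
        self.history = history  # e.g. [("v1", "created"), ("v3", "merged-into:e42")]

def deprecated(entity_id):
    """An ID is deprecated once its history records a merge into another ID."""
    return any(event.startswith("merged-into:") for _, event in entity_id.history)

def resolve(entity_id):
    """Follow a merge event to the URI the client should use instead."""
    for _, event in entity_id.history:
        if event.startswith("merged-into:"):
            return event.split(":", 1)[1]
    return entity_id.uri

a = LabelledId("e17", [("v1", "created"), ("v3", "merged-into:e42")])
b = LabelledId("e42", [("v1", "created")])
print(deprecated(a), resolve(a))  # True e42
```

Only IDs flagged as deprecated trigger an update request to the ENS, which is what reduces the request volume at the cost of some uniqueness quality.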

Concept Search

In this paper we present a novel approach, called Concept Search, which extends syntactic search, i.e., search based on the computation of string similarity between words, with semantic search, i.e., search based on the computation of semantic relations between concepts. The key idea of Concept Search is to operate on complex concepts and to maximally exploit the semantic information available, reducing to syntactic search only when necessary, i.e., when no semantic information is available. The experimental results show that Concept Search performs at least as well as syntactic search, improving the quality of results as a function of the amount of available semantics.

Fausto Giunchiglia, Uladzimir Kharkevich, Ilya Zaihrayeu
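The fallback principle behind Concept Search can be sketched as follows (a toy illustration, not the authors' system): terms are matched via shared concepts when the lexicon covers them, and only when no semantic information is available does matching reduce to plain string similarity. The lexicon, concept IDs, and scoring are hypothetical.

```python
import difflib

# Hypothetical lexicon mapping words to concept identifiers
LEXICON = {"car": "c-vehicle", "automobile": "c-vehicle", "dog": "c-canine"}

def match(query_term, doc_term):
    """Match on shared concepts when semantics is available,
    otherwise fall back to syntactic string similarity."""
    cq, cd = LEXICON.get(query_term), LEXICON.get(doc_term)
    if cq and cd:                       # semantic information available
        return 1.0 if cq == cd else 0.0
    # no semantics: reduce to syntactic search on the strings themselves
    return difflib.SequenceMatcher(None, query_term, doc_term).ratio()

print(match("car", "automobile"))  # 1.0 via the shared concept
print(match("carz", "car"))        # syntactic fallback on an unknown term
```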

Semantic Wiki Search

Semantic wikis extend wiki platforms with the ability to represent structured information in a machine-processable way. On top of the structured information in the wiki, novel ways to search, browse, and present the wiki content become possible. However, while powerful query languages offer new opportunities for semantic search, the syntax of formal query languages is not adequate for end users. In this work we present an approach to semantic search that combines the expressiveness and capabilities of structured queries with the simplicity of keyword interfaces and faceted search. Users articulate their information need in keywords, which are translated into structured, conjunctive queries. This translation may result in multiple possible interpretations of the information need, which can then be selected and further refined by the user via facets. We have implemented this approach to semantic search as an extension to Semantic MediaWiki. The results of a user study in the SMW-based community portal semanticweb.org show the efficiency and effectiveness of the approach as well as its ease of use.

Peter Haase, Daniel Herzig, Mark Musen, Thanh Tran

Applying Semantic Social Graphs to Disambiguate Identity References

Person disambiguation monitors web appearances of a person by disambiguating information belonging to different people sharing the same name. In this paper we extend person disambiguation to incorporate the abstract notion of identity. This extension utilises semantic web technologies to represent the identity of the person to be found and the web resources to be disambiguated as semantic graphs. Our approach extracts a complete semantic social graph from distributed Web 2.0 services. Web resources containing possible person references are converted into semantic graphs describing available identity features. We disambiguate these web resources to identify correct identity references by performing random walks through the graph space, measuring the distances between the social graph and web resource graphs, and clustering similar web resources. We present a new distance measure called “Optimum Transitions” and evaluate the accuracy of our approach using the F-measure from information retrieval.

Matthew Rowe

Semantic Web Architectures

Middleware for Automated Implementation of Security Protocols

We propose a middleware for automated implementation of security protocols for Web services. The proposed middleware consists of two main layers: the communication layer and the service layer. The communication layer is built on the SOAP layer and ensures the implementation of security and service protocols. The service layer provides the discovery of services and the authorization of client applications. In order to provide automated access to the platform services we propose a novel specification of security protocols, consisting of a sequential component, implemented as a WSDL-S specification, and an ontology component, implemented as an OWL specification. Specifications are generated using a set of rules, where information related to the implementation of properties, such as cryptographic algorithms or key sizes, is provided by the user. The applicability of the proposed middleware is validated by implementing a video surveillance system.

Béla Genge, Piroska Haller

Can RDB2RDF Tools Feasibly Expose Large Science Archives for Data Integration?

Many science archive centres publish very large volumes of image, simulation, and experiment data. In order to integrate and analyse the available data, scientists need to be able to (i) identify and locate all the data relevant to their work; (ii) understand the multiple heterogeneous data models in which the data is published; and (iii) interpret and process the data they retrieve. RDF has been shown to be a generally successful framework within which to perform such data integration work. It can be equally successful in the context of scientific data, if it is demonstrably practical to expose that data as RDF.

In this paper we investigate the capabilities of RDF to enable the integration of scientific data sources. Specifically, we discuss the suitability of SPARQL for expressing scientific queries, and the performance of several triple stores and RDB2RDF tools for executing queries over a moderately sized sample of a large astronomical data set. We found that more research on, and improvement of, SPARQL and RDB2RDF tools is required to efficiently expose existing science archives for data integration.

Alasdair J. G. Gray, Norman Gray, Iadh Ounis

A Flexible API and Editor for SKOS

We present a programmatic interface (SKOS API) and editor for working with the Simple Knowledge Organisation System SKOS. The SKOS API has been designed to work with SKOS models at a high level of abstraction to aid developers of applications that use SKOS. We describe a SKOS editor (SKOSEd) that is built on the Protege 4 framework using the OWL and SKOS APIs. As well as exploring the benefits of the principled extensibility afforded by this approach, we also explore the limitations placed upon SKOS by restricting SKOSEd to OWL-DL.

Simon Jupp, Sean Bechhofer, Robert Stevens

An Ontology of Resources: Solving the Identity Crisis

The primary goal of the Semantic Web is to use URIs as a universal space to name anything, expanding from using URIs for webpages to URIs for “real objects and imaginary concepts,” as phrased by Berners-Lee. This distinction has often been tied to the distinction between information resources, like webpages and multimedia files, and non-information resources, which are everything from real people to abstract concepts like ‘the integers.’ Furthermore, the W3C has recommended not to use the same URI for information resources and non-information resources, and several communities like the Linked Data initiative are deploying this principle. The definition put forward by the W3C, that information resources are things whose “essential nature is information,” is a difficult distinction at best. For example, would the text of Moby Dick be an information resource? While this problem could safely be ignored up until recently, with the rise of Linked Data and projects like OKKAM, it appears that this problem should be modelled formally. An ontology called IRW (Identity and Reference on the Web) of various types of resources and their relationships, both for the hypertext Web and the Semantic Web, is presented. It builds upon Information Object Lite (an extension of DOLCE Ultra Lite for describing information objects) and IRE (an earlier ontology of resources), and aligns with other work in this area. This ontology can be used as a tool to make the Semantic Web more self-describing and to allow inference to be used to test for membership in various classes of resources.

Harry Halpin, Valentina Presutti

Semantic Web Services

Mining Semantic Descriptions of Bioinformatics Web Resources from the Literature

A number of projects (myGrid, BioMOBY, etc.) have recently been initiated in order to organise emerging bioinformatics Web Services and provide their semantic descriptions. They typically rely on manual curation efforts. In this paper we focus on a semi-automated approach to mine semantic descriptions from the bioinformatics literature. The method combines terminological processing and dependency parsing of journal articles, and applies information extraction techniques to profile Web services using informative textual passages, related ontological annotations and service descriptors. Service descriptors are terminological phrases reflecting related concepts (e.g. tasks, approaches, data) and/or specific roles (e.g. input/output parameters, etc.) of the associated resource classes (e.g. algorithms, databases, etc.). They can be used to facilitate subsequent manual description of services, but also for providing a semantic synopsis of a service that can be used to locate related services. We present a case-study involving full text articles from the BMC Bioinformatics journal. We illustrate the potential of natural language processing not only for mining descriptions of known services, but also for discovering new services that have been described in the literature.

Hammad Afzal, Robert Stevens, Goran Nenadic

Hybrid Adaptive Web Service Selection with SAWSDL-MX and WSDL-Analyzer

In this paper, we present an adaptive, hybrid semantic matchmaker for SAWSDL services, called SAWSDL-MX2. It determines three kinds of semantic similarity between a service and a given request: logic-based, text-based and structural similarity. In particular, the degree of structural service similarity is computed by the WSDL-Analyzer tool [12] by means of XMLS tree edit distance measurement and string-based and lexical comparison of the respective XML-based WSDL services. SAWSDL-MX2 then learns the optimal aggregation of these different matching degrees over a subset of the test collection SAWSDL-TC1 using a binary support vector machine-based classifier. Finally, we compare the retrieval performance of SAWSDL-MX2 with the non-adaptive matchmaker variant SAWSDL-MX1 [1] and the straightforward combination of its logic-based-only variant SAWSDL-M0 with WSDL-Analyzer.

Matthias Klusch, Patrick Kapahnke, Ingo Zinnikus

Enhancing Service Selection by Semantic QoS

The increasing number of functionally similar services requires a selection process based on non-functional properties, i.e., on Quality of Service (QoS). In this article, the authors therefore present a QoS model, an architecture and an implementation which enhance the selection process through the annotation of Service Level Agreement (SLA) templates with semantic QoS metrics. The QoS model comprises a specification for annotating SLA template files, a QoS conceptual model formed as a QoS ontology, and a selection algorithm. This approach, which is backward compatible, provides interoperability between customers and providers as well as a lightweight alternative. Finally, its applicability and benefits are shown using examples of Infrastructure services.

Henar Muñoz Frutos, Ioannis Kotsiopoulos, Luis Miguel Vaquero Gonzalez, Luis Rodero Merino

Towards an Agent Based Approach for Verification of OWL-S Process Models

In this paper we investigate the transformation of OWL-S process models to ISPL - the system description language for MCMAS, a symbolic model checker for multi agent systems. We take the view that services can be considered as agents and service compositions as multi agent systems. We illustrate how atomic and composite processes in OWL-S can be encoded into ISPL using the proposed transformation rules for a restricted set of data types. As an illustrative example, we use an extended version of the BravoAir process model. We formalise certain interesting properties of the example in temporal-epistemic logic and present results from their verification using MCMAS.

Alessio Lomuscio, Monika Solanki

Leveraging Semantic Web Service Descriptions for Validation by Automated Functional Testing

Recent years have seen the utilisation of Semantic Web Service descriptions for automating a wide range of service-related activities, with a primary focus on service discovery, composition, execution and mediation. An important area which so far has received less attention is service validation, whereby advertised services are proven to conform to required behavioural specifications. This paper proposes a method for validation of service-oriented systems through automated functional testing. The method leverages ontology-based and rule-based descriptions of service inputs, outputs, preconditions and effects (IOPE) for constructing a stateful EFSM specification. The specification is subsequently utilised for functional testing and validation using the proven Stream X-machine (SXM) testing methodology. Complete functional test sets are generated automatically at an abstract level and are then applied to concrete Web services, using test drivers created from the Web service descriptions. The testing method comes with completeness guarantees and provides a strong method for validating the behaviour of Web services.

Ervin Ramollari, Dimitrios Kourtesis, Dimitris Dranidis, Anthony J. H. Simons

Tagging and Annotation

Neighborhood-Based Tag Prediction

We consider the problem of tag prediction in collaborative tagging systems where users share and annotate resources on the Web. We put forward HAMLET, a novel approach to automatically propagate tags along the edges of a graph which relates similar documents. We identify the core principles underlying tag propagation, for which we derive suitable scoring models combined in one overall ranking formula. Leveraging these scores, we present an efficient top-$k$ tag selection algorithm that infers additional tags by carefully inspecting neighbors in the document graph. Experiments using real-world data demonstrate the viability of our approach in large-scale environments where tags are scarce.

Adriana Budura, Sebastian Michel, Philippe Cudré-Mauroux, Karl Aberer
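The core idea of neighborhood-based top-$k$ tag selection can be sketched in a few lines (an illustration of the principle, not HAMLET's actual scoring models): each candidate tag is scored by the similarity-weighted votes of neighboring documents, tags the document already carries are skipped, and the $k$ best survive. The weights and tag sets below are hypothetical.

```python
from collections import Counter
import heapq

def predict_tags(doc_tags, neighbors, k=3):
    """Score each candidate tag by similarity-weighted votes from
    neighbors in the document graph; return the top-k new tags."""
    scores = Counter()
    for sim, tags in neighbors:          # (similarity, tag set) per neighbor
        for tag in tags:
            if tag not in doc_tags:      # only infer tags the doc lacks
                scores[tag] += sim
    return heapq.nlargest(k, scores, key=scores.__getitem__)

doc_tags = {"semanticweb"}
neighbors = [(0.9, {"rdf", "owl"}), (0.5, {"rdf", "sparql"}), (0.2, {"owl"})]
print(predict_tags(doc_tags, neighbors, k=2))  # ['rdf', 'owl']
```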

User Evaluation Study of a Tagging Approach to Semantic Mapping

A key aspect of semantic interoperability is the semantic mapping process itself. Traditionally, semantic mapping processes conducted by knowledge engineers have been proposed to bridge this gap. However, knowledge engineers alone are unlikely to cope with the ever increasing amount of mapping work required, especially as mappings themselves begin to be specialised for different contexts. One solution is to develop new mapping processes that enable users to participate in the mapping process themselves. In this paper we present an evaluation study of our user-driven tagging approach to the semantic mapping process. In our approach, users actively participate in generating mappings by categorising automatically generated candidate matches presented in natural language over a long time period. In the evaluation study three groups of users generated mappings between their personal ontologies and a sports ontology describing sports news content from RSS feeds. The mapping process was embedded within the users’ work environment as a Firefox browser extension. The study is discussed, focusing on whether the mapping process is unintrusive, engaging and simplified for the user. The evaluation results were promising and indicate that people with various levels of expertise could become active in the semantic mapping process.

Colm Conroy, Rob Brennan, Declan O’Sullivan, Dave Lewis

Fuzzy Annotation of Web Data Tables Driven by a Domain Ontology

We propose an automatic system for accurately annotating data tables extracted from the Web. This system is designed to provide additional data to an existing querying system called MIEL, which relies on a common vocabulary used to query local relational databases. We use the same vocabulary, translated into an OWL ontology, to annotate the tables. Our annotation system is unsupervised: it uses only the knowledge defined in the ontology to automatically annotate the entire content of tables, following an aggregation approach that first annotates cells, then columns, then relations between those columns. The annotations are fuzzy: instead of linking an element of the table with a single precise concept of the ontology, the elements of the table are annotated with several concepts, each associated with a relevance degree. Our annotation process has been validated experimentally on scientific domains (microbial risk in food, chemical risk in food) and a technical domain (aeronautics).

Gaëlle Hignette, Patrice Buche, Juliette Dibie-Barthélemy, Ollivier Haemmerlé
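The fuzzy-annotation idea can be sketched as follows (a toy illustration, not the authors' system): instead of committing a table cell to one concept, every candidate concept is kept with a relevance degree. The concept labels and the token-overlap scoring below are assumptions made for the example.

```python
def fuzzy_annotate(cell, concepts):
    """Annotate a table cell with every matching ontology concept,
    scored by best token overlap with the concept's labels."""
    tokens = set(cell.lower().split())
    scores = {}
    for name, labels in concepts.items():
        best = max(
            len(tokens & set(label.lower().split()))
            / len(tokens | set(label.lower().split()))
            for label in labels
        )
        if best > 0:
            scores[name] = round(best, 2)   # relevance degree, not a hard link
    return scores

# Hypothetical mini-ontology from the microbial-risk-in-food domain
concepts = {"Microorganism": ["listeria monocytogenes", "salmonella"],
            "FoodProduct": ["raw milk", "soft cheese"]}
print(fuzzy_annotate("raw milk cheese", concepts))  # {'FoodProduct': 0.67}
```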

An Integrated Approach to Extracting Ontological Structures from Folksonomies

Collaborative tagging systems have recently emerged as one of the rapidly growing web 2.0 applications. The informal social classification structure in these systems, also known as folksonomy, provides a convenient way to annotate resources by allowing users to use any keyword or tag that they find relevant. In turn, the flat and non-hierarchical structure with unsupervised vocabularies leads to low search precision and poor resource navigation and retrieval. This drawback has created the need for ontological structures which provide shared vocabularies and semantic relations for translating and integrating the different sources. In this paper, we propose an integrated approach for extracting ontological structure from folksonomies that exploits the power of low support association rule mining supplemented by an upper ontology such as WordNet.

Huairen Lin, Joseph Davis, Ying Zhou
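The role of low-support association rules in extracting ontological structure can be sketched as follows (a simplified illustration, not the paper's method): if tag a almost always co-occurs with tag b but not vice versa, b is a candidate broader term for a. All thresholds and tag data are assumptions.

```python
from itertools import combinations

def broader_candidates(taggings, min_support=2, min_conf=0.8, max_inv=0.5):
    """Derive broader/narrower tag candidates from co-occurrence:
    high confidence a -> b with low confidence b -> a suggests
    that b subsumes a."""
    counts, pairs = {}, {}
    for tags in taggings:
        for t in tags:
            counts[t] = counts.get(t, 0) + 1
        for a, b in combinations(sorted(tags), 2):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    rules = []
    for (a, b), n in pairs.items():
        if n < min_support:              # prune very rare pairs
            continue
        for narrow, broad in ((a, b), (b, a)):
            if n / counts[narrow] >= min_conf and n / counts[broad] <= max_inv:
                rules.append((narrow, broad))
    return rules

taggings = [{"jaguar", "cat"}, {"jaguar", "cat"}, {"cat"}, {"cat"}, {"cat", "pet"}]
print(broader_candidates(taggings))  # [('jaguar', 'cat')]
```

In the full approach such candidate pairs would additionally be checked against an upper resource such as WordNet before being admitted into the ontological structure.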

Reducing Ambiguity in Tagging Systems with Folksonomy Search Expansion

Search facilities are vital for folksonomy (or social tagging) based systems. Although these systems allow great malleability and adaptability, they also suffer from problems such as ambiguity in the meaning of tags, the flat organisation of tags, and a degree of instability in the consensus about which tags best describe certain Web resources. It has been argued that folksonomy structures can be enhanced by ontologies; however, as suggested by Hotho et al., a key question remains open: how to exploit the benefits of ontologies without bothering untrained users with their rigidity. In this paper, we propose an approach to address the problem of ambiguity in tagging systems by expanding folksonomy search with ontologies that are completely transparent to users. Preliminary implementations and evaluations of the efficiency and usefulness of such expansions are very promising.

Jeff Z. Pan, Stuart Taylor, Edward Thomas

Semantic Web In-Use Track

Ontology-Based Service Discovery Front-End Interface for GloServ

This paper describes an ontology-based service discovery front-end interface for GloServ, an ontology-based distributed service discovery engine that allows sophisticated querying of services. The working implementation of the front-end interface demonstrates how GloServ can be used for different types of web service discovery. The front-end generates a search form from the service class ontology. It also allows multiple services to be queried in a single search by generating cascaded forms for combined service queries. It then converts the input to a GloServ query and displays the results to the user in a coherent manner. The use cases demonstrated with this implementation are service discovery for location-based services, tagged services and collaborative search with other users.

Knarig Arabshian, Christian Dickmann, Henning Schulzrinne

A Resource List Management Tool for Undergraduate Students Based on Linked Open Data Principles

Resource List Management tools help educators create and publish lists of resources relevant to students undertaking a particular module, course or assignment. Traditional approaches to online delivery of these lists have been limited by lack of interoperability and integration with other campus systems, poor take-up by instructors and brittle linking strategies which break as institutions shift suppliers for e-content from year to year. In this paper we present a Resource List Management tool that uses Semantic Web technologies and Linked Data principles to overcome these limitations and improve the interoperability of data contained within such systems.

Chris Clarke

SCOVO: Using Statistics on the Web of Data

Statistical data is present everywhere—from governmental bodies to economics, from life-science to industry. With the rise of the Web of Data, the need for sharing, accessing, and using this data has entered a new stage. In order to enable data locked in proprietary, closed-world formats to enter the Web of Data, we propose a framework for modelling and publishing statistical data. To illustrate the usefulness of our approach we demonstrate its application to real-world statistical datasets.

Michael Hausenblas, Wolfgang Halb, Yves Raimond, Lee Feigenbaum, Danny Ayers

Media Meets Semantic Web – How the BBC Uses DBpedia and Linked Data to Make Connections

In this paper, we describe how the BBC is working to integrate data and linking documents across BBC domains by using Semantic Web technology, in particular Linked Data, MusicBrainz and DBpedia. We cover the work of BBC Programmes and BBC Music building Linked Data sites for all music and programmes related brands, and we describe existing projects, ongoing development, and further research we are doing in a joint collaboration between the BBC, Freie Universität Berlin and Rattle Research in order to use DBpedia as the controlled vocabulary and semantic backbone for the whole BBC.

Georgi Kobilarov, Tom Scott, Yves Raimond, Silver Oliver, Chris Sizemore, Michael Smethurst, Christian Bizer, Robert Lee

Creating Digital Resources from Legacy Documents: An Experience Report from the Biosystematics Domain

Digitized legacy documents marked up with XML can be used in many ways, e.g., to generate RDF statements about the world they describe. A prerequisite for doing so is that the document markup is of sufficient quality. Since fully automated markup-generation methods cannot ensure this, manual correction and cleaning are indispensable. In this paper, we report on our experiences from a digitization and markup project for a large corpus of legacy documents from the biosystematics domain, with a focus on the use of modern tools. The markup created covers both document structure and semantic details. In contrast to previous markup projects reported on in the literature, our corpus consists of large publications that comprise many different semantic units, and the documents contain OCR noise and layout artifacts. A core insight is that digitization and automated markup on the one hand and manual cleaning and correction on the other should be tightly interleaved, and that tools supporting this integration yield a significant improvement.

Guido Sautter, Klemens Böhm, Donat Agosti, Christiana Klingenberg

Collaborative Ocean Resource Interoperability: Multi-use of Ocean Data on the Semantic Web

Earth Observations (EO) collect various characteristics of the environment using sensors which often differ in measurement type and in spatial and temporal coverage. Making individual observational data interoperable is equally important given how expensive and time-consuming EO operations are: interoperability improves the reusability of existing observations, both in a broader context and in combination with other observations. As a demonstration of the potential offered by semantic web technology, we have used the National Oceanography Centre Southampton’s Ferrybox project (where suites of environmental sensors installed on commercial ships collect near real-time data) to set up an ontology-based reference model of a Collaborative Ocean, where relevant oceanographic resources, such as sensors and observations, can be semantically annotated by their stakeholders to produce RDF-format metadata that facilitates data and resource interoperability in a distributed environment. We have also demonstrated an infrastructure where common semantic management activities are supported, including ontology management, semantic annotation, storage, and reuse (navigating, inference and query). Once the method and infrastructure are adopted by other related oceanographic projects to describe their resources and move their metadata onto the semantic web, we expect better interoperability within the Collaborative Ocean initiative, facilitating multi-use of ocean data and making more EO data available on the semantic web.

Feng (Barry) Tao, Jon Campbell, Maureen Pagnani, Gwyn Griffiths

ONKI SKOS Server for Publishing and Utilizing SKOS Vocabularies and Ontologies as Services

Vocabularies are the building blocks of the Semantic Web, providing shared terminological resources for content indexing, information retrieval, data exchange, and content integration. Most semantic web applications in practical use are based on lightweight ontologies and, more recently, on the Simple Knowledge Organization System (SKOS) data model being standardized by W3C. Easy and cost-efficient methods for publishing, integrating, and utilizing vocabulary services are therefore highly important for the proliferation of the Semantic Web. This paper presents the ONKI SKOS Server for these tasks. Using ONKI SKOS, a SKOS vocabulary or a lightweight ontology can be published on the web as a ready-to-use service in a matter of minutes. The services include not only a browser for human use, but also Web Service and AJAX interfaces for finding and selecting concepts and transporting resources from the ONKI SKOS Server to connected systems. Code generation services for AJAX and Web Service APIs are provided automatically, too. ONKI SKOS services are also used for semantic query expansion in information retrieval tasks. The idea of publishing ontologies as services is analogous to Google Maps; in our case, however, vocabulary services are provided and mashed up in applications. ONKI SKOS was published at the beginning of 2008 and is, to our knowledge, the first generic SKOS server of its kind. The system has been used to publish and utilize some 60 vocabularies and ontologies in the National Finnish Ontology Service ONKI, www.yso.fi.

Jouni Tuominen, Matias Frosterus, Kim Viljanen, Eero Hyvönen
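The semantic query expansion mentioned above can be sketched with a minimal example (an illustration of the general technique, not ONKI's implementation): a query concept is expanded with its transitively narrower concepts, SKOS-style. The tiny vocabulary below is hypothetical.

```python
# Hypothetical skos:narrower relations of a small vocabulary
NARROWER = {
    "animals": ["mammals", "birds"],
    "mammals": ["cats", "dogs"],
}

def expand(concept):
    """Return the concept plus all transitively narrower concepts,
    so a search for 'animals' also retrieves content indexed with
    'cats' or 'dogs'."""
    result, stack = [], [concept]
    while stack:
        c = stack.pop()
        result.append(c)
        stack.extend(NARROWER.get(c, []))
    return result

print(sorted(expand("animals")))
# ['animals', 'birds', 'cats', 'dogs', 'mammals']
```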

Ontology Libraries for Production Use: The Finnish Ontology Library Service ONKI

This paper discusses problems of creating and using ontology library services in production use. One approach to a solution is presented with an online implementation—the Finnish Ontology Library Service ONKI— that is in pilot use on a national level in Finland. ONKI contributes to previous research on ontology libraries in many ways: First, mashup and web service support with various tools is provided for cost-efficient utilization of ontologies in indexing and search applications. Second, services covering the different phases of the ontology life cycle are provided. Third, the services are provided and used in real world applications on a national scale. Fourth, the ontology framework is being developed by a collaborative effort by organizations representing different application domains, such as health, culture, and business.

Kim Viljanen, Jouni Tuominen, Eero Hyvönen

Demo Track

SAscha: Supporting the Italian Public Cooperation System with a Rich Internet Application for Semantic Web Services

We present SAscha, a web browser-based application for authoring semantic Web Service descriptions according to the SAWSDL standard. Even though it was conceived in a specific real-world domain scenario, i.e. the Italian framework for service interoperability, it is in fact a general-purpose SAWSDL tool that showcases a distinctive feature set including, but not limited to, simplicity of use, availability as an infrastructural service, and XML Schema support. Although upper-level ontologies are expected to be stored in an ad-hoc repository, SAscha allows OWL ontologies to be stored anywhere, locally or on the web.

Alessandro Adamou

Folksonomy Enrichment and Search

The Semantic Web community has expressed its interest in how Semantic Web technology can be applied more efficiently in a manner that supports real-world applications. Additionally, the popularity of social tagging systems has demonstrated a clear need for organisation and for more flexible ways of querying user-contributed content. This work presents FLOR, a folksonomy enrichment algorithm which exploits a variety of knowledge sources to impose structure on user tagspaces. In addition, a query mechanism is presented, demonstrating how the enriched folksonomy structures can be interrogated by transforming keyword queries on folksonomies into formal queries on semantic structures. The first prototype of the FLOR enrichment algorithm and a first instance of the query mechanism have been implemented, and a demonstration is available online.

Sofia Angeletou, Marta Sabou, Enrico Motta

K-Tools: Towards Semantic Knowledge Management

This paper details the use of Semantic Web tools for supporting networked knowledge acquisition, search and sharing in large distributed organisations. The demonstration will showcase from a user perspective an application developed to aid knowledge management in large organisations, detailing the problems and technical solutions employed.

Sam Chapman, Vitaveska Lanfranchi, Ravish Bhagdev

The XMediaBox: Sensemaking through the Use of Knowledge Lenses

Sensemaking is the process of analysing complex situations in order to make informed decisions. Semantic Web technology can be effectively used to create new sensemaking systems that focus on concepts and knowledge instead of documents. We demonstrate how this is achieved using information extraction to acquire knowledge and create a semantic repository that can then be semantically searched. A domain ontology is used to support the creation of an analysis tree; the semantic visualisation enables knowledge discovery, a core aspect of sensemaking.

Aba-Sah Dadzie, José Iria, Daniela Petrelli, Lei Xia

Controlled Natural Language for Semantic Annotation

Sovereign is a novel collection of resources for (1) authoring, (2) annotating, and (3) accessing knowledge on the Social Semantic Desktop. With respect to (2), the Sovereign Semantic Annotator allows the non-expert user, in a novel way, to semi-automatically author and annotate meeting minutes and status reports simultaneously using Controlled Natural Language (CNL). The metadata is captured as knowledge on the Social Semantic Desktop for later aggregation and access. The annotator is based on Controlled Language for Information Extraction (CLIE) technology. Furthermore, it is available as a plugin for a semantic note-taking application for the Social Semantic Desktop. We intend to present a working prototype of the Sovereign annotator for the Social Semantic Desktop at ESWC.

Brian Davis, Pradeep Varma, Siegfried Handschuh, Laura Dragan, Hamish Cunningham

Multilingual and Localization Support for Ontologies

This demo proposal aims at providing support for the localization of ontologies and, as a result, at obtaining multilingual ontologies. We briefly present an advanced version of LabelTranslator, our system for localizing ontology terms into different natural languages. The current version of the system differs from the previous work reported in [1,2] in that it relies on a modular approach to storing the linguistic information associated with ontology terms. Additionally, it uses a synchronization method to keep the conceptual and linguistic information up to date.

Mauricio Espinoza, Asunción Gómez-Pérez, Elena Montiel-Ponsoda

WSMX 1.0: A Further Step toward a Complete Semantic Execution Environment

The Web Service Execution Environment (WSMX) project is a continuous, ongoing effort that aims at delivering middleware covering the whole Semantic Web Services life cycle. WSMX is the reference implementation of the Semantically Enabled Service-Oriented Architecture (SESA) [1]. In this demonstration we present the latest achievements, which include Web Service monitoring, Web Service ranking, and Web Service grounding.

Federico Michele Facca, Srdjan Komazec, Ioan Toma

MoKi: The Enterprise Modelling Wiki

Enterprise modelling focuses on the construction of a structured description, the so-called enterprise model, which represents aspects relevant to the activity of an enterprise. Although it has recently become clearer that enterprise modelling is a collaborative activity involving a large number of people, most enterprise modelling tools still support only very limited degrees of collaboration. In this contribution we describe a tool for enterprise modelling, called MoKi (MOdelling wiKI), which supports agile collaboration between all the different actors involved in enterprise modelling activities. MoKi is based on a Semantic Wiki and enables actors with different expertise to develop an enterprise model using not only structural (formal) descriptions but also more informal and semi-formal descriptions of knowledge.

Chiara Ghidini, Barbara Kump, Stefanie Lindstaedt, Nahid Mahbub, Viktoria Pammer, Marco Rospocher, Luciano Serafini

The Personal Knowledge Workbench of the NEPOMUK Semantic Desktop

In this paper we describe some of the personal information management features found in the PSEW prototype of the NEPOMUK Semantic Desktop project. Focussing on importing and adding new knowledge to the system as well as navigating, browsing, and searching the existing knowledge we show how a user can use NEPOMUK to integrate her information across application boundaries and create and discover new structures in her information space.

Gunnar Aastrand Grimnes, Leo Sauermann, Ansgar Bernardi

Utilizing Semantics in the Production of iTV Shows

This paper gives an overview of the semantic aspects of an advanced, semantics-based broadcasting production support system designed to enable the creation of interactive multi-channel television shows. The “Intelligent Media Framework” forms the middleware of this system and was developed in the context of the European integrated project LIVE. The envisaged “intelligence” is based on formal, machine understandable descriptions of the content and the events. We demonstrate the successful usage of the system by a broadcasting corporation in a field trial with several hundred end consumers, conducted during the Olympic Games in Beijing, August 2008.

Georg Güntner, Dietmar Glachs, Rupert Westenthaler

Knowledge Applications for Life Events: How the Dutch Government Informs the Public about Rights and Duties in the Netherlands

The Dutch government has several ontology driven applications on the internet that inform the public about rights and duties that are relevant for certain life events. Life events such as “New to Holland”, “The death of a close relative” and “Working and studying” are examples of events in life where you want to be informed about your rights and duties. In the demonstration we show how we make use of an ontology to determine a profile of the user and how we give him or her the information that is tailored to his/her specific situation.

Ronald Heller, Freek van Teeseling

CultureSampo: A National Publication System of Cultural Heritage on the Semantic Web 2.0

CultureSampo is an application demonstration of a national-level publication system for cultural heritage content on the Web, based on the ideas and technologies of the Semantic Web and Web 2.0. On the semantic side, the system presents new solutions to the interoperability problems of dealing with multiple ontologies from different domains, and to the problems of integrating multiple metadata schemas and cross-domain content into a homogeneous semantic portal. A novelty of the system is the use of semantic models based on events and narrative process descriptions for modeling and visualizing cultural phenomena, and for semantic recommendations. On the Web 2.0 side, CultureSampo proposes and demonstrates a content creation process for collaborative, distributed ontology and content development involving different memory organizations and citizens. The system provides cultural heritage content to end users in a new way through nine thematic perspectives, based on semantic visualizations. Furthermore, CultureSampo services are available to external web applications through semantic AJAX widgets.

Eero Hyvönen, Eetu Mäkelä, Tomi Kauppinen, Olli Alm, Jussi Kurki, Tuukka Ruotsalo, Katri Seppälä, Joeli Takala, Kimmo Puputti, Heini Kuittinen, Kim Viljanen, Jouni Tuominen, Tuomas Palonen, Matias Frosterus, Reetta Sinkkilä, Panu Paakkarinen, Joonas Laitio, Katariina Nyberg

A Rule System for Querying Persistent RDFS Data

We present GiaBATA, a system for storing, aggregating, and querying Semantic Web data, based on declarative logic programming technology: namely on the dlvhex system, which allows us to implement fully SPARQL-compliant semantics, and on DLVDB, which extends the DLV system with persistent storage capabilities. Compared with off-the-shelf RDF stores and SPARQL engines, we offer more flexible support for rule-based RDFS and other higher entailment regimes by enabling custom reasoning via rules, as well as the possibility to choose the reference ontology on a per-query basis. Thanks to the declarative approach, GiaBATA can apply well-known logic-level optimization techniques from logic programming (LP) and deductive database systems. Moreover, our architecture allows for extensions of SPARQL with non-standard features such as aggregates, custom built-ins, or arbitrary rulesets. With the resulting system we provide a flexible toolbox that embeds Semantic Web data and ontologies in a fully declarative LP environment.

Giovambattista Ianni, Thomas Krennwallner, Alessandra Martello, Axel Polleres
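The custom rule-based RDFS entailment that GiaBATA supports can be illustrated with a toy fixpoint evaluation of a single RDFS rule. This is a hypothetical Python sketch, not GiaBATA's implementation (which delegates rule evaluation to dlvhex and DLVDB); the rule shown is rdfs9, which propagates class membership along rdfs:subClassOf.

```python
# Minimal fixpoint evaluation of one RDFS entailment rule (rdfs9):
# if (s, rdf:type, C) and (C, rdfs:subClassOf, D) then (s, rdf:type, D).
# Illustrative only -- a real system expresses such rules declaratively.

TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    """Apply the subclass-membership rule until no new triples appear."""
    facts = set(triples)
    while True:
        derived = {
            (s, TYPE, d)
            for (s, p, c) in facts if p == TYPE
            for (c2, p2, d) in facts if p2 == SUBCLASS and c2 == c
        }
        if derived <= facts:       # fixpoint reached: nothing new derivable
            return facts
        facts |= derived

# Toy data: alice is a Student; Student < Person < Agent.
data = [
    ("alice", TYPE, "Student"),
    ("Student", SUBCLASS, "Person"),
    ("Person", SUBCLASS, "Agent"),
]
closed = rdfs_closure(data)
```

In GiaBATA such rules are written declaratively and evaluated by the underlying LP engine, which is what makes the logic-level optimizations mentioned in the abstract applicable.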

RaDON — Repair and Diagnosis in Ontology Networks

One of the major challenges in managing networked and dynamic ontologies is to handle inconsistencies in single ontologies, and inconsistencies introduced by integrating multiple distributed ontologies. Our RaDON system provides functionalities to repair and diagnose ontology networks by extending the capabilities of existing reasoners. The system integrates several new debugging and repairing algorithms, such as a relevance-directed algorithm to meet the various needs of the users.

Qiu Ji, Peter Haase, Guilin Qi, Pascal Hitzler, Steffen Stadtmüller

Supporting the Reuse of Global Unique Identifiers for Individuals in OWL/RDF Knowledge Bases

In the current state of Semantic Web knowledge representation, it is fairly common for individuals in ontologies to have different identifiers even if they are meant to denote the same object in the real world. This makes data integration at the level of individuals quite difficult. In this demonstration, we show two plug-ins, Okkam4N and Okkam4P, developed for the NeOn Toolkit and Protégé respectively, which support the reuse of globally unique identifiers for individuals in OWL/RDF knowledge bases. The key idea is that users are given the chance to look up, in the publicly available Entity Name System, an existing URI for the corresponding individual in their local OWL/RDF knowledge bases. The plug-ins are available and tested for the latest official release versions, NeOn Toolkit 1.2 and Protégé 3.4.

Xin Liu, Heiko Stoermer, Paolo Bouquet, Shu Wang

Modeling and Enforcement of Business Policies on Process Models with Maestro

Business policies and rules govern and guide the business processes of an organization. Enterprises usually document their business policies and rules only in natural language. This makes determining which business policies and rules apply to a certain process, and enforcing them on that process, very costly and cumbersome. We present a tool that supports the formal specification of policies and rules and their automated enforcement on process models. We explain the research background underlying the tool and give an overview of what will be demonstrated at ESWC.

Ivan Markovic, Sukesh Jain, Mahmoud El-Gayyar, Armin B. Cremers, Nenad Stojanovic

A Reasoning-Based Support Tool for Ontology Mapping Evaluation

In this paper we describe a web-based tool that supports the human expert in revising ontology alignments. Our tool uses logical reasoning as a basis for detecting conflicts in mappings and exploits these conflicts to propagate user decisions. The proposed approach reduces the effort of the human expert and points to logical problems that are hard to find without support.

Christian Meilicke, Heiner Stuckenschmidt, Ondřej Šváb-Zamazal

Semanta – Semantic Email in Action

Semanta is a system supporting Semantic Email, implemented as an add-in to two popular Mail User Agents, using existing email transport technology and integrated with the Social Semantic Desktop. It enables machines to support email users with correctly interpreting, handling and keeping track of action items within email messages, visualizing email workflows, and extracting tasks and appointments from email messages.

Simon Scerri, Ioana Giurgiu, Brian Davis, Siegfried Handschuh

KiWi – A Platform for Semantic Social Software (Demonstration)

Semantic Wikis have demonstrated the power of combining Wikis with Semantic Web technology. The KiWi system goes beyond Semantic Wikis by providing a flexible and adaptable platform for building different kinds of Social Semantic Software, powered by Semantic Web technology. While the KiWi project itself is primarily focussed on the knowledge management domain, this demonstration shows how KiWi aspects like the Wiki Principles and Content Versatility can be used to build completely different kinds of Social Software applications. The first application we show is an "ordinary" Semantic Wiki system preloaded with content from an online news site. The second application, called TagIT, is a map-based system where locations and routes on a map can be "tagged" by users with textual descriptions, SKOS categories, and multimedia material. Both applications are built on top of the same KiWi platform and actually share the same content.

Sebastian Schaffert, Julia Eder, Szaby Grünwald, Thomas Kurz, Mihai Radulescu

Pattern-Based Annotation of HTML-Streams

Web pages containing RDFa markup enable a broad range of new agents that improve usability for human readers. Unfortunately, only a few web sites feature such annotations so far. In this paper, we demonstrate Atheris, a system that annotates structured web pages by means of our web data extraction tool ViPER. Atheris runs inside a web proxy service, making it transparently available. Our approach enables the web browser, the most widely used web agent, to operate intelligently on the displayed page by providing a semantic view over previously 'meaningless' data in order to support human readers.

Florian Schmedding, Max Schwaibold, Kai Simon

OntoComP: A Protégé Plugin for Completing OWL Ontologies

We describe OntoComP, a Protégé 4 plugin that supports ontology engineers in completing OWL ontologies. More precisely, OntoComP supports an ontology engineer in checking whether an ontology contains all the relevant information about the application domain, and in extending the ontology appropriately if this is not the case. It acquires complete knowledge about the application domain efficiently by asking successive questions to the ontology engineer. By using novel techniques from Formal Concept Analysis, it ensures that, on the one hand, the interaction with the ontology engineer is kept to a minimum and, on the other hand, the resulting ontology is complete in a certain well-defined sense.

Barış Sertkaya

Demo: HistoryViz – Visualizing Events and Relations Extracted from Wikipedia

HistoryViz provides a new perspective on a certain kind of textual data, in particular the data available in Wikipedia, where different entities are described and put in historical perspective. Instead of browsing through pages that each describe a certain topic, we can look at the relations between entities and the events connected with selected entities. The solution implemented in HistoryViz provides the user with a graphical interface for viewing events concerning a selected person on a timeline and for viewing relations to other entities as a graph that can be dynamically expanded.

Ruben Sipoš, Abhijit Bhole, Blaž Fortuna, Marko Grobelnik, Dunja Mladenić

Ontology Evolution with Evolva

Ontology evolution is a painstaking and time-consuming process, especially in information rich and dynamic domains. While ontology evolution refers both to the adaptation of ontologies (e.g., through additions or updates possibly discovered from external data sources) and the management of these changes, no existing tools offer both functionalities. The Evolva framework aims to be a blueprint for a comprehensive ontology evolution tool that would cover both tasks. Additionally, Evolva proposes the use of background knowledge sources to reduce user involvement in the ontology adaptation step. This demo focuses on the initial, concrete implementation of our framework.

Fouad Zablith, Marta Sabou, Mathieu d’Aquin, Enrico Motta

Cupboard – A Place to Expose Your Ontologies to Applications and the Community

In this demo, we present the Cupboard system for ontology publishing, sharing and reuse. This system is intended to support both ontology engineers and ontology users/practitioners. For the developers of ontologies, it offers a complete infrastructure to host their ontologies in online ontology spaces, providing mechanisms to describe, manage and effectively exploit these ontologies (through APIs). Furthermore, these ontologies are then exposed to the community, providing users with a complete, friendly environment to find, assess and reuse ontologies.

Mathieu d’Aquin, Holger Lewen

PhD Symposium

Effects of Using a Research Context Ontology for Query Expansion

This thesis investigates whether and how ontologies such as the ones currently evolving in the Semantic Web can serve as knowledge structures for the generation of query expansion terms in information retrieval systems. This issue is examined using a specific example domain, namely educational research. Initial results support the already well-researched finding that query expansion can increase recall. Subsequent experiments will focus on comparing the effectiveness of thesaurus-based and ontology-based query expansion, on identifying ontological relationships that are especially useful for generating query expansion terms in the domain of educational research, and on evaluating the usefulness of different ontological relationships as sources of expansion terms for different types of queries, as well as for different query expansion modes.

Carola Carstens
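The kind of ontology-based query expansion studied in this thesis can be sketched as follows. The ontology fragment, the terms, and the relation names below are invented for illustration and are not taken from the thesis.

```python
# Toy ontology-based query expansion: expand query keywords with terms reached
# via selected ontological relations. The ontology fragment is hypothetical.

ONTOLOGY = {
    # term -> {relation name: [related terms]}
    "reading literacy": {
        "narrower": ["text comprehension"],
        "related":  ["PISA study"],
    },
}

def expand_query(terms, relations=("narrower",)):
    """Return the original terms plus expansion terms reached via `relations`."""
    expanded = list(terms)
    for term in terms:
        for rel in relations:
            expanded.extend(ONTOLOGY.get(term, {}).get(rel, []))
    return expanded

# Expanding via two relation types; which relations help most, and for which
# query types, is exactly what the thesis sets out to evaluate.
q = expand_query(["reading literacy"], relations=("narrower", "related"))
```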

Towards a Semantic Infrastructure for User Generated Mobile Services

This paper presents a research approach towards a semantic infrastructure for user-generated mobile services. Building on the concept of semantic microservices, the aim of this work is to lower the barrier for end users sufficiently to enable easy ad-hoc creation, customisation, and discovery of mobile services.

Marcin Davies

Relational Databases as Semantic Web Endpoints

This proposal explores the promotion of existing relational databases to Semantic Web Endpoints. It presents the benefits of ontology-based read and write access to existing relational data as well as the need for specialized, scalable reasoning over that data. We introduce our approach for translating SPARQL/Update operations to SQL, describe how scalable reasoning can be realized by using the power of the database system, and outline two case studies for evaluating our approach.

Matthias Hert
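As a rough illustration of the SPARQL/Update-to-SQL translation mentioned above, the following Python sketch maps a single-triple INSERT DATA operation onto a generic triple table. The table schema and the supported syntax are assumptions made for illustration; a real implementation would parse full SPARQL/Update and target the database's actual relational schema.

```python
# Toy translation of a single-triple SPARQL/Update "INSERT DATA" operation
# into SQL over an assumed table triples(subject, predicate, object).
import re

def insert_data_to_sql(update):
    """Translate 'INSERT DATA { <s> <p> <o> . }' into an SQL INSERT statement.
    Only this one pattern is handled; real translators cover the full grammar
    and would use parameterized queries instead of string interpolation."""
    m = re.search(
        r"INSERT\s+DATA\s*\{\s*<([^>]+)>\s+<([^>]+)>\s+<([^>]+)>\s*\.?\s*\}",
        update, re.IGNORECASE)
    if not m:
        raise ValueError("unsupported update: " + update)
    s, p, o = m.groups()
    return ("INSERT INTO triples (subject, predicate, object) "
            f"VALUES ('{s}', '{p}', '{o}');")

sql = insert_data_to_sql(
    "INSERT DATA { <http://ex.org/a> <http://ex.org/knows> <http://ex.org/b> . }")
```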

The Relevance of Reasoning and Alignment Incoherence in Ontology Matching

Ontology matching has become an important field of research in recent years. Although many different approaches have been proposed, only a few of them are committed to a well-defined semantics. As a consequence, the possibilities of reasoning are not exploited to their full extent. A reasoning-based approach will not only improve ontology matching but will also be necessary to solve certain problems that hinder the progress of the whole field. We focus on the notion of alignment incoherence to understand the capabilities of reasoning in ontology matching.

Christian Meilicke

Towards a Semantic Service Broker for Business Grid

The increasing number of infrastructure services requires mechanisms, called service brokers, that discover and select services and resources based on customer requirements. Such a mechanism should improve interoperability between customer and provider and be a backward-compatible, lightweight approach. Introducing semantic annotations into service descriptions (covering both functional and non-functional properties), together with a conceptual model for the business Grid, can help achieve this. Finally, extending the Semantic Open Grid Service Architecture (S-OGSA) with the semantic service broker can add the required capabilities to the Grid middleware.

Henar Muñoz Frutos

Evolva: A Comprehensive Approach to Ontology Evolution

Ontology evolution is gaining increasing momentum in Semantic Web research. Current approaches target evolution in terms of either content or change management, without covering both aspects in the same framework. Moreover, they are slowed down by their heavy reliance on user input. We tackle these issues by proposing Evolva, a comprehensive ontology evolution framework that handles a complete ontology evolution cycle and makes use of background knowledge to decrease user input.

Fouad Zablith

A Context-Aware Approach for Integrating Semantic Web Technologies onto Mobile Devices

Semantic Web technologies such as RDF are usually incorporated in the infrastructure of desktop and web applications and currently cannot be fully deployed on mobile devices. As a consequence, the unique opportunities and novel features the Semantic Web offers are not available in mobile application scenarios, in which context and context-awareness are essential. In this thesis, we propose a context-sensitive Semantic Web framework for mobile devices with which contextual data can be processed semantically and integrated with data sets and services from other communities. Mobile devices augmented with these capabilities can operate autonomously and independently in different environments.

Stefan Zander

Dealing with Inconsistencies in DL-Lite Ontologies

As a shared conceptualization of a particular domain, ontologies play an important role in the success of the Semantic Web. However, it is often difficult to create an absolutely consistent ontology. Inconsistency can occur for several reasons, such as modeling errors, migrating or merging ontologies, and ontology evolution. It is therefore essential to study how to deal with inconsistent ontologies. Many approaches have been proposed to solve this problem, mainly for handling inconsistency in expressive Description Logics (DLs). In our work, we consider inconsistency handling in the DL-Lite family, a family of DLs that preserves tractable reasoning and is specifically tailored to deal with large amounts of data. As in other DLs, inconsistencies in DL-Lite can easily occur because disjointness axioms are allowed.

Liping Zhou

Backmatter
