
About this Book

This volume contains papers from the technical program of the 7th Extended Semantic Web Conference (ESWC 2010), held from May 30 to June 3, 2010, in Heraklion, Greece. ESWC 2010 presented the latest results in research and applications of Semantic Web technologies. ESWC 2010 built on the success of the former European Semantic Web Conference series, but sought to extend its focus by engaging with other communities within and outside Information and Communication Technologies, in which semantics can play an important role. At the same time, ESWC has become a truly international conference. Semantics of Web content, enriched with domain theories (ontologies), data about Web usage, natural language processing, etc., will enable a Web that provides a qualitatively new level of functionality. It will weave together a large network of human knowledge and make this knowledge machine-processable. Various automated services, based on reasoning with metadata and ontologies, will help the users to achieve their goals by accessing and processing information in machine-understandable form. This network of knowledge systems will ultimately lead to truly intelligent systems, which will be employed for various complex decision-making tasks. Research about Web semantics can benefit from ideas and cross-fertilization with many other areas: artificial intelligence, natural language processing, database and information systems, information retrieval, multimedia, distributed systems, social networks, Web engineering, and Web science.

Table of Contents

A Model of User Preferences for Semantic Services Discovery and Ranking

Current proposals on Semantic Web Services discovery and ranking are based on user preference descriptions that often lack sufficient expressiveness, making the description of complex user desires difficult or even impossible. There is no general, comprehensive preference model, so discovery and ranking proposals have to provide preference descriptions whose expressiveness depends on the facilities of the corresponding technique, resulting in user preferences that are tightly coupled with the underlying formalism of each concrete solution. To overcome these problems, this paper presents an abstract and sufficiently expressive model for defining preferences, so that they may be described in an intuitive and user-friendly manner. The proposed model is based on a well-known query preference model from database systems, which provides highly expressive constructors to describe and compose user preferences semantically. Furthermore, the presented proposal is independent of the concrete discovery and ranking engines selected, and may be used to extend current Semantic Web Service frameworks, such as WSMO, SAWSDL, or OWL-S. In this paper, the presented model is also validated against a complex discovery and ranking scenario, and a concrete implementation of the model in WSMO is outlined.

José María García, David Ruiz, Antonio Ruiz-Cortés

Towards Practical Semantic Web Service Discovery

Service orientation is a promising paradigm for offering and consuming functionalities within and across organizations. The ever-increasing acceptance of service-oriented architectures, in combination with the acceptance of the Web as a platform for carrying out electronic business, triggers a need for automated methods to find appropriate Web services.

Various formalisms for the discovery of semantically described services, with varying expressivity and complexity, have been proposed in the past. However, they are difficult to use since they apply the same formalisms to service descriptions and requests. Furthermore, intersection-based matchmaking is insufficient to ensure the applicability of Web services for a given request. In this paper we show that, although most prior approaches provide a formal semantics, their pragmatics for describing requests is improper since it differs from the user intention. We introduce distinct formalisms to describe functionalities and service requests. We also provide the formal underpinning and implementation of a matching algorithm.

Martin Junghans, Sudhir Agarwal, Rudi Studer

iSeM: Approximated Reasoning for Adaptive Hybrid Selection of Semantic Services

We present an intelligent service matchmaker, called iSeM, for adaptive and hybrid semantic service selection that exploits the full semantic profile in terms of signature annotations in the description logic $\mathcal{SH}$ and functional specifications in SWRL. In particular, iSeM complements its strict logical signature matching with approximated reasoning based on logical concept abduction and contraction, together with information-theoretic similarity, evidential coherence-based valuation of the result, and non-logic-based approximated matching. In addition, it can avoid the failures of signature-only matching through logical specification plug-in matching of service preconditions and effects. Finally, it learns the optimal aggregation of its logical and non-logic-based matching filters off-line by means of a binary SVM-based service relevance classifier with ranking. We demonstrate the usefulness of iSeM by example and through preliminary results of an experimental performance evaluation.

Matthias Klusch, Patrick Kapahnke

Measures for Benchmarking Semantic Web Service Matchmaking Correctness

Semantic Web Services (SWS) promise to take service-oriented computing to a new level by allowing time-consuming programming tasks to be semi-automated. At the core of SWS are solutions to the problem of SWS matchmaking, i.e., the problem of filtering and ranking a set of services with respect to a service query. Comparative evaluations of different approaches to this problem form the basis for future progress in this area. Reliable evaluations require informed choices of evaluation measures and parameters. This paper establishes a solid foundation for such choices by providing a systematic discussion of the characteristics and behavior of various retrieval correctness measures, in theory and through experimentation.

Ulrich Küster, Birgitta König-Ries
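Two of the standard retrieval correctness measures that such benchmarking discussions build on are precision at cut-off k and average precision. The sketch below is a generic illustration of these two measures, not the specific measure set analyzed in the paper; the service names are invented.

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k ranked services that are relevant."""
    return sum(1 for s in ranking[:k] if s in relevant) / k

def average_precision(ranking, relevant):
    """Mean of precision@k taken at every rank where a relevant service appears."""
    hits, total = 0, 0.0
    for k, s in enumerate(ranking, start=1):
        if s in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

# Hypothetical matchmaker output and relevance judgments:
ranking = ["s1", "s2", "s3", "s4"]
relevant = {"s1", "s3"}
print(precision_at_k(ranking, relevant, 2))   # 0.5
print(average_precision(ranking, relevant))   # (1/1 + 2/3) / 2 ≈ 0.833
```

Average precision rewards placing relevant services early, which is why it is a common single-number summary for ranking quality.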

Efficient Semantic Event Processing: Lessons Learned in User Interface Integration

Most approaches to application integration require an unambiguous exchange of events. Ontologies can be used to annotate the events exchanged and thus ensure a common understanding of those events. The domain knowledge formalized in ontologies can also be employed to facilitate more intelligent, semantic event processing, but at the cost of higher processing effort.

When application integration and event processing are implemented on the user interface layer, performance is an important issue to ensure acceptable reactivity of the integrated system. In this paper, we analyze different architecture variants of implementing such an event exchange, and present an evaluation with regard to performance. An example of an integrated emergency management system is used to demonstrate those variants.

Heiko Paulheim

Usage Policies for Document Compositions

The availability of contents and information as linked data or Web services, i.e., over standardized interfaces, fosters the integration and reuse of data. One common form of information integration is the creation of composed documents, e.g., in the form of dynamic Web pages. Service and data providers restrict the allowed usage of their resources and link it to obligations, e.g., only non-commercial usage is allowed and requires attribution of the provider. These terms and conditions are currently typically available only in natural language, which makes checking whether a document composition is compliant with the policies of the services used a tedious task. In order to make it easier for users to adhere to these usage policies, we propose to formalize them, which enables policy-aware tools that support the creation of compliant compositions. In this paper we propose an OWL model of document compositions and show how it can be used together with the policy language AIR to build a policy-aware document composition platform. We furthermore present a use case and illustrate how it can be realized with our approach.

Sebastian Speiser, Rudi Studer

The Impact of Multifaceted Tagging on Learning Tag Relations and Search

In this paper we present a model for multifaceted tagging, i.e., tagging enriched with contextual information. We present TagMe!, a social tagging front-end for Flickr images that provides multifaceted tagging functionality: it enables users to attach tag assignments to a specific area within an image and to categorize tag assignments. Moreover, TagMe! maps tags and categories to DBpedia URIs to clearly define the meaning of freely chosen words. Our experiments reveal the benefits of these additional tagging facets. For example, the exploitation of the facets significantly improves the performance of FolkRank-based search. Further, we demonstrate the benefits of TagMe! tagging facets for learning semantics within folksonomies.

Fabian Abel, Nicola Henze, Ricardo Kawase, Daniel Krause

OKBook: Peer-to-Peer Community Formation

Many systems exist for community formation in extensions of traditional Web environments, but little work has been done on forming and maintaining communities in the more dynamic environments emerging from peer-to-peer networks. This paper proposes an approach for forming and evolving peer communities based on the sharing of choreography specifications (Interaction Models (IMs)). Two mechanisms for discovering IMs and collaborative peers are presented, based on a meta-search engine and a dynamic peer grouping algorithm respectively. OKBook, a system allowing peers to publish, discover and subscribe or unsubscribe to IMs, has been implemented in accordance with our approach. For the meta-search engine, a strategy for integrating and re-ranking search results obtained from Semantic Web search engines is also described. This allows peers to discover IMs from their group members, thus reducing the burden on the meta-search engine. Our approach complies with the principles of Linked Data and is capable of both contributing to and benefiting from the Web of data.

Xi Bai, Wamberto Vasconcelos, Dave Robertson

Acquiring Thesauri from Wikis by Exploiting Domain Models and Lexical Substitution

Acquiring structured data from wikis is a problem of increasing interest in knowledge engineering and the Semantic Web. Collaboratively developed resources grow over time, have high quality, and are constantly updated. One area of interest is extracting thesauri from wikis. A thesaurus is a resource that lists words grouped together according to similarity of meaning, generally organized into sets of synonyms. Thesauri are useful for a large variety of applications, including information retrieval and knowledge engineering. Most information in wikis is expressed by means of natural language texts and internal links among Web pages, the so-called wikilinks. In this paper, an innovative method for inducing thesauri from Wikipedia is presented. It leverages the Wikipedia structure to extract concepts and the terms denoting them, obtaining a thesaurus that can be profitably used in applications. This method noticeably boosts precision and recall when applied to re-rank a state-of-the-art baseline approach. Finally, we discuss how to represent the extracted results in RDF/OWL, with respect to existing good practices.

Claudio Giuliano, Alfio Massimiliano Gliozzo, Aldo Gangemi, Kateryna Tymoshenko

Efficient Semantic-Aware Detection of Near Duplicate Resources

Efficiently detecting near duplicate resources is an important task when integrating information from various sources and applications. Once detected, near duplicate resources can be grouped together, merged, or removed, in order to avoid repetition and redundancy, and to increase the diversity in the information provided to the user. In this paper, we introduce an approach for efficient semantic-aware near duplicate detection, by combining an indexing scheme for similarity search with the RDF representations of the resources. We provide a probabilistic analysis for the correctness of the suggested approach, which allows applications to configure it for satisfying their specific quality requirements. Our experimental evaluation on the RDF descriptions of real-world news articles from various news agencies demonstrates the efficiency and effectiveness of our approach.

Ekaterini Ioannou, Odysseas Papapetrou, Dimitrios Skoutas, Wolfgang Nejdl
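The abstract above combines an indexing scheme for similarity search with RDF representations of resources. As a generic illustration of that idea (not the authors' actual indexing scheme), a MinHash signature computed over a resource's set of RDF triples supports fast estimation of set similarity between candidate near-duplicates; the resources and triples below are invented.

```python
import hashlib

def minhash_signature(triples, num_hashes=16):
    """MinHash signature of a set: for each seeded hash function,
    keep the minimum hash value observed over all set members."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in triples))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing signature positions estimates the
    Jaccard similarity of the underlying triple sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two hypothetical RDF descriptions of news articles:
doc1 = {("a", "rdf:type", "Article"), ("a", "dc:title", "Election results")}
doc2 = {("b", "rdf:type", "Article"), ("b", "dc:title", "Election results")}
s1, s2 = minhash_signature(doc1), minhash_signature(doc2)
print(estimated_jaccard(s1, s2))  # high value suggests near-duplicates
```

Signatures are small and comparable in constant time, so candidate pairs can be screened cheaply before any expensive semantic comparison.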

Guarding a Walled Garden — Semantic Privacy Preferences for the Social Web

With the increasing usage of Social Networks, giving users the possibility to establish access restrictions on their data and resources becomes more and more important. However, privacy preferences in today's Social Network applications are rather limited and do not allow users to define policies with fine-grained concept definitions. Moreover, due to the walled-garden structure of the Social Web, current privacy settings for one platform cannot refer to information about people on other platforms. In addition, although most Social Networks' privacy settings share the same nature, users are forced to define and maintain their privacy settings separately for each platform. In this paper, we present a semantic model for privacy preferences in Social Web applications that overcomes these problems. Our model extends the current privacy model for Social Platforms with semantic concept definitions. By means of these concepts, users can define exactly what portion of their profile or which resources they want to protect, and which user category is allowed to see those parts. Such category definitions are not limited to one single platform but can refer to information from other platforms as well. We show how this model can be implemented as an extension of the OpenSocial standard, to enable advanced privacy settings which can be exchanged among OpenSocial platforms.

Philipp Kärger, Wolf Siberski

Using Social Media for Ontology Enrichment

In order to support informal learning, we complement the formal knowledge represented by ontologies developed by domain experts with the informal knowledge emerging from social tagging. To this end, we have developed an ontology enrichment pipeline that can automatically enrich a domain ontology using: data extracted by a crawler from social media applications, similarity measures, the DBpedia knowledge base, a disambiguation algorithm and several heuristics. The main goal is to provide dynamic and personalized domain ontologies that include the knowledge of the community of users.

Paola Monachesi, Thomas Markus

Representing Distributed Groups with dgFOAF

Managing one’s memberships in different online communities is becoming an increasingly cumbersome task. This is due to the growing number of communities in which users participate and in which they share information with different groups of people such as colleagues, sports clubs, groups with specific interests, family, friends, and others. These groups use different platforms to perform their tasks, such as collaborative creation of documents, sharing of documents and media, conducting polls, and others. Thus, the groups are scattered and distributed over multiple community platforms that each require a distinct user account and management of the group. In this paper, we present dgFOAF, an approach for distributed group management based on the well-known Friend-of-a-Friend (FOAF) vocabulary. Our dgFOAF approach is independent of the concrete community platforms we find today and needs no central server. It allows for defining communities across multiple systems and alleviates the community administration task. Applications of dgFOAF range from access restriction to trust support based on community membership.

Felix Schwagereit, Ansgar Scherp, Steffen Staab

Semantics, Sensors, and the Social Web: The Live Social Semantics Experiments

The Live Social Semantics is an innovative application that encourages and guides social networking between researchers at conferences and similar events. The application integrates data from the Semantic Web, online social networks, and a face-to-face contact sensing platform. It helps researchers to find like-minded and influential researchers, to identify and meet people in their community of practice, and to capture and later retrace their real-world networking activities. The application was successfully deployed at two international conferences, attracting more than 300 users in total. This paper describes the Live Social Semantics application, with a focus on how data from Web 2.0 sources can be used to automatically generate Profiles of Interest. We evaluate and discuss the results of its two deployments, assessing the accuracy of profiles generated, the willingness to link to external social networking sites, and the feedback given through user questionnaires.

Martin Szomszor, Ciro Cattuto, Wouter Van den Broeck, Alain Barrat, Harith Alani

LESS - Template-Based Syndication and Presentation of Linked Data

Recently, the publishing of structured, semantic information as linked data has gained quite some momentum. For ordinary users on the Internet, however, this information is not yet very visible and (re-)usable. With LESS we present an end-to-end approach for the syndication and use of linked data based on the definition of templates for linked data resources and SPARQL query results. Such syndication templates are edited, published and shared by using a collaborative Web platform. Templates for common types of entities can then be combined with specific linked data resources or SPARQL query results and integrated into a wide range of applications, such as personal homepages, blogs/wikis, mobile widgets, etc. In order to improve the reliability and performance of linked data access, LESS caches versions either for a certain time span or for the case of inaccessibility of the original source. LESS supports the integration of information from various sources as well as arbitrary text-based output formats. This allows generating not only HTML, but also diagrams, RSS feeds, or even complete data mashups without any programming involved.

Sören Auer, Raphael Doehring, Sebastian Dietzold

Hierarchical Link Analysis for Ranking Web Data

On the Web of Data, entities are often interconnected in a way similar to web documents. Previous works have shown how PageRank can be adapted to achieve entity ranking. In this paper, we propose to exploit locality on the Web of Data by taking a layered approach, similar to hierarchical PageRank approaches. We provide justifications for a two-layer model of the Web of Data, and introduce DING (Dataset Ranking), a novel ranking methodology based on this two-layer model. DING uses links between datasets to compute dataset ranks and combines the resulting values with semantics-dependent entity ranking strategies. We compare the effectiveness of the approach with that of other link-based algorithms on large datasets coming from the Sindice search engine. The evaluation, which includes a user study, indicates that the resulting rank is better than that of the other approaches. Also, the resulting algorithm is shown to have desirable computational properties such as parallelisation.

Renaud Delbru, Nickolai Toupikov, Michele Catasta, Giovanni Tummarello, Stefan Decker
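The two-layer idea can be pictured roughly as follows: run a link-analysis algorithm such as PageRank on the coarse graph of inter-dataset links, then combine each dataset's rank with entity ranks computed locally inside that dataset. This sketch is only an illustration of the layering, not the DING algorithm; the datasets, entities, and local scores are invented.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Simple PageRank over a dict mapping node -> list of outgoing links."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, out in graph.items():
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    new[m] += share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Top layer: hypothetical links between datasets.
dataset_links = {"dbpedia": ["geonames"], "geonames": ["dbpedia"]}
dataset_rank = pagerank(dataset_links)

# Bottom layer: hypothetical locally computed entity ranks per dataset.
local_entity_rank = {"dbpedia": {"Berlin": 0.7, "Paris": 0.3}}

def global_entity_rank(dataset, entity):
    """Combine the dataset-level rank with the entity's local rank."""
    return dataset_rank[dataset] * local_entity_rank[dataset][entity]
```

Because the top-layer graph has orders of magnitude fewer nodes than the entity graph, the expensive global computation stays small while local entity ranking can be done per dataset, which is also what makes the layered approach easy to parallelise.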

A Node Indexing Scheme for Web Entity Retrieval

Motivated in part by support from major search engines, hundreds of millions of documents are being published on the web embedding semi-structured data in RDF, RDFa and Microformats. This scenario calls for novel information search systems which provide effective means of retrieving relevant semi-structured information. In this paper, we present an “entity retrieval system” designed to provide entity search capabilities over datasets as large as the entire Web of Data. Our system supports full-text search, semi-structural queries and top-k query results while exhibiting a concise index and efficient incremental updates. We advocate the use of a node indexing scheme and show that it offers a good compromise between query expressiveness, query processing time and update complexity in comparison to three other indexing techniques. We then demonstrate how such a system can effectively answer queries over 10 billion triples on a single commodity machine.

Renaud Delbru, Nickolai Toupikov, Michele Catasta, Giovanni Tummarello

Object Link Structure in the Semantic Web

Large amounts of RDF data have been published on the Semantic Web. The RDF data model, together with the decentralized linkage nature of the Semantic Web, brings object link structure to a worldwide scope. Object links are critical to the Semantic Web, and the macroscopic properties of object links are helpful for better understanding the current Data Web. In this paper, we propose the notion of an object link graph (OLG) in the Semantic Web, and analyze the complex network structure of an OLG constructed from the latest dataset (FC09) collected by the Falcons search engine. We find that the OLG has a scale-free nature and that the approximate effective diameter of the graph is small compared to its scale, which is also consistent with the experimental result based on our last year’s dataset (FC08). The numbers of RDF documents and objects collected by Falcons both doubled during the past year, but the object link graph retains the same density while its diameter is shrinking. We also repeat the complex network analysis on the two largest domain-specific subsets of FC09, namely Bio2RDF(FC09) and DBpedia(FC09). The results show that both Bio2RDF(FC09) and DBpedia(FC09) have a low density of object links, which contributes to the low density of object links in FC09.

Weiyi Ge, Jianfeng Chen, Wei Hu, Yuzhong Qu

ExpLOD: Summary-Based Exploration of Interlinking and RDF Usage in the Linked Open Data Cloud

Publishing interlinked RDF datasets as links between data items identified using dereferenceable URIs on the web brings forward a number of issues. A key challenge is to understand the data, the schema, and the interlinks that are actually used both within and across linked datasets. Understanding actual RDF usage is critical in the increasingly common situations where terms from different vocabularies are mixed. In this paper we describe a tool, ExpLOD, that supports exploring summaries of RDF usage and interlinking among datasets from the Linked Open Data cloud. ExpLOD’s summaries are based on a novel mechanism that combines text labels and bisimulation contractions. The labels assigned to RDF graphs are hierarchical, enabling summarization at different granularities. The bisimulation contractions are applied to subgraphs defined via queries, providing for summarization of arbitrarily large or small graph neighbourhoods. Also, ExpLOD can generate SPARQL queries from a summary. Experimental results, using several collections from the Linked Open Data cloud, compare the two summary creation approaches implemented by ExpLOD (graph-based vs. SPARQL-based).

Combining Query Translation with Query Answering for Efficient Keyword Search

Keyword search has been regarded as an intuitive paradigm for searching not only documents but also data, especially when users are not familiar with the data and the query language. Two types of approaches can be distinguished. Answers to keywords can be computed by searching for matching subgraphs directly in the data. The alternative is keyword translation, which is based on searching the data schema for matching join graphs, which are then translated to queries; answering these queries is performed in a later stage. While clear advantages have been shown for the approaches based on query translation, we observe that the processing done during query translation overlaps with the processing needed for query answering. We propose a tight integration of query translation with query answering. Instead of using the schema, we employ a bisimulation-based structure index graph. Searching this index for matching subgraphs results not only in queries, but also in candidate answers. We propose a set of algorithms which allow for an incremental process, where intermediate results computed during query translation can be reused for query answering. In experiments, we show that this integrated approach consistently outperforms the state of the art.

Improving the Performance of Semantic Web Applications with SPARQL Query Caching

The performance of triple stores is one of the major obstacles for the deployment of semantic technologies in many usage scenarios. In particular, Semantic Web applications, which use triple stores as persistence backends, trade performance for the advantage of flexibility with regard to information structuring. In order to get closer to the performance of relational database-backed Web applications, we developed an approach for improving the performance of triple stores by caching query results and even complete application objects. The selective invalidation of cache objects, following updates of the underlying knowledge bases, is based on analysing the graph patterns of cached SPARQL queries in order to obtain information about what kind of updates will change the query result. We evaluated our approach by extending the BSBM triple store benchmark with an update dimension as well as in typical Semantic Web application scenarios.

Michael Martin, Jörg Unbehauen, Sören Auer
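The invalidation step described above can be pictured with a small sketch (an illustration of the general idea, not the authors' implementation): each cached query is stored alongside its triple patterns, and an update invalidates every cache entry one of whose patterns matches the updated triple, with variables acting as wildcards. Names and data below are invented.

```python
VAR = None  # stands in for a SPARQL variable within a triple pattern

def matches(pattern, triple):
    """A triple matches a pattern if every non-variable position is equal."""
    return all(p is VAR or p == t for p, t in zip(pattern, triple))

class QueryCache:
    def __init__(self):
        self.entries = {}  # query string -> (triple patterns, cached result)

    def put(self, query, patterns, result):
        self.entries[query] = (patterns, result)

    def get(self, query):
        entry = self.entries.get(query)
        return entry[1] if entry else None

    def invalidate(self, updated_triple):
        """Drop every cached result whose graph pattern the update could affect."""
        stale = [q for q, (patterns, _) in self.entries.items()
                 if any(matches(p, updated_triple) for p in patterns)]
        for q in stale:
            del self.entries[q]

cache = QueryCache()
cache.put("q1", [(VAR, "rdf:type", "foaf:Person")], ["alice", "bob"])
cache.invalidate(("ex:doc1", "dc:title", "Report"))       # unrelated: q1 survives
cache.invalidate(("ex:carol", "rdf:type", "foaf:Person"))  # matches: q1 is dropped
print(cache.get("q1"))  # None
```

The point of matching on patterns rather than flushing the whole cache is selectivity: only the queries whose results an update could actually change pay the cost of recomputation.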

An Unsupervised Approach for Acquiring Ontologies and RDF Data from Online Life Science Databases

In the Linked Open Data cloud, one of the largest data sets, comprising 2.5 billion triples, is derived from the Life Science domain. Yet this represents a small fraction of the total number of publicly available data sources on the Web. We briefly describe past attempts to transform specific Life Science sources from a plethora of open as well as proprietary formats into RDF data. In particular, we identify and tackle two bottlenecks in current practice: acquiring ontologies to formally describe these data and creating “RDFizer” programs to convert data from legacy formats into RDF. We propose an unsupervised method, based on transformation rules, for performing these two key tasks, which makes use of our previous work on unsupervised wrapper induction for extracting labelled data from complete Life Science Web sites. We apply our approach to 13 real-world online Life Science databases. The learned ontologies are evaluated by domain experts as well as against gold standard ontologies. Furthermore, we compare the learned ontologies against ontologies that are “lifted” directly from the underlying relational schema using an existing unsupervised approach. Finally, we apply our approach to three online databases to extract RDF data. Our results indicate that this approach can be used to bootstrap and speed up the migration of life science data into the Linked Open Data cloud.

Saqib Mir, Steffen Staab, Isabel Rojas

Leveraging Terminological Structure for Object Reconciliation

It has been argued that linked open data is the major benefit of semantic technologies for the web as it provides a huge amount of structured data that can be accessed in a more effective way than web pages. While linked open data avoids many problems connected with the use of expressive ontologies such as the knowledge acquisition bottleneck, data heterogeneity remains a challenging problem. In particular, identical objects may be referred to by different URIs in different data sets. Identifying such representations of the same object is called object reconciliation. In this paper, we propose a novel approach to object reconciliation that is based on an existing semantic similarity measure for linked data. We adapt the measure to the object reconciliation problem, present exact and approximate algorithms that efficiently implement the methods, and provide a systematic experimental evaluation based on a benchmark dataset. As our main result, we show that the use of light-weight ontologies and schema information significantly improves object reconciliation in the context of linked open data.

Jan Noessner, Mathias Niepert, Christian Meilicke, Heiner Stuckenschmidt

Usability of Keyword-Driven Schema-Agnostic Search

A Comparative Study of Keyword Search, Faceted Search, Query Completion and Result Completion

The increasing amount of data on the Web bears potential for addressing complex information needs more effectively. Instead of keyword search and browsing along links between results, users can specify their needs in terms of complex queries and obtain precise answers right away. However, users might not always know the query language and, more importantly, the schema underlying the data. Motivated by the burden facing data Web search users in specifying complex information needs, we identify a particular class of search approaches that follow a paradigm we refer to as schema-agnostic. Common to these search approaches is that no knowledge about the schema is required to specify complex information needs. We have conducted a systematic study of four popular approaches: (1) simple keyword search, (2) faceted search, (3) result completion, which is based on computing complex answers as candidate results for user-provided keywords, and (4) query completion, which is based on computing structured queries as candidate interpretations of user-provided keywords. We study these approaches from a process-oriented view to derive the main conceptual steps required for addressing complex information needs. Then, we perform an experimental study, based on established practice for task-based evaluation, to assess effectiveness, efficiency and usability.

Thanh Tran, Tobias Mathäß, Peter Haase

Collaborative Semantic Points of Interests

The novel mobile application csxPOI (short for: collaborative, semantic, and context-aware points-of-interest) enables its users to collaboratively create, share, and modify semantic points of interest (POI). Semantic POIs describe geographic places with explicit semantic properties of a collaboratively created ontology. As the ontology includes multiple subclassifications and instantiations, and as it links to DBpedia, the richness of annotation goes far beyond mere textual annotations such as tags. Users can search for POIs through the subclass hierarchy of the collaboratively created ontology. For example, a POI annotated as bakery can be found through the search string shop, as shop is a superclass of bakery. Data mining techniques are employed to cluster and thus improve the quality of the collaboratively created POIs.

Max Braun, Ansgar Scherp, Steffen Staab

Publishing Math Lecture Notes as Linked Data

We semantically mark up a corpus of LaTeX lecture notes and expose them as Linked Data in XHTML+MathML+RDFa. Our application makes the resulting documents interactively browsable for students. Our ontology helps to answer queries from students and lecturers, and paves the path towards an integration of our corpus with external sites.

Catalin David, Michael Kohlhase, Christoph Lange, Florian Rabe, Nikita Zhiltsov, Vyacheslav Zholudev

GoNTogle: A Tool for Semantic Annotation and Search

This paper presents GoNTogle, a tool which provides advanced document annotation and search facilities. GoNTogle allows users to annotate several document formats, using ontology concepts. It also produces automatic annotation suggestions based on textual similarity and previous document annotations. Finally, GoNTogle combines keyword and semantic-based search, offering advanced ontology query facilities.

Giorgos Giannopoulos, Nikos Bikakis, Theodore Dalamagas, Timos Sellis

A Software Tool for Visualizing, Managing and Eliciting SWRL Rules

SWRL rules are increasingly being used to represent knowledge on the Semantic Web. As SWRL rule bases grow larger, managing the resulting complexity can become a challenge. Developers and end-users need rule management tools to tackle this complexity. We developed a rule management tool called Axiomé that aims to address this challenge. Axiomé supports the paraphrasing of SWRL rules into simple English, the visualization of the structure of both individual rules and rule bases, and the categorization of rules based on an analysis of their syntactic structure. It also supports the automatic generation of rule acquisition templates to facilitate rule elicitation. Axiomé is available as a plugin to the Protégé-OWL ontology development environment.

Saeed Hassanpour, Martin J. O’Connor, Amar K. Das

A Knowledge Infrastructure for the Dutch Immigration Office

The Dutch Immigration Office is replacing its existing paper based case system with a fully electronic system with integrated decision support based on ontologies. The new award winning architecture (Dutch Architecture award 2009) is based on the principle of separation of concerns: data, knowledge and process. The architecture of the application, but especially the architecture of the knowledge models for decision and process support, is explained and shown in the demonstration.

Ronald Heller, Freek van Teeseling, Menno Gülpers

Verifying and Validating Multi-layered Models with OWL FA Toolkit

This paper details the use of the OWL FA Toolkit for verifying and validating multi-layered (meta-)modelling with ontologies described in OWL FA. Through a practical use case, we show how OWL FA and its reasoner, the OWL FA Toolkit, can benefit software modellers across the software development life cycle.

Nophadol Jekjantuk, Jeff Z. Pan, Gerd Gröner

What’s New in WSMX?

The Web Service Execution Environment (WSMX) is the most complete implementation of a Semantic Execution Environment to support the automation of the Web service life-cycle. WSMX is a constantly evolving project. The demo will provide insight and justify the value of the newly introduced features: the Complex Event Processing engine, the Notification Broker engine and the Orchestration engine.

Srdjan Komazec, Federico Michele Facca

OntoFrame S3

We have developed a prototype of a practical knowledge-driven semantic portal, OntoFrame S3, which provides various reasoning-based analysis services over academic research information. To realize this semantic portal, we developed and applied several Semantic Web and linguistic technologies. Through this demonstration, we will show how Semantic Web technologies, empowered by linguistic knowledge, can be utilized for information connection and fusion in the academic research information service sector.

Seungwoo Lee, Mikyoung Lee, Pyung Kim, Hanmin Jung, Won-Kyung Sung

Rapid Prototyping a Semantic Web Application for Cultural Heritage: The Case of MANTIC

MANTIC is a Web application that integrates heterogeneous and legacy data about the archeology of Milan (Italy); the application combines Semantic Web and mashup technologies. Semantic Web models and technologies support model-driven and standards-compliant data integration on the Web; the mashup approach supports a spatially and temporally aware form of information presentation. MANTIC shows that model-driven information integration applications for cultural heritage can be rapidly prototyped with limited deployment effort by combining semantic and mashup technologies. Higher-level modeling aspects, in contrast, need deep analysis and require domain expertise.

Glauco Mantegari, Matteo Palmonari, Giuseppe Vizzari

Hey! Ho! Let’s Go! Explanatory Music Recommendations with dbrec

In this demo paper, we present dbrec (http://dbrec.net), a music recommendation system using Linked Data, where recommendations are computed from DBpedia using an algorithm for Linked Data Semantic Distance (LDSD). We describe how the system can be used to get recommendations for approximately 40,000 artists and bands, and in particular how it provides explanatory recommendations to the end-user. In addition, we discuss the research background of dbrec, including the LDSD algorithm and its related ontology.
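As a rough intuition for a Linked-Data distance measure, the sketch below implements only a simplified direct-link component over invented triples; it is not Passant's full LDSD formula, which also accounts for indirect (shared-neighbour) links and per-property normalization:

```python
# Simplified illustration of a direct-link distance between Linked Data
# resources: the more direct links between two resources (in either
# direction), the smaller their distance. Triples are invented examples.

TRIPLES = [
    ("Ramones", "influenced", "Green_Day"),
    ("Ramones", "genre", "Punk_rock"),
    ("Green_Day", "genre", "Punk_rock"),
    ("Ramones", "origin", "New_York"),
]

def direct_links(a, b):
    """Number of direct links from resource a to resource b, any property."""
    return sum(1 for s, _, o in TRIPLES if s == a and o == b)

def distance(a, b):
    """Distance in (0, 1]: 1 means no direct link at all."""
    return 1.0 / (1.0 + direct_links(a, b) + direct_links(b, a))
```

Because each contributing link is a named triple, a system like dbrec can surface those triples as human-readable explanations for its recommendations.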

Alexandre Passant, Stefan Decker

PossDL — A Possibilistic DL Reasoner for Uncertainty Reasoning and Inconsistency Handling

Uncertainty reasoning and inconsistency handling are two important problems that often occur in the applications of the Semantic Web. Possibilistic description logics provide a flexible framework for representing and reasoning with ontologies where uncertain and/or inconsistent information exists. Based on our previous work, we develop a possibilistic description logic reasoner. Our demo will illustrate functionalities of our reasoner for various reasoning tasks that possibilistic description logics can provide.

Guilin Qi, Qiu Ji, Jeff Z. Pan, Jianfeng Du

PoolParty: SKOS Thesaurus Management Utilizing Linked Data

Building and maintaining thesauri are complex and laborious tasks. PoolParty is a Thesaurus Management Tool (TMT) for the Semantic Web, which aims to support the creation and maintenance of thesauri by utilizing Linked Open Data (LOD), text analysis, and easy-to-use GUIs, so thesauri can be managed and utilized by domain experts without requiring knowledge about the Semantic Web. Some aspects of thesaurus management, like the editing of labels, can be done via a wiki-style interface, allowing for the lowest possible access barriers to contribution. PoolParty can analyse documents in order to glean new concepts for a thesaurus. Additionally, a thesaurus can be enriched by retrieving relevant information from Linked Data sources; thesauri can be imported and updated via LOD URIs from external systems, and can also be published as new Linked Data sources on the Semantic Web.

Thomas Schandl, Andreas Blumauer

DSMW: Distributed Semantic MediaWiki

DSMW is an extension to Semantic MediaWiki (SMW) that allows the creation of a network of SMW servers sharing common semantic wiki pages. DSMW users can create communication channels between servers and use a publish-subscribe approach to manage change propagation. DSMW synchronizes concurrent updates of shared semantic pages to ensure their consistency. It offers new collaboration modes to semantic wiki users and supports dataflow-oriented processes.

Hala Skaf-Molli, Gérôme Canals, Pascal Molli

TrOWL: Tractable OWL 2 Reasoning Infrastructure

The Semantic Web movement has led to the publication of thousands of ontologies online. These ontologies present and mediate information and knowledge on the Semantic Web. Tools exist to reason over these ontologies and to answer queries over them, but there are no large-scale infrastructures for storing, reasoning, and querying ontologies on a scale that would be useful for a large enterprise or research institution. We present the TrOWL infrastructure for transforming, reasoning over, and querying OWL 2 ontologies, which uses novel techniques such as Quality Guaranteed Approximations and Forgetting to achieve this goal.

Edward Thomas, Jeff Z. Pan, Yuan Ren

Making the Semantic Data Web Easily Writeable with RDFauthor

In this demo we present RDFauthor, an approach for authoring information that adheres to the RDF data model. RDFauthor completely hides syntax as well as RDF and ontology data model difficulties from end users, and allows them to edit information on arbitrary RDFa-annotated web pages. RDFauthor is based on extracting RDF triples from RDFa-annotated Web pages and transforming the RDFa-annotated HTML view into an editable form using a set of authoring widgets. As a result, every RDFa-annotated web page can be made writeable, even if the information originates from different sources.

Sebastian Tramp, Norman Heino, Sören Auer, Philipp Frischmuth

BioNav: An Ontology-Based Framework to Discover Semantic Links in the Cloud of Linked Data

We demonstrate BioNav, a system to efficiently discover potential novel associations between drugs and diseases by implementing Literature-Based Discovery techniques. BioNav exploits the wealth of the Cloud of Linked Data and combines the power of ontologies and existing ranking techniques to support discovery requests. We discuss the formalization of a discovery request as a link-analysis and authority-based problem, and show that the top-ranked target objects correspond to the potential novel discoveries identified by existing approaches. We demonstrate how, by exploiting properties of the ranking metrics, BioNav provides an efficient solution to the link discovery problem.

María-Esther Vidal, Louiqa Raschid, Natalia Márquez, Jean Carlo Rivera, Edna Ruckhaus

StarLion: Auto-configurable Layouts for Exploring Ontologies

The visualization of ontologies is a challenging task, especially if they are large. We will demonstrate StarLion, a system providing exploratory visualizations which enhance the user's understanding. StarLion combines many of the existing visualization methods with some novel features for providing better 2D layouts. Specifically, one distinctive feature of StarLion is the provision of star-like graphs of variable radius whose layout is derived by a Force Directed Placement Algorithm (FDPA). StarLion can also handle multiple namespaces, a very useful feature for assisting the understanding of interdependent ontologies. Another distinctive characteristic of StarLion is the provision of a novel method for automatically configuring the FDPA parameters based on layout-quality metrics, and the provision of an interactive configuration method offered via an intuitive toolbar.

Stamatis Zampetakis, Yannis Tzitzikas, Asterios Leonidis, Dimitris Kotzinos

Concept Extraction Applied to the Task of Expert Finding

The Semantic Web uses formal ontologies as a key instrument for adding structure to data, but building domain-specific ontologies is still a difficult, time-consuming, and error-prone process, since most information is currently available only as free text. The development of fast and cheap solutions for ontology learning from text is therefore a key factor for the success and large-scale adoption of the Semantic Web. Ontology development is primarily concerned with the definition of concepts and the relations between them, so one of the fundamental research problems related to ontology learning is the extraction of concepts from text. To investigate this research problem we focus on the expert finding problem, i.e., the extraction of expertise topics and their assignment to individuals. The ontological concepts we extract are a person's skills, knowledge, behaviours, and capabilities.

For increased efficiency, competitiveness and innovation, every company has to facilitate the identification of experts among its workforce. Even though this can be achieved by using the information gathered during the employment process and through self-assessment, a person’s competencies are likely to change over time. Information about people’s expertise is contained in documents available inside an organisation such as technical reports but also in publicly available resources, e.g., research articles, wiki pages, blogs, other user-generated content. The human effort required for competency management can be reduced by automatically identifying the experts and expertise topics from text. Our goal is to explore how existing technologies for concept extraction can be advanced and specialised for extracting expertise topics from text in order to build expertise profiles.
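To make the task concrete, a deliberately naive, cue-phrase-based extractor is sketched below; the cue phrases, pattern, and data are invented for illustration and are not the author's actual concept extraction method:

```python
# Toy sketch of expertise-topic extraction from free text: candidate topics
# are short word sequences that follow simple skill cue phrases. Real
# approaches use linguistic analysis rather than hand-picked cues.
import re
from collections import Counter

CUES = ("experience in", "expert in", "worked on", "skilled in")

def extract_topics(text):
    """Return a Counter of candidate expertise topics found after cue phrases."""
    topics = Counter()
    lowered = text.lower()
    for cue in CUES:
        # capture up to three words immediately following the cue phrase
        for m in re.finditer(cue + r"\s+((?:\w+\s?){1,3})", lowered):
            topics[m.group(1).strip()] += 1
    return topics
```

Aggregating such topic counts per person over a document collection would yield a first, crude expertise profile of the kind the thesis aims to build with more principled techniques.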

Georgeta Bordea

Towards Trust in Web Content Using Semantic Web Technologies

Since the amount of user-generated content has been sharply increasing in recent years, mainly due to Web 2.0 technology and effects of social networking, it is necessary to build mechanisms to assess the reliability of the content. On the web this notion of trust is a key ingredient for an effective manipulation of knowledge on a (world-wide) web scale. The web of trust has thus become an important research area both for web science and semantic web. In the PhD research we have laid out for us, we focus on the notion of trust and methods for representing and computing trust of users in the web content. This paper outlines the vision at the start of the PhD period on the research problem and the semantic web-based approach to solve that problem.

Qi Gao

The Semantic Gap of Formalized Meaning

Recent work in ontology learning and text mining has mainly focused on engineering methods to solve practical problems. In this thesis, we investigate methods that can substantially improve a wide range of existing approaches by minimizing the underlying problem: the semantic gap between formalized meaning and human cognition. We deploy OWL as a meaning representation language and create a unified model which combines existing NLP methods with linguistic knowledge and aggregates disambiguated background knowledge from the Web of Data. The methodology presented here allows us to study and evaluate the capability of such aggregated knowledge to improve the efficiency of methods in NLP and ontology learning.

Sebastian Hellmann

A Contextualized Knowledge Framework for Semantic Web

This thesis focuses on developing an efficient framework for contextualized knowledge representation on the Semantic Web. We point out the drawbacks of existing formalisms for contexts that hinder an efficient implementation, and propose a context formalism that enables the development of a framework with the desired properties. Future milestones for this thesis are to (i) develop a proof theory for the logical framework based on Description Logics (DL), (ii) develop reasoning algorithms, (iii) verify and compare the performance of these algorithms against existing distributed reasoning formalisms, and (iv) implement the system.

Mathew Joseph

Computational and Crowdsourcing Methods for Extracting Ontological Structure from Folksonomy

This paper investigates the unification of folksonomies and ontologies in such a way that the resulting structures can better support exploration and search on the World Wide Web. First, an integrated computational method is employed to extract the ontological structures from folksonomies. It exploits the power of low support association rule mining supplemented by an upper ontology such as WordNet. Promising results have been obtained from experiments using tag datasets from Flickr and Citeulike. Next, a crowdsourcing method is introduced to channel online users’ search efforts to help evolve the extracted ontology.
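The role of low-support association rules can be sketched as follows; the tag data and thresholds are invented, and the rule direction heuristic (high confidence from a rare tag suggesting a broader tag) is only one simple reading of the approach:

```python
# Toy sketch of low-support association rule mining over tag sets.
# A rule "snake -> python" with high confidence but low support hints
# that "snake" may sit below "python"-related concepts in a hierarchy.
from itertools import combinations
from collections import Counter

TAGGED_ITEMS = [
    {"python", "programming"},
    {"python", "programming", "tutorial"},
    {"python", "snake", "animal"},
    {"java", "programming"},
]

def rules(min_support=0.25, min_confidence=0.8):
    """Return (antecedent, consequent, support, confidence) tuples for tag pairs."""
    n = len(TAGGED_ITEMS)
    pair_count, tag_count = Counter(), Counter()
    for tags in TAGGED_ITEMS:
        tag_count.update(tags)
        pair_count.update(combinations(sorted(tags), 2))
    out = []
    for (a, b), c in pair_count.items():
        for ante, cons in ((a, b), (b, a)):
            support = c / n
            confidence = c / tag_count[ante]
            if support >= min_support and confidence >= min_confidence:
                out.append((ante, cons, support, confidence))
    return out
```

An upper ontology such as WordNet would then be consulted to confirm or reject candidate subsumptions suggested by such rules.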

Huairen Lin, Joseph Davis

Debugging the Missing Is-A Structure of Networked Ontologies

In parallel with the proliferation of ontologies and their use in semantically-enabled applications, the issue of finding and dealing with defects in ontologies has become increasingly important. Current work mostly targets detecting and repairing semantic defects in ontologies. In our work, we focus on another kind of severe defects, modeling defects, which require domain knowledge to detect and resolve. In particular, we are interested in detecting and repairing the missing structural relations (is-a hierarchy) in the ontologies. Our goal is to develop a system, which allows a domain expert to detect and repair the structure of ontologies in a semi-automatic way.

Qiang Liu, Patrick Lambrix

Global Semantic Graph as an Alternative Information and Collaboration Infrastructure

We propose the development of a Global Semantic Graph (GSG) as the foundation for future information and collaboration-centric applications and services. It would provide a single abstraction for storing, processing and communicating information based on globally interlinked semantic resources. The GSG adopts approaches and methods from the Semantic Web and thus facilitates a better information sharing abstraction.

Yan Shvartzshnaider

Scalable and Parallel Reasoning in the Semantic Web

The current state of the art regarding scalable reasoning consists of programs that run on a single machine. When the amount of data is too large, or the logic is too complex, the computational resources of a single machine are not enough. We propose a distributed approach that overcomes these limitations and we sketch a research methodology. A distributed approach is challenging because of the skew in data distribution and the difficulty in partitioning Semantic Web data. We present initial results which are promising and suggest that the approach may be successful.
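As a single-machine illustration of the kind of rule such a reasoner must apply at Web scale, the sketch below forward-chains the rdfs:subClassOf transitivity rule to a fixed point; the distributed versions partition exactly this work across machines, which is where the data-skew problem mentioned above arises. The class names are invented:

```python
# Naive forward chaining of subClassOf transitivity: repeatedly join the
# edge set with itself until no new (sub, super) pair is derived.

def subclass_closure(edges):
    """edges: set of (sub, super) pairs; returns the transitive closure."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        new = {(a, d) for (a, b) in closure for (c, d) in closure
               if b == c and (a, d) not in closure}
        if new:
            closure |= new
            changed = True
    return closure
```

The quadratic self-join in each round is what makes a balanced partitioning of Semantic Web data so important: a few very popular classes can concentrate most of the join work on one machine.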

Jacopo Urbani

Exploring the Wisdom of the Tweets: Towards Knowledge Acquisition from Social Awareness Streams

Although one might argue that little wisdom can be conveyed in messages of 140 characters or less, this PhD research sets out to explore if and what kind of knowledge can be acquired from different aggregations of social awareness streams. The expected contribution of this research is a network-theoretic model for defining, comparing and analyzing different kinds of social awareness streams, and an experimental prototype to extract semantic models from them.

Claudia Wagner

Two Phase Description Logic Reasoning for Efficient Information Retrieval

Description Logics (DLs) [1] are a family of logic languages designed to be a convenient means of knowledge representation. They can be embedded into FOL but, contrary to the latter, they are decidable, which gives them great practical applicability. A DL knowledge base consists of two parts: the TBox (terminology box) and the ABox (assertion box). The TBox contains general background knowledge in the form of rules that hold in a specific domain, while the ABox stores knowledge about individuals. For example, let us imagine an ontology about the structure of a university. The TBox might contain statements like “Every department has exactly one chair” and “Departments are responsible for at least 4 courses, and for each course there is a department responsible for it”. In contrast, the ABox might state that “The Department of Computer Science is responsible for the course Information Theory” or that “Andrew is the chair of the Department of Music”.
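The university example above can be written as DL axioms; the formalization below is one possible reading of the English statements, with invented individual names, rather than the paper's own notation:

```latex
% TBox: every department has exactly one chair and is responsible
% for at least 4 courses; every course has a responsible department.
\mathit{Department} \sqsubseteq\; (= 1\; \mathit{hasChair}.\top) \\
\mathit{Department} \sqsubseteq\; \geq 4\; \mathit{responsibleFor}.\mathit{Course} \\
\mathit{Course} \sqsubseteq \exists\, \mathit{responsibleFor}^{-}.\mathit{Department} \\
% ABox: the concrete facts from the example.
\mathit{responsibleFor}(\mathit{CSDept}, \mathit{InformationTheory}) \\
\mathit{hasChair}(\mathit{MusicDept}, \mathit{Andrew})
```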

Zsolt Zombori

Backmatter
