
2014 | Book

The Semantic Web: ESWC 2014 Satellite Events

ESWC 2014 Satellite Events, Anissaras, Crete, Greece, May 25-29, 2014, Revised Selected Papers

Edited by: Valentina Presutti, Eva Blomqvist, Raphael Troncy, Harald Sack, Ioannis Papadakis, Anna Tordai

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the Satellite Events of the 11th International Conference on the Semantic Web, ESWC 2014, held in Anissaras, Crete, Greece, in May 2014. The volume contains 20 poster and 43 demonstration papers, selected from 113 submissions, as well as 12 best workshop papers selected from 60 papers presented at the workshops of ESWC 2014. The two best papers from the AI Mashup Challenge are also included. The papers cover various aspects of the Semantic Web.

Table of Contents

Frontmatter

Best Workshop Papers

Frontmatter
Ontology Design Patterns: Improving Findability and Composition

Ontology Design Patterns (ODPs) are intended to guide non-experts in performing ontology engineering tasks successfully. While these ideas have been the topic of significant research efforts, their uptake outside the academic community is limited. This paper summarises issues preventing broader adoption of Ontology Design Patterns among practitioners, with an emphasis on finding and composing such patterns, and presents early results of work aiming to overcome these issues.

Karl Hammar
Lessons Learned — The Case of CROCUS: Cluster-Based Ontology Data Cleansing

Over the past years, a vast number of datasets based on Semantic Web standards have been published, providing an opportunity for creating novel industrial applications. However, industrial requirements on data quality are high, while the time to market as well as the costs for data preparation have to be kept low. Unfortunately, many Linked Data sources are error-prone, which prevents their direct use in production systems. Hence, (semi-)automatic quality assurance processes are needed, as manual ontology repair by domain experts is expensive and time consuming. In this article, we present CROCUS – a pipeline for cluster-based ontology data cleansing. Our system provides a semi-automatic approach for instance-level error detection in ontologies which is agnostic of the underlying Linked Data knowledge base and works at very low cost. CROCUS has been evaluated on two datasets. The experiments show that we are able to detect errors with high recall. Furthermore, we provide an exhaustive discussion of related work as well as a number of lessons learned.

Didier Cherix, Ricardo Usbeck, Andreas Both, Jens Lehmann
Entity-Based Data Source Contextualization for Searching the Web of Data

To allow search on the Web of data, systems have to combine data from multiple sources. However, to effectively fulfill user information needs, systems must be able to “look beyond” exactly matching data sources and offer information from additional/contextual sources (data source contextualization). For this, users should be involved in the source selection process – choosing which sources contribute to their search results. Previous work, however, solely aims at source contextualization for “Web tables”, while relying on schema information and simple relational entities. Addressing these shortcomings, we exploit work from the field of data mining and show how to enable Web data source contextualization. Based on a real-world use case, we built a prototype contextualization engine, which we integrated in a system for searching the Web of data. We empirically validated the effectiveness of our approach – achieving performance gains of up to 29% over the state-of-the-art.

Andreas Wagner, Peter Haase, Achim Rettinger, Holger Lamm
Setting the Course of Emergency Vehicle Routing Using Geolinked Open Data for the Municipality of Catania

Linked Open Data (LOD) has gained significant momentum over the past years as a best practice for promoting the sharing and publication of structured data on the semantic Web. LOD is currently also reaching significant adoption in Public Administrations (PAs), where it often needs to be connected to existing platforms, such as GIS-based data management systems. Building on previous experience with the pioneering data.cnr.it, through Semantic Scout, as well as the Agency for Digital Italy recommendations for LOD in Italian PA, we are working on the extraction, publication, and exploitation of data from the Geographic Information System of the Municipality of Catania, referred to as SIT (“Sistema Informativo Territoriale”). The goal is to set the city on course to become a modern Smart City by providing prototype integrated solutions supporting transport, public health, urban decor, and social services, to improve urban life. In particular, a mobile application focused on real-time road traffic and public transport management is currently under development to support sustainable mobility and, especially, to aid the response to urban emergencies, from small accidents to more serious disasters. This paper describes the results and lessons learnt from the first work campaign, aimed at analyzing, reengineering, linking, and formalizing the Shape-based geo-data from the SIT.

Sergio Consoli, Aldo Gangemi, Andrea Giovanni Nuzzolese, Silvio Peroni, Diego Reforgiato Recupero, Daria Spampinato
Adapting Sentiment Lexicons Using Contextual Semantics for Sentiment Analysis of Twitter

Sentiment lexicons for sentiment analysis offer a simple, yet effective way to obtain the prior sentiment information of opinionated words in texts. However, words’ sentiment orientations and strengths often change throughout various contexts in which the words appear. In this paper, we propose a lexicon adaptation approach that uses the contextual semantics of words to capture their contexts in tweet messages and update their prior sentiment orientations and/or strengths accordingly. We evaluate our approach on one state-of-the-art sentiment lexicon using three different Twitter datasets. Results show that the sentiment lexicons adapted by our approach outperform the original lexicon in accuracy and F-measure in two datasets, but give similar accuracy and slightly lower F-measure in one dataset.
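The adaptation idea described above can be sketched as a toy update rule: shift a word's prior sentiment toward the mean prior of the words it co-occurs with in tweets. This is an illustrative stand-in, not the paper's actual update rule; the function name, the mixing weight `alpha`, and the lexicon format are all assumptions.

```python
def adapt_prior(prior, cooccurring, lexicon, alpha=0.5):
    """Blend a word's prior sentiment with the mean prior of its context words.

    prior:       the word's original lexicon score in [-1, 1]
    cooccurring: words observed alongside it in tweets
    lexicon:     word -> prior sentiment score
    alpha:       how strongly context overrides the prior (illustrative choice)
    """
    context = [lexicon[w] for w in cooccurring if w in lexicon]
    if not context:
        return prior  # no contextual evidence: keep the prior unchanged
    return (1 - alpha) * prior + alpha * sum(context) / len(context)
```

For example, a word with a negative prior that consistently appears next to positive words would have its score pulled upward.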

Hassan Saif, Yulan He, Miriam Fernandez, Harith Alani
RESTful or RESTless – Current State of Today’s Top Web APIs

Recent developments in the world of services on the Web show that both the number of available Web APIs and the applications built on top of them are constantly increasing. This trend is commonly attributed to the wide adoption of the REST architectural principles [1]. Still, the development of Web APIs is rather autonomous and it is up to the providers to decide how to implement, expose and describe the Web APIs. The individual implementations are then commonly documented in textual form as part of a webpage, showing a wide variety in terms of content, structure and level of detail. As a result, client application developers are forced to manually process and interpret the documentation. Before we can achieve a higher level of automation and can make any significant improvement to current practices and technologies, we need to reach a deeper understanding of their similarities and differences. Therefore, in this paper we present a thorough analysis of the most popular Web APIs through the examination of their documentation. We provide conclusions about common description forms, output types, usage of API parameters, invocation support, level of reusability, API granularity and authentication details. The collected data builds a solid foundation for identifying deficiencies and can be used as a basis for devising common standards and guidelines for Web API development.

Frederik Bülthoff, Maria Maleshkova
Ornithology Based on Linking Bird Observations with Weather Data

This paper presents the first results of a use case of Linked Data for eScience, where 0.5 million rows of bird migration observations over a 30-year time span are linked with 0.1 million rows of related weather observations and a bird species ontology. Using the enriched linked data, biology researchers at the Finnish Museum of Natural History will be able to investigate temporal changes in bird biodiversity and how weather conditions affect bird migration. To support data exploration, the data is published in a SPARQL endpoint service using the RDF Data Cube model, on which semantic search and visualization tools are built.

Mikko Koho, Eero Hyvönen, Aleksi Lehikoinen
Protégé4US: Harvesting Ontology Authoring Data with Protégé

The inherent complexity of ontologies poses a number of cognitive and perceptual challenges for ontology authors. We investigate how users deal with the complexity of the authoring process by analysing how one of the most widespread ontology development tools (i.e. Protégé) is used. To do so, we build Protégé4US (Protégé for User Studies) by extending Protégé to generate log files that contain ontology authoring events. These log files not only contain data about the interaction with the environment, but also about OWL entities and axioms. We illustrate the usefulness of Protégé4US with a case study with 15 participants. The data generated from the study allows us to learn more about how Protégé is used (e.g. most frequently used tabs), how well users perform (e.g. task completion times), and to identify emergent authoring strategies, including moving down the class hierarchy or saving the current workspace before running the reasoner. We argue that Protégé4US is a valuable instrument to identify ontology authoring patterns.

Markel Vigo, Caroline Jay, Robert Stevens
Survey of Semantic Media Annotation Tools for the Web: Towards New Media Applications with Linked Media

Semantic annotation of media resources has been a research focus for many years, with the closing of the “semantic gap” seen as key to significant improvements in media retrieval and browsing and to enabling new media applications and services. However, current tools and services exhibit varied approaches which do not easily integrate, acting as a barrier to wider uptake of semantic annotation of online multimedia. In this paper, we outline the Linked Media principles, which can help form a consensus on media annotation approaches, survey current media annotation tools against these principles, and present two emerging toolsets which can support Linked Media conformant annotation. We close with a call for future semantic media annotation tools and services to follow the same principles and thus ensure the growth of a Linked Media layer of semantic descriptions of online media, which can be an enabler of richer future online media services.

Lyndon Nixon, Raphaël Troncy
First Experiments in Cultural Alignment Repair (Extended Version)

Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. This approach, which we call cultural repair, has the advantage of not assuming that everything should be set correctly before trying to communicate and of being able to overcome failures. We test here the adaptation of this approach to alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we perform a series of experiments in which agents react to mistakes in alignments. The agents only know about their ontologies and alignments with others and they act in a fully decentralised way. We show that cultural repair is able to converge towards successful communication through improving the objective correctness of alignments. The obtained results are on par with a baseline of a priori alignment repair algorithms.

Jérôme Euzenat
Amending RDF Entities with New Facts

Linked and other Open Data pose new challenges and opportunities for the data mining community. Unfortunately, the large volume and great heterogeneity of available open data require significant integration steps before it can be used in applications. A promising technique to explore such data is the use of association rule mining. We introduce two algorithms for enriching RDF data. The first application is a suggestion engine that is based on mining RDF predicates and supports manual statement creation by suggesting new predicates for a given entity. The second application is knowledge creation: based on mining both predicates and objects, we are able to generate entirely new statements for a given data set without any external resources.
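The predicate-suggestion idea can be sketched as simple co-occurrence counting over the predicate sets of known entities: predicates that frequently appear alongside the target entity's existing predicates become candidates. This is a toy stand-in for the paper's association rule mining; the function and scoring scheme are illustrative assumptions.

```python
from collections import Counter

def suggest_predicates(entities, target, top_k=3):
    """Suggest new predicates for an entity described by the predicate set
    `target`, scoring each candidate by how often it co-occurs with the
    target's predicates across all known entities (toy scoring scheme).
    """
    scores = Counter()
    for preds in entities:
        overlap = len(preds & target)
        if overlap == 0:
            continue  # entity shares nothing with the target: no evidence
        for candidate in preds - target:
            scores[candidate] += overlap
    return [p for p, _ in scores.most_common(top_k)]
```

For a person entity that already has `name` and `birthDate`, this would typically surface predicates like `deathDate` that other person entities carry.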

Ziawasch Abedjan, Felix Naumann
Predicting the Impact of Central Bank Communications on Financial Market Investors’ Interest Rate Expectations

In this paper, we design an automated system that predicts the impact of central bank communications on investors’ interest rate expectations. Our corpus is the Bank of England’s ‘Monetary Policy Committee Minutes’. Prior studies suggest that effective communications can mitigate a financial crisis, while ineffective communications may exacerbate one. The system described here works in four phases. First, the system employs background knowledge from Wikipedia to identify salient aspects of central bank policy associated with economic growth, prices, interest rates and bank lending. These economic aspects are detected using the TextRank link analysis algorithm. A multinomial Naive Bayes model then classifies sentences from central bank documents to these aspects. The second phase measures sentiment using a count of terms from the General Inquirer dictionary. The third phase employs Latent Dirichlet Allocation (LDA) to infer topic clusters that may act as intensifiers/diminishers of sentiment associated with the economic aspects. Finally, an ensemble tree combines the phases to predict the impact of the communications on financial market interest rates.

Andy Moniz, Franciska de Jong

AI MashUp Challenge

Frontmatter
Enriching Live Event Participation with Social Network Content Analysis and Visualization

During live events like conferences or exhibitions, people nowadays share their opinions, multimedia contents, suggestions, related materials, and reports through social networking platforms, such as Twitter. However, live events also feature inherent complexity, in the sense that they comprise multiple parallel sessions or happenings (e.g., a conference has several sessions in different rooms). The focus of this research is to improve the experience of (local or remote) attendees by exploiting the contents shared on the social networks. The framework gathers in real time the tweets related to the event, analyses them, and links them to the specific sub-events they refer to. Attendees have a holistic view of what is happening and where, so as to get help when deciding which sub-event to attend. To achieve its goal, the application consumes data from different data sources: Twitter, the official event schedule, plus domain-specific content (for instance, in the case of a computer science conference, DBLP and Google Scholar). Such data is analyzed through a combination of semantic web, crowdsourcing (e.g., by soliciting further inputs from attendees), and machine learning techniques (including NLP and NER) for building a rich content base for the event. The paradigm is shown at work on a computer science conference (WWW 2013).

Marco Brambilla, Daniele Dell’Aglio, Emanuele Della Valle, Andrea Mauri, Riccardo Volonterio
Linked Widgets Platform: Lowering the Barrier for Open Data Exploration

Despite a drastic increase in available Open and Linked Data, unmediated utilization of these data by end users is still relatively uncommon. Applications built on top of Open Data are typically domain-specific and discovering appropriate solutions that fit users’ rapidly shifting needs is a cumbersome process. In line with the Linked Data paradigm, end user tools should be based on openness, foster reusability, and be flexible enough to handle arbitrary data sources. We develop an open platform based on Semantic Web technologies that encourages developers and users to access, process, integrate, and visualize Open Data sources. To help users overcome technological barriers of adoption and get in touch with Open Data, we introduce the concept of Linked Widgets. By connecting Linked Widgets from different developers, users without programming skills can compose and share ad-hoc applications that combine Open Data sources in a creative manner.

Tuan-Dat Trinh, Peter Wetz, Ba-Lam Do, Amin Anjomshoaa, Elmar Kiesling, A Min Tjoa

Poster Track

Frontmatter
Annotating Ontologies with Descriptions of Vagueness

Vagueness is a common linguistic phenomenon manifested by predicates that lack clear applicability conditions and boundaries such as

High

,

Expert

or

Bad

. The usage of vague terminology in ontology entities can hamper the latter’s quality, primarily in terms of shareability and meaning explicitness. In this paper we present the Vagueness Ontology, a metaontology that enables the explicit identification and description of vague entities and their vagueness-related characteristics in ontologies, so as to make the latter’s meaning more explicit.

Panos Alexopoulos, Silvio Peroni, Boris Villazon-Terrazas, Jeff Z. Pan
What Are the Important Properties of an Entity?
Comparing Users and Knowledge Graph Point of View

Entities play a key role in knowledge bases in general and in the Web of Data in particular. Entities are generally described with many properties, as is the case for DBpedia. It is, however, difficult to assess which ones are more “important” than others for particular tasks, such as visualizing the key facts of an entity or filtering out the ones which will yield better instance matching. In this paper, we perform a reverse engineering of the Google Knowledge Graph panel to find out what the most “important” properties for an entity are according to Google. We compare these results with a survey we conducted on 152 users. We finally show how we can represent and make explicit this knowledge using the Fresnel vocabulary.

Ahmad Assaf, Ghislain A. Atemezing, Raphaël Troncy, Elena Cabrio
RuQAR: Reasoning Framework for OWL 2 RL Ontologies

This paper presents the first release of the Rule-based Query Answering and Reasoning framework (RuQAR). The tool provides ABox reasoning and query answering with OWL 2 RL ontologies executed by forward-chaining rule reasoners. We describe the current implementation and an experimental evaluation of RuQAR, performing reasoning on a number of benchmark ontologies. Additionally, we compare the obtained results with inferences provided by HermiT and Pellet. The evaluation shows that we can perform ABox reasoning with considerably better performance than DL-based reasoners.

Jaroslaw Bak, Maciej Nowak, Czeslaw Jedrzejek
An Investigation of HTTP Header Information for Detecting Changes of Linked Open Data Sources

Data on the Linked Open Data (LOD) cloud changes frequently. Applications that operate on local caches of Linked Data need to be aware of these changes so that they can update their cache and ensure they operate on the most recent version of the data. Given the HTTP basis recommended in the Linked Data guidelines, the native way of detecting changes would be to use HTTP header information, such as the Last-Modified field. However, it is uncertain to which degree this field is currently supported on the LOD cloud and how reliable the provided information is. In this paper, we analyse a large-scale dataset obtained from the LOD cloud by weekly crawls over almost two years. On these weekly snapshots, we observed that the HTTP header field Last-Modified is actually available for only 15% of the Linked Data resources, and that the date provided for the last modification aligns with the observed changes of the data itself in only 8% of cases.
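The Last-Modified check at the heart of this approach can be sketched in a few lines: parse the header date and compare it against the timestamp of the cached copy. This is a minimal illustration, not the paper's crawler; in practice the header value would come from an HTTP HEAD request against the Linked Data resource, and the function name is an assumption.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def source_changed_since(last_modified: str, cached_at_iso: str) -> bool:
    """Return True if a Last-Modified header postdates our cached copy.

    last_modified: an HTTP-date string, e.g. "Wed, 21 Oct 2015 07:28:00 GMT"
    cached_at_iso: when we last fetched the resource, as a naive UTC ISO string
    """
    remote = parsedate_to_datetime(last_modified)  # timezone-aware datetime
    cached = datetime.fromisoformat(cached_at_iso).replace(tzinfo=timezone.utc)
    return remote > cached
```

As the paper notes, this check is only as trustworthy as the header itself: a source that never updates Last-Modified will look unchanged forever.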

Renata Dividino, André Kramer, Thomas Gottron
A Semantic Approach to Support Cross Border e-Justice

The possibility to file and exchange legal procedures between European Member States is essential to increase cross-border relations in a pan-European e-Justice area. This paper describes the e-Delivery platform developed within the e-CODEX project, as well as the semantic solution conceived to transmit business documents within a scenario characterized by different languages and different legal systems.

Enrico Francesconi, Ginevra Peruginelli, Ernst Steigenga, Daniela Tiscornia
Rapid Deployment of a RESTful Service for Data Collected by Oceanographic Research Cruises

The Rolling Deck to Repository (R2R) program has the mission to capture, catalog, and describe the underway environmental sensor data from US oceanographic research vessels and submit the data to public long-term archives. Information about vessels, sensors, cruises, datasets, people, organizations, funding awards, logs, reports, etc. is published online as Linked Open Data, accessible through a SPARQL endpoint. In response to user feedback, we are developing a RESTful service based on the Elda open-source Java package to facilitate data access. Our experience shows that constructing a simple portal with limited schema elements in this way can significantly reduce development time and maintenance complexity compared to PHP or Servlet based approaches.

Linyun Fu, Robert A. Arko
TMR: A Semantic Recommender System Using Topic Maps on the Items’ Descriptions

Recommendation systems have become increasingly popular. Their utility in filtering and suggesting items archived at web sites to users has been proven. Even though recommendation systems have been developed for the past two decades, existing recommenders are still inadequate to achieve their objectives and must be enhanced to generate appealing personalized recommendations effectively. In this paper we present TMR, a context-independent tool based on topic maps that works with items’ descriptions and reviews to provide suitable recommendations to users. TMR takes advantage of lexical and semantic resources to infer users’ preferences, and thus the recommender is not restricted by the syntactic constraints imposed on some existing recommenders. We have verified the correctness of TMR using a popular benchmark dataset.

Angel Luis Garrido, Sergio Ilarri
The Normalized Freebase Distance

In this paper, we propose the Normalized Freebase Distance (NFD), a new measure for determining semantic concept relatedness that is based on similar principles as the Normalized Web Distance (NWD). We illustrate that the NFD is more effective when comparing ambiguous concepts.
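For reference, the NWD on which the NFD builds can be computed from occurrence counts as below. This sketches the general NWD formula (from Cilibrasi and Vitányi's work on web-based distances); the exact Freebase-specific instantiation of the NFD is defined in the paper and may differ.

```python
import math

def normalized_web_distance(fx: int, fy: int, fxy: int, n: int) -> float:
    """Normalized Web Distance between concepts x and y.

    fx, fy: number of items mentioning x (respectively y)
    fxy:    number of items mentioning both x and y
    n:      total number of indexed items
    Identical co-occurrence statistics give distance 0; rarer
    co-occurrence pushes the distance up.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))
```

The NFD's appeal for ambiguous concepts comes from replacing raw web page counts with counts over curated Freebase entities, so the same formula operates on less noisy statistics.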

Fréderic Godin, Tom De Nies, Christian Beecks, Laurens De Vocht, Wesley De Neve, Erik Mannens, Thomas Seidl, Rik Van de Walle
Predicting SPARQL Query Performance

We address the problem of predicting SPARQL query performance. We use machine learning techniques to learn SPARQL query performance from previously executed queries. We show how to model SPARQL queries as feature vectors, and use k-nearest neighbors regression and a Support Vector Machine with the nu-SVR kernel to accurately predict SPARQL query execution time ($$R^2$$ value of 0.98526).

Rakebul Hasan, Fabien Gandon
Linked Data Finland: A 7-star Model and Platform for Publishing and Re-using Linked Datasets

The idea of Linked Data is to aggregate, harmonize, integrate, enrich, and publish data for re-use on the Web in a cost-efficient way using Semantic Web technologies. We address two major hindrances for re-using Linked Data: it is often difficult for a re-user to (1) understand the characteristics of the dataset and (2) evaluate the quality of the data for the intended purpose. This paper introduces the “Linked Data Finland” platform LDF.fi, which addresses these issues. We extend the well-known 5-star model of Tim Berners-Lee with a sixth star for providing the dataset with a schema that explains the dataset, and a seventh star for validating the data against the schema. LDF.fi also automates data publishing and provides data curation tools. The first prototype of the platform is available on the web as a service, hosting tens of datasets and supporting several applications.

Eero Hyvönen, Jouni Tuominen, Miika Alonen, Eetu Mäkelä
Data Cleansing Consolidation with PatchR

The Linking Open Data (LOD) initiative is turning large resources of publicly available structured data from various domains into interlinked RDF(S) facts to constitute the so-called “Web of Data”.

Magnus Knuth, Harald Sack
SPARQL-MM - Extending SPARQL to Media Fragments

Interconnecting machine-readable data with multimedia assets and fragments has recently become common practice, but specific retrieval techniques for such Semantic Multimedia data are still lacking. On our poster we present SPARQL-MM, a function set that extends SPARQL with Media Fragment facilities by introducing spatio-temporal filter and aggregation functions.

Thomas Kurz, Sebastian Schaffert, Kai Schlegel, Florian Stegmaier, Harald Kosch
A Companion Screen Application for TV Broadcasts Annotated with Linked Open Data

Increasingly, European citizens consume television content together with devices connected to the Internet where they can look up related information. In parallel, growing amounts of Linked Open Data are being published on the Web, including rich metadata about its cultural heritage. Linked Data and semantic technologies could enable broadcasters to achieve added value for their content at low cost through the re-use of existing and extracted metadata. We present on-going work in the LinkedTV project, whose goal is to achieve seamless interlinking between TV and Web content on the basis of semantic annotations: two scenarios validated by user trials - Linked News and the Hyperlinked Documentary - and a companion screen application which provides related information for those programs during viewing.

Lyndon Nixon, Lotte Belice Baltussen, Lilia Perez Romero, Lynda Hardman
A Semantic Web Based Core Engine to Efficiently Perform Sentiment Analysis

In this paper we present a domain-independent framework that creates a sentiment analysis model by combining Semantic Web technologies with natural language processing approaches (this work is supported by the project PRISMA SMART CITIES, funded by the Italian Ministry of Research and Education under the program PON). Our system, called Sentilo, provides a core sentiment analysis engine which fully exploits semantics. It identifies the holder of an opinion, the topics and sub-topics the opinion refers to, and assesses the opinion trigger. Sentilo uses an OWL opinion ontology to represent all this information in an RDF graph, where holders and topics are resolved on Linked Data. Anyone can plug in their own opinion scoring algorithm to compute scores of opinion-expressing words and come up with a combined score for each identified entity and the overall sentence.

Diego Reforgiato Recupero, Sergio Consoli, Aldo Gangemi, Andrea Giovanni Nuzzolese, Daria Spampinato
Balloon Synopsis: A Modern Node-Centric RDF Viewer and Browser for the Web

Nowadays, the RDF data model is a crucial part of the Semantic Web. Web developers especially favour RDF serialization formats like RDFa and JSON-LD. However, visualizing large portions of RDF data in an appealing way is still a cumbersome task. RDF visualizers in general do not target the Web as a usage scenario, or simply display the complex RDF graph directly rather than applying a human-friendly facade. Balloon Synopsis tries to overcome these issues by providing an easy-to-use RDF visualizer based on HTML and JavaScript. For ease of integration, it is implemented as a jQuery plugin offering a node-centric RDF viewer and browser with automatic Linked Data enhancement in a modern tile design.

Kai Schlegel, Thomas Weißgerber, Florian Stegmaier, Christin Seifert, Michael Granitzer, Harald Kosch
Ranking Entities in a Large Semantic Network

We present two knowledge-rich methods for ranking entities in a semantic network. Our approach relies on the DBpedia knowledge base for acquiring fine-grained information about entities and their semantic relations. Experiments on a benchmarking dataset show the viability of our approach.

Michael Schuhmacher, Simone Paolo Ponzetto
Improving the Online Visibility of Touristic Service Providers by Using Semantic Annotations

The vast majority of people use the Internet to search for various products and services, including touristic ones. Now more than ever, it is critical for touristic businesses to have a strong online presence. To achieve this goal, it is essential that multiple communication channels and technologies are properly used. In particular, having semantic annotations on the website that search engines can understand is extremely important. In this paper we present our ongoing effort to use Linked Data technologies to improve the online visibility of touristic service providers from Innsbruck and its surroundings. We show which technologies are relevant, how they can be applied in our real-world pilot, and we measure the impact of using such technologies.

Ioan Toma, Corneliu Stanciu, Anna Fensel, Ioannis Stavrakantonakis, Dieter Fensel
LiFR: A Lightweight Fuzzy DL Reasoner

In this paper we present LiFR, a lightweight DL reasoner capable of performing in resource-constrained devices, that implements a fuzzy extension of Description Logic Programs. Preliminary evaluation against two existing fuzzy DL reasoners and within a real-world use case has shown promising results.

Dorothea Tsatsou, Stamatia Dasiopoulou, Ioannis Kompatsiaris, Vasileios Mezaris
LUMO: The LinkedTV User Model Ontology

This paper introduces the LinkedTV User Model Ontology (LUMO), developed within the LinkedTV EU project. LUMO aims to semantically represent user-pertinent information in the networked media domain and to enable personalization and contextualization of concepts and content via semantic reasoning. The design principles of LUMO and its connections to relevant ontologies and known vocabularies are described.

Dorothea Tsatsou, Vasileios Mezaris

Demo Track

Frontmatter
Making Use of Linked Data for Generating Enhanced Snippets

We enhance an existing search engine’s snippet (i.e. an excerpt from a web page determined at query time to efficiently express how the web page may be relevant to the query) with linked data (LD) in order to highlight non-trivial relationships between the information need of the user and LD resources related to the result page. To do this, we introduce a multi-step unsupervised co-clustering algorithm that uses the textual data associated with the resources to discover additional relationships. Next, we use a 3-way tensor to mix these new relationships with the ones available from the LD graph. Then, we apply a PARAFAC tensor decomposition [5] in order to (i) select the most promising nodes for a 1-hop extension, and (ii) build the enhanced snippet. A video demonstration is available online (http://liris.cnrs.fr/drim/projects/ensen/).

Mazen Alsarem, Pierre-Édouard Portier, Sylvie Calabretto, Harald Kosch
Durchblick - A Conference Assistance System for Augmented Reality Devices

We present Durchblick, a conference assistance system for Augmented Reality devices. We demonstrate a prototype which can deliver context-sensitive event information and recommendations via Google Glass. This prototype incorporates semantic data from user-specific and public sources to build user profiles, maintains rich context information and employs event processing as well as recommender systems to proactively select and present relevant information.

Anas Alzoghbi, Peter M. Fischer, Anna Gossen, Peter Haase, Thomas Hornung, Beibei Hu, Georg Lausen, Christoph Pinkel, Michael Schmidt
Publication of RDF Streams with Ztreamy

There is currently an interest in the Semantic Web community in tools and techniques for processing RDF streams. Implementing an effective RDF stream processing system requires addressing several aspects, including stream generation, querying, and reasoning. In this work we focus on one of them: the distribution of RDF streams through the Web. To address this issue, we have developed Ztreamy, a scalable middleware for publishing and consuming RDF streams over HTTP. The goal of this demo is to show the functionality of Ztreamy in two different scenarios with actual, heterogeneous streaming data.

Jesús Arias Fisteus, Norberto Fernández García, Luis Sánchez Fernández, Damaris Fuentes-Lorenzo
rdf:SynopsViz – A Framework for Hierarchical Linked Data Visual Exploration and Analysis

The purpose of data visualization is to offer intuitive ways to perceive and manipulate information, especially for non-expert users. The Web of Data has made a huge number of datasets available. However, the volume and heterogeneity of the available information make it difficult for humans to manually explore and analyse large datasets. In this paper, we present rdf:SynopsViz, a tool for hierarchical charting and visual exploration of Linked Open Data (LOD). Hierarchical LOD exploration is based on the creation of multiple levels of hierarchically related groups of resources based on the values of one or more properties. The adopted hierarchical model provides effective information abstraction and summarization. It also allows efficient, on-the-fly statistical computations, using aggregations over the hierarchy levels.

Nikos Bikakis, Melina Skourla, George Papastefanatos
Boosting QAKiS with Multimedia Answer Visualization

We present an extension of QAKiS, a system for Question Answering over DBpedia language-specific chapters, that complements textual answers with multimedia content from the linked data in order to provide a richer and more complete answer to the user. For the demo, the English, French and German DBpedia chapters are the RDF datasets queried through a natural language interface. Besides the textual answer, the QAKiS output embeds (i) pictures from Wikipedia infoboxes, (ii) OpenStreetMap, to visualize maps for questions about a place, and (iii) YouTube, to visualize pertinent videos (e.g. movie trailers).

Elena Cabrio, Vivek Sachidananda, Raphaël Troncy
Painless URI Dereferencing Using the DataTank

If we want broad adoption of Linked Data, the barrier to conforming to the Linked Data principles needs to be as low as possible. One of the Linked Data principles is that URIs should be dereferenceable. This demonstrator shows how to set up The DataTank and configure a Linked Data repository in it, such as a Turtle file or a SPARQL endpoint. Different content types are accepted, and the response is generated in the appropriate format.

Pieter Colpaert, Ruben Verborgh, Erik Mannens, Rik Van de Walle
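The content negotiation step in dereferencing can be sketched as follows. This is a simplified, illustrative matcher (it ignores q-value ordering), not The DataTank's actual implementation; the format names are invented for the example.

```python
# Map RDF media types from an Accept header to a serialization name.
# Simplified sketch: picks the first recognized media type in the header,
# ignoring q-value preference ordering for brevity.
FORMATS = {
    "text/turtle": "turtle",
    "application/rdf+xml": "rdfxml",
    "application/ld+json": "jsonld",
    "text/html": "html",
}

def negotiate(accept_header, default="html"):
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip().lower()
        if media_type in FORMATS:
            return FORMATS[media_type]
    return default

fmt = negotiate("application/ld+json, text/html;q=0.8")
```

A real implementation would also honor q-values and wildcards such as `*/*`, as defined by HTTP content negotiation.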
CORNER: A Completeness Reasoner for SPARQL Queries Over RDF Data Sources

With the increased availability of data on the Semantic Web, the question whether data sources offer data of appropriate quality for a given purpose becomes an issue. With CORNER, we specifically address the data quality aspect of completeness. CORNER supports SPARQL BGP queries and can take RDFS ontologies into account in its analysis. If a query can only be answered completely by a combination of sources, CORNER rewrites the original query into one with SPARQL SERVICE calls, which assigns each query part to a suitable source, and executes it over those sources. CORNER builds upon previous work by Darari et al. [1] and is implemented using standard Semantic Web frameworks.

Fariz Darari, Radityo Eko Prasojo, Werner Nutt
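The source-assignment and SERVICE rewriting idea can be illustrated with a minimal sketch. This is not CORNER's algorithm — completeness here is crudely modelled as per-predicate statements, and the source URLs and predicates are invented:

```python
# Illustrative sketch: assign each triple pattern of a BGP to a source
# that declares completeness for its predicate, then emit a federated
# query wrapping each group of patterns in a SPARQL SERVICE clause.

def assign_sources(patterns, completeness):
    """patterns: list of (s, p, o) strings; completeness: source URL ->
    set of predicates the source is complete for. Returns a mapping
    pattern -> source, or None if some pattern has no complete source."""
    assignment = {}
    for s, p, o in patterns:
        source = next((src for src, preds in completeness.items() if p in preds), None)
        if source is None:
            return None
        assignment[(s, p, o)] = source
    return assignment

def rewrite_with_service(patterns, assignment):
    """Group patterns by assigned source and build the federated query."""
    groups = {}
    for pat in patterns:
        groups.setdefault(assignment[pat], []).append(pat)
    blocks = []
    for source, pats in groups.items():
        triples = " . ".join("%s %s %s" % pat for pat in pats)
        blocks.append("SERVICE <%s> { %s }" % (source, triples))
    return "SELECT * WHERE { %s }" % " ".join(blocks)

patterns = [("?m", "dbo:director", "?d"), ("?m", "rdfs:label", "?l")]
completeness = {
    "http://example.org/movies": {"dbo:director"},
    "http://example.org/labels": {"rdfs:label"},
}
assignment = assign_sources(patterns, completeness)
query = rewrite_with_service(patterns, assignment)
```

The real system additionally reasons over RDFS ontologies and richer completeness statements when deciding whether a combination of sources answers the query completely.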
AnnoMarket – Multilingual Text Analytics at Scale on the Cloud

AnnoMarket is an open platform for cloud-based text analytics services and the acquisition of language resources. Providers of text analytics services and language resources can deploy and monetize their components via the platform, while users can utilize the available resources in multiple languages and in various domains in an on-demand, pay-as-you-go manner. The AnnoMarket platform is deployed on the Amazon Web Services cloud and provides free text analytics and language acquisition services to the general public.

Marin Dimitrov, Hamish Cunningham, Ian Roberts, Petar Kostov, Alex Simov, Philippe Rigaux, Helen Lippell
Modelling OWL Ontologies with Graffoo

In this paper we introduce Graffoo, a graphical notation for developing OWL ontologies by means of yEd, a free diagram editor.

Riccardo Falco, Aldo Gangemi, Silvio Peroni, David Shotton, Fabio Vitali
Graphium Chrysalis: Exploiting Graph Database Engines to Analyze RDF Graphs

We present Graphium Chrysalis, a tool to visualize the main graph invariants that characterize RDF graphs, i.e., graph properties that are independent of the graph representation, such as vertex and edge counts, in- and out-degree distributions, and the in-coming and out-going $$h$$-index. Graph invariants characterize a graph and impact the cost of core graph-based tasks, e.g., graph traversal and sub-graph pattern matching, affecting the time and space complexity of the main RDF reasoning and query processing tasks. During the demonstration of Graphium Chrysalis, attendees will be able to observe and analyze the invariants that describe the graphs of existing RDF benchmarks. Additionally, we will show the expressive power of state-of-the-art graph database engine APIs, e.g., Neo4j or Sparksee (previously known as DEX), when the main graph invariants are computed against RDF graphs.

Alejandro Flores, Maria-Esther Vidal, Guillermo Palma
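Two of the invariants mentioned above — degree distributions and the graph h-index — can be computed directly from an edge list. A minimal sketch over an invented toy graph (this is not Graphium's API):

```python
from collections import Counter

def degree_stats(edges):
    """Count out- and in-degrees of a directed graph given as (s, o) pairs."""
    out_deg, in_deg = Counter(), Counter()
    for s, o in edges:
        out_deg[s] += 1
        in_deg[o] += 1
    return out_deg, in_deg

def h_index(degrees):
    """Largest h such that at least h vertices have degree >= h."""
    ds = sorted(degrees.values(), reverse=True)
    h = 0
    for i, d in enumerate(ds, start=1):
        if d >= i:
            h = i
    return h

# Toy directed graph (e.g., subject -> object edges of an RDF graph).
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("c", "d"), ("d", "a")]
out_deg, in_deg = degree_stats(edges)
```

The in-coming h-index is then `h_index(in_deg)` and the out-going one `h_index(out_deg)`; on real RDF graphs these values hint at the cost of traversal and pattern-matching tasks.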
SemLAV: Querying Deep Web and Linked Open Data with SPARQL

SemLAV allows executing SPARQL queries against Deep Web and Linked Open Data sources. It implements the mediator-wrapper architecture based on view definitions over remote data sources. SPARQL queries are expressed using a mediator-schema vocabulary, and SemLAV selects the relevant data sources and ranks them. The ranking strategy is designed to deliver results quickly based only on view definitions, i.e., no statistics or probing of sources are required. In this demonstration, we validate the effectiveness of the SemLAV approach with real data sources from social networks and Linked Open Data. We show in different setups that materializing only a subset of the ranked relevant views is enough to deliver a significant part of the expected results.

Pauline Folz, Gabriela Montoya, Hala Skaf-Molli, Pascal Molli, Maria-Esther Vidal
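The ranking-by-view-definitions idea can be sketched crudely: score each view by how many of the query's subgoals its definition covers, using no source statistics at all. The view names and predicates below are invented, and this is only in the spirit of SemLAV's strategy, not its actual algorithm:

```python
# Illustrative view ranking: views covering more query predicates first.
def rank_views(query_predicates, views):
    """views: name -> set of predicates appearing in the view definition.
    Returns view names sorted by query coverage, best first."""
    def coverage(item):
        name, preds = item
        return len(preds & set(query_predicates))
    ranked = sorted(views.items(), key=coverage, reverse=True)
    return [name for name, _ in ranked]

query = {"foaf:name", "foaf:knows", "dbo:birthPlace"}
views = {
    "v_people": {"foaf:name", "foaf:knows"},
    "v_places": {"dbo:birthPlace"},
    "v_misc": {"dc:title"},
}
order = rank_views(query, views)
```

Materializing views in this order lets a mediator produce answers for the most query-relevant data first.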
Towards a Semantic Web Platform for Finite Element Simulations

Finite Element (FE) simulations are present in many different branches of science. The growing complexity of FE models and their associated costs create a demand for facilitating the construction, reuse and reproducibility of FE models. This work demonstrates how Semantic Web technologies can be used to represent FE simulation data, improving the reproducibility and automation of FE simulations.

André Freitas, Kartik Asooja, Swapnil Soni, Marggie Jones, Panagiotis Hasapis, Ratnesh Sahay
A Demonstration of a Natural Language Query Interface to an Event-Based Semantic Web Triplestore

Natural language Semantic Web queries can be treated as expressions of the lambda calculus and evaluated directly against an event-based triplestore using only basic triple retrieval operations. This facilitates the accommodation of complex NL constructs.

Richard A. Frost, Jonathon Donais, Eric Mathews, Wale Agboola, Rob Stewart
Kuphi – an Investigation Tool for Searching for and via Semantic Relations

In this work, we present a new process-oriented approach to information retrieval called Kuphi. It is intended for investigating entities and their semantic relations to other entities in text documents. We extend traditional search capabilities, which are based on the bag-of-words model, in the following way: starting with a keyword search for a specific entity, the user can not only search for appearances of this entity in the text documents; she can also search, via user-specified relations of the queried entity to other entities, for these associated entities in the text. The user thus has the possibility to search indirectly for manifestations of these relations. Thanks to cross-lingual annotation, we allow the query language to differ from the language of the documents. We demonstrate our approach with DBpedia as knowledge base and news texts gathered from RSS feeds.

Michael Färber, Lei Zhang, Achim Rettinger
FAGI-tr: A Tool for Aligning Geospatial RDF Vocabularies

In this paper, we present FAGI-tr, a tool for aligning RDF vocabularies with respect to their geospatial aspect. The tool provides a framework for (a) loading a source and a target geospatial RDF dataset, (b) identifying the vocabularies used to represent geospatial RDF data, (c) selecting, from both datasets, the representations to be considered for processing, (d) selecting a target vocabulary and transforming all geospatial triples from both datasets into the respective format, and (e) outputting the two datasets for further processing. The outcome of the process is two datasets that follow exactly the same vocabulary and are also cleansed of possible duplicate triples containing geospatial metadata, which occur when an RDF dataset adopts more than one vocabulary to describe spatial data. The tool has been tested with DBpedia data and performs rather efficiently.

Giorgos Giannopoulos, Thomas Maroulis, Dimitrios Skoutas, Nikos Karagiannakis, Spiros Athanasiou
SparqlFilterFlow: SPARQL Query Composition for Everyone

SparqlFilterFlow provides a visual interface for the composition of SPARQL queries, in particular SELECT and ASK queries. It is based on the intuitive and empirically well-founded filter/flow model, which has been extended to address the unique specifics of SPARQL and RDF. In contrast to related work, no structured text input is required; the queries can be created entirely with graphical elements. This allows even users without expertise in Semantic Web technologies to create complex SPARQL queries with little training. SparqlFilterFlow is implemented in C#, supports a large number of SPARQL constructs and can be applied to any SPARQL endpoint.

Florian Haag, Steffen Lohmann, Thomas Ertl
Visualizing RDF Data Cubes Using the Linked Data Visualization Model

The Data Cube represents one of the basic means for storing, processing and analyzing statistical data. Recently, the RDF Data Cube Vocabulary became a W3C recommendation, and at the same time interesting datasets using it started to appear. Along with them appeared the need for compatible visualization tools. The Linked Data Visualization Model (LDVM) is a formalism focused on this area and is implemented by Payola, a framework for the analysis and visualization of Linked Data. In this paper, we present the capabilities of LDVM and Payola to visualize RDF Data Cubes as well as other statistical datasets not yet compatible with the Data Cube Vocabulary. We also compare our approach to CubeViz, a visualization tool specialized in RDF Data Cube visualizations.

Jiří Helmich, Jakub Klímek, Martin Nečaský
Prod-Trees: Semantic Search for Earth Observation Products

Access to Earth Observation products remains difficult for end users in most domains. Although various search engines have been developed, they neither satisfy the needs of scientific communities for advanced search of EO products, nor do they use standardized vocabularies reusable by other organizations. To address this, we present the Prod-Trees platform, a semantically-enabled search engine for EO products enhanced with EO-netCDF, a new standard for accessing Earth Observation products.

M. Karpathiotaki, K. Dogani, M. Koubarakis, B. Valentin, P. Mazzetti, M. Santoro, S. Di Franco
UnifiedViews: An ETL Framework for Sustainable RDF Data Processing

We present UnifiedViews, an Extract-Transform-Load (ETL) framework that allows users to define, execute, monitor, debug, schedule, and share ETL data processing tasks, which may employ custom plugins created by users. UnifiedViews differs from other ETL frameworks by natively supporting RDF data and ontologies. We are persuaded that UnifiedViews helps RDF/Linked Data consumers address the problem of sustainable RDF data processing; we support this claim by presenting a list of projects and other activities in which UnifiedViews is successfully exploited.

Tomáš Knap, Maria Kukhar, Bohuslav Macháč, Petr Škoda, Jiří Tomeš, Ján Vojt
SmarT INsiGhts (STING) - An Intelligence Application for Querying Heterogeneous Databases

This paper presents an application implemented at Inland Revenue, New Zealand, that enables users to access data seamlessly from different types of databases. It was developed to give our investigators the ability to obtain comprehensive information about a business entity or a group of entities. The solution presented so far has been implemented over relational, semantic and document databases. It allows its users to pose fairly complex queries on these databases and retrieve results without having to know the specific query languages. At this point, it supports the presentation of results in graph and tabular formats.

Vikash Kumar, Ganesh Selvaraj, Andy Shin, Paulo Gottgtroy
OLAP4LD – A Framework for Building Analysis Applications Over Governmental Statistics

Although useful governmental statistics have been published as Linked Data, there are query processing and data pre-processing challenges in allowing citizens to explore such multidimensional datasets in pivot tables. In this demo paper we present OLAP4LD, a framework for developers of applications over Linked Data sources that reuse the RDF Data Cube Vocabulary. Our demonstration will let visiting developers and dataset publishers explore European statistics with the Linked Data Cubes Explorer (LDCX), explain how LDCX makes use of OLAP4LD, and show common dataset modelling errors.

Benedikt Kämpgen, Andreas Harth
The ProtégéVOWL Plugin: Ontology Visualization for Everyone

The Visual Notation for OWL Ontologies (VOWL) provides a visual language for the representation of ontologies. In contrast to related work, VOWL aims at an intuitive and interactive visualization that is also understandable to users less familiar with ontologies. This paper presents ProtégéVOWL, a first implementation of VOWL realized as a plugin for the ontology editor Protégé. It accesses the internal ontology representation provided by the OWL API and defines graphical mappings according to the VOWL specification. The information visualization toolkit Prefuse is used to render the visual elements and to combine them into a force-directed graph layout. Results from a preliminary user study indicate that ProtégéVOWL does indeed provide a comparatively intuitive and usable ontology visualization.

Steffen Lohmann, Stefan Negru, David Bold
A Rule-Based System for Monitoring of Microblogging Disease Reports

Real-time microblogging messages are an interesting data source for early warning systems that track outbreaks of epidemic diseases. Microblogging monitoring systems might be able to detect disease outbreaks in communities faster than traditional public health services. Realizing such systems requires a message classification approach that can distinguish messages concerning diseases from unrelated messages. Existing machine learning classification approaches face difficulties here due to the lack of a longer learning history and the short length of the messages. In this paper, we present a demonstration of our rule-based approach for classifying disease reports. Our system is built on the extraction of disease-related named entities. Identifying the types of the recognized named entities using existing knowledge bases helps our system classify a message as a disease report. We combine our approach with further text processing techniques such as term frequency calculation. Our experimental results show that the presented approach is capable of classifying disease report messages with acceptable precision and recall.

Wojciech Lukasiewicz, Kia Teymourian, Adrian Paschke
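A toy version of such a rule can be written in a few lines: a message counts as a disease report if it mentions a known disease entity together with an outbreak-related trigger term. The tiny gazetteer below is invented for illustration — the actual system resolves entity types against knowledge bases rather than a hard-coded list:

```python
# Illustrative gazetteers (assumptions, not the system's knowledge base).
DISEASES = {"flu", "influenza", "measles", "cholera"}
TRIGGERS = {"outbreak", "cases", "infected", "spreading"}

def is_disease_report(message):
    """Rule: a disease mention plus an outbreak trigger -> disease report."""
    tokens = set(message.lower().replace(",", " ").replace(".", " ").split())
    return bool(tokens & DISEASES) and bool(tokens & TRIGGERS)

messages = [
    "New measles cases reported downtown",
    "I hate catching the flu every winter",
    "Cholera outbreak feared after flooding",
]
reports = [m for m in messages if is_disease_report(m)]
```

The second message mentions a disease but lacks any outbreak trigger, so the rule correctly rejects it as a non-report.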
LinkZoo: A Linked Data Platform for Collaborative Management of Heterogeneous Resources

Modern collaborations rely on sharing and reusing heterogeneous resources. The ability to combine different types of information objects in semantically meaningful ways becomes a necessity for the information-intensive requirements of collaborative environments. In this paper we present LinkZoo, a web-based, linked data enabled platform that allows users to create, manage, discover and search heterogeneous resources such as files, web documents, people and events, interlink them, annotate them, exploit their inherent structures, enrich them with semantics and make them available as linked data. LinkZoo easily and intuitively allows for dynamic communities that enable web-based collaboration through resource sharing and annotating, exposing objects on the Linked Data Web under controlled vocabularies and permissions.

Marios Meimaris, George Alexiou, George Papastefanatos
TRTML - A Tripleset Recommendation Tool Based on Supervised Learning Algorithms

The Linked Data initiative promotes the publication of interlinked RDF triplesets, thereby creating a global-scale data space. However, to enable the creation of such a data space, the publisher of a tripleset $$t$$ must be aware of other triplesets that can be interlinked with $$t$$. Towards this end, this paper describes a Web-based application, called TRTML, that explores metadata available in Linked Data catalogs to provide data publishers with recommendations of related triplesets. TRTML combines supervised learning algorithms and link prediction measures to provide recommendations. The evaluation of the tool adopted as ground truth a set of links obtained from metadata stored in the DataHub catalog. The high precision and recall results demonstrate the usefulness of TRTML.

Alexander Arturo Mera Caraballo, Narciso Moura Arruda Jr., Bernardo Pereira Nunes, Giseli Rabello Lopes, Marco Antonio Casanova
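One simple link-prediction measure over catalog metadata is the Jaccard similarity of two triplesets' tag sets; TRTML feeds several such measures as features to supervised classifiers. The tag data below is invented for illustration:

```python
# Jaccard similarity of catalog tag sets as a tripleset-relatedness feature.
def jaccard(tags_a, tags_b):
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical DataHub-style tag metadata.
datahub_tags = {
    "dbpedia": {"encyclopedia", "lod", "rdf"},
    "geonames": {"geography", "lod", "rdf"},
    "gutenberg": {"books"},
}

score = jaccard(datahub_tags["dbpedia"], datahub_tags["geonames"])
```

A classifier trained on such features (plus known links as ground truth) can then recommend candidate triplesets for interlinking.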
morph-LDP: An R2RML-Based Linked Data Platform Implementation

The W3C Linked Data Platform (LDP) candidate recommendation defines a standard HTTP-based protocol for read/write Linked Data. The W3C R2RML recommendation defines a language for mapping between relational databases (RDBs) and RDF. This paper presents morph-LDP, a novel system that combines these two W3C standardization initiatives to expose relational data as read/write Linked Data for LDP-aware applications, whilst allowing legacy applications to continue using their relational databases.

Nandana Mihindukulasooriya, Freddy Priyatna, Oscar Corcho, Raúl García-Castro, Miguel Esteban-Gutiérrez
Combining a REST Lexical Analysis Web Service with SPARQL for Mashup Semantic Annotation from Text

Current automatic annotation systems are often monolithic, holding internal copies of both the machine-learned annotation models and the reference vocabularies they use. This is problematic particularly for frequently changing references such as person and place registries, as the information in the copy quickly grows stale. In this paper, arguments and experiments are presented supporting the notion that sufficient accuracy and recall can both be obtained simply by combining a sufficiently capable lexical analysis web service with queries against a primary SPARQL store, even for often-problematic, highly inflected languages.

Eetu Mäkelä
Aether – Generating and Viewing Extended VoID Statistical Descriptions of RDF Datasets

This paper presents the Aether web application for generating, viewing and comparing extended VoID statistical descriptions of RDF datasets. The tool is useful, for example, in getting to know a newly encountered dataset, in comparing datasets between versions, and in detecting outliers and errors. Examples are given of how the tool has been used to shed light on multiple important datasets.

Eetu Mäkelä
SPARQL SAHA, a Configurable Linked Data Editor and Browser as a Service

SPARQL SAHA is a linked data editor and browser that can be used as a service, targeting any available SPARQL endpoint. Besides being available as a web service, the primary differentiating features of the tool are its configurability to match the underlying data, and the fact that the usability of its user interface has been verified by dozens of non-experts using the tool in multiple multi-year projects.

Eetu Mäkelä, Eero Hyvönen
LinkLion: A Link Repository for the Web of Data

Links between knowledge bases form the backbone of the Web of Data. Consequently, numerous applications have been developed to compute, evaluate and infer links. Still, the results of many of these applications remain inaccessible to the tools and frameworks that rely upon them. We address this problem by presenting LinkLion, a repository for links between knowledge bases. Our repository is designed as an open-access and open-source portal for the management and distribution of link discovery results. Users are empowered to upload links and specify how they were created. Moreover, users and applications can select and download sets of links via dumps or SPARQL queries. Currently, our portal contains 12.6 million links of 10 different types distributed across 3184 mappings that link 449 datasets. In this demo, we will present the repository as well as different means to access and extend the data it contains. The repository can be found at http://www.linklion.org.

Markus Nentwig, Tommaso Soru, Axel-Cyrille Ngonga Ngomo, Erhard Rahm
Big, Linked and Open Data: Applications in the German Aerospace Center

Earth Observation satellites acquire huge volumes of high-resolution images, continuously increasing the size of the archives and the variety of EO products. However, only a small part of this data is exploited. In this paper, we present how we take advantage of the TerraSAR-X images of the German Aerospace Center in order to build applications on top of EO data.

C. Nikolaou, K. Kyzirakos, K. Bereta, K. Dogani, S. Giannakopoulou, P. Smeros, G. Garbis, M. Koubarakis, D. E. Molina, O. C. Dumitru, G. Schwarz, M. Datcu
VideoLecturesMashup: Using Media Fragments and Semantic Annotations to Enable Topic-Centred e-Learning

In this demo, we present VideoLecturesMashup, which delivers re-mixes of learning materials from the VideoLectures.NET portal based on topics shared across different lectures. Learners need more efficient access to teaching on specific topics, which may be part of a larger lecture focused on a different topic and may occur in lectures from different collections in distinct domains. Current e-learning video portals cannot address this need, whether it is to quickly dip into a short part of a longer lecture focused on a specific topic, or to easily explore what is taught about a certain topic across collections. Through the application of video analysis, semantic annotation and Media Fragment URIs, we have implemented a first demo of VideoLecturesMashup.

Lyndon Nixon, Tanja Zdolsek, Ana Fabjan, Peter Kese
Securing Access to Sensitive RDF Data

Given the increasing amount of sensitive RDF data available on the Web, it becomes critical to guarantee secure access to this content. The problem becomes even more challenging in the presence of RDFS inference, where inferred knowledge needs to be protected in the same way as explicit knowledge. State-of-the-art models for RDF access control annotate triples with concrete values that denote whether a triple can be accessed or not. In such approaches, the computation of the corresponding values for inferred triples is hard-coded; this creates several problems in the presence of updates to the data or, most importantly, when the access control policies change. We answer the above challenges by proposing an abstract model in which access labels are abstract tokens and the computation of inferred labels is modelled through abstract operators. We demonstrate our model through HACEA (Health Access Control Enforcement Application), which provides simple access control/privacy functionalities in the context of a medical use case.

V. Papakonstantinou, G. Flouris, I. Fundulaki, H. Kondylakis
SCS Connector - Quantifying and Visualising Semantic Paths Between Entity Pairs

A key challenge of the Semantic Web lies in the creation of semantic links between Web resources. The creation of links serves as a means to semantically enrich Web resources, connecting disparate information sources and facilitating data reuse and sharing. As the amount of data on the Web is ever increasing, automated methods to unveil links between Web resources are required. In this paper, we introduce a tool, called SCS Connector, that assists users in uncovering links between entity pairs within and across datasets. SCS Connector provides a Web-based user interface and a RESTful API that enable users to interactively visualise and analyse the paths between an entity pair $$(e_i,e_j)$$ through known links that can reveal meaningful relationships between $$(e_i,e_j)$$ according to a semantic connectivity score ($$SCS$$).

Bernardo Pereira Nunes, José Herrera, Davide Taibi, Giseli Rabello Lopes, Marco A. Casanova, Stefan Dietze
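A path-based connectivity score of this kind can be sketched with a Katz-style formula: sum, over path lengths l, of beta**l times the number of simple paths of length l between the two entities. Both the formula's role as a stand-in for the published SCS and the small graph below are assumptions for illustration:

```python
# Katz-style connectivity score over a toy directed entity graph.
def paths_of_length(graph, src, dst, max_len):
    """Count simple paths (no repeated nodes) from src to dst per length."""
    counts = {l: 0 for l in range(1, max_len + 1)}
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt in path:
                continue  # keep paths simple
            if nxt == dst:
                if len(path) <= max_len:  # len(path) == number of edges used
                    counts[len(path)] += 1
            elif len(path) < max_len:
                stack.append((nxt, path + [nxt]))
    return counts

def connectivity_score(graph, src, dst, max_len=3, beta=0.5):
    """Shorter paths weigh more: sum of beta**l * (#paths of length l)."""
    counts = paths_of_length(graph, src, dst, max_len)
    return sum((beta ** l) * n for l, n in counts.items())

graph = {
    "berlin": ["germany"],
    "germany": ["eu", "berlin"],
    "paris": ["france"],
    "france": ["eu"],
}
score = connectivity_score(graph, "berlin", "eu")
```

With one path of length 2 (berlin → germany → eu) and beta = 0.5, the score is 0.25; unrelated pairs score 0.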
Distant Supervision for Relation Extraction Using Ontology Class Hierarchy-Based Features

Relation extraction is a key step in structuring natural language text. This paper demonstrates a multi-class classifier for relation extraction, constructed using the distant supervision approach along with resources from the Semantic Web. In particular, the classifier uses a feature based on the class hierarchy of an ontology that, in conjunction with basic lexical features, improves accuracy and recall. The paper contains extensive experiments, using a corpus extracted from Wikipedia and the DBpedia ontology, to demonstrate the usefulness of the new feature.

Pedro H. R. Assis, Marco A. Casanova
Augmenting TV Newscasts via Entity Expansion

We present an approach that leverages the knowledge present on the Web for identifying and enriching relevant items inside a news video and displaying them in a timely and user-friendly fashion. This second-screen prototype (i) collects and offers information about persons, locations, organizations and concepts occurring in the newscast, and (ii) combines them to enrich the underlying story along five main dimensions: experts’ opinions, timeline, in depth, in other sources, and geo-localized comments from other viewers. Starting from preliminary insights coming from the named entities spotted in the subtitles, we expand this initial context to a broader event representation by relying on the knowledge of other Web documents about the same fact. An online demo of the proposed solution is available at http://www.linkedtv.project.cwi.nl/news/.

José Luis Redondo-García, Michiel Hildebrand, Lilia Perez Romero, Raphaël Troncy
Geographic Summaries from Crowdsourced Data

In this paper, we present a research prototype for creating geographic summaries using the whereabouts of Foursquare users. Exploiting the density of venue types in a particular region, the system adds a layer over any typical web mapping service, creating an at-a-glance summary of the venues sampled from the Foursquare knowledge base. Each summary is represented by a convex hull whose shape is automatically computed from the venue densities enclosed in the area. The summary is then labeled with the most prominent category or categories, where prominence is given by the observed venue category density. The prototype provides two outputs: a lightweight representation structured in GeoJSON, and a semantic description using the Open Annotation Ontology. We evaluate the quality of the summaries using the Sum of Squared Errors (SSE) and the Jaccard distance. The system is available at http://geosummly.eurecom.fr.

Giuseppe Rizzo, Giacomo Falcone, Rosa Meo, Ruggero G. Pensa, Raphaël Troncy, Vuk Milicic
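The convex-hull step can be illustrated with Andrew's monotone-chain algorithm over venue coordinates. The coordinates below are invented; in the real system they would come from sampled Foursquare venues:

```python
# Andrew's monotone-chain convex hull over 2D (lon, lat) points.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive if o->a->b makes a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]

venues = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # hypothetical venue coords
hull = convex_hull(venues)
```

Interior venues such as `(2, 1)` are discarded, leaving only the boundary of the summarized region, which can then be serialized as a GeoJSON polygon.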
Dendro: Collaborative Research Data Management Built on Linked Open Data

Research datasets in the so-called “long-tail of science” are easily lost after their primary use. Support for preservation, if available, is hard to fit in the research agenda. Our previous work has provided evidence that dataset creators are motivated to spend time on data description, especially if this also facilitates data exchange within a group or a project. This activity should take place early in the data generation process, when it can be regarded as an actual part of data creation. We present the first prototype of the Dendro platform, designed to help researchers use concepts from domain-specific ontologies to collaboratively describe and share datasets within their groups. Unlike existing solutions, ontologies are used at the core of the data storage and querying layer, enabling users to establish meaningful domain-specific links between data, for any domain. The platform is currently being tested with research groups from the University of Porto.

João Rocha da Silva, João Aguiar Castro, Cristina Ribeiro, João Correia Lopes
Analyzing Linked Data Quality with LiQuate

The number of datasets in the Linking Open Data (LOD) cloud, as well as the number of LOD-based applications, has exploded in recent years. However, because of data source heterogeneity, published data may suffer from redundancy, inconsistencies, or incompleteness; thus, results generated by LOD-based applications may be imprecise, ambiguous, or unreliable. We demonstrate the capabilities of LiQuate (Linked Data Quality Assessment), a tool that relies on Bayesian networks to analyze the quality of data and links in the LOD cloud.

Edna Ruckhaus, Maria-Esther Vidal, Simón Castillo, Oscar Burguillos, Oriana Baldizan
Collaborative Semantic Management and Automated Analysis of Scientific Literature

The overabundance of literature available in online repositories is an ongoing challenge for scientists, who have to efficiently manage and analyze content for their information needs. Most existing literature management systems merely provide support for storing bibliographical metadata, tagging, and simple annotation capabilities. In this demo paper, we go beyond these approaches by demonstrating how an innovative combination of Semantic Web technologies with natural language processing can mitigate the information overload by helping to curate and organize scientific literature. We present the Zeeva system as a first prototype that demonstrates how existing papers can be turned into a queryable knowledge base.

Bahar Sateli, René Witte
di.me: Ontologies for a Pervasive Information System

The di.me userware is a pervasive personal information management system that successfully adopted ontologies to provide various intelligent features. Supported by a suitable user interface, di.me provides ontology-driven support for the (i) integration of personal information from multiple personal sources, (ii) privacy-aware sharing of personal data, (iii) context-awareness and personal situation recognition, and (iv) creation of personalised rules that operate over live events to provide notifications, effect system changes or share data.

Simon Scerri, Ismael Rivera, Jeremy Debattista, Simon Thiel, Keith Cortis, Judie Attard, Christian Knecht, Andreas Schuller, Fabian Hermann
IDE Integrated RDF Exploration, Access and RDF-Based Code Typing with LITEQ

In order to access RDF data in software development, one needs to deal with challenges concerning the integration of one or several RDF data sources into a host programming language. LITEQ allows for exploring an RDF data source and mapping both the data schema and the data itself from this source into the programming environment for easy reuse by the developer. Core to LITEQ is a novel kind of path query language, NPQL, which allows for both extensional queries returning data and intensional queries returning class descriptions. This demo presents a prototype of LITEQ that supports such a type mapping as well as autocompletion for NPQL queries.

Stefan Scheglmann, Ralf Lämmel, Martin Leinberger, Steffen Staab, Matthias Thimm, Evelyne Viegas
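NPQL's actual syntax is not given in the abstract, but the extensional/intensional distinction it draws can be illustrated in plain Python over a toy in-memory triple set (the data and helper names below are hypothetical, not part of LITEQ):

```python
# Rough illustration of extensional vs. intensional queries over a toy
# triple store. An extensional query returns the *instances* of a class;
# an intensional query returns a *class description* that could back a
# host-language type. All data here is invented.

TRIPLES = [
    ("alice", "rdf:type", "Person"),
    ("bob", "rdf:type", "Person"),
    ("alice", "name", "Alice"),
    ("alice", "knows", "bob"),
    ("bob", "name", "Bob"),
]

def extensional(cls):
    """Return the instances of a class (the data itself)."""
    return sorted(s for s, p, o in TRIPLES if p == "rdf:type" and o == cls)

def intensional(cls):
    """Return the properties used by the class's instances, i.e. a
    schema-level description usable as a host-language type."""
    members = set(extensional(cls))
    return sorted({p for s, p, o in TRIPLES if s in members and p != "rdf:type"})

print(extensional("Person"))  # instances of Person
print(intensional("Person"))  # candidate members for a Person type
```

In LITEQ the intensional result is what gets mapped into the programming environment as a type, while the extensional result populates it with data.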
Browsing DBpedia Entities with Summaries

The term “Linked Data” describes online-retrievable formal descriptions of entities and their links to each other. Machines and humans alike can retrieve these descriptions and discover information about links to other entities. However, for human users it becomes difficult to browse descriptions of single entities because, in many cases, they are referenced in more than a thousand statements.

In this demo paper we present summarum, a system that ranks triples and enables entity summaries for improved navigation within Linked Data. In its current implementation, the system focuses on DBpedia, with the summaries being based on the PageRank scores of the involved entities.

Andreas Thalhammer, Achim Rettinger
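The ranking idea can be sketched in a few lines: compute PageRank over the entity link graph, then order an entity's triples by the score of the linked entity. The graph, triples, and scoring choice below are toy illustrations, not the actual summarum implementation:

```python
# Sketch of PageRank-based triple ranking in the spirit of summarum.
# The entity graph and triples are toy data; the real system uses DBpedia.

def pagerank(edges, d=0.85, iters=50):
    """Plain power-iteration PageRank with dangling-node redistribution."""
    nodes = {n for e in edges for n in e}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [t for s, t in edges if s == n] for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n]
            if targets:
                share = d * pr[n] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:  # dangling node: spread its mass evenly
                for t in nodes:
                    nxt[t] += d * pr[n] / len(nodes)
        pr = nxt
    return pr

# Toy entity link graph (subject -> object edges).
edges = [("Berlin", "Germany"), ("Hamburg", "Germany"),
         ("Germany", "Europe"), ("Berlin", "Europe")]

# Facts about one entity, to be summarised.
triples = [("Berlin", "country", "Germany"),
           ("Berlin", "continent", "Europe"),
           ("Berlin", "twinCity", "Hamburg")]

pr = pagerank(edges)
# Rank the facts by the PageRank of the linked entity: globally important
# neighbours surface first in the summary.
summary = sorted(triples, key=lambda t: pr[t[2]], reverse=True)
print(summary)
```

With this toy graph the triples pointing at well-linked entities (here "Europe", then "Germany") come first, which is the intended effect: a short summary shows an entity's most salient connections.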
Using Semantic Technologies for Scalable Multi-channel Communication

The development of the Web in the direction of user-generated content, information sharing, online collaboration, and social media has drastically increased the number of communication channels that can be used to interact with potential customers. In this demonstration we present the latest developments of our multi-channel communication solution, which supports touristic service providers, e.g. hoteliers and tourist associations, in dealing with the challenge of meeting their communication needs. We make use of semantic technologies, i.e. semantic analysis, semantic annotations, ontologies, semantic matching, and rules, in order to automate several multi-channel communication tasks.

Ioan Toma, Christoph Fuchs, Corneliu Stanciu, Dieter Fensel
Backmatter
Metadata
Title
The Semantic Web: ESWC 2014 Satellite Events
Edited by
Valentina Presutti
Eva Blomqvist
Raphael Troncy
Harald Sack
Ioannis Papadakis
Anna Tordai
Copyright year
2014
Electronic ISBN
978-3-319-11955-7
Print ISBN
978-3-319-11954-0
DOI
https://doi.org/10.1007/978-3-319-11955-7