
2015 | Book

Knowledge Engineering and Knowledge Management

EKAW 2014 Satellite Events, VISUAL, EKM1, and ARCOE-Logic, Linköping, Sweden, November 24-28, 2014. Revised Selected Papers.

Edited by: Patrick Lambrix, Eero Hyvönen, Eva Blomqvist, Valentina Presutti, Guilin Qi, Uli Sattler, Ying Ding, Chiara Ghidini

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the satellite events held at the 19th International Conference on Knowledge Engineering and Knowledge Management, EKAW 2014, in November 2014. EKAW 2014 hosted three satellite workshops: VISUAL 2014, the International Workshop on Visualizations and User Interfaces for Knowledge Engineering and Linked Data Analytics; EKM1, the First International Workshop on Educational Knowledge Management; and ARCOE-Logic 2014, the 6th International Workshop on Acquisition, Representation and Reasoning about Context with Logic. This volume also contains the accepted contributions for the EKAW 2014 tutorials, demo and poster sessions.

Table of Contents

Frontmatter

Tutorials

Frontmatter
Language Resources and Linked Data: A Practical Perspective

Recently, experts and practitioners in language resources have started recognizing the benefits of the linked data (LD) paradigm for the representation and exploitation of linguistic data on the Web. The adoption of the LD principles is leading to an emerging ecosystem of multilingual open resources that conform to the Linguistic Linked Open Data Cloud, in which datasets of linguistic data are interconnected and represented following common vocabularies, which facilitates linguistic information discovery, integration and access. In order to contribute to this initiative, this paper summarizes several key aspects of the representation of linguistic information as linked data from a practical perspective. The main goal of this document is to provide the basic ideas and tools for migrating language resources (lexicons, corpora, etc.) as LD on the Web and to develop some useful NLP tasks with them (e.g., word sense disambiguation). Such material was the basis of a tutorial imparted at the EKAW’14 conference, which is also reported in the paper.
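
As a small illustration of what "language resources as linked data" can look like in practice (a sketch under assumed namespaces and identifiers, not material taken from the tutorial), the following snippet publishes a single lexical entry with Python's rdflib using the OntoLex-Lemon vocabulary and links its sense to a LOD resource.

```python
# Minimal sketch: one lexicon entry as linked data with rdflib and OntoLex-Lemon.
# The LEX namespace and the DBpedia link are illustrative assumptions.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
LEX = Namespace("http://example.org/lexicon/")  # hypothetical dataset namespace

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("lex", LEX)

entry, form, sense = LEX["bank-n"], LEX["bank-n-form"], LEX["bank-n-sense1"]

g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal("bank", lang="en")))
# the sense points at an existing Web resource, connecting lexicon and LOD cloud
g.add((entry, ONTOLEX.sense, sense))
g.add((sense, ONTOLEX.reference, URIRef("http://dbpedia.org/resource/Bank")))

print(g.serialize(format="turtle"))
```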

Jorge Gracia, Daniel Vila-Suero, John P. McCrae, Tiziano Flati, Ciro Baron, Milan Dojchinovski
From Knowledge Engineering for Development to Development Informatics

Knowledge sharing is a key enabler of development for the rural poor. ICTs can play a critical role, providing for instance market data or weather information to subsistence farmers, or education to children in remote areas. While advanced knowledge technology has proven its use in many applications in the so-called developed world, most of the tools cannot be easily applied in developing countries because of restricted infrastructure, unsuitable modes of communication or ignorance of the local context. In the K4D tutorial at EKAW 2014 we argued that a new kind of research in Knowledge Engineering is needed in order to make knowledge technology useful outside privileged developed countries. This research will have to include existing social and economic structures as fundamental requirements in order to be successful. Finally, we claim that this holds for a broader spectrum of subdisciplines of Computer Science, not just for Knowledge Engineering, which leads us to advocate Development Informatics: a joint forum for CS researchers who try to make their research relevant for the developing world as well.

Stefan Schlobach, Victor de Boer, Christophe Guéret, Stéphane Boyera, Philippe Cudré-Mauroux

Workshop summaries and best papers

Frontmatter
Acquisition, Representation and Reasoning About Context with Logic (ARCOE-Logic 2014)

In recent years, research in contextual knowledge representation and reasoning has become more relevant in the areas of the Semantic Web, Linked Open Data, and Ambient Intelligence, where knowledge is not considered a monolithic and static asset but is distributed across a network of interconnected, heterogeneous and evolving knowledge resources. The challenge of dealing with the contextual nature of this knowledge thus reaches an unprecedented scale. The ARCOE-Logic workshop aims to provide a dedicated forum for researchers to discuss recent developments, important open issues and future directions in the area of contextual knowledge representation and knowledge management.

Alessandra Mileo, Martin Homola, Michael Fink
Knowledge Propagation in Contextualized Knowledge Repositories: An Experimental Evaluation
(Extended Paper)

As the interest in the representation of context-dependent knowledge in the Semantic Web has been recognized, a number of logic-based solutions have been proposed in this regard. In response to this need, in our recent works we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. CKR is not only a theoretical framework: it has been effectively implemented over state-of-the-art tools for the management of Semantic Web data, with inference inside and across contexts realized in the form of forward SPARQL-based rules over different RDF named graphs. In this paper we present the first evaluation results for this CKR implementation. In our first experiment we study its scalability with respect to different reasoning regimes. In a second experiment we analyze the effects of knowledge propagation on the reasoning process. In the last experiment we study the effects of modularization of global knowledge with respect to local reasoning.
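
To make the flavour of such rules concrete, here is a minimal sketch (an illustration under assumed context names, not the actual CKR implementation) of forward SPARQL-based propagation between RDF named graphs standing in for contexts, using Python's rdflib.

```python
# Minimal sketch: two named graphs act as contexts; a single forward SPARQL
# rule copies type assertions from a "global" context into a "local" one.
from rdflib import Dataset, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
GLOBAL = URIRef("http://example.org/ctx/global")  # assumed context identifiers
LOCAL = URIRef("http://example.org/ctx/2014")

ds = Dataset()
ds.graph(GLOBAL).add((EX.MobileWorldCongress, RDF.type, EX.TradeFair))
ds.graph(LOCAL).add((EX.MobileWorldCongress, EX.location, EX.Barcelona))

# Very rough stand-in for knowledge propagation across contexts.
ds.update("""
    INSERT { GRAPH <http://example.org/ctx/2014> { ?s a ?c } }
    WHERE  { GRAPH <http://example.org/ctx/global> { ?s a ?c } }
""")

for s, p, o in ds.graph(LOCAL):
    print(s, p, o)
```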

Loris Bozzato, Luciano Serafini
Different Types of Conflicting Knowledge in AmI Environments

We characterize different types of conflicts that often occur in complex distributed multi-agent scenarios, such as Ambient Intelligence (AmI) environments, and we argue that these conflicts should be resolved in a suitable order and using the most appropriate conflict resolution strategy for each individual conflict type. Our analysis shows that conflict resolution in AmI environments and similar multi-agent domains is a complex process, spanning different levels of abstraction. The agents deployed in such environments need to handle conflicts in a coordinated manner and with a certain level of agreement. We then point out how this problem is currently handled in the relevant AmI literature.

Martin Homola, Theodore Patkos
Summary of the Workshop on Educational Knowledge Management

The first edition of the workshop Educational Knowledge Management (EKM 2014) was held at the 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW). The workshop took place in the city of Linköping, Sweden, on the 24th of November 2014. The workshop was organized by Inaya Lahoud and Lars Ahrenberg.

Inaya Lahoud, Lars Ahrenberg
Generating Multiple Choice Questions From Ontologies: How Far Can We Go?

Ontology-based Multiple Choice Question (MCQ) generation has a relatively short history. Many attempts have been made to develop methods for generating MCQs from ontologies. However, there is still a need to understand the applicability of these methods in real educational settings. In this paper, we present an empirical evaluation of ontology-based MCQ generation. We examine the feasibility of applying ontology-based MCQ generation methods by educators with no prior experience in ontology building. The findings of this study show that this is feasible and can result in a reasonable number of educationally useful questions, with good predictions about their difficulty levels.
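
One widely used strategy in this line of work, sketched below purely as an illustration and not as the authors' exact method, is to form a question stem from a class's parent and draw distractors from its sibling classes; the ontology file name and label handling are assumptions.

```python
# Minimal sketch: generate an MCQ whose distractors are sibling classes of the
# correct answer, read from an ontology with rdflib.
import random
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

g = Graph()
g.parse("biology.owl")  # hypothetical ontology file

def label(g, node):
    # fall back to the local name when no rdfs:label is present
    return next(g.objects(node, RDFS.label), str(node).rsplit("/", 1)[-1])

def make_question(g, answer: URIRef, n_distractors: int = 3):
    parents = list(g.objects(answer, RDFS.subClassOf))
    if not parents:
        return None
    parent = parents[0]
    siblings = [c for c in g.subjects(RDFS.subClassOf, parent) if c != answer]
    distractors = random.sample(siblings, min(n_distractors, len(siblings)))
    options = [answer] + distractors
    random.shuffle(options)
    return {
        "stem": f"Which of the following is a kind of {label(g, parent)}?",
        "options": [str(label(g, o)) for o in options],
        "key": str(label(g, answer)),
    }
```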

Tahani Alsubait, Bijan Parsia, Uli Sattler
Introduction to VISUAL 2014 - Workshop on Visualizations and User Interfaces for Knowledge Engineering and Linked Data Analytics

VISUAL 2014 addressed the challenges in providing knowledge engineers and data analysts with visualizations and well-designed user interfaces to support the understanding of the concepts, data instances and relationships in different domains. The workshop was organized around two tracks: one focused on visualizations and user interfaces for Knowledge Engineering, and the other on Visual Analytics for dynamic and large-scale data. Six contributions were presented at the workshop, which also included an interactive tool demonstration session.

Valentina Ivanova, Tomi Kauppinen, Steffen Lohmann, Suvodeep Mazumdar, Catia Pesquita, Kai Xu
OntoViBe 2: Advancing the Ontology Visualization Benchmark

A variety of ontology visualizations have been presented in the last couple of years. The features of these visualizations often need to be tested during their development or for evaluation purposes. However, it can be difficult to find suitable ontologies, in particular for testing special concepts and concept combinations. We have developed OntoViBe, an ontology covering a wide variety of OWL language constructs for the purpose of testing ontology visualizations. This paper presents OntoViBe 2, which extends the first version by annotations, individuals, anonymous classes, and a module for testing different combinations of cardinality constraints, among others. We describe the design principles underlying OntoViBe 2 and present the supported features in a coverage matrix. Finally, we load OntoViBe 2 into ontology visualization tools, point to some noteworthy aspects of the resulting visualizations, and demonstrate how OntoViBe can be used for testing ontology visualizations.

Florian Haag, Steffen Lohmann, Stefan Negru, Thomas Ertl

Posters

Frontmatter
The Semantic Lancet Project: A Linked Open Dataset for Scholarly Publishing

In this poster we introduce the Semantic Lancet Project, whose goal is to make available rich data about scholarly publications and to provide users with sophisticated services on top of those data.

Andrea Bagnacani, Paolo Ciancarini, Angelo Di Iorio, Andrea Giovanni Nuzzolese, Silvio Peroni, Fabio Vitali
Personalised, Serendipitous and Diverse Linked Data Resource Recommendations

Due to the huge and diverse amount of information, the actual access to a piece of information in the Linked Open Data (LOD) cloud still demands a significant amount of effort. To overcome this problem, a number of Linked Data-based recommender systems have been developed. However, they have primarily been developed for particular domains, they require human intervention in the dataset pre-processing step, and they can hardly be adapted to new datasets. In this paper, we present our method for personalised access to Linked Data, focusing in particular on its applicability and its salient features.

Milan Dojchinovski, Tomas Vitvar
Spreadsheet-Based Knowledge Acquisition for Facilitating Active Domain Expert Participation

We propose a spreadsheet-based knowledge acquisition (KA) approach for lowering the familiarization hurdle and thus increasing ease of use and user experience for domain experts. By enabling experts to participate (mostly) autonomously in KA, the approach adds to the overall diversity of today's KA tool landscape.
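
As a purely illustrative sketch of the general idea, and not of the authors' tool, the snippet below turns rows of a hypothetical spreadsheet export (a CSV with concept, property and value columns) into RDF statements with Python's csv module and rdflib.

```python
# Minimal sketch: expert-authored spreadsheet rows become RDF triples.
# File name, column layout and the EX namespace are assumptions.
import csv
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/ka/")  # hypothetical namespace

g = Graph()
with open("expert_knowledge.csv", newline="") as f:        # hypothetical export
    for row in csv.DictReader(f):                          # columns: concept, property, value
        subject = EX[row["concept"].strip().replace(" ", "_")]
        predicate = EX[row["property"].strip().replace(" ", "_")]
        g.add((subject, predicate, Literal(row["value"].strip())))
        g.add((subject, RDFS.label, Literal(row["concept"].strip())))

g.serialize("expert_knowledge.ttl", format="turtle")
```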

Martina Freiberg, Felix Herrmann, Frank Puppe
From ER Models to the Entity Model

In this paper, a new knowledge representation formalism, called the entity model, is introduced. This model can be used to address knowledge diversity by making the modeling assumptions of different knowledge representations explicit and by rooting them in a world representation. The entity model can be used to: 1) detect the possible ways in which diversity appears in ER models, thereby improving their representational adequacy; 2) make the modeling assumptions behind different ER models explicit; 3) combine the different ER models in a unified view, thus enabling data integration.

Fausto Giunchiglia, Mattia Fumagalli
xWCPS: Bridging the Gap Between Array and Semi-structured Data

The ever growing amount of information collected by scientific instruments and the presence of descriptive metadata accompanying them call for a unified way of querying over array and semi-structured data. We present xWCPS, a novel query language that bridges the gap between these two different worlds, enhancing the expressiveness and user-friendliness of previous approaches.

Panagiotis Liakos, Panagiota Koltsida, George Kakaletris, Peter Baumann
Adapting the SEPIA system to the educational context

The SEPIA system allows creating assistance systems that meet technical assistance needs. In this paper, we aim at exploiting SEPIA in the educational context by confronting it with pedagogical assistance needs. This exploitation reveals limitations of the SEPIA system: the complex description of rules by pedagogical designers and the lack of domain knowledge. We therefore present patterns that facilitate the creation of assistance systems in the educational context, together with our ideas for exploiting domain knowledge.

Le Vinh Thai, Blandine Ginon, Stéphanie Jean-Daubias, Marie Lefevre

Demos

Frontmatter
An Intelligent Textbook that Answers Questions

Inquire Biology is a prototype of a new kind of intelligent textbook that answers students' questions, engages their interest, and improves their understanding. Inquire uses a knowledge representation of the conceptual knowledge in the textbook and inference procedures to answer questions. Students ask questions by typing free-form natural language queries or by selecting passages of text. The system then attempts to answer the question and also generates suggested questions related to the query or selection. The questions supported by the system were chosen to be educationally useful, for example: What is the structure of X? Compare X and Y. How does X relate to Y? In user studies, students found this question-answering capability useful while reading and while doing problem solving. In a controlled experiment, community college students using Inquire Biology outperformed students using either a hard copy or a conventional e-book version of the same textbook. While additional research is needed to fully develop Inquire, the prototype demonstrates the promise of applying knowledge representation and question answering to electronic textbooks.

Vinay K. Chaudhri, Adam Overholtzer, Aaron Spaulding
SHELDON: Semantic Holistic FramEwork for LinkeD ONtology Data

SHELDON is a framework that builds upon a machine reader for extracting RDF graphs from text so that the output is compliant with Semantic Web and Linked Data patterns. It extends the current human-readable web by using semantic best practices and technologies in a machine-processable form. Given a sentence in any language, it provides different semantic tasks as well as visualization tools which make use of the JavaScript InfoVis Toolkit and a knowledge enrichment component on top of RelFinder. The system can be freely used at http://wit.istc.cnr.it/stlab-tools/sheldon.

Diego Reforgiato Recupero, Andrea Giovanni Nuzzolese, Sergio Consoli, Aldo Gangemi, Valentina Presutti
Legalo: Revealing the Semantics of Links

Links in webpages carry an intended semantics: usually, they indicate a relation between two things, a subject (something referred to within the web page) and an object (the target webpage of the link, or something referred to within it). We designed and implemented a novel system, named Legalo, which uncovers the intended semantics of links by defining Semantic Web properties that capture their meaning. Legalo properties can be used for tagging links with semantic relations. The system can be used at http://wit.istc.cnr.it/stlab-tools/legalo.

Sergio Consoli, Andrea Giovanni Nuzzolese, Valentina Presutti, Diego Reforgiato Recupero, Aldo Gangemi
TourRDF: Representing, Enriching, and Publishing Curated Tours Based on Linked Data

Current mobile tourist guide systems are developed and used in separate data silos: each system and vendor tends to use its own proprietary, closed formats for representing tours and point of interest (POI) content. As a result, tour data cannot be enriched from other providers' tour and POI repositories, or from other external data sources, even when such data are made publicly available by, e.g., cities willing to promote tourism. This paper argues that an open, shared RDF-based tour vocabulary is needed to address these problems, and introduces such a model, TourRDF, extending the earlier TourML schema into the era of Linked Data. As a test and evaluation of the approach, a case study based on data about the UNESCO World Heritage site Suomenlinna fortress is presented.

Esko Ikkala, Eetu Mäkelä, Eero Hyvönen
SUGOI: Automated Ontology Interchangeability

A foundational ontology can solve interoperability issues among the domain ontologies aligned to it. However, several foundational ontologies have been developed, hence such interoperability issues still exist among domain ontologies aligned to different foundational ontologies. The novel SUGOI tool, Software Used to Gain Ontology Interchangeability, allows a user to automatically interchange a domain ontology among the DOLCE, BFO and GFO foundational ontologies. The success of swapping varies due to differences in coverage and in the number of mappings, both between the foundational ontologies themselves and in the alignment between the domain ontology and the foundational ontology. In this demo we present the tool; attendees can bring their preferred ontology for interchange with SUGOI and will be assisted with analysing the results in terms of 'good' and 'bad' entity linking, to assess how feasible it is to change the ontology over to another foundational ontology.

Zubeida Casmod Khan, C. Maria Keet
WebVOWL: Web-based Visualization of Ontologies

We present WebVOWL, a responsive web application for the visualization of ontologies. It implements the Visual Notation for OWL Ontologies (VOWL) and is entirely based on open web standards. The visualizations are automatically generated from JSON files into which the ontologies need to be converted; an exemplary OWL2VOWL converter implemented in Java and based on the OWL API is currently used for this purpose. The ontologies are rendered in a force-directed graph layout according to the VOWL specification. Interaction techniques allow users to explore the ontologies and customize their visualizations.
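
As a rough illustration of such a conversion step, and explicitly not the VOWL JSON schema or the OWL2VOWL converter itself, the following sketch extracts a bare node/link structure from an ontology with Python's rdflib and writes it as JSON suitable for a force-directed layout.

```python
# Minimal sketch: classes become nodes, object properties with matching
# domain/range become links. Input file name and output schema are assumptions.
import json
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

g = Graph()
g.parse("ontology.owl")  # hypothetical input ontology

classes = set(g.subjects(RDF.type, OWL.Class))
nodes = [{"id": str(c)} for c in classes]
links = []
for prop in g.subjects(RDF.type, OWL.ObjectProperty):
    domains = set(g.objects(prop, RDFS.domain)) & classes
    ranges = set(g.objects(prop, RDFS.range)) & classes
    for d in domains:
        for r in ranges:
            links.append({"source": str(d), "target": str(r), "label": str(prop)})

with open("ontology.json", "w") as out:
    json.dump({"nodes": nodes, "links": links}, out, indent=2)
```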

Steffen Lohmann, Vincent Link, Eduard Marbach, Stefan Negru
LSD Dimensions: Use and Reuse of Linked Statistical Data

RDF Data Cube (QB) has boosted the publication of Linked Statistical Data (LSD) on the Web, making them linkable to other related datasets and concepts following the Linked Data paradigm. In this demo we present LSD Dimensions, a web-based application that monitors the usage of dimensions and codes (variables and values in QB jargon) in Data Structure Definitions over six hundred public SPARQL endpoints. We plan to extend the system to retrieve more in-use QB metadata, to serve the dimension and code data through SPARQL and an API, and to provide analytics on the (re)use of statistical properties in LSD over time.
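
A minimal sketch of the kind of query such monitoring involves (an assumption, not the LSD Dimensions implementation): counting how often each qb:dimension appears in the Data Structure Definitions exposed by a single public SPARQL endpoint, here a placeholder URL.

```python
# Minimal sketch: tally qb:dimension usage across DSDs at one endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX qb: <http://purl.org/linked-data/cube#>
    SELECT ?dimension (COUNT(DISTINCT ?dsd) AS ?uses)
    WHERE {
        ?dsd a qb:DataStructureDefinition ;
             qb:component ?comp .
        ?comp qb:dimension ?dimension .
    }
    GROUP BY ?dimension
    ORDER BY DESC(?uses)
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["dimension"]["value"], row["uses"]["value"])
```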

Albert Meroño-Peñuela
Storyscope: Using Setting and Theme to Assist the Interpretation and Development of Museum Stories

Stories are used to provide a context for museum objects, for example linking those objects to what they depict or the historical context in which they were created. Many explicit and implicit relationships exist between the people, places and things mentioned in a story and the museum objects with which they are associated. Storyscope is an environment for authoring museum stories comprising text, media elements and semantic annotations. A recommender component provides additional context as to how the story annotations are related directly or via other concepts not mentioned in the story. The approach involves generating a concept space for different types of story annotation such as artists and museum objects. The concept space of an annotation is predominantly made up of a set of events, forming an event space. The story context is aggregated from the concept spaces of its associated annotations. Narrative notions of setting and theme are used to reason over the concept space, identifying key concepts and time-location pairs, and their relationship to the rest of the story. The author or reader can use setting and theme to navigate the context of the story.

Paul Mulholland, Annika Wolff, Eoin Kilfeather, Evin McCarthy
A Linked Data Approach to Know-How

The Web is one of the major repositories of human-generated know-how, such as step-by-step videos and instructions. This knowledge could potentially be reused in a wide variety of applications, but it currently suffers from a lack of structure and from isolation from related knowledge. To overcome these challenges we have developed a Linked Data framework which can automate the extraction of know-how from existing Web resources and generate links to related knowledge on the Linked Data Cloud. We have implemented our framework and used it to extract a Linked Data representation of two of the largest know-how repositories on the Web. We demonstrate two possible uses of the resulting dataset of real-world know-how. First, we use this dataset within a Web application to offer an integrated visualization of distributed know-how resources. Second, we show the potential of this dataset for inferring common sense knowledge about tasks.

Paolo Pareti, Benoit Testu, Ryutaro Ichise, Ewan Klein, Adam Barker
OntoEnrich: A Platform for the Lexical Analysis of Ontologies

The content of the labels in ontologies is usually considered hidden semantics, because the domain knowledge carried by such labels is not available as logical axioms in the ontology. The use of systematic naming conventions as a best practice for designing label content produces labels with structural regularities, namely lexical regularities. The structure and content of such regularities can help ontology engineers to increase the amount of machine-friendly content in ontologies, that is, to increase the number of logical axioms.

In this paper we present a web platform based on the OntoEnrich framework, which detects and analyzes lexical regularities, providing a series of useful insights into the structure and content of the labels that can be helpful for studying the engineering of ontologies and for their axiomatic enrichment. We describe its software architecture and how it can be used for analyzing the labels of ontologies, illustrated with examples from our research studies.
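
To give a flavour of what a lexical regularity is, here is a deliberately simplified sketch (an assumption, not the OntoEnrich algorithm): counting shared trailing words across labels with rdflib, a crude proxy for the regularities the platform detects.

```python
# Minimal sketch: labels that share a trailing word (e.g. many ending in
# "process") hint at a lexical regularity worth axiomatizing.
from collections import Counter
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("ontology.owl")  # hypothetical input ontology

suffix_counts = Counter()
for label in g.objects(predicate=RDFS.label):
    tokens = str(label).lower().split()
    if len(tokens) > 1:
        suffix_counts[tokens[-1]] += 1

for suffix, count in suffix_counts.most_common(10):
    print(f"{count:4d} labels end with '{suffix}'")
```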

Manuel Quesada-Martínez, Jesualdo Tomás Fernández-Breis, Robert Stevens, Nathalie Aussenac-Gilles
Integrating Unstructured and Structured Knowledge with the KnowledgeStore

We showcase the KnowledgeStore, a scalable, fault-tolerant, and Semantic Web grounded framework for interlinking unstructured (e.g., textual documents, web pages) and structured (e.g., RDF, LOD) content, enabling users to jointly store, manage, retrieve, and query both types of content.

Marco Rospocher, Francesco Corcoglioniti, Roldano Cattoni, Bernardo Magnini, Luciano Serafini

Doctoral Consortium

Frontmatter
Assessing the Spatio-Temporal Fitness of Information Supply and Demand on an Adaptive Ship Bridge

The information supply to the crew on nautical ship bridges has a significant impact on the probability of accidents. One reason is that too much or too little information is supplied, not satisfying the information demand. A solution to this problem is the Adaptive Ship Bridge, a system that closes this information gap between supply and demand through adaptation. However, to create an effective Adaptive Ship Bridge it is necessary to assess the information gap at design time. The thesis introduced in this paper proposes a novel method for this assessment along spatio-temporal and information quality dimensions. The method will allow designing Adaptive Ship Bridges that help to reduce accident risk.

Christian Denker
Versatile Visualization, Authoring and Reuse of Ontological Schemas Leveraging on Dataset Summaries

Ontologies and vocabularies written in the OWL language are a crucial part of the semantic web. OWL allows the same part of reality to be modeled using different combinations of constructs, constituting 'modeling styles'. We draft a novel approach to supporting ontology reuse and development with respect to this heterogeneity. The central notion of this approach is to use ontological background models to describe reality in a way that is OWL-independent and yet can be mapped to OWL. Using such models should enable easy comparison of ontologies from the same domain when they are designed using different modeling styles, automated generation of OWL style variants from existing ontologies, or even the generation of completely new OWL ontologies in the desired style. The PhD thesis focuses on the development and implementation of methods for the design of ontological background models and their transformation to OWL. The methods include summarization of the actual ontology usage in an existing dataset, which can serve as a starting point for the design of ontological background models and might also help to learn how to use the ontology properly for data annotation.

Marek Dudáš
Linked Data Cleansing and Change Management

The Web of Data is constantly growing in terms of covered domains, applied vocabularies, and number of triples. A high level of data quality is in the best interest of any data consumer.

Linked Data publishers can use various data quality evaluation tools prior to publication of their datasets. Nevertheless, most inconsistencies only become obvious when the data is processed in applications and presented to end users. Therefore, keeping the data tidy is not only the responsibility of the original data publishers; it becomes a mission for all distributors and consumers of Linked Data as well.

My main research topic is the inspection of feedback mechanisms for Linked Data cleansing in open knowledge bases. This work includes a change request vocabulary, the aggregation of change requests produced by various agents, versioning data resources, and consumer notification about changes. The individual components form the basis of a Linked Data Change Management framework.

Magnus Knuth
Culture-Aware Approaches to Modeling and Description of Intonation Using Multimodal Data

Computational approaches that conform to the cultural context are of paramount importance in music information research. The current state of the art has a limited view of such context, which manifests in ontologies, data, cognition and interaction models that are biased toward market-driven popular music. As a step towards addressing this, the thesis draws upon multimodal data sources concerning art music traditions, extracting culturally relevant and musically meaningful information about melodic intervals from each of them and structuring it with formal knowledge representations. As part of this, we propose novel approaches to describe intonation in audio music recordings and to use and adapt the semantic web infrastructure to complement this with knowledge extracted from text data. Owing to the complementary nature of the data sources, structuring and linking the extracted information results in a symbiosis that mutually enriches their information. Over this multimodal knowledge base, we propose similarity measures for the discovery of musical entities, yielding a culturally sound navigation space.

Gopala Krishna Koduri
RDF Based Management, Syndication and Aggregation of Web Content

Significant parts of the web consist of documents that are automatically generated or are aggregations of content from different sources. These practices rely on three major tasks: management, syndication and aggregation of web content. For these tasks a heterogeneous set of imperative tools, scripts and systems is available, which are either incompatible or compatible only along very thin lines, e.g., via RSS. In this document, we propose a novel architecture for dealing with the management, syndication and aggregation of web content based on web standards, in particular RDF and Linked Data principles. This yields a declarative, compatible and more efficient way of performing these tasks that effectively merges them, while narrowing the gap between the web of documents and the web of data.

Niels Ockeloen
Towards Ontology Refinement by Combination of Machine Learning and Attribute Exploration

We propose a new method for knowledge acquisition and ontology refinement for the Semantic Web. The method is based on a combination of the attribute exploration algorithm from formal concept analysis and an active learning approach to machine learning classification. It enables the utilization of Linked Data during ontology refinement in a way that makes it possible to use remote SPARQL endpoints. We also report on a preliminary experimental evaluation and argue that our method is reasonable and useful.
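
One way such a combination can touch remote data, sketched below purely as an assumption about the workflow rather than the authors' procedure, is to challenge a candidate implication from attribute exploration ("everything with attribute A also has attribute B") by asking a SPARQL endpoint for a counterexample; the endpoint URL and property names are placeholders.

```python
# Minimal sketch: look for a counterexample to an attribute implication
# in a remote Linked Data source via SPARQL.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX ex: <http://example.org/attrs/>
    SELECT ?x WHERE {
        ?x ex:hasAttributeA true .
        FILTER NOT EXISTS { ?x ex:hasAttributeB true }
    } LIMIT 1
""")

bindings = endpoint.query().convert()["results"]["bindings"]
if bindings:
    print("counterexample found:", bindings[0]["x"]["value"])
else:
    print("no counterexample in this dataset; implication accepted so far")
```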

Jedrzej Potoniec
Backmatter
Metadata
Title
Knowledge Engineering and Knowledge Management
Edited by
Patrick Lambrix
Eero Hyvönen
Eva Blomqvist
Valentina Presutti
Guilin Qi
Uli Sattler
Ying Ding
Chiara Ghidini
Copyright Year
2015
Electronic ISBN
978-3-319-17966-7
Print ISBN
978-3-319-17965-0
DOI
https://doi.org/10.1007/978-3-319-17966-7