
2018 | Book

Synergies Between Knowledge Engineering and Software Engineering


About this Book

This book compiles a number of contributions originating from the KESE (Knowledge Engineering and Software Engineering) workshop series held from 2005 to 2015. The idea behind the series was the realignment of the knowledge engineering discipline and its strong relation to software engineering, as well as to the classical aspects of artificial intelligence research. The book introduces symbiotic work combining these disciplines, such as aspect-oriented and agile engineering, the use of anti-patterns, and system refinement. Furthermore, it presents successful applications from different domains that were created by combining techniques from both fields.

Table of Contents

Frontmatter

Methodological Studies

Frontmatter
Aspect-Oriented Ontology Development
Abstract
Aspect-Oriented Ontology Development takes inspiration from Aspect-Oriented Programming and provides a novel approach to the problems of ontology modularization and metamodeling by adding support for reified axioms. The book chapter describes the syntax and semantics of Aspect-Oriented Ontology Development, explains its benefits and possible weaknesses as compared to other existing modularization approaches and presents a set of application scenarios as well as a set of supporting tools.
Ralph Schäfermeier, Adrian Paschke
Similarity-Based Retrieval and Automatic Adaptation of Semantic Workflows
Abstract
The increasing demand for individual and more flexible process models and workflows calls for new intelligent process-oriented information systems. Such systems should, among other things, support domain experts in the creation and adaptation of process models or workflows. For this purpose, repositories of best-practice workflows are an important means, as they collect valuable experiential knowledge that can be reused in various ways. In this chapter we present process-oriented case-based reasoning (POCBR) as a method to support the creation and adaptation of workflows based on such knowledge. We provide a general introduction to process-oriented case-based reasoning and a concise view of the POCBR methods we have developed over the past ten years. This includes the graph-based representation of semantic workflows, semantic workflow similarity, similarity-based retrieval, and workflow adaptation based on automatically learned adaptation knowledge. Finally, we sketch several application domains such as traditional business processes, social workflows, and cooking workflows.
Ralph Bergmann, Gilbert Müller
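The similarity-based retrieval sketched in the abstract above can be illustrated in miniature. The following Python sketch is not the chapter's actual POCBR model: it uses a hypothetical task-similarity table (standing in for a taxonomy-based local similarity) and a simplified greedy matching rather than full semantic graph matching.

```python
# Local similarity between task labels (1.0 = identical, 0.0 = unrelated).
# The table below is hypothetical illustration data.
TASK_SIM = {
    ("chop", "chop"): 1.0,
    ("chop", "slice"): 0.8,   # sibling concepts in a cooking taxonomy
    ("boil", "boil"): 1.0,
    ("boil", "steam"): 0.6,
}

def local_sim(a, b):
    return TASK_SIM.get((a, b), TASK_SIM.get((b, a), 1.0 if a == b else 0.0))

def workflow_sim(query, case):
    """Global similarity: for each query task, take the best-matching case
    task, then average (a simplified greedy variant of graph matching)."""
    if not query:
        return 0.0
    return sum(max(local_sim(q, c) for c in case) for q in query) / len(query)

def retrieve(query, case_base, k=1):
    """Return the k workflows from the case base most similar to the query."""
    ranked = sorted(case_base, key=lambda c: workflow_sim(query, c), reverse=True)
    return ranked[:k]

case_base = [["chop", "boil"], ["slice", "steam"], ["mix", "bake"]]
print(retrieve(["chop", "boil"], case_base))  # → [['chop', 'boil']]
```

A real POCBR system would additionally exploit control-flow structure and data-flow edges of the workflow graphs when computing similarity.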
Development of Knowledge-Based Systems Which Use Bayesian Networks
Abstract
Bayesian networks allow for a concise graphical representation of decision makers’ knowledge of an uncertain domain. However, there are no well-defined methodologies showing how to use a Bayesian network as the core of a knowledge-based system, and even fewer that cover the case where not all features are supported by the knowledge model. That is to say, the software that has to be released to customers also has to embed functionalities that are not knowledge-based, concerning the information management processes closer to the world of classical software development projects. These components of the software application have to be built according to the practices and methods of the Software Engineering discipline. This chapter is conceived as a guideline for managing and intertwining the languages and techniques of Knowledge Engineering and Software Engineering in order to build a knowledge-based system supported by Bayesian networks.
Isabel M. del Águila, José del Sagrado
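To make concrete what "a Bayesian network as the core of a knowledge-based system" can mean at the code level, here is a deliberately tiny, hand-rolled sketch: a two-node network (Defect → TestFail) with hypothetical probabilities, queried by Bayes' rule. A real system would use a BN library and a larger model; this only illustrates the embedding idea.

```python
P_DEFECT = 0.1                          # hypothetical prior P(defect)
P_FAIL_GIVEN = {True: 0.9, False: 0.2}  # P(test fails | defect present?)

def posterior_defect(test_failed: bool) -> float:
    """P(defect | observed test outcome), computed via Bayes' rule."""
    def likelihood(defect):
        p = P_FAIL_GIVEN[defect]
        return p if test_failed else 1 - p
    joint_true = likelihood(True) * P_DEFECT
    joint_false = likelihood(False) * (1 - P_DEFECT)
    return joint_true / (joint_true + joint_false)

# A failing test raises the defect probability from 0.1 to 1/3.
print(round(posterior_defect(True), 3))
```

The knowledge-based core (the probabilities and the inference) stays cleanly separated from the surrounding conventional application code, which is the division of labour the chapter discusses.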
Knowledge Acquisition During Software Development: Modeling with Anti-patterns
Abstract
Knowledge is a strategic resource that should be acquired and exploited in a timely manner to manage and control software development. Software development is a knowledge-intensive process characterized by increased uncertainty, presenting large variations among different development environments. Project uncertainty and volatility confound traditional knowledge-based processes, since at any time traditional software project management techniques and patterns may fall out of scope. In this chapter a dynamic and constantly adaptive knowledge encapsulation framework is presented. This framework analytically describes (a) metric collection methods, along with metrics that contribute to knowledge creation regarding successful software development, (b) representation mechanisms for the created knowledge in the form of anti-patterns, and (c) a Bayesian network analysis technique for converting the data into knowledge, enabling inference mechanisms for testing the applicability of an anti-pattern. The presented approach is demonstrated in a case study showing both its feasibility and applicability.
Paraskevi Smiari, Stamatia Bibi, Ioannis Stamelos
Knowledge Engineering of System Refinement: What We Learnt from Software Engineering
Abstract
Formal methods are a usual means to avoid errors or bugs in the development, adjustment and maintenance of both software and knowledge bases. This chapter provides a formal method to refine a knowledge base based on insights about its correctness derived from its use in practice. The objective of this refinement technique is to overcome particular invalidities revealed by the application of a case-oriented validation technology, i.e. it is a kind of “learning by examples”. Approaches from AI or Data Mining to solve such problems are often not useful for a system refinement that aims at an appropriate modeling of the domain knowledge in the way humans would express it. Moreover, they often lead to a knowledge base that is difficult to interpret, because it is too far from a natural way of expressing domain knowledge. The refinement process presented here is characterized by (1) using human expertise that is also a product of the validation technique and (2) keeping as much as possible of the original human-made knowledge base. At least the second principle is largely adopted from Software Engineering. This chapter provides a brief introduction to AI rule base refinement approaches so far, as well as an introduction to a validation and refinement framework for rule-based systems. It also states some basic principles for system refinement, which are adopted from Software Engineering. The next section introduces a refinement approach based on these principles and considers this approach from the perspective of the principles. Finally, some more general conclusions for the development, employment, and refinement of complex systems are drawn. The developed technology covers five steps: (1) test case generation, (2) test case experimentation, (3) evaluation, (4) validity assessment, and (5) system refinement. These steps can be performed iteratively, so that the process can be conducted again after improvements have been made.
Rainer Knauf
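The five-step loop described in the abstract above can be sketched with a toy, self-contained example: the "knowledge base" is a single classification rule, test cases are labelled examples, and refinement patches only the invalid threshold while keeping the human-made rule form intact. All data and the refinement step are hypothetical illustrations, not the chapter's actual technique.

```python
def generate_test_cases():
    # (1) test case generation: (body temperature, expected fever verdict)
    return [(36.5, False), (37.2, False), (38.5, True), (39.1, True)]

def run_experiments(kb, cases):
    # (2) test case experimentation: apply the rule to each case
    return [(kb(t), expected) for t, expected in cases]

def evaluate(results):
    # (3) evaluation: collect the failing cases
    return [(got, expected) for got, expected in results if got != expected]

def assess_validity(failures):
    # (4) validity assessment: valid iff no test case fails
    return not failures

def refine(threshold, cases):
    # (5) system refinement: minimal change, adjust only the threshold
    positive_temps = [t for t, fever in cases if fever]
    return min(positive_temps) - 0.1

threshold = 37.0  # initial, slightly wrong, human-made threshold
for _ in range(3):  # iterate the whole five-step process
    cases = generate_test_cases()
    failures = evaluate(run_experiments(lambda t: t > threshold, cases))
    if assess_validity(failures):
        break
    threshold = refine(threshold, cases)
print(round(threshold, 1))  # → 38.4, a threshold that passes all test cases
```

The point of the sketch is the second principle from the abstract: the refinement changes as little of the original knowledge base as possible, here just one parameter.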
Using the Event-B Formal Method and the Rodin Framework for Verifying the Knowledge Base of a Rule-Based Expert System
Abstract
Verification and validation of the knowledge base of an expert system are distinct activities that increase the quality and reliability of these systems. While validation ensures the compliance of a developed knowledge base with the initial requirements, verification ensures that the knowledge base is logically consistent. Our work is focused on the verification activity, a difficult task that mainly consists in determining potential structural errors in the knowledge base. More exactly, we aimed to study the consistency of the knowledge bases of rule-based expert systems that use forward chaining inference, a very important aspect of the verification activity among others, such as completeness and correctness. We use Event-B as a modelling language because it has a mathematical background that allows modelling a dynamic system by specifying its static and dynamic properties. In addition, we use the Rodin platform, a support tool for Event-B, which allows verifying the correctness of the specified systems and their properties. For a better understanding of our method, an example written in the CLIPS language is presented in the chapter.
Marius Brezovan, Costin Badica
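To make the verification target concrete, here is a minimal forward-chaining engine with a naive consistency check, written in Python rather than CLIPS or Event-B. The rules are hypothetical, and the check (a fact asserted together with its `not_` counterpart) is a deliberately simplistic stand-in for the structural consistency properties the chapter verifies formally.

```python
# Each rule: (set of premise facts, single conclusion fact).
RULES = [
    ({"fever", "rash"}, "measles"),
    ({"measles"}, "contagious"),
    ({"vaccinated"}, "not_contagious"),  # deliberately conflicting rule
]

def forward_chain(facts, rules):
    """Fire rules until a fixpoint: add each conclusion whose premises hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def is_consistent(facts):
    """Flag facts derived both positively and as their 'not_' negation."""
    return not any(f"not_{f}" in facts for f in facts)

derived = forward_chain({"fever", "rash", "vaccinated"}, RULES)
print(is_consistent(derived))  # → False: the conflicting rules are detected
```

Event-B would instead model the rule firing as events and discharge the consistency property as a proof obligation in Rodin, covering all reachable states rather than one concrete run.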
Knowledge Engineering for Distributed Case-Based Reasoning Systems
Abstract
This chapter describes how to identify and collect human knowledge and transform it into machine-readable and actionable knowledge. We focus on knowledge acquisition for distributed case-based reasoning systems. Case-based reasoning (CBR) is a well-known methodology for implementing knowledge-intensive decision support systems (Aamodt, Plaza, Artif Intell Commun, 7(1):39–59, 1994) [1] and has been applied in a broad range of applications. It captures experiences in the form of problem-solution pairs, which are recalled when similar problems reoccur. In order to create a CBR system, the initial knowledge has to be identified and captured. In this chapter, we summarise the knowledge acquisition method presented by Bach (Knowledge acquisition for case-based reasoning systems, Ph.D. thesis, University of Hildesheim, München, 2012) [2] and give a running example in the travel medicine domain utilising myCBR, an open source tool for developing CBR systems.
Kerstin Bach
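The attribute-based retrieval that tools like myCBR support follows the local-global principle: local similarity per attribute, amalgamated into a weighted global score. The following travel-medicine flavoured sketch uses hypothetical attributes, weights, and cases; it is an illustration of the principle, not myCBR's API.

```python
def sim_numeric(a, b, max_range):
    """Local similarity for numbers: linear decay with distance."""
    return max(0.0, 1 - abs(a - b) / max_range)

def sim_symbol(a, b):
    """Local similarity for symbols: exact match only (a real system
    would use a taxonomy or a similarity table here)."""
    return 1.0 if a == b else 0.0

WEIGHTS = {"duration_days": 0.4, "region": 0.6}  # hypothetical weights

def case_similarity(query, case):
    """Global similarity: weighted sum of local similarities."""
    s = WEIGHTS["duration_days"] * sim_numeric(
        query["duration_days"], case["duration_days"], 30)
    s += WEIGHTS["region"] * sim_symbol(query["region"], case["region"])
    return s

case_base = [
    {"region": "tropics", "duration_days": 14, "advice": "malaria prophylaxis"},
    {"region": "alpine",  "duration_days": 7,  "advice": "altitude acclimatisation"},
]
query = {"region": "tropics", "duration_days": 10}
best = max(case_base, key=lambda c: case_similarity(query, c))
print(best["advice"])  # → malaria prophylaxis
```

Knowledge acquisition in this setting means eliciting exactly the local similarity measures, weights, and vocabulary from domain experts before any retrieval can work.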

Application Studies

Frontmatter
Agile Knowledge Engineering for Mission Critical Software Requirements
Abstract
This chapter explains how a mission-critical Knowledge-Based System (KBS) has been designed and implemented within a real case study of a governmental organization. Moreover, the KBS has been developed using a novel agile software development methodology. Due to fast-changing operational scenarios and volatile requirements, traditional procedural development methodologies perform poorly. Thus, an Agile-like methodology called iAgile has been exploited. The KBS is based on an ontology used to merge the different mental models of users and developers. Moreover, the ontology of the system is useful for interoperability and knowledge representation. Mission-critical functionalities have been developed in 5-week cycles, along with the ontology. The KBS thus serves three main activities: (i) requirement disambiguation, (ii) interoperability with other legacy systems, and (iii) information retrieval and display from different informative sources.
Paolo Ciancarini, Angelo Messina, Francesco Poggi, Daniel Russo
Knowledge Engineering for Decision Support on Diagnosis and Maintenance in the Aircraft Domain
Abstract
The diagnosis of machines in technical domains demands careful attention: dozens of relations between individual parts have to be considered, and operational or environmental conditions can affect measurable symptoms and diagnoses. An aircraft is one of the most complex machines built by humans, and therefore the diagnosis and maintenance of aircraft requires intelligent and efficient solutions.
Pascal Reuss, Rotem Stram, Klaus-Dieter Althoff, Wolfram Henkel, Frieder Henning
The Role of Ontologies and Decision Frameworks in Computer-Interpretable Guideline Execution
Abstract
Computer-Interpretable Guidelines (CIGs) are machine readable representations of Clinical Practice Guidelines (CPGs) that serve as the knowledge base in many knowledge-based systems oriented towards clinical decision support. Herein we disclose a comprehensive CIG representation model based on Web Ontology Language (OWL) along with its main components. Additionally, we present results revealing the expressiveness of the model regarding a selected set of CPGs. The CIG model then serves as the basis of an architecture for an execution system that is able to manage incomplete information regarding the state of a patient through Speculative Computation. The architecture allows for the generation of clinical scenarios when there is missing information for clinical parameters.
Paulo Novais, Tiago Oliveira, Ken Satoh, José Neves
Metamarket – Modelling User Actions in the Digital World
Abstract
We present Metamarket (http://metamarket.info), an ontology of user actions on the Web, as a foundation for understanding user preferences from Web activities, and relate it to the state of the art. Metamarket is implemented using the Web Ontology Language (OWL) and is the basis of a platform offering Linked Data access for the purpose of market research, allowing intelligent applications to enhance local business sales and make new business insights possible. In particular, the use of user-generated content, i.e., digital works in which data from one or more sources is combined and presented in innovative ways, is a powerful way to expose this value. Although there are many approaches to publishing and using consumer data, we believe Linked Data is a key solution to many of the challenges and can lower the cost and complexity of developing these applications. In addition, Metamarket can be used to develop intelligent UX applications that are responsive to user needs, and it can be extended towards modeling emotions.
Adrian Giurca
OntoMaven - Maven-Based Ontology Development and Management of Distributed Ontology Repositories
Abstract
In collaborative agile ontology development projects support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The Apache Maven approach has proven its success in distributed collaborative Software Engineering by its widespread adoption. The contribution of this paper is a new design artifact called OntoMaven. OntoMaven adopts the Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories.
Adrian Paschke, Ralph Schäfermeier
Non-distracting, Continuous Collection of Software Development Process Data
Abstract
Knowledge management initiatives often fail when companies lack the time and resources to focus on the meaning, implications, capturing and sharing of organizational knowledge. This problem becomes even more severe in software development companies: software is invisible, which makes it difficult to reason and communicate about it. It is hard to understand status, e.g., what the current state of the project is, which difficulties exist, and which problems might lie ahead. This is why we need measurement to obtain data about software, how it is created, and how it is used. This chapter presents non-distracting, automatic measurement, based on extending code editors or instrumenting the source code of products, to log how developers or users interact with the software. We present two examples of how data was collected, analyzed and interpreted. The methods discussed here describe our experiences in developing systems that support software development teams in collecting and organizing knowledge about their software development process, based on non-distracting, automatic data collection technologies, dashboards, and the Goal-Question-Metric approach.
Andrea Janes
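The source-code instrumentation described above can be sketched in a few lines: a decorator that records a usage event per call without interrupting the user. The event format and feature name below are hypothetical; a real setup would ship such events to a dashboard driven by Goal-Question-Metric goals.

```python
import functools
import json
import time

def instrumented(event_log):
    """Decorator that appends one timestamped usage event per call,
    leaving the wrapped function's behaviour untouched."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            event_log.append(json.dumps({
                "feature": func.__name__,
                "duration_s": round(time.time() - start, 3),
            }))
            return result
        return wrapper
    return decorator

events = []

@instrumented(events)
def save_document():
    return "saved"

save_document()
print(len(events))  # → 1: one usage event was recorded, non-distractingly
```

Because the measurement happens inside the wrapper, neither the developer calling `save_document` nor the end user sees any extra interaction, which is the "non-distracting" property the chapter emphasises.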
Metadata
Title
Synergies Between Knowledge Engineering and Software Engineering
Edited by
Grzegorz J. Nalepa
Joachim Baumeister
Copyright Year
2018
Electronic ISBN
978-3-319-64161-4
Print ISBN
978-3-319-64160-7
DOI
https://doi.org/10.1007/978-3-319-64161-4