2017 | Book

KI 2017: Advances in Artificial Intelligence

40th Annual German Conference on AI, Dortmund, Germany, September 25–29, 2017, Proceedings

About this book

This book constitutes the refereed proceedings of the 40th Annual German Conference on Artificial Intelligence, KI 2017, held in Dortmund, Germany, in September 2017.

The 20 revised full technical papers presented together with 16 short technical communications were carefully reviewed and selected from 73 submissions.

The conference covers a range of topics including agents, robotics, cognitive sciences, machine learning, planning, knowledge representation, reasoning, and ontologies, with numerous applications in areas such as social media, psychology, and transportation systems, reflecting the richness and diversity of the field.

Table of Contents

Frontmatter

Full Technical Papers

Frontmatter
Employing a Restricted Set of Qualitative Relations in Recognizing Plain Sketches

In this paper, we employ aspects of machine learning, computer vision, and qualitative representations to build a classifier of plain sketches. The paper proposes a hybrid technique for accurately recognizing hand-drawn sketches, by relying on a set of qualitative relations between the strokes that compose such sketches, and by taking advantage of two major perspectives for processing images. Our implementation shows promising results for recognizing sketches that have been hand-drawn by human participants.
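
To make the idea of qualitative relations between strokes concrete, here is a toy sketch computing coarse relations from stroke bounding boxes. The relation vocabulary is a hypothetical stand-in; the abstract does not specify the paper's restricted relation set.

```python
def qualitative_relation(b1, b2):
    """Coarse qualitative relation between two stroke bounding boxes
    (x_min, y_min, x_max, y_max); the vocabulary here is illustrative,
    not the paper's actual restricted relation set."""
    if b1[2] < b2[0]:
        return "left-of"
    if b2[2] < b1[0]:
        return "right-of"
    if b1[0] >= b2[0] and b1[1] >= b2[1] and b1[2] <= b2[2] and b1[3] <= b2[3]:
        return "inside"
    return "overlaps"

# two strokes: a small circle inside a larger square outline
print(qualitative_relation((2, 2, 4, 4), (1, 1, 5, 5)))  # inside
```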

Ahmed M. H. Abdelfattah, Wael Zakaria
Interval Based Relaxation Heuristics for Numeric Planning with Action Costs

Many real-world problems can be expressed in terms of states and actions that modify the world to reach a certain goal. Such problems can be solved by automated planning. Numeric planning supports numeric quantities such as resources or physical properties in addition to the propositional variables from classical planning. We approach numeric planning with heuristic search and introduce adaptations of the relaxation heuristics $h_\text{max}$, $h_\text{add}$ and $h_\text{FF}$ to interval based relaxation frameworks. In contrast to previous approaches, the heuristics presented in this paper are not limited to fragments of numeric planning with instantaneous actions (such as linear or acyclic numeric planning tasks) and support action costs.
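
To give a flavour of interval-based relaxation (a simplified sketch, not the paper's algorithm): each numeric variable is relaxed to an interval that can only grow, effects widen intervals via a convex union, and a relaxed comparison holds if any value in the interval satisfies it.

```python
class Interval:
    """Interval relaxation of a numeric variable: instead of a single
    value we track [lo, hi]; relaxed effects can only widen it."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def hull(self, other):
        # convex union: the relaxed variable keeps all previous values
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def add(self, other):
        # interval arithmetic for an additive effect x += y
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def satisfies_geq(self, bound):
        # a relaxed condition x >= c holds if SOME value satisfies it
        return self.hi >= bound

# relaxed exploration: fuel starts at 10, a move adds [-3, -2]
fuel = Interval(10, 10)
move_cost = Interval(-3, -2)
fuel = fuel.hull(fuel.add(move_cost))            # fuel now [7, 10]
print(fuel.lo, fuel.hi, fuel.satisfies_geq(8))   # 7 10 True
```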

Johannes Aldinger, Bernhard Nebel
Expected Outcomes and Manipulations in Online Fair Division

Two simple and attractive mechanisms for the fair division of indivisible goods in an online setting are Like and Balanced Like. We study some fundamental computational problems concerning the outcomes of these mechanisms. In particular, we consider what expected outcomes are possible, what outcomes are necessary and how to compute their exact outcomes. In general, we show that such questions are more tractable to compute for Like than for Balanced Like. As Like is strategy proof but Balanced Like is not, we also consider the computational problem of how, with Balanced Like, an agent can compute a strategic bid to improve their outcome. We prove that this problem is intractable in general.
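
Both mechanisms are simple enough to state in a few lines. A minimal simulation (our illustration, not code from the paper), assuming the binary "like" utilities of the authors' model:

```python
import random

def like(items, likes, agents):
    """Like: allocate each arriving item uniformly at random
    among the agents that like it."""
    alloc = {a: [] for a in agents}
    for item in items:
        fans = [a for a in agents if item in likes[a]]
        if fans:
            alloc[random.choice(fans)].append(item)
    return alloc

def balanced_like(items, likes, agents):
    """Balanced Like: restrict the random choice to the liking
    agents that currently hold the fewest items."""
    alloc = {a: [] for a in agents}
    for item in items:
        fans = [a for a in agents if item in likes[a]]
        if fans:
            fewest = min(len(alloc[a]) for a in fans)
            tied = [a for a in fans if len(alloc[a]) == fewest]
            alloc[random.choice(tied)].append(item)
    return alloc

agents = ["ann", "bob"]
likes = {"ann": {"i1", "i2"}, "bob": {"i2"}}
print(balanced_like(["i1", "i2"], likes, agents))  # {'ann': ['i1'], 'bob': ['i2']}
```

As the abstract notes, Like is strategy proof while Balanced Like is not; the balancing step is exactly what a strategic agent can try to exploit.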

Martin Aleksandrov, Toby Walsh
Most Competitive Mechanisms in Online Fair Division

This paper combines two key ingredients for online algorithms, competitive analysis (e.g., the competitive ratio) and advice complexity (e.g., the number of advice bits needed to improve online decisions), in the context of a simple online fair division model where items arrive one by one and are allocated to agents via some mechanism. We consider four such online mechanisms: the popular Ranking matching mechanism adapted from online bipartite matching, and the Like, Balanced Like and Maximum Like allocation mechanisms first introduced for online fair division problems. Our first contribution is a competitive analysis of these mechanisms with respect to the expected size of the matching, the utilitarian welfare, and the egalitarian welfare. We also suppose that an oracle can give a number of advice bits to the mechanisms. Our second contribution is a set of impossibility results; e.g., no mechanism can achieve the egalitarian outcome of the optimal offline mechanism when given only partial advice from the oracle. Our third contribution is to quantify the competitive performance of these four mechanisms w.r.t. the number of oracle requests they can make. We thus present a most competitive mechanism for each objective.

Martin Aleksandrov, Toby Walsh
A Thorough Formalization of Conceptual Spaces

The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a high-dimensional space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define computationally efficient operations on concepts (intersection, union, and projection onto a subspace) and show that these operations can support both learning and reasoning processes.
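
A toy illustration of the geometric idea (our simplification: an axis-aligned box core standing in for a general fuzzy star-shaped set): membership is maximal on the core and decays exponentially with distance from it.

```python
import math

def membership(x, core_lo, core_hi, mu0=1.0, c=2.0):
    """Fuzzified concept: full membership mu0 inside the core box,
    exponential decay exp(-c * distance) outside of it."""
    # Euclidean distance from point x to the axis-aligned core box
    d = math.sqrt(sum(max(lo - xi, 0, xi - hi) ** 2
                      for xi, lo, hi in zip(x, core_lo, core_hi)))
    return mu0 * math.exp(-c * d)

# 2-d toy concept on (sweetness, roundness) dimensions
print(membership((0.5, 0.5), (0.4, 0.4), (0.6, 0.7)))  # inside core -> 1.0
print(membership((0.9, 0.5), (0.4, 0.4), (0.6, 0.7)))  # outside -> ~0.55
```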

Lucas Bechberger, Kai-Uwe Kühnberger
Propagating Maximum Capacities for Recommendation

Neighborhood-based approaches often fail in sparse scenarios; a direct implication is that recommender systems exploiting co-occurring items often perform inappropriately poorly. As a remedy, we propose to propagate information (e.g., similarities) across the item graph to leverage sparse data. Instead of processing only directly connected items (e.g., co-occurrences), the similarity of two items is defined as the maximum capacity path interconnecting them. Our approach is a generalization of neighborhood-based methods, which are obtained as special cases when restricting path lengths to one. We present two efficient online computation schemes and report on empirical results.
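
Maximum capacity (widest) paths can be computed with a max-min variant of Dijkstra's algorithm; the sketch below is a generic textbook version, not the paper's online computation schemes, and assumes similarities in [0, 1].

```python
import heapq

def max_capacity(graph, source):
    """Maximum-capacity (widest) paths from source: the similarity of
    two items is the largest bottleneck weight over all connecting paths.
    graph: {node: [(neighbour, weight), ...]} with weights = similarities."""
    cap = {source: 1.0}
    heap = [(-1.0, source)]                  # max-heap via negation
    while heap:
        c, u = heapq.heappop(heap)
        c = -c
        if c < cap.get(u, 0.0):              # stale queue entry
            continue
        for v, w in graph.get(u, []):
            new_cap = min(c, w)              # bottleneck along the path
            if new_cap > cap.get(v, 0.0):
                cap[v] = new_cap
                heapq.heappush(heap, (-new_cap, v))
    return cap

g = {"a": [("b", 0.9)], "b": [("c", 0.7)], "a2": [("c", 0.3)]}
print(max_capacity(g, "a"))   # c reachable with bottleneck 0.7
```

Restricting paths to length one recovers plain co-occurrence similarity, matching the special case mentioned in the abstract.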

Ahcène Boubekki, Ulf Brefeld, Cláudio Leonardo Lucchesi, Wolfgang Stille
Preventing Groundings and Handling Evidence in the Lifted Junction Tree Algorithm

For inference in probabilistic formalisms with first-order constructs, lifted variable elimination (LVE) is one of the standard approaches for single queries. To handle multiple queries efficiently, the lifted junction tree algorithm (LJT) uses a specific representation of a first-order knowledge base and LVE in its computations. Unfortunately, LJT induces unnecessary groundings in cases where the standard LVE algorithm, GC-FOVE, has a fully lifted run. Additionally, LJT does not handle evidence explicitly. We extend LJT (i) to identify and prevent unnecessary groundings and (ii) to effectively handle evidence in a lifted manner. Given multiple queries, e.g., in machine learning applications, our extension computes answers faster than LJT and GC-FOVE.

Tanya Braun, Ralf Möller
Improving the Cache-Efficiency of Shortest Path Search

Flood-filling algorithms as used for coloring images and shadow casting show that improved locality greatly increases cache performance and, in turn, reduces the running time of an algorithm. In this paper we look at Dijkstra's method for computing shortest paths, for example to generate pattern databases. As cache-improving contributions, we propose edge-cost factorization and flood-filling the memory layout of the graph. We conduct experiments on commercial game maps and compare the new priority queues with advanced heap implementations as well as with alternative bucket implementations.
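
For context, a bucket-based priority queue in Dijkstra's algorithm (Dial's algorithm) looks as follows; this generic sketch assumes small integer edge costs and is not the paper's implementation.

```python
from collections import deque

def dial_dijkstra(graph, source, max_edge_cost):
    """Dijkstra with a bucket priority queue (Dial's algorithm):
    with integer edge costs in [0, C], tentative distances live in
    C * |V| + 1 buckets that are scanned sequentially, which is
    far more cache-friendly than pointer-chasing heap operations."""
    n = len(graph)
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    buckets = [deque() for _ in range(max_edge_cost * n + 1)]
    buckets[0].append(source)
    for d in range(len(buckets)):
        while buckets[d]:
            u = buckets[d].popleft()
            if d > dist[u]:          # stale entry, already settled closer
                continue
            for v, w in graph[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)
    return dist

g = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(dial_dijkstra(g, 0, 5))   # {0: 0, 1: 2, 2: 3}
```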

Stefan Edelkamp
Automating Emendations of the Ontological Argument in Intensional Higher-Order Modal Logic

A shallow semantic embedding of an intensional higher-order modal logic (IHOML) in Isabelle/HOL is presented. IHOML draws on Montague/Gallin intensional logics and was introduced by Melvin Fitting in his textbook Types, Tableaus and Gödel's God in order to discuss his emendation of Gödel's ontological argument for the existence of God. Utilizing IHOML, the most interesting parts of Fitting's textbook are formalized, automated and verified in the Isabelle/HOL proof assistant. A particular focus is on three variants of the ontological argument which avoid the modal collapse, a strongly criticized side-effect in Gödel's and Scott's original work.

David Fuenmayor, Christoph Benzmüller
Real-Time Public Transport Delay Prediction for Situation-Aware Routing

Situation-aware route planning is gathering increasing interest. The proliferation of various sensor technologies in smart cities allows the incorporation of real-time data and its predictions into the trip planning process. We present a system for individual multi-modal trip planning that incorporates predictions of future public transport delays in routing. Future delay times are computed by a Spatio-Temporal Random Field based on a stream of current vehicle positions. Conditioning spatial regression on intermediate predictions of a discrete probabilistic graphical model makes it possible to incorporate historical data, streamed online data and a rich dependency structure at the same time. We demonstrate the system with a real-world use case in the city of Warsaw, Poland.

Lukas Heppe, Thomas Liebig
Online Multi-object Tracking-by-Clustering for Intelligent Transportation System with Neuromorphic Vision Sensor

Instead of wastefully sending entire images at fixed frame rates, neuromorphic vision sensors transmit only the local pixel-level changes caused by movement in a scene, at the time they occur. This results in a stream of events with a latency on the order of microseconds. While these sensors offer tremendous advantages in terms of latency and bandwidth, they require new, adapted approaches to computer vision due to their unique event-based pixel-level output. In this contribution, we propose an online multi-target tracking system for neuromorphic vision sensors, which is the first neuromorphic vision system in intelligent transportation systems. In order to track moving targets, a fast and simple object detection algorithm using clustering techniques is developed. To make full use of the low latency, we integrate an online tracking-by-clustering system running at a high frame rate, which far exceeds the real-time capabilities of traditional frame-based industry cameras. The performance of the system is evaluated using real-world dynamic vision sensor data of a highway bridge scenario. We hope that our attempt will motivate further research on neuromorphic vision sensors for intelligent transportation systems.
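
A minimal sketch of detection by clustering on an event stream (our toy version; it does not reproduce the paper's detector): each event either joins a nearby recent cluster or seeds a new one, and stale clusters expire.

```python
def cluster_events(events, radius=5.0, decay=10000):
    """Greedy online clustering of DVS events (x, y, t_microseconds):
    an event joins the nearest recent cluster within `radius` pixels,
    otherwise it seeds a new cluster; clusters older than `decay`
    microseconds are dropped."""
    clusters = []   # each cluster: {"x", "y", "t", "n"}
    for x, y, t in events:
        clusters = [c for c in clusters if t - c["t"] < decay]
        best = None
        for c in clusters:
            if (c["x"] - x) ** 2 + (c["y"] - y) ** 2 <= radius ** 2:
                best = c
                break
        if best is None:
            clusters.append({"x": x, "y": y, "t": t, "n": 1})
        else:
            best["n"] += 1
            alpha = 1.0 / best["n"]        # running mean of positions
            best["x"] += alpha * (x - best["x"])
            best["y"] += alpha * (y - best["y"])
            best["t"] = t
    return clusters

evts = [(10, 10, 0), (11, 10, 50), (40, 40, 80), (12, 11, 120)]
print(cluster_events(evts))   # two clusters: around (10,10) and (40,40)
```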

Gereon Hinz, Guang Chen, Muhammad Aafaque, Florian Röhrbein, Jörg Conradt, Zhenshan Bing, Zhongnan Qu, Walter Stechele, Alois Knoll
A Generalization of Probabilistic Argumentation with Dempster-Shafer Theory

One of the first (and key) steps in analyzing an argumentative exchange is to reconstruct complete arguments from utterances which may carry just enthymemes. In this paper, using legal argument from analogy, we argue that in this reconstruction process interpreters may have to deal with a kind of uncertainty that is more appropriately represented in Dempster-Shafer (DS) theory than in classical probability theory. Hence we generalize and relax existing frameworks of Probabilistic Argumentation (PAF), which are currently based on classical probability theory, into what we refer to as DS-based Argumentation Frameworks (DSAF). Concretely, we first define a DSAF form and semantics by generalizing existing PAF form and semantics. We then present a method to translate existing proof procedures for standard Abstract Argumentation into DSAF inference procedures. Finally we provide a Prolog-based implementation of the resulting DSAF inference procedures.
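
At the heart of DS theory is Dempster's rule of combination. A compact reference implementation of the standard textbook rule (not the paper's DSAF procedures) over frozenset focal elements:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset focal element: mass}; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict, combination undefined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

A, B = frozenset({"accept"}), frozenset({"reject"})
m1 = {A: 0.6, A | B: 0.4}     # 0.4 mass left uncommitted
m2 = {B: 0.3, A | B: 0.7}
print(dempster_combine(m1, m2))
```

Unlike a classical probability distribution, mass may rest on non-singleton sets such as A | B, which is exactly the extra expressiveness the DSAF generalization exploits.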

Nguyen Duy Hung
Evolving Kernel PCA Pipelines with Evolution Strategies

This paper introduces an evolutionary tuning approach for a pipeline of preprocessing methods and kernel principal component analysis (PCA) employing evolution strategies (ES). A simple (1+1)-ES adapts the imputation method, various preprocessing steps like normalization and standardization, and optimizes the parameters of kernel PCA. A small experimental study on a benchmark data set with missing values demonstrates that the evolutionary kernel PCA pipeline can be tuned with relatively few optimization steps, which makes evolutionary tuning applicable to scenarios with very large data sets.
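
A minimal sketch of the idea, tuning only the RBF kernel width of scikit-learn's KernelPCA with a (1+1)-ES and a 1/5th-success-style step-size rule; the paper's pipeline also adapts imputation and normalization, which we omit here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def reconstruction_error(X, gamma):
    """Fitness: reconstruction error of kernel PCA with an RBF kernel
    (fit_inverse_transform enables the pre-image approximation)."""
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    Z = kpca.fit_transform(X)
    return np.mean((X - kpca.inverse_transform(Z)) ** 2)

def one_plus_one_es(X, sigma=0.5, steps=50, seed=0):
    """(1+1)-ES on log(gamma): mutate, keep the candidate on
    improvement, and adapt the step size after each trial."""
    rng = np.random.default_rng(seed)
    log_gamma = 0.0
    best = reconstruction_error(X, np.exp(log_gamma))
    for _ in range(steps):
        cand = log_gamma + sigma * rng.standard_normal()
        err = reconstruction_error(X, np.exp(cand))
        if err <= best:                    # accept on improvement
            log_gamma, best = cand, err
            sigma *= 1.22                  # grow step size on success
        else:
            sigma *= 0.95                  # shrink on failure
    return np.exp(log_gamma), best

X = np.random.default_rng(1).normal(size=(60, 4))
print(one_plus_one_es(X))
```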

Oliver Kramer
An Experimental Study of Dimensionality Reduction Methods

Dimensionality reduction (DR) lowers the dimensionality of a high-dimensional data set by reducing the number of features for each pattern. The importance of DR techniques for data analysis and visualization led to the development of a large diversity of DR methods. The lack of comprehensive comparative studies makes it difficult to choose the best DR methods for a particular task based on known strengths and weaknesses. To close the gap, this paper presents an extensive experimental study comparing 29 DR methods on 13 artificial and real-world data sets. The performance assessment of the study is based on six quantitative metrics. According to our benchmark and evaluation scheme, the methods mMDS, GPLVM, and PCA turn out to outperform their competitors, although exceptions are revealed for special cases.

Almuth Meier, Oliver Kramer
Planning with Independent Task Networks

Task networks are a powerful tool for AI planning. Classical approaches like forward STN planning and SHOP typically devise non-deterministic algorithms that can be operationalized using classical graph search techniques such as A*. For two reasons, however, this strategy is sometimes inefficient. First, identical tasks might be resolved several times within the search process, i.e., the same subproblem is solved repeatedly instead of being reused. Second, large parts of the search space might be redundant if some of the objects in the planning domain are substitutable. In this paper, we present an extension of simple task networks that avoids these problems and enables a much more efficient planning process. Our main innovation is the creation of new constants during planning combined with AND-OR graph search. To demonstrate the advantages of these techniques, we present a case study in the field of automated service composition, in which search space reductions of several orders of magnitude can be achieved.

Felix Mohr, Theo Lettmann, Eyke Hüllermeier
Complexity-Aware Generation of Workflows by Process-Oriented Case-Based Reasoning

One of the biggest challenges in business process management is the creation of appropriate and efficient workflows. This calls for intelligent, knowledge-based systems that assist domain experts in this endeavor. In this paper we investigate workflow creation by applying Process-Oriented Case-Based Reasoning (POCBR). We introduce POCBR and describe how it can be applied to the experience-based generation of workflows by retrieval and adaptation of available best-practice workflow models. While existing approaches have already demonstrated their feasibility in principle, the generated workflows are not optimized with respect to complexity requirements. However, there is high interest in workflows with low complexity, e.g., to ensure appropriate enactment as well as understandability of the workflow. The main contribution of this paper is thus a novel approach that considers workflow complexity during workflow generation. To this end, a complexity measure for workflows is proposed and integrated into the retrieval and adaptation process. An experimental evaluation with real cooking recipes clearly demonstrates the benefits of the described approach.

Gilbert Müller, Ralph Bergmann
LiMa: Sequential Lifted Marginal Filtering on Multiset State Descriptions

Maintaining the a-posteriori distribution of categorical states given a sequence of noisy and ambiguous observations, e.g. sensor data, can lead to situations where one observation corresponds to a large number of different states. We call these states symmetrical, as they cannot be distinguished given the observation. Considering each of them during inference is computationally infeasible, even for small scenarios. However, the number of situations (called hypotheses) can be reduced by abstracting from particular ones and representing all symmetrical states in a single abstract state. We propose a novel Bayesian Filtering algorithm that performs this abstraction. The algorithm, which we call Lifted Marginal Filtering (LiMa), is inspired by Lifted Inference and combines techniques known from Computational State Space Models and Multiset Rewriting Systems to perform efficient sequential inference on a parametric multiset state description. We demonstrate that our approach works by comparing LiMa with conventional filtering.

Max Schröder, Stefan Lüdtke, Sebastian Bader, Frank Krüger, Thomas Kirste
A Priori Advantages of Meta-Induction and the No Free Lunch Theorem: A Contradiction?

Recently a new account of the problem of induction has been developed [1], based on a priori advantages of regret-weighted meta-induction (RW) in online learning [2]. The claimed a priori advantages seem to contradict the no free lunch (NFL) theorem, which asserts that relative to a state-uniform prior distribution (SUPD) over possible worlds all (non-clairvoyant) prediction methods have the same expected predictive success. In this paper we propose a solution to this problem based on four novel results: (1) RW enjoys free lunches, i.e., its predictive long-run success dominates that of other prediction strategies. (2) Yet the NFL theorem applies to online prediction tasks provided the prior distribution is a SUPD. (3) The SUPD is maximally induction-hostile and assigns a probability of zero to all possible worlds in which RW enjoys free lunches; this dissolves the apparent conflict with the NFL. (4) The a priori advantages of RW can be demonstrated even under the assumption of a SUPD. Further advantages become apparent when a frequency-uniform distribution is considered.
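
For intuition, a compact sketch of regret-weighted meta-induction in the spirit of [2]: the meta-method predicts a weighted average of the accessible methods, weighting each by its positive regret, i.e., how much better it has performed than the meta-method so far. This is our simplification for predictions and outcomes in [0, 1]; details of the published strategy may differ.

```python
def rw_meta_induction(predictions, outcomes):
    """Regret-weighted meta-induction on a sequence of rounds.
    predictions: list of tuples, one forecast per accessible method;
    outcomes: the realized events; success = 1 - |prediction - outcome|."""
    n_methods = len(predictions[0])
    method_score = [0.0] * n_methods     # cumulative success per method
    meta_score = 0.0
    for preds, y in zip(predictions, outcomes):
        regrets = [max(s - meta_score, 0.0) for s in method_score]
        total = sum(regrets)
        if total > 0:
            meta_pred = sum(r * p for r, p in zip(regrets, preds)) / total
        else:
            meta_pred = sum(preds) / n_methods   # no regret yet: uniform
        meta_score += 1.0 - abs(meta_pred - y)
        for i, p in enumerate(preds):
            method_score[i] += 1.0 - abs(p - y)
    return meta_score / len(outcomes)

preds = [(0.9, 0.1)] * 20   # two accessible methods with fixed forecasts
ys = [1] * 20               # the event keeps occurring
print(rw_meta_induction(preds, ys))   # approaches the best method's success
```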

Gerhard Schurz, Paul Thorn
Dynamic Map Update of Non-static Facility Logistics Environment with a Multi-robot System

Autonomous robots need to perceive and represent their environments and act accordingly. Using simultaneous localization and mapping (SLAM) methods, robots can build maps of the environment which are efficient for localization and path planning as long as the environment remains unchanged. However, facility logistics environments are not static, because pallets and other obstacles are stored temporarily. This paper proposes a novel solution for updating maps of changing environments (i.e. environments with low-dynamic or semi-static objects) in real-time with multiple robots. Each robot is equipped with a laser range sensor and runs localization to estimate its position. Each robot senses changes in the environment with respect to a current map, initially built with a SLAM method, and constructs a temporary map which is merged into the current map using localization information and line features of the map. This procedure enables the creation of long-term mapping robot systems for facility logistics.

Nayabrasul Shaik, Thomas Liebig, Christopher Kirsch, Heinrich Müller
Similarity and Contrast on Conceptual Spaces for Pertinent Description Generation

Within the general objective of conceiving a cognitive architecture for image interpretation able to generate outputs relevant to several target user profiles, the paper elaborates on a set of operations that should be provided by a cognitive space to guarantee the generation of relevant descriptions. First, it develops a working definition of the contrast operation. Then, revisiting well-known results in cognitive studies, it sketches a definition of similarity based on contrast, distinguished from the metric defined on the conceptual space.

Giovanni Sileno, Isabelle Bloch, Jamal Atif, Jean-Louis Dessalles

Technical Communications

Frontmatter
Alarm Management on a Liquid Bulk Terminal

This paper reports on a research project to use Artificial Intelligence (AI) technology to reduce the alarm handling workload of control room operators in a terminal in the harbour of Antwerp. Several characteristics of this terminal preclude the use of standard methods, such as root cause analysis. Therefore, we focused attention on the process engineers and developed a system to help these engineers reduce the number of alarms that occur. It consists of two components: one to identify interesting alarms and another to analyse them. For both components, user-friendly visualisations were developed.

Bram Aerts, Kylian Van Dessel, Joost Vennekens
Action Model Acquisition Using Sequential Pattern Mining

This paper presents an approach to learn the agents' action model (action blueprints orchestrating transitions of the system state) from plan execution sequences. It does so by representing intra-action and inter-action dependencies in the form of a maximum satisfiability problem (MAX-SAT), and solving it with a MAX-SAT solver to reconstruct the underlying action model. Unlike previous MAX-SAT driven approaches, our chosen dependencies exploit the relationship between consecutive actions, yielding more accurately learnt models.

Ankuj Arora, Humbert Fiorino, Damien Pellier, Sylvie Pesty
Steering Plot Through Personality and Affect: An Extended BDI Model of Fictional Characters

The present paper identifies an important narrative framework that has not yet been employed to implement computational storytelling systems. Grounded in this theory, it suggests using a BDI architecture extended with a personality-based affective appraisal component to model fictional characters. A proof of concept is shown to be capable of generating the plot of a folk tale. This example is used to explore the system's parameters and the plot-space that is spanned by them.

Leonid Berov
Ontological Modelling of a Psychiatric Clinical Practice Guideline

Clinical practice guidelines (CPGs) serve to transfer results from evidence-based medicine into clinical practice. There is growing interest in clinical decision support systems (CDSS) implementing the guideline recommendations; research on such systems typically considers combinations of workflow languages with knowledge representation formalisms. Here, we report on experience with an OWL-based proof-of-concept implementation of parts of the German S3 guideline for schizophrenia. From the information-technological point of view, the salient feature of our implementation is that it represents the CPG entirely as a logic-based ontology, without resorting, e.g., to rule-based action formalisms or hard-wired workflows to capture clinical pathways. Our current goal is to establish that such an implementation is feasible; long-range benefits we expect from the approach are modularity of CPG implementation, ease of maintenance, and logical unity.

Daniel Gorín, Malte Meyn, Alexander Naumann, Miriam Polzer, Ulrich Rabenstein, Lutz Schröder
A Logic Programming Approach to Collaborative Autonomous Robotics

We consider scenarios in which robots need to collaboratively perform a task. Our focus is on scenarios where every robot has the same goal and autonomously decides which next step to take to reach the shared goal. The robots can synchronize by requesting each other's help. Our approach builds on a knowledge-based architecture in which robot goals and behaviour are encoded declaratively using logic programming, Prolog in our case. Each robot executes the same Prolog program and requests help from other robots to cooperatively solve some subtasks. In this paper we present the system architecture and a proof-of-concept scenario of tidying up an apartment, in which a set of robots work autonomously yet collaboratively to tidy up a simulated apartment by placing the scattered objects in their proper places.

Binal Javia, Philipp Cimiano
Parametrizing Cartesian Genetic Programming: An Empirical Study

Since its introduction two decades ago, the way researchers parameterize and optimize Cartesian Genetic Programming (CGP) has remained almost unchanged. In this work we investigate non-standard parameterizations and optimization algorithms for CGP. We show that the conventional way of using CGP, i.e. configuring it as a single line optimized by a (1+4) Evolution Strategies-style search scheme, is a very good choice, but that rectangular CGP geometries and more elaborate metaheuristics, such as Simulated Annealing, can lead to faster convergence rates.

Paul Kaufmann, Roman Kalkreuth
Gesture ToolBox: Touchless Human-Machine Interface Using Deep Learning

Human-Machine Interaction (HMI) is useful in sterile environments such as operating rooms (OR), where surgeons need to interact with scanner images of organs on screens. Contamination issues may arise if the surgeon must touch a keyboard or mouse. In order to reduce contamination and improve the interaction with the images without asking another team member, the Gesture ToolBox project, based on previous methods of Altran Research, has been proposed. Ten different signs from LSF (French Sign Language) have been chosen as a way to interact with the images. In order to detect the signs, deep learning methods have been implemented using a pre-trained Convolutional Neural Network (VGG-16). A Kinect is used to detect the positions of the hand and classify gestures. The system allows the user to select, move, zoom in, or zoom out images of organs on the screen according to the recognised sign. Results with 11 subjects are used to demonstrate the system in the laboratory. Future work will include tests in real situations in an operating room to obtain feedback from surgeons to improve the system.
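
A minimal transfer-learning sketch of the reported setup: a pre-trained VGG-16 with its last layer replaced by a 10-way output for the LSF signs. PyTorch is our choice here (the paper does not state its framework), and the batch below is a random stand-in for cropped hand images.

```python
import torch
import torchvision

# Load an ImageNet-pretrained VGG-16 and replace the final classifier
# layer so it outputs scores for the ten LSF signs.
model = torchvision.models.vgg16(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False            # freeze the convolutional features
model.classifier[6] = torch.nn.Linear(4096, 10)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# one (hypothetical) training step on a batch of hand images
images = torch.randn(8, 3, 224, 224)   # stand-in for real Kinect crops
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```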

Elann Lesnes-Cuisiniez, Jesus Zegarra Flores, Jean-Pierre Radoux
Analysis of Sound Localization Data Generated by the Extended Mainzer Kindertisch

In this paper, we analyze data generated by the extended Mainzer Kindertisch (ERKI setup). The setup uses five loudspeakers to generate 32 virtual sound sources arranged in a half circle. It is a new test setup in the medical field. The task for the test candidate is to locate sounds from different, randomly chosen directions. In our analysis, we use test results of adults and of children and compare them. We show that the ERKI setup is applicable for sound localization and that the generated data contain various properties which can be explained by medical reasons. Further, we demonstrate the possibility to detect noticeable test results with data generated by the ERKI setup.

Daniel Lückehe, Katharina Schmidt, Karsten Plotz, Gabriele von Voigt
A Robust Number Parser Based on Conditional Random Fields

When processing information from unstructured sources, numbers have to be parsed in many cases to do useful reasoning on that information. However, since numbers can be expressed in different ways, a robust number parser that can cope with number representations in different shapes is required in those cases. In this paper, we show how to train such a parser based on Conditional Random Fields. As training data, we use pairs of Wikipedia infobox entries and numbers from public knowledge graphs. We show that it is possible to parse numbers at an accuracy of more than 90%.
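
A toy version of such a parser using the sklearn-crfsuite library, tagging the characters of a surface form as digits or separators; the feature set and label vocabulary are our illustrative choices, not the paper's.

```python
import sklearn_crfsuite

def char_features(s, i):
    """Character-level features for tagging which characters of a
    number's surface form are digits vs. separators."""
    c = s[i]
    return {
        "char": c,
        "is_digit": c.isdigit(),
        "is_sep": c in ".,' ",
        "pos_from_end": len(s) - i,
        "prev": s[i - 1] if i > 0 else "BOS",
        "next": s[i + 1] if i < len(s) - 1 else "EOS",
    }

# toy training pair: "1,234" -> digits vs. thousands separator
X = [[char_features("1,234", i) for i in range(5)]]
y = [["D", "SEP", "D", "D", "D"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))   # [['D', 'SEP', 'D', 'D', 'D']]
```

In practice the training pairs would come, as the abstract describes, from aligning Wikipedia infobox strings with the corresponding numeric values in public knowledge graphs.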

Heiko Paulheim
From Natural Language Instructions to Structured Robot Plans

Research into knowledge acquisition for robotic agents has looked at interpreting natural language instructions meant for humans into robot-executable programs; however, the ambiguities of natural language remain a challenge for such "translations". In this paper, we look at a particular sort of ambiguity: the control flow structure of the program described by the natural language instruction. When several conditional statements appear in a natural language instruction, it is not always clear which of the conditions are to be thought of as alternative options in the same test, and which belong to a code branch triggered by a previous conditional. We augment a system which uses probabilistic reasoning to identify the meaning of the words in a sentence with reasoning about action preconditions and effects, in order to filter out nonsensical code structures. We test our system with sample instruction sheets inspired by analytical chemistry.

Mihai Pomarlan, Sebastian Koralewski, Michael Beetz
Opportunistic Planning with Recovery for Robot Safety

Emerging applications for various types of robots require them to operate autonomously in the vicinity of humans. Ensuring the safety of humans is a permanent requirement that must be upheld at all times. As tasks performed by robots are becoming more complex, adaptive planning components are required. Modern planning approaches do not contribute to a robot system’s safety capabilities, which are thus limited to means like reduction of force and speed or halt of the robot. We argue that during a plan’s execution there should be a backup plan that the robot system can fall back to as an alternative to halting. We present our approach of generating variations of plans by optional exploitation of opportunities, which extend the initially safe plan for extra value when the current safety situation allows it, while maintaining recovery policies to get back to the safe plan in the case of safety-related events.

Bernhard Reiterer, Michael Hofbaur
Towards Simulation-Based Role Optimization in Organizations

The modern workplace is driven by a high amount of available information, as can be observed in various domains, e.g., in Industry 4.0. Hence, the question arises: Which competences do actors need to build an efficient work environment? This paper proposes a simulation-based optimization approach to adapt role configurations for teamwork scenarios. The approach was tested using a multiagent-based job-shop-scheduling model to simulate the effects of various role configurations.

Lukas Reuter, Jan Ole Berndt, Ingo J. Timm
One Knowledge Graph to Rule Them All? Analyzing the Differences Between DBpedia, YAGO, Wikidata & co.

Public Knowledge Graphs (KGs) on the Web are considered a valuable asset for developing intelligent applications. They contain general knowledge which can be used, e.g., for improving data analytics tools, text processing pipelines, or recommender systems. While the large players, e.g., DBpedia, YAGO, or Wikidata, are often considered similar in nature and coverage, there are, in fact, quite a few differences. In this paper, we quantify those differences, and identify the overlapping and the complementary parts of public KGs. From those considerations, we conclude that the KGs are hardly interchangeable, and that each of them has its strengths and weaknesses when it comes to applications in different domains.

Daniel Ringler, Heiko Paulheim
ClassifyHub: An Algorithm to Classify GitHub Repositories

The classification of repositories found on GitHub can be considered a hard task. However, a solution to this task could be helpful for many different applications (e.g., recommender systems). In this paper we present ClassifyHub, an algorithm based on ensemble learning developed for the InformatiCup 2017 competition, which is able to tackle this classification problem with high precision and recall. In addition, we provide a data set of classified repositories for further research.

Marcus Soll, Malte Vosgerau
Bremen Big Data Challenge 2017: Predicting University Cafeteria Load

Big data is a hot topic in research and industry, and the availability of data has never been as high as it is now. Making good use of the data is a challenging research topic in all aspects of industry and society. The Bremen Big Data Challenge invites students to dig deep into big data: in this yearly event, students are challenged to use the month of March to analyze a big dataset and use the knowledge they gained to answer a question. This year, students had to predict the load of the university cafeteria from the load of past years. The best of 24 teams predicted the load with a root mean squared error of 8.6 receipts issued in five minutes, with a fusion system based on the top 5 entries achieving an even better result of 8.28.
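
The fusion result illustrates a general effect: averaging predictors whose errors are not fully correlated can push the RMSE below that of the best individual system. A small numerical illustration with made-up numbers (not the challenge data):

```python
import numpy as np

def rmse(pred, truth):
    """Root mean squared error between predictions and ground truth."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2))

# five hypothetical submissions predicting receipts in three time slots
preds = np.array([[50, 60, 55], [48, 66, 52], [55, 58, 50],
                  [47, 63, 57], [52, 61, 54]], dtype=float)
truth = np.array([51, 62, 53], dtype=float)

print([round(rmse(p, truth), 2) for p in preds])       # individual errors
print(round(rmse(preds.mean(axis=0), truth), 2))       # fused: smaller
```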

Jochen Weiner, Lorenz Diener, Simon Stelter, Eike Externest, Sebastian Kühl, Christian Herff, Felix Putze, Timo Schulze, Mazen Salous, Hui Liu, Dennis Küster, Tanja Schultz
Towards Sentiment Analysis on German Literature

Sentiment Analysis is a Natural Language Processing task that is relevant in a number of contexts, including the analysis of literature. We report on ongoing research towards enabling, for the first time, sentence-level Sentiment Analysis in the domain of German novels. We create a labelled dataset from sentences extracted from German novels and, by adapting existing sentiment classifiers, reach promising F1-scores of 0.67 for binary polarity classification.

Albin Zehe, Martin Becker, Fotis Jannidis, Andreas Hotho
Backmatter
Metadata
Title
KI 2017: Advances in Artificial Intelligence
Editors
Gabriele Kern-Isberner
Dr. Johannes Fürnkranz
Matthias Thimm
Copyright Year
2017
Electronic ISBN
978-3-319-67190-1
Print ISBN
978-3-319-67189-5
DOI
https://doi.org/10.1007/978-3-319-67190-1
