
2017 | Book

BNAIC 2016: Artificial Intelligence

28th Benelux Conference on Artificial Intelligence, Amsterdam, The Netherlands, November 10-11, 2016, Revised Selected Papers


About this Book

This book contains a selection of the best papers that were presented at the 28th edition of the annual Benelux Conference on Artificial Intelligence, BNAIC 2016. The conference took place on November 10-11, 2016, in Hotel Casa 400 in Amsterdam. The conference was jointly organized by the University of Amsterdam and the Vrije Universiteit Amsterdam, under the auspices of the Benelux Association for Artificial Intelligence (BNVKI) and the Dutch Research School for Information and Knowledge Systems (SIKS). The objective of BNAIC is to promote and disseminate recent research developments in Artificial Intelligence, particularly within Belgium, Luxembourg and the Netherlands, although it does not exclude contributions from countries outside the Benelux.
The 13 contributions presented in this volume (8 regular papers, 4 student papers, and 1 demonstration paper) were carefully reviewed and selected from 93 submissions. They address various aspects of artificial intelligence such as natural language processing, agent technology, game theory, problem solving, machine learning, human-agent interaction, AI & education, and data analysis.



Regular Papers

Predicting Civil Unrest by Categorizing Dutch Twitter Events
We propose a system that assigns topical labels to automatically detected events in the Twitter stream. The automatic detection and labeling of events in social media streams is challenging due to the large number and variety of messages that are posted. The early detection of future social events, specifically those associated with civil unrest, has wide applicability in areas such as security, e-governance, and journalism. We used machine learning algorithms and encoded the social media data using a wide range of features. Experiments show high-precision (but low-recall) performance in the first step. We designed a second step that exploits classification probabilities, boosting the recall of our category of interest: social action events.
Rik van Noord, Florian A. Kunneman, Antal van den Bosch
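The two-step scheme sketched in this abstract (a high-precision first pass, then a probability-based second pass to recover recall for the class of interest) can be illustrated as follows. The class names, probability values, and thresholds below are invented for illustration and are not taken from the paper:

```python
# Sketch of two-step event labeling: a confident first pass, then a lowered
# probability threshold for the class of interest to boost its recall.
# All labels, probabilities, and thresholds here are hypothetical.

DEFAULT_THRESHOLD = 0.5   # step 1: only accept confident labels (high precision)
BOOSTED_THRESHOLD = 0.2   # step 2: lowered bar for the class of interest

def label_event(probs, class_of_interest="social_action"):
    """`probs` maps each topical label to the classifier's probability estimate."""
    # Step 1: accept the most probable label if it is confident enough.
    best = max(probs, key=probs.get)
    if probs[best] >= DEFAULT_THRESHOLD:
        return best
    # Step 2: exploit the raw probability for the class of interest.
    if probs.get(class_of_interest, 0.0) >= BOOSTED_THRESHOLD:
        return class_of_interest
    return "other"

events = [
    {"sport": 0.7, "social_action": 0.1},  # confident: labeled in step 1
    {"sport": 0.4, "social_action": 0.3},  # unconfident: recovered in step 2
]
print([label_event(p) for p in events])    # -> ['sport', 'social_action']
```

The design trade-off is visible in the two thresholds: lowering `BOOSTED_THRESHOLD` raises recall for social action events at the cost of some precision.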
Textual Inference with Tree-Structured LSTM
Textual Inference is a research trend in Natural Language Processing (NLP) that has recently received a lot of attention from the scientific community. Textual Entailment (TE) is a specific task in Textual Inference that aims at determining whether a hypothesis is entailed by a text. This paper employs the Child-Sum Tree-LSTM for solving the challenging problem of textual entailment. Our approach is simple and able to generalize well without excessive parameter optimization. Evaluation on the SNLI, SICK, and other TE datasets shows the competitiveness of our approach.
Adebayo Kolawole John, Luigi Di Caro, Livio Robaldo, Guido Boella
Extracting Core Claims from Scientific Articles
The number of scientific articles has grown rapidly over the years and there are no signs that this growth will slow down in the near future. Because of this, it becomes increasingly difficult to keep up with the latest developments in a scientific field. To address this problem, we present an approach to help researchers learn about the latest developments and findings by extracting core claims from scientific articles in a normalized form. This normalized representation is a controlled natural language of English sentences called AIDA, which has been proposed in previous work as a method to formally structure and organize scientific findings and discourse. We show how such AIDA sentences can be automatically extracted by detecting the core claim of an article, checking for AIDA compliance, and, if necessary, transforming it into a compliant sentence. While our algorithm is still far from perfect, our results indicate that the different steps are feasible, and they support the claim that AIDA sentences might be a promising approach to improve scientific communication in the future.
Tom Jansen, Tobias Kuhn
Towards Legal Compliance by Correlating Standards and Laws with a Semi-automated Methodology
Since legal regulations generally do not provide clear parameters to determine when their requirements are met, achieving legal compliance is not trivial. The adoption of standards could help create an argument of compliance in favour of the implementing party, provided there is a clear correspondence between the provisions of a specific standard and the regulation’s requirements. However, identifying such correspondences is a complex process, complicated further by the fact that the established correlations may be overridden over time, e.g. because newer court decisions change the interpretation of certain legal provisions. To help solve these problems, we present a framework that supports legal experts in recognizing correlations between provisions in a standard and requirements in a given law. The framework relies on state-of-the-art Natural Language Semantics techniques to process the linguistic terms of the two documents, and maintains a knowledge base of the logic representations of the terms, together with their defeasible correlations, both formal and substantive. An application of the framework is shown by comparing a provision of the European General Data Protection Regulation with the ISO/IEC 27018:2014 standard.
Cesare Bartolini, Andra Giurgiu, Gabriele Lenzini, Livio Robaldo
Mobile Radio Tomography: Agent-Based Imaging
Mobile radio tomography applies moving agents that perform wireless signal strength measurements in order to reconstruct an image of objects inside an area of interest. We propose a toolchain to facilitate automated agent planning, data collection, and dynamic tomographic reconstruction. Preliminary experiments show that the approach is feasible and results in smooth images that clearly depict objects at the expected locations when using missions that sufficiently cover the area of interest.
K. Joost Batenburg, Leon Helwerda, Walter A. Kosters, Tim van der Meij
Combining Combinatorial Game Theory with an α-β Solver for Clobber: Theory and Experiments
Combinatorial games are a special category of games sharing the property that the winner is by definition the last player able to move. To solve such games, two main methods are applied. The first is a general α-β search with many possible enhancements. This technique is applicable to every game, mainly limited by the size of the game due to the exponential explosion of the solution tree. The second way is to use techniques from Combinatorial Game Theory (CGT), with very precise CGT values for (subgames of) combinatorial games. This method is only applicable to relatively small (sub)games. In this paper, which is an extended version of [7], we show that the methods can be combined in a fruitful way by incorporating an endgame database filled with CGT values into an α-β solver.
We apply this technique to the game of Clobber, a well-known all-small combinatorial game. Our test suite consists of 20 boards with sizes up to 18 squares. An endgame database was created for all subgames of size 8 and less. The CGT values were calculated using the CGSUITE package. Experiments reveal reductions of at least 75% in the number of nodes investigated.
Jos W. H. M. Uiterwijk, Janis Griebel
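The combination described in this abstract can be illustrated on a toy version of the game: 1-D Clobber, where a stone moves onto an adjacent enemy stone and captures it, and the last player to move wins. This is a simplified sketch only: the real solver stores CGT values precomputed with CGSUITE for all subgames of size 8 and less, whereas the toy database below stores plain win/loss outcomes for a few hand-solved tiny boards:

```python
# Toy 1-D Clobber solver: boards are strings over {'W', 'B', '.'}; a stone
# captures an adjacent enemy stone; the last player to move wins.
# ENDGAME_DB is a hand-filled stand-in for the paper's CGT endgame database.
from functools import lru_cache

def moves(board, player):
    """Yield all boards reachable by `player` in one move."""
    enemy = "B" if player == "W" else "W"
    for i, stone in enumerate(board):
        if stone != player:
            continue
        for j in (i - 1, i + 1):              # adjacent squares only
            if 0 <= j < len(board) and board[j] == enemy:
                b = list(board)
                b[i], b[j] = ".", player      # capture the enemy stone
                yield "".join(b)

# Precomputed outcomes for tiny boards, keyed by (board, player to move).
ENDGAME_DB = {("WB", "W"): True, ("WB", "B"): True, ("W.", "W"): False}

@lru_cache(maxsize=None)
def wins(board, player):
    """True iff `player` to move wins `board` with perfect play."""
    if (board, player) in ENDGAME_DB:         # database hit: no search needed
        return ENDGAME_DB[(board, player)]
    opponent = "B" if player == "W" else "W"
    # Plain search: a position is won iff some move leaves a lost position.
    return any(not wins(nxt, opponent) for nxt in moves(board, player))

print(wins("WBWB", "W"))  # -> True: the first player wins this board
```

The node-count savings the paper reports come from exactly this short-circuit: whenever the search reaches a position small enough to be in the database, the stored value replaces an entire subtree of the search.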
Aspects of the Cooperative Card Game Hanabi
We examine the cooperative card game Hanabi. Players can only see the cards of the other players, but not their own. Using hints, partial information can be revealed. We show some combinatorial properties, and develop AI (Artificial Intelligence) players that use rule-based and Monte Carlo methods.
Mark J. H. van den Bergh, Anne Hommelberg, Walter A. Kosters, Flora M. Spieksma
Solving the Travelling Umpire Problem with Answer Set Programming
In this paper, we develop an Answer Set Programming (ASP) solution to the Travelling Umpire Problem (TUP). We investigate a number of different ways to improve the computational performance of this solution and compare it to the current state-of-the-art. Our results demonstrate that the ASP solution is superior to other declarative solutions, such as Constraint Programming, but that it cannot match the most recent special-purpose algorithms from the literature. However, when compared to the earlier generation of special-purpose algorithms, it does quite well.
Joost Vennekens

Student Papers

Design of a Fuzzy Logic Based Framework for Comprehensive Anomaly Detection in Real-World Energy Consumption Data
Due to the rapid growth of energy consumption worldwide, it has become a necessity that the energy waste caused by buildings is made explicit with the aid of automated systems that can identify anomalous behaviour. Comprehensible anomaly detection, however, is a challenging task considering the lack of annotated real-world data, in addition to real-world uncertainties such as changing weather conditions and varying building features. Fuzzy Logic enables modelling knowledge-based non-linear systems that can handle these uncertainties, and facilitates modelling human-interpretable systems. This paper proposes a new method for annotating anomalies and a novel framework for interpretable anomaly detection in real-world gas consumption data belonging to the educational buildings of the Hogeschool van Amsterdam. The proposed architecture uses Wang-Mendel rule learning with k-means clustering and does not require prior knowledge of the data, while preserving transparency of the model behaviour. Experiments have shown that the proposed system matches the performance of existing baselines using an artificial neural network, while providing additional desired features such as transparency of the model behaviour and interpretability of the detected anomalies.
Muriel Hol, Aysenur Bilgin
Fostering Relatedness Between Children and Virtual Agents Through Reciprocal Self-disclosure
A key challenge in developing companion agents for children is keeping them interested after novelty effects wear off. Self-Determination Theory posits that motivation is sustained if the human feels related to another human. According to Social Penetration Theory, relatedness can be established through the reciprocal disclosure of information about the self. Inspired by these social psychology theories, we developed a disclosure dialog module to study the self-disclosing behavior of children in response to that of a virtual agent. The module was integrated into a mobile application with avatar presence for diabetic children and subsequently used by 11 children in an exploratory field study over the course of approximately two weeks at home. The number of disclosures that children made to the avatar during the study indicated the relatedness they felt towards the agent at the end of the study. While all children showed a decline in their usage over time, more related children used the application more, and more consistently, than less related children. Avatar disclosures of lower intimacy were reciprocated more than avatar disclosures of higher intimacy. Girls reciprocated disclosures more frequently. No relationship was found between the intimacy level of agent disclosures and child disclosures. The last finding in particular contradicts prior child-peer interaction research and should therefore be further examined in confirmatory research.
Franziska Burger, Joost Broekens, Mark A. Neerincx
Lack of Effort or Lack of Ability? Robot Failures and Human Perception of Agency and Responsibility
Research on human interaction has shown that considering an agent’s actions as related to either effort or ability can have important consequences for attributions of responsibility. In this study, these findings have been applied in an HRI context, investigating how participants’ interpretation of a robot failure in terms of effort (as opposed to ability) may be operationalized, and how this influences the human perception of the robot having agency over, and responsibility for, its actions. Results indicate that a robot displaying lack of effort significantly increases human attributions of agency and, to some extent, moral responsibility to the robot. Moreover, we found that a robot’s display of lack of effort does not lead to the level of affective and behavioral reactions of participants normally found in reactions to other human agents.
Sophie van der Woerdt, Pim Haselager
Performance Indicators for Online Secondary Education: A Case Study
There is little consensus about what variables extracted from learner data are the most reliable indicators of learning performance. The aim of this study is to determine such indicators by taking a wide range of variables into consideration concerning overall learning activity and content processing. A genetic algorithm is used for the selection process and variables are evaluated based on their predictive power in a classification task. Variables extracted from exercise activities turn out to be most informative. Exercises designed to train students in understanding and applying material are found to be especially informative.
Pepijn van Diepen, Bert Bredeweg
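The variable-selection step this abstract describes (a genetic algorithm searching over feature subsets, evaluated by predictive power) can be sketched in a few lines. The fitness function below is a toy score standing in for the classification accuracy the study uses, and the "informative" variables are invented for illustration:

```python
# Hedged sketch of genetic-algorithm feature selection. Individuals are
# bit masks over the candidate variables; fitness rewards selecting the
# (hypothetical) informative ones and penalizes bloated feature sets.
import random

random.seed(0)  # make the sketch reproducible

N_FEATURES = 8
USEFUL = {1, 3, 5}  # hypothetical informative exercise-activity variables

def fitness(mask):
    hits = sum(1 for i in USEFUL if mask[i])
    return hits - 0.1 * sum(mask)   # stand-in for predictive power

def evolve(pop_size=20, generations=40, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_FEATURES):             # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print({i for i, bit in enumerate(best) if bit})  # indices of selected variables
```

In the study itself the fitness of a mask would be the performance of a classifier trained on just the selected variables, which is what makes exercise-activity variables surface as the most informative ones.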

Demonstration Papers

SWISH DataLab: A Web Interface for Data Exploration and Analysis
SWISH DataLab is a single integrated collaborative environment for data processing, exploration and analysis combining Prolog and R. The web interface makes it possible to share the data, the code of all processing steps and the results among researchers; and a versioning system facilitates reproducibility of the research at any chosen point. Using search logs from the National Library of the Netherlands combined with the collection content metadata, we demonstrate how to use SWISH DataLab for all stages of data analysis, using Prolog predicates, graph visualizations, and R.
Tessel Bogaard, Jan Wielemaker, Laura Hollink, Jacco van Ossenbruggen
BNAIC 2016: Artificial Intelligence
Edited by
Tibor Bosse
Bert Bredeweg