
2017 | Book

Advances in Artificial Intelligence: From Theory to Practice

30th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2017, Arras, France, June 27-30, 2017, Proceedings, Part I


About this Book

The two-volume set LNCS 10350 and 10351 constitutes the thoroughly refereed proceedings of the 30th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2017, held in Arras, France, in June 2017.

The 70 revised full papers presented together with 45 short papers and 3 invited talks were carefully reviewed and selected from 180 submissions. They are organized in topical sections: constraints, planning, and optimization; data mining and machine learning; sensors, signal processing, and data fusion; recommender systems; decision support systems; knowledge representation and reasoning; navigation, control, and autonomous agents; sentiment analysis and social media; games, computer vision, and animation; uncertainty management; graphical models: from theory to applications; anomaly detection; agronomy and artificial intelligence; applications of argumentation; intelligent systems in healthcare and mhealth for health outcomes; and innovative applications of textual analysis based on AI.

Table of Contents

Frontmatter

Invited Talks

Frontmatter
Fuzzy Semantic Web Languages and Beyond

The aim of this talk is to present the state of the art in representing and reasoning with fuzzy knowledge in Semantic Web Languages such as triple languages RDF/RDFS, conceptual languages of the OWL 2 family and rule languages. We further show how one may generalise them to so-called annotation domains, that cover also e.g. temporal and provenance extensions.

Umberto Straccia
Rational Enterprise Architecture

We are interested in formal foundations for enterprise decision support. In this perspective, enterprise architecture is characterised by highly uncertain plans in a changing environment, and translates strategic goals into an IT strategy. Typically there are a large number of stakeholders with conflicting views, communicating plans of action, and explaining decisions instead of making them. An enterprise architecture considers qualitative before quantitative data, has stronger business focus than other disciplines, and politics, emotions, and soft skills play a bigger role than in other areas. We view a plan abstractly as a sequence of commitments in time, and each commitment in the plan may come with a number of underlying assumptions. If these underlying assumptions change, then parts of the plan may require revision, which in turn may invalidate other parts of the plan, and so on. Therefore, assumptions have an inherently non-monotonic character: they are assumed to be true, unless it becomes clear they are false. This is related to the resource-boundedness of enterprise architecture: an enterprise architect cannot always know all of the assumptions, especially for long term plans.

Leendert van der Torre, Marc van Zee

Constraints, Planning and Optimization

Frontmatter
Cluster-Specific Heuristics for Constraint Solving

In Constraint Satisfaction Problems (CSPs), variable ordering heuristics help to increase efficiency: applying an appropriate heuristic can improve the performance of CSP solvers, and applying heuristics tailored to similar CSPs can improve it further. Similar CSPs can be grouped into the same cluster, and for each cluster appropriate heuristics can be found by local search. Thus, when a new CSP arrives, the corresponding cluster can be identified and the pre-calculated heuristics for that cluster applied. In this paper, we propose a new method for constraint solving called Cluster-Specific Heuristics (CSH). We present and evaluate our method on example CSPs.

Seda Polat Erdeniz, Alexander Felfernig, Muesluem Atas, Thi Ngoc Trang Tran, Michael Jeran, Martin Stettinger
Car Pooling Based on a Meta-heuristic Approach

The heavy use of private cars increases the load on the environment and raises issues of high air pollution in cities, parking problems, congestion and low transfer velocity. Car pooling is a collective transportation model based on the shared use of private cars: by grouping people, it reduces the number of cars in use and can significantly reduce congestion, fuel consumption, parking demand and commuting costs. An important issue in car pooling systems is to develop an algorithm to match passengers and drivers. The goal of this paper is to propose a model and a solution methodology that integrate seamlessly with existing geographic information systems to facilitate the matching of drivers and passengers for ride sharing. We formulate a car pooling problem and propose a solution algorithm for it based on a meta-heuristic approach. We have implemented our solution algorithm and conducted experiments illustrating the effectiveness of the proposed method.

Fu-Shiung Hsieh, Fu-Min Zhan, Yi-Hong Guo
Reactive Motion Planning with Qualitative Constraints

Qualitative modeling tends to be closer to human reasoning than traditional numerical modeling and has proved very useful in certain branches of cognitive robotics. However, due to the lack of precise numerical relations, planning with qualitative models has only been achieved to a limited extent. Typically, it is bound to predicting possible future behaviors of the system and demands additional exploration of numerical relations before constructed plans can be executed. In this paper we show how qualitative models can be interpreted in terms of reactive planning, to produce executable actions without the need for additional numerical learning. We demonstrate our method on two classical motion planning problems, pursuit and obstacle avoidance, and on the complex problem of pushing objects.

Domen Šoberl, Ivan Bratko
A New System for the Dynamic Shortest Route Problem

The Shortest Route Problem concerns routing one vehicle to one customer while minimizing some objective function. The problem is essentially a shortest path problem and has been studied extensively in the literature. We report a system that addresses two dynamic aspects of the Shortest Route Problem. The first is handling incremental changes to the routing plan; the second is finding the most probable shortest path, i.e. the path with the highest probability of not being congested. We describe how each of these two aspects has been implemented in the system, as well as its other features and components.

Eisa Alanazi, Malek Mouhoub, Mahmoud Halfawy
M-NSGA-II: A Memetic Algorithm for Vehicle Routing Problem with Route Balancing

The vehicle routing problem with route balancing (VRPRB) is a variant of classical VRPs. It is a bi-objective optimization problem which considers the total length of routes and the balance issue among different routes. In this paper, the balance objective we introduce is the minimization of the maximal route length, which can effectively avoid the occurrence of distorted solutions. We develop an NSGA-II based memetic algorithm (M-NSGA-II) for the VRPRB. The M-NSGA-II algorithm combines the NSGA-II algorithm with a local search procedure which consists of four local search operators. To evaluate our algorithm, we test it on the standard benchmarks and compare our results with the referenced approach. Moreover, we analyze the effect of different local search operators on M-NSGA-II algorithm. Computational results indicate that our M-NSGA-II algorithm is able to produce better solutions.

Yuyan Sun, Yuxuan Liang, Zizhen Zhang, Jiahai Wang
A Matrix-Based Implementation of DE Algorithm: The Compensation and Deficiency

Differential Evolution (DE) has become a very popular continuous optimization algorithm since its inception, owing to its simplicity, easy coding and good performance on many kinds of optimization problems. The difference operator in the donor vector calculation is the key feature of the DE algorithm. Usually, selecting the base vector and difference vectors for a donor costs extra lines of conditional logic, and these vectors are not selected equally from the population, which introduces perturbation into the optimization performance. To tackle this disadvantage of DE implementations, a matrix-based implementation of the DE algorithm is proposed in this paper. Three commonly used DE implementation approaches from the literature are also presented and contrasted. The CEC2013 test suites for real-parameter optimization are used as the test-bed for this comparison. Experimental results show that the proposed matrix-based implementation of DE performs better than the common implementation schemes with similar time complexity.

Jeng-Shyang Pan, Zhenyu Meng, Huarong Xu, Xiaoqing Li
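The DE/rand/1 donor computation referred to in the abstract above can be written in vectorized, matrix-based form, avoiding per-individual conditional logic. This is a generic illustration under common DE conventions (population matrix `pop`, scale factor `F`), not the paper's actual implementation:

```python
import numpy as np

def de_donors(pop, F=0.5, rng=None):
    """Compute DE/rand/1 donor vectors v_i = x_r1 + F * (x_r2 - x_r3)
    for a whole (NP, D) population at once. Indices r1, r2, r3 are drawn
    without replacement per row (a full DE would also exclude i itself)."""
    rng = np.random.default_rng(rng)
    NP = pop.shape[0]
    idx = np.array([rng.choice(NP, size=3, replace=False) for _ in range(NP)])
    return pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])
```

The fancy-indexing lines replace the usual loop over individuals, which is the gist of a matrix-based DE implementation.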
A Bayesian Model of Game Decomposition

In this paper, we propose a Bayesian probabilistic model to describe collective behavior generated by a finite number of agents competing for limited resources. In this model, the strategy of each agent is a binary choice in the Minority Game, modeled by a Binomial distribution with a Beta prior. The strategy of an agent can be learned from a sequence of historical choices using Bayesian inference. Aggregated micro-level choices constitute observable macro-level time series data; therefore, this can be regarded as a machine learning model for time series prediction. To verify the effectiveness of the new model, we conduct a series of experiments on artificial data and real-world stock price data. Experimental results demonstrate that the proposed model performs better than a genetic algorithm based decomposition model.

Hanqing Zhao, Zengchang Qin, Weijia Liu, Tao Wan
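The Beta-Binomial setup mentioned above admits a closed-form conjugate update; a minimal sketch of the generic Bayesian bookkeeping (not the paper's code) is:

```python
def beta_posterior(alpha, beta, choices):
    """Update a Beta(alpha, beta) prior on an agent's probability of
    choosing 1 with a sequence of observed binary choices. By the
    conjugacy of Beta and Binomial, the posterior is again a Beta."""
    ones = sum(choices)
    return alpha + ones, beta + len(choices) - ones
```

Starting from a uniform Beta(1, 1) prior, observing the choices [1, 0, 1, 1] yields Beta(4, 2), whose mean 4/6 is the smoothed estimate of the agent's choice probability.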
Two-Timescale Learning Automata for Solving Stochastic Nonlinear Resource Allocation Problems

This paper deals with the Stochastic Non-linear Fractional Equality Knapsack (NFEK) problem, a fundamental resource allocation problem based on incomplete and noisy information [2, 3]. The NFEK problem arises in many applications, such as web polling under polling constraints and constrained estimation. The primary contribution of this paper is a continuous Learning Automata (LA)-based, optimal, efficient and yet simple solution to the NFEK problem. Our solution, called the Two-Timescale Learning Automata (T-TLA), solves the NFEK problem by performing updates on two different timescales. To the best of our knowledge, this is the first attempt in the literature to design an LA that operates with two-timescale updates. Furthermore, the T-TLA solution is distinct from the first-reported optimal solution to the problem, due to Granmo and Oommen [2, 3], which resorts to multiple two-action discretized LA organized hierarchically so as to tackle the multi-material case. Hence, the T-TLA scheme mitigates the complexity of the state-of-the-art solution, which involves partitioning the material set into two subsets of equal size at each level. We report representative experimental results that illustrate the convergence of our scheme and its superiority to the state of the art [2, 3].

Anis Yazidi, Hugo Lewi Hammer, Tore Møller Jonassen
A Hybrid of Tabu Search and Simulated Annealing Algorithms for Preemptive Project Scheduling Problem

In this paper, the resource constrained project scheduling problem with preemption is studied, in which a fixed setup time is needed to resume preempted activities. The project entails activities with finish-to-start precedence relations, which need a set of renewable resources to be completed. A mathematical model is presented for the problem, and a hybrid of Tabu Search (TS) and Simulated Annealing (SA) with tuned parameters is developed to solve it. In order to evaluate the performance of the proposed TS/SA, a set of 100 test problems is used. Comprehensive statistical analysis shows that the proposed algorithm efficiently solves the problem. Furthermore, the benefits of preemption with setup times and its justifiability are demonstrated numerically.

Behrouz Afshar-Nadjafi, Mehdi Yazdani, Mahyar Majlesi
Elitist Ant System for the Distributed Job Shop Scheduling Problem

In this paper, we are interested in geographically distributed industrial plants, and more precisely in the Distributed Job Shop Scheduling Problem (DJSP) in a multi-factory environment. The problem consists of finding an effective way to assign jobs to factories and then generating a good operation schedule. To do this, a bio-inspired algorithm is applied, namely the Elitist Ant System (EAS), aiming to minimize the makespan. Several numerical experiments are conducted to evaluate the performance of our algorithm on the Distributed Job Shop Scheduling Problem; the results show the shortcomings of the Elitist Ant System compared to algorithms developed in the literature.

Imen Chaouch, Olfa Belkahla Driss, Khaled Ghedira
Fuzzy Reinforcement Learning for Routing in Multi-Hop Cognitive Radio Networks

Cognitive radio networks (CRNs) are composed of cognitive, spectrum-agile devices capable of changing their configuration on the fly, based on the spectrum assignment policy. Moreover, CRN technology allows licensed spectrum bands to be shared in an opportunistic and non-interfering manner. Routing and spectrum management are the main challenges in these networks. To address them, this paper proposes a fuzzy reinforcement learning method in which a new fuzzy reinforcement learning procedure is built into each secondary user (SU). The proposed procedure learns the best routes while guaranteeing that the interference at the primary receivers stays below a threshold, and focuses on the problem of effective routing solutions in multi-hop CRNs.

Jerzy Martyna
FJS Problem Under Machine Breakdowns

One of the most challenging problems in the manufacturing field is solving the flexible job shop (FJS) problem subject to machine breakdowns. In this paper, we propose two rescheduling solutions to handle machine breakdowns: a PSO-based solution and a shifting-based solution. The first aims to improve robustness, while the second aims to improve stability.

Rim Zarrouk, Imed Bennour, Abderrazak Jemai, Abdelghani Bekrar
A Dijkstra-Based Algorithm for Selecting the Shortest-Safe Evacuation Routes in Dynamic Environments (SSER)

In this work, we propose an approach to the problem of finding the shortest safe routes in buildings with many evacuation doors, where the accessibility of internal areas can be changed by different kinds of sensors. We present two advantages over the common use of Dijkstra's algorithm for obtaining evacuation routes: (1) fast search of the shortest safe evacuation route to multiple exits using a backward approach, and (2) support for dynamic environments (graphs with variable vertex availability). Four Dijkstra-based algorithms were considered in order to evaluate the performance of the proposed approach, which achieved short evacuation times to multiple exits.

Angely Oyola, Dennis G. Romero, Boris X. Vintimilla
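The backward multi-exit search described above can be realized by a single Dijkstra pass seeded with all exits at distance zero, skipping vertices that sensors report as unavailable. A minimal sketch with a hypothetical adjacency-list representation (illustrative only, not the authors' implementation):

```python
import heapq

def dist_to_nearest_exit(adj, exits, blocked=frozenset()):
    """One backward Dijkstra pass from a virtual super-target.
    adj: {u: [(v, w), ...]} undirected graph; exits: iterable of exit
    nodes; blocked: vertices currently reported unavailable.
    Returns {node: distance to its nearest reachable exit}."""
    dist = {}
    pq = [(0.0, e) for e in exits if e not in blocked]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if u in dist:            # already settled with a shorter distance
            continue
        dist[u] = d
        for v, w in adj.get(u, ()):
            if v not in dist and v not in blocked:
                heapq.heappush(pq, (d + w, v))
    return dist
```

Because every exit starts at distance zero, one pass labels each vertex with its distance to the nearest exit, and re-running the pass with an updated `blocked` set handles dynamic changes.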
Replication in Fault-Tolerant Distributed CSP

Real-life problems can be solved in a distributed way, in particular by multi-agent approaches. However, fault tolerance is not guaranteed when an agent shows no activity (e.g. it dies). This problem is crucial when the interaction model is based on a Distributed CSP: many algorithms have been proposed in the literature, but they give wrong results if an agent dies. This paper presents an approach based on a replication principle: each local CSP is replicated in another agent.

Fadoua Chakchouk, Julien Vion, Sylvain Piechowiak, René Mandiau, Makram Soui, Khaled Ghedira
Optimal Route Prediction as a Smart Mobile Application of Gift Ideas

Finding gift ideas for relatives is becoming increasingly difficult. A person can collect ideas on the internet, ask the relatives, or draw inspiration from gifts received in the past. This study focuses on the design and development of an algorithm for calculating the optimal route between the shops found, as part of a mobile application for the Android platform. The results of this study are summarised in the article's conclusion.

Veronika Nemeckova, Jan Dvorak, Ali Selamat, Ondrej Krejcar

Data Mining and Machine Learning

Frontmatter
Machine Learning Approach to Detect Falls on Elderly People Using Sound

One of the most notable consequences of aging is the loss of motor abilities, making elderly people especially susceptible to falls, which are among the most serious concerns in elder care. Several solutions have been proposed to detect falls; however, none of them has achieved great success, mainly because of the need to wear a recording device. In this paper, we study the use of sound to detect fall events. The advantage of this approach over traditional ones is that the subject is not required to wear additional devices to monitor his or her activities. We apply machine learning techniques to process sound simulating the most common type of fall among the elderly, i.e., when the foot collides with an obstacle and the trunk hits the ground before the hands can absorb the fall. The results show that high levels of accuracy can be achieved using only a few signal processing techniques.

Armando Collado-Villaverde, María D. R-Moreno, David F. Barrero, Daniel Rodriguez
A Novel k-NN Approach for Data with Uncertain Attribute Values

Data uncertainty arises in several real-world domains, including machine learning and pattern recognition applications. In classification problems, we may well end up with uncertain attribute values caused by sensor failures, measurement approximations, or subjective expert assessments. Despite their importance, such data have not been well covered so far. In this paper, we develop a machine learning model for handling this kind of imperfection. More precisely, we propose a new version of the well-known k-nearest neighbors classifier that handles uncertainty in attribute values within the belief function framework.

Asma Trabelsi, Zied Elouedi, Eric Lefevre
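For reference, the crisp k-nearest neighbors rule that the evidential version above generalizes can be sketched as follows; the belief-function variant replaces the majority vote with combined mass functions (this is a generic baseline, not the paper's method):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Plain crisp k-NN: find the k training points closest to the query
    (Euclidean distance) and return the majority label among them.
    train: list of (vector, label) pairs; query: a vector (tuple)."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

Under attribute uncertainty, both the distance computation and the vote become sources of imperfection, which is what the belief-function framework is brought in to model.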
On Combining Imputation Methods for Handling Missing Data

In real-world problems, data are generally characterized by their imperfection, and one of the most common forms of imperfection is missing data. Indeed, dealing with missing data remains a very important issue in data mining and knowledge discovery research. A panoply of methods addressing this problem and handling different types of data has been proposed in the literature. In this work, we focus on three methods: KNN, MissForest, and the EM algorithm, which are considered among the most efficient for different imputation problems. In the first part of this work, we present a brief state of the art of these imputation methods and the strategy we propose to use. In the second part, we provide a comparative study based on different criteria, showing the efficiency of MissForest compared to the other methods, and we demonstrate that combining the methods improves the imputation of continuous data compared to using them individually.

Nassima Ben Hariz, Hela Khoufi, Ezzeddine Zagrouba
Supervised Feature Space Reduction for Multi-Label Nearest Neighbors

With the ability to address many real-world problems, multi-label classification has received considerable attention in recent years, and the instance-based ML-kNN classifier is today considered one of the most efficient. However, it is sensitive to noisy and redundant features, and its performance decreases with increasing data dimensionality. Dimensionality reduction is an alternative, but current methods optimize reduction objectives that ignore the impact on ML-kNN classification. We propose ML-ARP, a novel dimensionality reduction algorithm which, using a variable neighborhood search metaheuristic, learns a linear projection of the feature space that specifically optimizes the ML-kNN classification loss. Numerical comparisons confirm that ML-ARP outperforms ML-kNN without data processing as well as four standard multi-label dimensionality reduction algorithms.

Wissam Siblini, Reda Alami, Frank Meyer, Pascale Kuntz
Stock Volatility Prediction Using Recurrent Neural Networks with Sentiment Analysis

In this paper, we propose a model to analyze the sentiment of an online stock forum and use the information to predict stock volatility in the Chinese market. We have labeled the sentiment of the online financial posts and made the dataset publicly available for research. By generating a sentiment dictionary based on financial terms, we develop a model to compute the sentiment score of each online post related to a particular stock. This sentiment information is represented by two sentiment indicators, which are fused with market data for stock volatility prediction using Recurrent Neural Networks (RNNs). An empirical study shows that, compared to using RNNs alone, the model performs significantly better with sentiment indicators.

Yifan Liu, Zengchang Qin, Pengyu Li, Tao Wan
Incremental Quantiles Estimators for Tracking Multiple Quantiles

In this paper, we investigate the problem of estimating multiple quantiles when samples are received online (as a data stream). We assume that we are dealing with a dynamical system, i.e. the distribution of the samples from the data stream changes over time. A major challenge arises when simultaneously maintaining multiple quantile estimates using incremental estimators: a naive implementation in which multiple incremental quantile estimators are updated in isolation might violate the monotone property of quantiles, i.e., an estimate of a lower target quantile might erroneously overtake that of a higher one. Surprisingly, related work on countering such violations is extremely sparse [1, 3]. Our work tries to fill this gap by proposing two solutions to the problem that build on the deterministic update based multiplicative incremental quantile estimator (DUMIQE) recently proposed by Yazidi and Hammer [5], which was shown to be the most efficient incremental quantile estimator in the literature. Experimental results show that the modified DUMIQE methods perform very well and outperform the original DUMIQE. Moreover, our proposed methods satisfy the monotone property of quantiles, and they outperform the state-of-the-art multiple incremental quantile estimator of Cao et al. [1, 3].

Hugo Lewi Hammer, Anis Yazidi
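A multiplicative incremental quantile update of the kind DUMIQE uses can be sketched as below. This is a simplified illustration of the update style only, assuming a positive-valued estimate; consult Yazidi and Hammer [5] for the exact scheme:

```python
def quantile_update(q_est, x, target_q, lam=0.01):
    """One multiplicative incremental update (DUMIQE-style sketch):
    grow the running estimate when the new sample lies above it,
    shrink it otherwise, with step sizes balanced so the estimate
    drifts toward the target quantile. lam is a small learning rate;
    q_est is assumed positive."""
    if x > q_est:
        return q_est * (1 + lam * target_q)
    return q_est * (1 - lam * (1 - target_q))
```

The monotonicity problem the paper addresses appears when several such estimators for different `target_q` values are run in isolation on the same stream; the proposed methods add coupling between them so a lower-quantile estimate can never overtake a higher one.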
Forecasting Passenger Flows Using Data Analytics

In this paper, we focus on forecasting monthly departure passenger movements for one of the busiest airports in Asia. First, we forecast the monthly airport departure passenger flows for the next 12 months for macro-level planning. Next, we use SAS Forecast Studio for detailed-level planning based on airline and airline-city combinations using hierarchical forecasting. We also use the actual data to validate the forecast accuracy, and show that in most cases the mean absolute percentage error is less than 3%, which indicates the usefulness of our model for better decision making.

Nang Laik Ma
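The accuracy measure quoted above, the mean absolute percentage error, has a standard definition that is independent of the authors' SAS workflow:

```python
def mape(actual, forecast):
    """Mean absolute percentage error in percent:
    100 * mean(|actual - forecast| / |actual|)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

A value below 3, as reported in the abstract, means the forecasts deviate from the observed passenger flows by less than 3% on average.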
Co-location Rules Discovery Process Focused on Reference Spatial Features Using Decision Tree Learning

The co-location discovery process serves to find subsets of spatial features frequently located together. Many algorithms and methods have been designed in recent years; however, finding such patterns around specific spatial features is a task for which the existing solutions provide incorrect results. In this paper we propose a knowledge discovery process to find co-location patterns focused on reference features, using decision tree learning algorithms on transactional data generated from maximal cliques. A validation test of this process is provided.

Giovanni Daián Rottoli, Hernán Merlino, Ramón García-Martinez
Virtual Career Advisor System with an Artificial Neural Network

We introduce the Dolphin system, a novel virtual career advisor system that implements artificial neural networks trained on student data to provide students with career advice. The Dolphin system’s Experiences-to-Careers advisor uses an artificial neural network and takes as input a student’s course experience ratings and returns as output a career ranking. We present our methods to train, validate and test the Experiences-to-Careers advisor’s artificial neural network. We have implemented and deployed the Experiences-to-Careers advisor. We surveyed 39 students who used the Experiences-to-Careers advisor to receive career advice. Of these students, 5 students (12.8%) indicated that they prefer the Experiences-to-Careers advisor over a human advisor, 9 students (23.1%) indicated that they prefer a human advisor over the Experiences-to-Careers advisor and 25 students (64.1%) indicated that they prefer the Experiences-to-Careers advisor along with a human advisor. Of the 76.9% of the students who prefer the Experiences-to-Careers advisor over or along with a human advisor, a common reason for the students’ choice is that the Experiences-to-Careers advisor provides a perspective different from that of a human advisor.

Tracey John, Dwaine Clarke
Implicit Knowledge Extraction and Structuration from Electrical Diagrams

The electrical domain, whether domestic or industrial, benefits from a large set of well-defined norms at both the national and international levels. Surprisingly, however, there is no such norm regarding the actual conception and structuration of electrical diagrams, even though the basic symbols and notations remain the same. Each company is free to design such diagrams according to its own experience, expertise and know-how. The difficulty is that these diagrams, which are most of the time materialized as a PDF booklet, do not reflect this implicit knowledge. In this paper, we introduce our work on the extraction and structuration of such knowledge using ad-hoc graph and text analysis as well as clustering techniques. Starting from a set of raw documents, we propose an end-to-end solution that offers a company-dependent structured view of any electrical diagram.

Ikram Chraibi Kaadoud, Nicolas Rougier, Frederic Alexandre
An Energy-Aware Learning Agent for Power Management in Mobile Devices

The optimization of energy consumption in mobile devices can be performed on hardware and software components, for example by reducing the screen brightness or switching off the GPS. Energy control must take into account both the current context and user habits, on the basis of usage knowledge acquired from sensors and OS data records. The whole process of energy management then includes data collection, usage learning and analysis, decision-making, and control of device components. To integrate these activities, we propose a software agent whose goal is to save the energy of the mobile device with the lowest effect on QoS.

Ismat Chaib Draa, Emmanuelle Grislin-Le Strugeon, Smail Niar

Sensors, Signal Processing and Data Fusion

Frontmatter
An Empirical Study on Verifier Order Selection in Serial Fusion Based Multi-biometric Verification System

Selecting the order of verifiers in a serial fusion based multi-biometric system is a crucial design parameter because of its high impact on verification errors. A wrong choice of verifier order might lead to tremendous user inconvenience by denying a large number of genuine users, and might cause severe security breaches by frequently accepting impostors. Unfortunately, this design issue has been poorly investigated in the multi-biometric literature. In this paper, we address it by performing experiments using three different serial fusion based multi-biometric verification schemes: (1) the symmetric scheme, (2) the SPRT-based scheme, and (3) Marcialis et al.'s scheme. We experimented on the publicly available NIST-BSSR1 multi-modal database, testing all 24 possible orders of four individual verifiers on a four-stage biometric verification system. Our experimental results show that the "best-to-worst" order, in which the best performing individual verifier is placed in the first stage, the next best in the second stage, and so on, is the top performing order for all three serial fusion schemes.

Md Shafaeat Hossain, Khandaker Abir Rahman
Characterization of Cardiovascular Diseases Using Wavelet Packet Decomposition and Nonlinear Measures of Electrocardiogram Signal

Cardiovascular diseases (CVDs) remain the primary cause of disability and mortality worldwide and are predicted to continue rising in the future due to inadequate preventive actions. The electrocardiogram (ECG) signal contains vital clinical information that assists significantly in the diagnosis of CVDs. Assessment of the subtle ECG parameters that indicate the presence of CVDs is extremely difficult and requires long hours of manual examination for accurate diagnosis. Hence, automated computer-aided diagnosis systems might help overcome these limitations. In this study, a novel algorithm is proposed based on the combination of wavelet packet decomposition (WPD) and nonlinear features. The proposed method achieved classification results of 97.98% accuracy, 99.61% sensitivity and 94.84% specificity with 8 ReliefF-ranked features. The proposed methodology is highly efficient in helping clinical staff detect cardiac abnormalities using a single algorithm.

Hamido Fujita, Vidya K. Sudarshan, Muhammad Adam, Shu Lih Oh, Jen Hong Tan, Yuki Hagiwara, Kuang Chua Chua, Kok Poo Chua, U. Rajendra Acharya
Biometric Keystroke Signal Preprocessing Part I: Signalization, Digitization and Alteration

The term biometric keystroke basically refers to classifying users based on their password entering style. The characteristic feature extracted in most keystroke authentication systems is the set of inter-key times, since the waiting time between key-press moments is supposed to be unique and hard to mimic. Therefore, the majority of proposed systems start by computing, in the time domain, the time differences for each key in the password, without any preprocessing; the performance of these systems, as well as the accuracy of the classification methodology, then depends entirely on non-processed data. Given this fact, we present preparation methods, starting from data acquisition, for better post-processing of biometric keystroke signals.

Orcan Alpar, Ondrej Krejcar
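The basic inter-key-time feature described above is simply the first difference of the key-press timestamps; a minimal sketch (illustrative only, not the authors' pipeline):

```python
def inter_key_times(press_times):
    """Inter-key intervals from a sorted list of key-press timestamps
    (e.g. in milliseconds): the waiting time between consecutive
    key presses, the raw feature most keystroke systems start from."""
    return [t2 - t1 for t1, t2 in zip(press_times, press_times[1:])]
```

For a password of n keystrokes this yields an (n-1)-dimensional feature vector, which the preprocessing methods in the paper then transform before classification.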
Robust Sensor Data Fusion Through Adaptive Threshold Learning

Sensor fusion is the process of combining sensor readings from disparate resources so that the resulting information is more accurate and complete. The key challenge in sensor fusion arises from the inherent imperfection of data, commonly caused by sampling error, network response time, imprecise measurement, and unreliable resources. Therefore, data fusion methods need to address various aspects of data imperfection. In this paper, we first propose a novel unified data fusion framework based on rough set theory to systematically represent data granularity and imprecision. We then develop a cost-driven adaptive learning algorithm that infers the optimal threshold values from data to obtain minimum cost. Our experimental study demonstrates the framework's effectiveness and validity.

Bing Zhou, Hyuk Cho, Adam Mansfield
An Application of Fuzzy Signal-to-Noise Ratio to the Assessment of Manufacturing Processes

The Taguchi method is an important tool for robust design, used to produce high-quality products efficiently. In the Taguchi method, the signal-to-noise (SN) ratio serves as the objective function for optimization. This ratio is a useful measurement indicator for manufacturing processes. Conventionally, the SN ratio is calculated from crisp observations. However, there are cases in which observations are difficult to measure precisely, or need to be estimated. This paper develops a fuzzy nonlinear programming model, based on the SN ratio, to assess manufacturing processes with fuzzy observations. A pair of nonlinear fractional programs is formulated to calculate the lower and upper bounds of the fuzzy SN ratio. By model reduction and variable substitution, the nonlinear fractional programs are transformed into quadratic programs. Solving the transformed quadratic programs yields the globally optimal lower and upper bounds of the fuzzy SN ratio. By deriving a ranking index over the fuzzy SN ratios of the manufacturing process alternatives, the ranking result of the assessment is determined.

Shiang-Tai Liu
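For reference, the crisp Taguchi SN ratios that the fuzzy model above generalizes can be sketched in a few lines (illustrative only; the paper's fuzzy bounds are obtained from the nonlinear programs, not from code like this):

```python
import math

def sn_larger_the_better(obs):
    """Taguchi SN ratio for larger-the-better characteristics:
    SN = -10 * log10( (1/n) * sum(1 / y_i^2) )."""
    n = len(obs)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in obs) / n)

def sn_smaller_the_better(obs):
    """Taguchi SN ratio for smaller-the-better characteristics:
    SN = -10 * log10( (1/n) * sum(y_i^2) )."""
    n = len(obs)
    return -10.0 * math.log10(sum(y ** 2 for y in obs) / n)
```

Fuzzifying the observations turns each of these scalar formulas into the pair of fractional programs the paper solves for the lower and upper bounds.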
Biometric Keystroke Signal Preprocessing Part II: Manipulation

Biometric keystroke authentication methods extract key-press times to validate users, relying on the uniqueness of password-entering style. When the proposed algorithms have no sub-system to check the password itself, the keystroke signal should include the key codes for better discrimination. Conversely, if the key codes are already validated, the signal can be irreversibly manipulated to form a new and unique signal. In general, the key-press and inter-key times are used directly as an array after extraction, without any processing. Therefore, in this paper we propose several techniques for preprocessing the keystroke signal. The main methods we deal with are binarization, over-quantization and spectrogram conversion. As a result of these conversions, the new signals exhibit the same properties and tendencies as the original signal while revealing its hidden features.

Orcan Alpar, Ondrej Krejcar
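As a toy illustration of the binarization step mentioned above (the threshold and the millisecond units are invented for this example; the paper's actual conversion rules may differ):

```python
def binarize(interkey_times_ms, threshold_ms=150):
    """Binarize an inter-key-time sequence: times strictly above the
    threshold map to 1 (slow transitions), the rest to 0 (fast ones).
    The threshold value is purely illustrative."""
    return [1 if t > threshold_ms else 0 for t in interkey_times_ms]
```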
Computational Intelligence Techniques for Modelling the Critical Flashover Voltage of Insulators: From Accuracy to Comprehensibility

This paper addresses the problem of flashover voltage on polluted insulators, which are among the most important components of electric power systems. A number of appropriately selected computational intelligence techniques are developed and applied to model the problem. Some of the applied techniques work as black-box models but achieve highly accurate results (artificial neural networks and gravitational search algorithms). Other techniques, on the contrary, obtain somewhat less accurate but highly comprehensible results (genetic programming and inductive decision trees). However, all the applied techniques outperform standard data analysis approaches, such as regression models. The variables used in the analyses are the insulator’s maximum diameter, height, creepage distance, manufacturing constant, and pollution. In this work the critical flashover voltage on a polluted insulator is expressed as a function of these variables. The database used consists of 168 different cases of polluted insulators, created from both actual and simulated values. Results are encouraging, with room for further study aiming towards the development of models for the proper inspection and maintenance of insulators.

Evangelos Karampotsis, Konstantinos Boulas, Alexandros Tzanetos, Vasilios P. Androvitsaneas, Ioannis F. Gonos, Georgios Dounias, Ioannis A. Stathopulos

Recommender Systems

Frontmatter
Replication and Reproduction in Recommender Systems Research - Evidence from a Case-Study with the rrecsys Library

Recommender systems (RS) are a real-world application domain of Artificial Intelligence, standing at the core of massively used e-commerce and social-media platforms such as Amazon, Netflix, Spotify and many more. The research field of recommender systems now has a tradition of more than 20 years, and issues such as replication of results and reproducibility of algorithms are becoming more important. This work is therefore oriented towards better understanding the challenges underlying the reproducibility of offline measurements of recommendation techniques. We introduce rrecsys, an open-source package in R that implements many popular RS algorithms, offers expansion capabilities, and has an integrated offline evaluation mechanism following an accepted methodology. In addition, we present a case study on the usability of the library, along with results of benchmarking the provided algorithms against other open-source implementations.

Ludovik Çoba, Markus Zanker
A New User-Based Collaborative Filtering Under the Belief Function Theory

Collaborative filtering (CF) is the most widely used approach in the field of Recommender Systems (RSs). It predicts a user’s preferences based on users sharing similar interests. However, one of its limitations is that it ignores the uncertainty involved in the provided predictions. To deal with this issue, we propose in this paper a new user-based collaborative filtering approach within the belief function theory. In our approach, the evidence of each similar user is taken into account, and Dempster’s rule of combination is used to combine these pieces of evidence. A comparative evaluation on a real-world data set shows that the proposed method outperforms traditional user-based collaborative filtering recommenders.

Raoua Abdelkhalek, Imen Boukhris, Zied Elouedi
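The evidence-fusion step described above relies on Dempster's rule of combination; a minimal sketch follows (illustrative only, not the authors' implementation; the frame of discernment and mass values below are invented for the example):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.
    Each mass function is a dict mapping frozenset (focal element)
    to mass; the masses of each input should sum to 1.
    Conflicting mass (empty intersections) is normalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

For instance, two similar users' opinions about an item over the frame {like, dislike} can be fused by calling `dempster_combine` on their mass functions; the result is again a normalized mass function.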
Aggregating Top-K Lists in Group Recommendation Using Borda Rule

With the democratization of the web, recent work on making recommendations for groups of people has considered circumstances in which an item is selected to be consumed collectively. This paper proposes a group recommender system that supports partial rankings of items from different users in the form of top-k lists. The proposed system first generates recommendation lists for the group members using user-based collaborative filtering, then applies an approximate Borda rule to generate group recommendations. Experiments show that the proposed group recommender system using approximate voting rules produces more accurate and interesting recommendations than standard voting rules.

Sabrine Ben Abdrabbah, Manel Ayadi, Raouia Ayachi, Nahla Ben Amor
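A minimal sketch of a Borda-style aggregation of top-k lists, as a rough illustration of the voting step described above (the zero score for items absent from a member's list is one common approximation for partial rankings, not necessarily the paper's):

```python
from collections import defaultdict

def borda_aggregate(topk_lists, k):
    """Aggregate per-member top-k ranking lists with a Borda rule:
    the item at rank r (0-based) in a list earns k - r points;
    items absent from a member's list earn 0. Returns the group's
    top-k items, ties broken alphabetically."""
    scores = defaultdict(int)
    for ranking in topk_lists:
        for rank, item in enumerate(ranking):
            scores[item] += k - rank
    return sorted(scores, key=lambda i: (-scores[i], i))[:k]
```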
An Analysis of Group Recommendation Heuristics for High- and Low-Involvement Items

Group recommender systems are based on aggregation heuristics that help to determine a recommendation for a group. These heuristics aggregate the preferences of individual users in order to reflect the preferences of the whole group. Several different aggregation heuristics (e.g., most pleasure, least misery, and average voting) are applied in group recommendation scenarios. However, to some extent it is still unclear which heuristics should be applied in which context. In this paper, we analyze the impact of the item domain (low involvement vs. high involvement) on the appropriateness of aggregation heuristics; we use restaurants as an example of low-involvement items and shared apartments as an example of high-involvement ones. The results of our study show that aggregation heuristics in group recommendation should be tailored to the underlying item domain.

Alexander Felfernig, Muesluem Atas, Thi Ngoc Trang Tran, Martin Stettinger, Seda Polat Erdeniz, Gerhard Leitner
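The three heuristics named above can be sketched generically as follows (an illustration of the standard definitions, not the study's implementation):

```python
def aggregate(ratings_by_item, strategy="average"):
    """Score each candidate item for the group from per-member ratings.
    ratings_by_item: dict mapping item -> list of individual ratings."""
    funcs = {
        "average": lambda rs: sum(rs) / len(rs),  # mean satisfaction
        "least_misery": min,    # group score = unhappiest member
        "most_pleasure": max,   # group score = happiest member
    }
    f = funcs[strategy]
    return {item: f(rs) for item, rs in ratings_by_item.items()}
```

For example, an item rated [5, 3, 1] scores 3.0 under average voting, 1 under least misery, and 5 under most pleasure, which is exactly why the choice of heuristic matters for high-involvement items.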
SemCoTrip: A Variety-Seeking Model for Recommending Travel Activities in a Composite Trip

Selecting appropriate activities, especially on multi-destination trips, is a hard task that many travellers face each time they plan a trip. Given budget and time limitations, travellers try to select activities that best fit their personal interests. Most existing travel recommender systems do not focus on the activities a traveller might be interested in. In this paper, we go beyond the specific problem of combining regions in a composite trip and propose a variety-seeking model capable of providing travellers with recommendations on which activities they can engage in when visiting different regions. A semantic hierarchical clustering-based model is proposed to guarantee diversity within the set of recommended activities. Experimental results on a real dataset show that the proposed approach helps the traveller avoid doing the same or similar activities in a composite trip, thus promoting less popular activities.

Montassar Ben Messaoud, Ilyes Jenhani, Eya Garci, Toon De Pessemier

Decision Support Systems

Frontmatter
A New Dynamic Model for Anticipatory Adaptive Control of Airline Seat Reservation via Order Statistics of Cumulative Customer Demand

This paper deals with dynamic anticipatory adaptive control of airline seat reservation for the stochastic customer demand that occurs over time T before the flight is scheduled to depart. It is assumed that time T is divided into m periods, namely a full fare period and m−1 discounted fare periods. The fare structure is given. An airplane has a seat capacity of U. For the sake of simplicity, but without loss of generality, we consider (for illustration) the case of nonstop flight with two fare classes (business and economy). The proposed policies of the airline seat inventory control are based on the use of order statistics of cumulative customer demand, which have such properties as bivariate dependence and conditional predictability. Dynamic adaptation of the airline seat reservation system to airline customer demand is carried out via the bivariate dependence of order statistics of cumulative customer demand. Dynamic anticipatory adaptive optimization of the airline seat allocation includes total dynamic anticipatory adaptive non-nested optimization of booking limits and local dynamic anticipatory adaptive nested optimization of protection levels over time T. It is carried out via the conditional predictability of order statistics. The airline seat reservation system makes on-line decisions as to whether to accept or reject any customer request using established decision rules based on order statistics of the current cumulative customer demand. The computer simulation results are promising.

Nicholas Nechval, Gundars Berzins, Vadims Danovics
A Multi-Criteria Decision Support Framework for Interactive Adaptive Systems Evaluation

Many usability evaluation methods for interactive adaptive systems exist in the literature. There is not yet an agreement in the adaptive system community about which method is more useful than another in specific evaluation situations. This raises the question, “What is (are) the best evaluation method(s) that need(s) to be used in specific evaluation constraints?” This paper presents possible directions to address this issue by proposing a multi-criteria decision support framework for selecting the appropriate evaluation methods for interactive adaptive systems. The proposed decision support framework is applied to determine the suitable usability evaluation methods for a specific adaptation layer of a given adaptive hypermedia system.

Amira Dhouib, Abdelwaheb Trabelsi, Christophe Kolski, Mahmoud Neji
Application of Multi-Criteria Decision Making Method for Developing a Control Plan

Control plan optimization is an important issue for manufacturing companies aiming to produce high-quality products at lower cost. This paper presents a Multi-Criteria Decision Making framework for establishing an efficient control plan. The proposed approach models the problem of selecting the best control scenario based on the decision maker’s preferences, and takes into account conflicting criteria such as reducing the Risk Priority Number and minimizing control cost and time. In the first stage, the Analytic Hierarchy Process (AHP) is used to provide priority ratings for the available control alternatives. In the second stage, the Choquet integral operator is employed to aggregate the partial scores obtained for the different alternatives according to each criterion, in order to deal with the interactions between criteria. An industrial case study from a manufacturing enterprise illustrates the application of the suggested approach.

Fadwa Oukhay, Hajer Ben Mahmoud, Taieb Ben Romdhane
Efficient Matching in Heterogeneous Rule Engines

Modern institutions seeking more complex software solutions for representing knowledge in the Cloud use rule-based systems that serve several applications or clients. Rule-based systems hosted in the Cloud are thus required to support the Cloud’s heterogeneous nature. However, current systems focus only on techniques that isolate instances of rule engines. This paper builds upon earlier work on scoped rule engines that provide mechanisms for supporting shared heterogeneous contexts. We present the scope-based hashing algorithm (SBH), which enables efficient matching in scoped rule engines based on the Rete algorithm. SBH introduces scoped hash tables in alpha memories that help avoid unnecessary join tests that hamper performance. Our experimental results show that SBH offers significant efficiency improvements during the matching process of a heterogeneous rule engine. Consequently, SBH significantly decreases the response time of rule engines in heterogeneous environments with entities sharing the same knowledge base.

Kennedy Kambona, Thierry Renaux, Wolfgang De Meuter
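The intuition behind scope-keyed hash tables in alpha memories can be illustrated with a toy class (this is not the paper's SBH implementation; real Rete alpha memories are also indexed by the attributes under test):

```python
from collections import defaultdict

class ScopedAlphaMemory:
    """Toy alpha memory keyed by scope: facts asserted under one scope
    are only visible to join tests performed in that scope, so joins
    against facts from unrelated scopes are skipped outright instead
    of being attempted and failing."""

    def __init__(self):
        self._tables = defaultdict(list)   # scope -> list of facts

    def insert(self, scope, fact):
        self._tables[scope].append(fact)

    def candidates(self, scope):
        # Only facts from the matching scope reach the join tests.
        return list(self._tables.get(scope, []))
```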
Towards Extending Business Process Modeling Formalisms with Information and Knowledge Dimensions

Sensitive Business Process (SBP) modeling has become an effective way of managing and developing an organization’s knowledge, which needs to be capitalized. These processes are characterized by high complexity and dynamism in their execution, a high number of critical activities with intensive acquisition, sharing, storage and (re)use of very specific crucial knowledge, diversity of knowledge sources, and a high degree of collaboration among experts. Hence, we propose a semantically rich conceptualization for describing an SBP, organized in a generic Business Process Meta-model for Knowledge Identification (BPM4KI), in order to develop a rich and expressive representation of SBPs for identifying and localizing crucial knowledge. BPM4KI covers all aspects of business process modeling: the functional, organizational, behavioral, informational, intentional and knowledge perspectives. This paper aims to introduce a more explicit border between the information and knowledge concepts and dimensions that are relevant in SBP models, based on «core» domain ontologies.

Mariam Ben Hassen, Mohamed Turki, Faïez Gargouri
Adaptive Planning in-Service Inspections of Fatigued Structures in Damage Tolerance Situations via Observations of Crack Growth Process

From an engineering standpoint, the fatigue life of a structure consists of two periods: (i) the crack initiation period, which starts with the first load cycle and ends when a technically detectable crack is present, and (ii) the crack propagation period, which starts with a technically detectable crack and ends when the remaining cross section can no longer withstand the applied loads and fails statically. The main aim of this paper is to present a more accurate, innovative stochastic fatigue model for adaptively planning in-service inspections of fatigued structures in damage tolerance situations via observations of the crack growth process during the crack propagation period. A new crack growth equation is derived from this model. It is attractively simple and easy to apply in practice for effective in-service inspection planning, with decreasing intervals between sequential inspections as an alternative to the constant intervals often used in practice for operational convenience. During the crack propagation period (when the damage tolerance approach is used), the proposed crack growth equation allows one to construct a more accurate and effective reliability-based inspection strategy. For illustration, a numerical example is given.

Nicholas Nechval, Gundars Berzins, Vadims Danovics
Introducing Causality in Business Rule-Based Decisions

Decision automation is expanding as many corporations capture and operate their business policies through business rules. Because laws and corporate regulations require transparency, decision automation must also provide some explanation capabilities. Most rule engines provide information about the rules that were executed, but rarely explain why those rules executed without degrading performance. A need exists for a human-readable decision trace that explains why decisions are made. This paper proposes a first approach to introducing causality to describe the existing (and sometimes hidden) relations in a decision trace of a Business Rule-Based System (BRBS). This involves a static analysis of the business rules and the construction of causal models.

Karim El Mernissi, Pierre Feillet, Nicolas Maudet, Wassila Ouerdane
Model-Based Diagnosis in Practice: Interaction Design of an Integrated Diagnosis Application for Industrial Wind Turbines

Model-based diagnosis derives explanations for discrepancies between the expected and observed system behavior by relying on a formal representation of the artifact under consideration. Although its theoretical background has been established decades ago and various research prototypes have been implemented, industrial applications are sparse. This paper emphasizes the role of essential technology acceptance factors, i.e., usefulness and usability, within the context of model-based diagnosis. In particular, we develop a concept and interface design for an abductive model-based diagnosis application integrated into existing condition monitoring software for industrial wind turbines. This fault identification tool should enhance the performance of the maintenance personnel while respecting their current work processes, taking into account their particular needs, and being easy to use under the given work conditions. By employing an iterative design process, continuous feedback in regard to the users’ work goals, tasks, and patterns can be included, while also considering other stakeholders’ requirements. The result is a workflow and interface design proposal to be implemented in the final software product.

Roxane Koitz, Johannes Lüftenegger, Franz Wotawa
A New Model to Implement a SWOT Fuzzy ANP

SWOT (i.e. Strengths, Weaknesses, Opportunities, Threats) analysis is considered an important tool for conducting a strategic planning process by providing an internal and external context analysis. However, these analyses are based on the brainstorming of decision makers, which may lead to subjectivity problems. Therefore, this paper proposes a new SWOT analysis model based on the fuzzy analytic network process (ANP) that deals with this subjectivity shortcoming.

Mounira Souli, Ahmed Badreddine, Taieb Ben Romdhane

Knowledge Representation and Reasoning

Frontmatter
Argumentative Approaches to Reasoning with Consistent Subsets of Premises

It has been shown that entailments based on the maximally consistent subsets (MCS) of a given set of premises can be captured by Dung-style semantics for argumentation frameworks. This paper shows that these links are much tighter and go well beyond simplified forms of reasoning with MCS. Among other things, we consider different types of entailments that these kinds of reasoning induce, extend the framework to arbitrary (not necessarily maximal) consistent subsets, and incorporate non-classical logics. The introduction of declarative methods for reasoning with MCS by means of (sequent-based) argumentation frameworks provides, in particular, a better understanding of logic-based argumentation and allows one to reevaluate some negative results concerning the latter.

Ofer Arieli, AnneMarie Borg, Christian Straßer
Volunteered Geographic Information Management Supported by Fuzzy Ontologies and Level-Based Approximate Reasoning

The paper proposes level-based approximate reasoning on a fuzzy ontology as a modeling framework to support the creation and retrieval of Volunteered Geographic Information (VGI) affected by observation deficiencies causing both uncertainty and fuzziness. The paper recalls the inadequacy of classic ontologies for creating VGI and the limitations of using fuzzy ontologies to model both fuzziness and uncertainty, and proposes level-based reasoning to answer user queries on a VGI collection supported by a fuzzy ontology.

Gloria Bordogna, Simone Sterlacchini
Regular and Sufficient Bounds of Finite Domain Constraints for Skeptical C-Inference

Skeptical c-inference based on a set of conditionals of the form If A then usually B is defined by taking the set of c-representations into account. C-representations are ranking functions induced by impact vectors encoding the conditional impact on each possible world. By setting a bound for the maximal impact value, c-inference can be approximated. We investigate the concepts of regular and sufficient upper bounds for conditional impacts and how they can be employed for implementing c-inference as a finite domain constraint solving problem.

Christoph Beierle, Steven Kutsch
On Transformations and Normal Forms of Conditional Knowledge Bases

Background knowledge is often represented by sets of conditionals of the form “if A then usually B”. Such knowledge bases should not be circuitous, but compact and easy to compare, in order to allow for efficient processing in approaches dealing with and inferring from background knowledge, such as nonmonotonic reasoning. In this paper we present transformation systems on conditional knowledge bases that allow one to identify and remove unnecessary conditionals from a knowledge base while preserving its model set.

Christoph Beierle, Christian Eichhorn, Gabriele Kern-Isberner
ADnOTO: A Self-adaptive System for Automatic Ontology-Based Annotation of Unstructured Documents

In this paper we describe ADnOTO, a self-adaptive system for automatic ontology-based document annotation. The main goal of ADnOTO is the automatization of the document annotation process, particularly in the context of ontology-based digital libraries.

Laura Pandolfo, Luca Pulina
Ontologies in System Engineering: A Field Report

In this paper, we consider four different contributions to system engineering wherein ontologies provide enhancements over traditional techniques.

Marco Menapace, Armando Tacchella
An Argumentative Agent-Based Model of Scientific Inquiry

In this paper we present an agent-based model (ABM) of scientific inquiry aimed at investigating how different social networks impact the efficiency of scientists in acquiring knowledge. As such, the ABM is a computational tool for tackling issues in the domain of scientific methodology and science policy. In contrast to existing ABMs of science, our model aims to represent the argumentative dynamics that underlies scientific practice. To this end we employ abstract argumentation theory as the core design feature of the model.

AnneMarie Borg, Daniel Frey, Dunja Šešelja, Christian Straßer

Navigation, Control and Autonomous Agents

Frontmatter
Development of a Novel Driver Model Offering Human like Longitudinal Vehicle Control in Order to Simulate Emission in Real Driving Conditions

Toyota would like to simulate emissions in real-world conditions and support future engine development, newly regulated by Real Driving Emissions from 2017. A realistic driver model is necessary to simulate representative vehicle emissions. This paper presents a new driver model trained on real-world data including GPS localization and recorded engine ECU parameters. From a geolocalization web service, the proposed approach extracts the road attributes that influence human driving behaviour, such as traffic signs, road crossings, etc. The novel BiMap algorithm is then used to learn and map the driver behaviour with respect to the road properties, while a regression tree algorithm is used to learn a realistic gear selection model. Experimental tests, executed within the CarMaker™ vehicle simulation platform, show that the resulting model can drive along arbitrary real-world routes generated using a map service. Moreover, it exhibits human-like driving behaviour while being robust to different car setups. Finally, the realism of the proposed driver behaviour is supported by both a high similarity in engine operating point usage and a deviation of less than 1.5% in CO2 emissions versus measured data.

Aymeric Rateau, Wim van der Borght, Marcello Mastroleo, Alessandro Pietro Bardelli, Alessandro Bacchini, Federico Sassi
Consistency Check in a Multiple Viewpoint System for Reasoning About Occlusion

This paper presents the implementation of a qualitative spatial reasoning formalism called Interval Occlusion Calculus, based on Allen’s algebra, that considers multiple viewpoints in a scene and the interpretation of the observations made from each viewpoint from the perspective of other agents. Furthermore, we present a mechanism to check the consistency of the information provided by the agents using a constraint satisfaction process. This formalism was tested in a 3D domain with real and simulated robot viewpoints.

Ana Paula Martin, Paulo E. Santos, Marjan Safi-Samghabadi
An Advanced Teleassistance System to Improve Life Quality in the Elderly

Over the last decades the population of developed countries has been growing older, while life expectancy increases thanks to medical advances. Despite such progress, supporting older adults so they can continue living independently and retain their current lifestyle is becoming a social problem. Through the careful placement of technological support, elders can continue living in their own homes longer, thus maintaining and enhancing their quality of life. In this paper we present an AI-based system that integrates (i) a Wireless Sensor Network for receiving information about the environment and the dependent person, (ii) an autonomous robot able to make decisions based on the received information, and (iii) a Web-based system to provide telecare assistance.

Fernando Ropero, Daniel Vaquerizo, Pablo Muñoz, María D. R-Moreno
Learning the Elasticity of a Series-Elastic Actuator for Accurate Torque Control

Series elastic actuators (SEAs) have frequently been used in torque control mode, with the elastic element serving as the torque-measuring device. In order to control torque precisely, an ideal torque source is critical for higher-level control strategies. The elastic elements are traditionally metal springs, normally treated as linear elements in the control scheme. However, many elastic elements are not perfectly linear, especially elements built out of multiple springs or special materials, and thus their nonlinearities are very noticeable. This paper presents two data-driven methods for learning the spring model of a series-elastic actuator: (1) a Dynamic Gaussian Mixture Model (DGMM) is used to capture the relationship between actuator torque, velocity, spring deflection and its history; once the DGMM is trained, the spring deflection can be estimated using the conditional probability function, which is later used for torque control. For comparison, (2) a deep-learning approach is also evaluated, which uses the same variables as training data for learning the spring model. Results show that the data-driven methods improve the accuracy of the torque control compared to traditional linear models.

Bingbin Yu, José de Gea Fernández, Yohannes Kassahun, Vinzenz Bargsten
The Effect of Rotation in the Navigation of Multi-level Buildings: A Pilot Study

The aim of the present paper is to investigate users’ perception of building layouts, with particular emphasis on the navigation of multi-level buildings. To date, research seems to pay more attention to wayfinding in two-dimensional environments, investigating it in public buildings such as hospitals, airports or university departments, where disorientation is more commonly experienced. The present work deepens this issue and focuses on the effect of rotation, due to staircases, on people’s cognitive maps. The study consists of a pilot work based on two cases: one qualitative, conducted at the University of Bremen, and one quantitative, conducted at the Technical University of Bari. The main results suggest that staircases affect people’s perception of layout during the navigation of multi-level buildings.

Giulia Mastrodonato, Domenico Camarda, Caterina De Lucia, Dino Borri
NAO Robot, Transmitter of Social Cues: What Impacts?
The Example with “Endowment effect”

Assuming that social norms are engaged automatically in all human-human interactions, how can a robot be programmed so that humans respect social norms when interacting with it? We argue that the endowment effect, a bias in decision making, could be produced by a “politeness effect” within the exchange paradigm of Knetsch (1989). To test this hypothesis, NAO, a humanoid robot, took the place of the human experimenter and was programmed to behave in a neutral way, suppressing all non-verbal social cues. In this condition, politeness rules were respected by only a minority of participants, in contrast with the same methodology led by a human. Following this experiment, NAO was programmed to re-activate social norms using several non-verbal social cues: face tracking, voice intonation and gestures. First results tend to show the impact of non-verbal social cues, again producing an endowment effect.

Olivier Masson, Jean Baratgin, Frank Jamet
Arduino as a Control Unit for the System of Laser Diodes

The article describes the possibilities of the Arduino platform when employed as the control unit of a laser diode system. Given the prices of available control units for laser diodes, it is necessary to search for new and cheaper solutions. There is also a requirement to enable control of such a setup from a personal computer. The proposed setup is based on the Arduino platform in combination with laser diodes. The aim of this project is to create a device with many possible applications in the lighting field.

Jiri Bradle, Jakub Mesicek, Ondrej Krejcar, Ali Selamat, Kamil Kuca

Sentiment Analysis and Social Media

Frontmatter
Timeline Summarization for Event-Related Discussions on a Chinese Social Media Platform

In this paper, we propose an approach to automatically generate timeline summarizations of sub-event discussions related to a query event, without supervised learning. In order to select event-related sentences, we designed a two-stage method to extract representative entity terms from the event-related discussions and filter out most of the sentences semantically unrelated to the query event. A rule-based method was applied to extract sentences that describe sub-events. After that, the discussions are assigned to the corresponding sub-events according to a semantic relatedness measure. Finally, the timeline summarization is organized according to the occurrence time of each sub-event. We evaluated the performance of the proposed method on real-world datasets. The experimental results show that each processing step performs effectively; in particular, most noise sentences are filtered out by the proposed method. Moreover, the final timeline summarization, as graded by users, proved useful for understanding the discussion trend of a sub-event.

Han Wang, Jia-Ling Koh
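The final organization step (grouping discussions under their sub-events and ordering sub-events by occurrence time) can be sketched roughly as follows; the function name and tuple layout are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def build_timeline(assignments):
    """assignments: list of (sub_event, occurrence_time, sentence) triples.
    Group sentences by sub-event, keep each sub-event's earliest time,
    and return the sub-events in chronological order."""
    groups = defaultdict(list)
    times = {}
    for event, t, sentence in assignments:
        groups[event].append(sentence)
        times[event] = min(t, times.get(event, t))
    return [(times[e], e, groups[e]) for e in sorted(groups, key=times.get)]
```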
Evidential Link Prediction in Uncertain Social Networks Based on Node Attributes

The design of an efficient link prediction method is still an open issue that has been addressed mostly through topological properties in recent years. Yet other relevant information, such as node attributes, may inform the link prediction task and enhance performance. This paper presents a novel framework for link prediction that combines node attributes and structural properties. Furthermore, the proposed method handles the uncertainty that characterizes noisy and missing social network data by embracing the general framework of belief function theory. An experimental evaluation on real-world social network data shows that attribute information further improves the prediction results.

Sabrine Mallek, Imen Boukhris, Zied Elouedi, Eric Lefevre
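In belief function theory, evidence about a candidate link can be expressed as a mass function over the frame {link, no link}, and pieces of evidence (e.g. from attributes and from structure) are fused with Dempster's rule. A minimal sketch of that combination, using a simple string encoding of the focal elements ('L', 'noL', and 'LnoL' for the whole frame), is shown below; it is an illustration of the underlying theory, not the authors' code.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'L', 'noL'} with
    Dempster's rule; 'LnoL' denotes the whole frame (total ignorance)."""
    frame = {'L': {'L'}, 'noL': {'noL'}, 'LnoL': {'L', 'noL'}}
    combined = {k: 0.0 for k in frame}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = frame[a] & frame[b]
            if not inter:
                conflict += wa * wb  # contradictory evidence
            else:
                key = 'LnoL' if inter == {'L', 'noL'} else inter.pop()
                combined[key] += wa * wb
    # normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```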
Arabic Tweets Sentimental Analysis Using Machine Learning

The continuous rapid growth of electronic Arabic content in social media channels, and in Twitter particularly, poses an opportunity for opinion mining research. Nevertheless, it is hindered by the lack of sentiment analysis resources and by the challenges of Arabic text analysis. This study introduces an Arabic Jordanian Twitter corpus in which tweets are annotated as either positive or negative. It investigates different supervised machine learning sentiment analysis approaches applied to Arabic users' social media posts on general subjects, written in either Modern Standard Arabic (MSA) or Jordanian dialect. Experiments are conducted to evaluate the use of different weighting schemes, stemming and N-gram techniques and scenarios. The experimental results identify the best scenario for each classifier and indicate that an SVM classifier using the term frequency-inverse document frequency (TF-IDF) weighting scheme with stemming and bigram features outperforms the best-scenario performance of the Naïve Bayesian classifier. Furthermore, this study's results outperform those reported in comparable related work.

Khaled Mohammad Alomari, Hatem M. ElSherif, Khaled Shaalan
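The TF-IDF weighting scheme used in the best-performing scenario can be sketched in a few lines of plain Python. This uses raw term frequency times log(N/df), a common textbook variant; the paper may use a different normalization.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for tokenized documents (lists of terms).
    TF is the raw count in the document; IDF is log(N / df),
    where df counts the documents containing the term."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {term: count * math.log(n / df[term]) for term, count in Counter(doc).items()}
        for doc in docs
    ]
```

A term appearing in every document gets weight zero, which is why stop words contribute nothing under this scheme.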
Getting Frustrated: Modelling Emotional Contagion in Stranded Passengers

Train passengers can get stranded due to a variety of events, such as a delay, a technical malfunction or a natural disaster. Stranded passengers can get frustrated, which can escalate into misbehaviour: verbal and physical violence, or dangerous acts such as opening emergency exits and walking in unauthorized areas. In this work, an agent-based model of stranded passengers was created to analyse and predict the dynamics of frustration and misbehaviour. It was determined how age, gender, emotional contagion, social identity and traveller type influence the frustration dynamics and the number of misbehaviours. Important findings are that emotional contagion, age and gender can have an amplifying effect on frustration and misbehaviours, while traveller type seemed to have no influence. This model can be used by transport operators in preparing for stranded-passenger scenarios.

C. Natalie van der Wal, Maik Couwenberg, Tibor Bosse
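A minimal emotional contagion update in the spirit of the model described above can look as follows. The specific rule (each agent's frustration moves a fraction alpha toward the weighted mean of its neighbours' levels) is an illustrative assumption, not the authors' equations.

```python
def contagion_step(frustration, weights, alpha=0.3):
    """One synchronous contagion step.
    frustration: list of levels in [0, 1], one per agent.
    weights: weights[i][j] = influence of agent j on agent i.
    alpha: susceptibility, how strongly an agent absorbs its neighbours' state."""
    new = []
    for i, f in enumerate(frustration):
        total_w = sum(weights[i])
        if total_w == 0:  # isolated agent: no contagion
            new.append(f)
            continue
        neighbour_mean = sum(w * fj for w, fj in zip(weights[i], frustration)) / total_w
        new.append((1 - alpha) * f + alpha * neighbour_mean)
    return new
```

Iterating this step over a crowd graph lets one observe how a few highly frustrated agents pull the group average upward.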
An Agent-Based Evacuation Model with Social Contagion Mechanisms and Cultural Factors

A fire incident at a transport hub can cost many lives. To save lives, effective crisis management and prevention measures need to be taken. In this project, the effect of cultural factors in managing and preventing emergencies in public transport systems is analysed. An agent-based model of an evacuating crowd was created. The socio-cultural factors modelled are familiarity with the environment, response time, and social contagion of fear and of beliefs about the situation. Simulation results show that (1) familiarity and social contagion decrease evacuation time while increasing the number of falls; (2) crowd density and social contagion increase evacuation time and falls. All three factors show different effects on the response time. This model will be used by transport operators to estimate the effect of these socio-cultural factors and prepare for emergencies.

C. Natalie van der Wal, Daniel Formolo, Tibor Bosse
A Consensus Approach to Sentiment Analysis

There are many situations where the opinion of the majority of participants is critical. The scenarios are manifold: a number of doctors seeking common ground on the diagnosis of an illness, or parliament members looking for consensus on a specific law being passed. In this article we present a method that utilises Induced Ordered Weighted Averaging (IOWA) operators to aggregate a majority opinion from a number of Sentiment Analysis (SA) classification systems, where the latter occupy the role usually taken by human decision-makers. Sentence intensity polarities previously determined by different SA classification methods are used as input to a specific IOWA operator. During the experimental phase, the use of the IOWA operator coupled with the linguistic quantifier ‘most’ ($$\text {IOWA}_{most}$$) proved to yield superior results compared to those achieved with other techniques commonly applied when some sort of averaging is needed, such as the arithmetic mean or the median.

Orestes Appel, Francisco Chiclana, Jenny Carter, Hamido Fujita
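The aggregation step can be sketched as follows: OWA weights are derived from the regular increasing monotone quantifier 'most', and the polarity scores are reordered by an order-inducing variable before the weighted sum. The (0.3, 0.8) quantifier parameters are a common textbook choice and an assumption here, not necessarily the paper's setting.

```python
def q_most(r, a=0.3, b=0.8):
    """Zadeh's 'most' RIM quantifier: 0 below a, 1 above b, linear between."""
    if r <= a:
        return 0.0
    if r >= b:
        return 1.0
    return (r - a) / (b - a)

def iowa_most(values, order_inducing):
    """IOWA aggregation: reorder values by the inducing variable (descending),
    then apply OWA weights w_i = Q(i/n) - Q((i-1)/n) from the 'most' quantifier."""
    n = len(values)
    reordered = [v for _, v in sorted(zip(order_inducing, values), reverse=True)]
    weights = [q_most((i + 1) / n) - q_most(i / n) for i in range(n)]
    return sum(w * v for w, v in zip(weights, reordered))
```

With four classifiers the weights come out as (0, 0.4, 0.5, 0.1), so the highest-ranked score is ignored and the middle scores dominate, which is what gives the operator its majority-seeking behaviour.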
Way of Coordination of Visual Modeling and Mental Imagery in Conceptual Solution of Project Task

In designing software-intensive systems (SISs), one of the basic reasons for the extremely low success rate is the gap between natural and artificial forms of human interaction with a computerized environment. Controlled coordination of mental imagery and visual modeling can help increase the success rate by letting designers interact more effectively with natural experience and its models, in both predictable and unpredictable situations. To achieve these effects, we have developed and tested a way of operatively creating the necessary interfaces during the conceptual solution of project tasks.

P. Sosnin, M. Galochkin
Backmatter
Metadata
Title
Advances in Artificial Intelligence: From Theory to Practice
edited by
Salem Benferhat
Karim Tabia
Moonis Ali
Copyright Year
2017
Electronic ISBN
978-3-319-60042-0
Print ISBN
978-3-319-60041-3
DOI
https://doi.org/10.1007/978-3-319-60042-0