2019 | Book

Artificial Intelligence Applications and Innovations

15th IFIP WG 12.5 International Conference, AIAI 2019, Hersonissos, Crete, Greece, May 24–26, 2019, Proceedings

About this book

This book constitutes the refereed proceedings of the 15th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2019, held in Hersonissos, Crete, Greece, in May 2019.

The 49 full papers and 6 short papers presented were carefully reviewed and selected from 101 submissions. They cover a broad range of topics such as deep learning ANN; genetic algorithms - optimization; constraints modeling; ANN training algorithms; social media intelligent modeling; text mining/machine translation; fuzzy modeling; biomedical and bioinformatics algorithms and systems; feature selection; emotion recognition; hybrid intelligent models; classification - pattern recognition; intelligent security modeling; complex stochastic games; unsupervised machine learning; ANN in industry; intelligent clustering; convolutional and recurrent ANN; recommender systems; intelligent telecommunications modeling; and intelligent hybrid systems using Internet of Things. The papers are organized in the following topical sections: AI anomaly detection - active learning; autonomous vehicles - aerial vehicles; biomedical AI; classification - clustering; constraint programming - brain inspired modeling; deep learning - convolutional ANN; fuzzy modeling; learning automata - logic based reasoning; machine learning - natural language; multi agent - IoT; nature inspired flight and robot control - machine vision; and recommendation systems.

Table of Contents

Frontmatter

Invited Paper

Frontmatter
The Power of the “Pursuit” Learning Paradigm in the Partitioning of Data

Traditional Learning Automata (LA) work with the understanding that the actions are chosen purely based on the “state” in which the machine is. This modus operandi completely ignores any estimation of the Random Environment’s (RE’s) (specified as $$\mathbb{E}$$) reward/penalty probabilities. To take these into consideration, Estimator/Pursuit LA utilize “cheap” estimates of the Environment’s reward probabilities to make them converge an order of magnitude faster. The concept is quite simply the following: inexpensive estimates of the reward probabilities can be used to rank the actions. Thereafter, when the action probability vector has to be updated, it is done not on the basis of the Environment’s response alone, but also based on the ranking of these estimates. While this phenomenon has been utilized in the field of LA, until recently, it had not been incorporated into solutions that solve partitioning problems. In this paper, we submit a complete survey of how the “Pursuit” learning paradigm can be and has been used in Object Partitioning. The results demonstrate that incorporating this paradigm can hasten the partitioning by an order of magnitude.

Abdolreza Shirvani, B. John Oommen
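
To make the pursuit idea described in the abstract concrete, the following minimal Python sketch shows one possible pursuit-style update of the action probability vector; the variable names, the specific update rule and the learning rate are illustrative assumptions, not the authors' exact scheme.

import numpy as np

def pursuit_step(p, est_rewards, counts, successes, env_response, action, lam=0.05):
    """One illustrative pursuit-LA update (an assumed variant, not the paper's exact scheme).

    p            : current action-probability vector
    est_rewards  : running reward-probability estimates, one per action
    counts, successes : statistics used to maintain the estimates
    env_response : 1 if the Environment rewarded the chosen action, else 0
    action       : index of the action that was just played
    lam          : learning rate (assumed value)
    """
    # Update the "cheap" reward-probability estimate of the chosen action.
    counts[action] += 1
    successes[action] += env_response
    est_rewards[action] = successes[action] / counts[action]

    # Pursue the action whose *estimated* reward probability is currently highest,
    # rather than the action that happened to be rewarded on this step.
    best = int(np.argmax(est_rewards))
    target = np.zeros_like(p)
    target[best] = 1.0
    p = (1 - lam) * p + lam * target   # move the probability vector towards the best action
    return p / p.sum(), est_rewards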

AI Anomaly Detection - Active Learning

Frontmatter
Cyber-Typhon: An Online Multi-task Anomaly Detection Framework

According to Greek mythology, Typhon was a gigantic monster with one hundred dragon heads, bigger than all mountains. His open hands extended from East to West, his head could reach the sky and flames came out of his mouth. His body below the waist consisted of curled snakes. This research effort introduces “Cyber-Typhon” (CYTY), an Online Multi-Task Anomaly Detection Framework. It aims to fully upgrade old passive infrastructure through an intelligent mechanism, using advanced Computational Intelligence (COIN) algorithms. More specifically, it proposes an intelligent Multi-Task Learning framework which combines On-Line Sequential Extreme Learning Machines (OS-ELM) and Restricted Boltzmann Machines (RBMs) in order to control data flows. The final target of this model is the intelligent classification of Critical Infrastructures’ network flows, resulting in the detection of anomalies caused by Advanced Persistent Threat (APT) attacks.

Konstantinos Demertzis, Lazaros Iliadis, Panayiotis Kikiras, Nikos Tziritas
Investigating the Benefits of Exploiting Incremental Learners Under Active Learning Scheme

This paper examines the efficacy of incrementally updateable learners under the Active Learning concept, a well-known iterative semi-supervised scheme in which the initially collected instances, usually a few, are augmented by the combined actions of the chosen base learner and the human factor. Instead of exploiting conventional batch-mode learners and refining them at the end of each iteration, we introduce the use of incremental ones, so as to apply favorable query strategies and detect the most informative instances before they are provided to the human factor for annotation. Our assumption about the benefits of combining these ingredients into a suitable framework is verified by the achieved classification accuracy against the baseline strategy of Random Sampling and the corresponding learning behavior of the batch-mode approaches over numerous benchmark datasets, under the pool-based scenario. The measured time also reveals a faster response of the proposed framework, since each classification model at the core of the Active Learning concept is built incrementally, updating the existing information without ignoring the already processed data. Finally, all the conducted comparisons are presented along with the appropriate statistical testing processes, so as to verify our claim.

Stamatis Karlos, Vasileios G. Kanas, Nikos Fazakis, Christos Aridas, Sotiris Kotsiantis
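
A minimal sketch of the pool-based Active Learning loop with an incrementally updateable learner, assuming scikit-learn (the "log_loss" loss name requires scikit-learn 1.1 or newer) and least-confident uncertainty sampling; the query strategy, learner and batch size are illustrative choices, not the paper's exact configuration.

import numpy as np
from sklearn.linear_model import SGDClassifier

def uncertainty_query(clf, X_pool, batch=10):
    """Pick the pool instances the current model is least confident about."""
    proba = clf.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)     # least-confident sampling
    return np.argsort(-uncertainty)[:batch]

def active_learning_loop(X_seed, y_seed, X_pool, oracle, classes, rounds=20):
    # Incremental learner: refined with partial_fit instead of retraining from scratch.
    clf = SGDClassifier(loss="log_loss")
    clf.partial_fit(X_seed, y_seed, classes=classes)
    for _ in range(rounds):
        idx = uncertainty_query(clf, X_pool)
        y_new = oracle(X_pool[idx])            # the "human factor" annotates the queried items
        clf.partial_fit(X_pool[idx], y_new)    # update without revisiting already processed data
        X_pool = np.delete(X_pool, idx, axis=0)
    return clf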
The Blockchain Random Neural Network in Cybersecurity and the Internet of Things

The Internet of Things (IoT) enables increased connectivity between devices; however, this benefit also intrinsically increases cybersecurity risks, as cyber attackers are provided with expanded network access and additional digital targets. To address this issue, this paper presents a holistic digital and physical cybersecurity user authentication method based on the Blockchain Random Neural Network. The Blockchain Neural Network connects increasing numbers of neurons in a chain configuration, providing an additional layer of resilience against cybersecurity attacks in the IoT. The proposed user access authentication holistically covers the user's digital access through the seven OSI layers and the user's physical identity, such as a passport, before the user is accepted into the IoT network. The user’s identity is kept secret, codified in the neural weights; in case of a cybersecurity breach, the physical identity can be mined and the attacker identified, thereby enabling safe, decentralized confidentiality. The validation results show that the addition of the Blockchain Neural Network provides a user access control algorithm with increased cybersecurity resilience and decentralized user access and connectivity.

Will Serrano

Autonomous Vehicles - Aerial Vehicles

Frontmatter
A Visual Neural Network for Robust Collision Perception in Vehicle Driving Scenarios

This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically vehicle driving scenarios. It takes inspiration from a large-field looming-sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust’s visual pathways, which responds with a high spike frequency to rapidly approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective at extracting colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli, including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate that the proposed method is feasible for fast collision perception in real-world situations, with potential applications in future autonomous vehicles.

Qinbing Fu, Nicola Bellotto, Huatian Wang, F. Claire Rind, Hongxin Wang, Shigang Yue
An LGMD Based Competitive Collision Avoidance Strategy for UAV

Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust’s brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD-based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divide the UAV’s field of view into four subfields, each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete to guide the directional collision avoidance of the UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. an object approaching from the left side triggers a rightward shift of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.

Jiannan Zhao, Xingzao Ma, Qinbing Fu, Cheng Hu, Shigang Yue
Mixture Modules Based Intelligent Control System for Autonomous Driving

As a typical artificial intelligence system, a safe and comfortable control system is essential for self-driving vehicles to reach the same level of driving ability as human drivers. This paper proposes a novel control system for autonomous driving vehicles based on mixture modules, which aims to ensure the accuracy of path tracking while meeting the requirements of safety and ride comfort. The mixture modules consist of a lateral controller that controls the steering wheel angle of the vehicle for path tracking, and a longitudinal controller that adjusts the speed of the vehicle. We conducted a series of experiments on our simulation platform and on real self-driving vehicles to test the proposed control system, and compared it with widely used traditional methods. The experimental results indicate that our control system runs effectively on real vehicles. It can accurately track the intended driving path and adjust the driving speed comfortably and smoothly, which demonstrates a high level of intelligence.

Tangyike Zhang, Songyi Zhang, Yu Chen, Chao Xia, Shitao Chen, Nanning Zheng

Biomedical AI

Frontmatter
An Adaptive Temporal-Causal Network Model for Stress Extinction Using Fluoxetine

In this paper, an adaptive temporal-causal network model of a drug therapy using fluoxetine to decrease the stress level in post-traumatic stress disorder is presented. The stress extinction is driven by a cognitive drug therapy (here fluoxetine) that involves continuous use of the medicine. The aim of this therapy is to reduce the connectivity between some components inside the brain which are responsible for causing stress. This computational model aspires to realistically demonstrate the activation of different portions of the brain when the therapy is applied. The cognitive model starts with a situation of strong and continuous stress in an individual; after using fluoxetine, the stress level begins to decrease over time. As a result, the patient will have a reduced stress level compared to not using the drug.

S. Sahand Mohammadi Ziabari
Clustering Diagnostic Profiles of Patients

Electronic Health Records provide a wealth of information about the care of patients and can be used for checking the conformity of planned care, computing statistics of disease prevalence, or predicting diagnoses based on observed symptoms, for instance. In this paper, we explore and analyze the recorded diagnoses of patients in a hospital database in retrospect, in order to derive profiles of diagnoses in the patient database. We develop a data representation compatible with a clustering approach and present our clustering approach to perform the exploration. We use a k-means clustering model for identifying groups in our binary vector representation of diagnoses and present appropriate model selection techniques to select the number of clusters. Furthermore, we discuss possibilities for interpretation in terms of diagnosis probabilities, in the light of external variables and with the common diagnoses occurring together.

Jaakko Hollmén, Panagiotis Papapetrou
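
A minimal sketch of the described pipeline on a binary patients-by-diagnoses matrix, assuming scikit-learn; the silhouette criterion stands in for the paper's model selection techniques, which are not specified here.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_diagnosis_profiles(D, k_range=range(2, 11)):
    """D: patients x diagnoses binary matrix (1 = diagnosis recorded for the patient)."""
    best_k, best_score, best_model = None, -1.0, None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(D)
        score = silhouette_score(D, km.labels_)   # one possible model-selection criterion
        if score > best_score:
            best_k, best_score, best_model = k, score, km
    # Cluster centroids can be read as per-cluster diagnosis probabilities,
    # since each feature is a 0/1 indicator.
    return best_k, best_model.cluster_centers_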
Emotion Analysis in Hospital Bedside Infotainment Platforms Using Speeded up Robust Features

Far from the heartless aspect of bits and bytes, the field of affective computing investigates the emotional condition of human beings interacting with computers by means of sophisticated algorithms. Systems that integrate this technology in healthcare platforms allow doctors and medical staff to monitor the sentiments of their patients while they are being treated in their private spaces. It is common knowledge that the emotional condition of patients is strongly connected to the healing process and their health. Therefore, being aware of the psychological peaks and troughs of a patient provides the advantage of timely intervention by specialists or closely related kinsfolk. In this context, the developed approach describes an emotion analysis scheme which exploits the fast and consistent properties of the Speeded-Up Robust Features (SURF) algorithm in order to identify the existence of seven different sentiments in human faces. The whole functionality is provided as a web service for the healthcare platform during regular WebRTC video teleconference sessions between authorized medical personnel and patients. The paper discusses the technical details of the implementation and the incorporation of the proposed scheme, and provides initial results of its accuracy and operation in practice.

A. Kallipolitis, M. Galliakis, A. Menychtas, I. Maglogiannis
FISUL: A Framework for Detecting Adverse Drug Events from Heterogeneous Medical Sources Using Feature Importance

Adverse drug events (ADEs) are considered to be highly important and critical conditions, accounting for around 3.7% of hospital admissions all over the world. Several studies have applied predictive models for ADE detection; nonetheless, only a restricted number and type of features have been used. In this paper, we propose a framework for identifying ADEs in medical records by first applying the Boruta feature importance criterion, and then using the top-ranked features for building a predictive model as well as for clustering. We provide an experimental evaluation on the MIMIC-III database considering 7 types of ADEs, illustrating the benefit of the Boruta criterion for the task of ADE detection.

Corinne G. Allaart, Lena Mondrejevski, Panagiotis Papapetrou
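
A minimal sketch of the two-stage idea, assuming scikit-learn and the third-party BorutaPy package; the feature extraction from MIMIC-III records and the clustering step are omitted, and the classifier settings are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy   # pip install Boruta

def boruta_then_predict(X, y):
    """X, y: numpy arrays built from the medical records (feature extraction not shown)."""
    rf = RandomForestClassifier(n_jobs=-1, class_weight="balanced", max_depth=5)
    selector = BorutaPy(rf, n_estimators="auto", random_state=42)
    selector.fit(X, y)                        # Boruta feature-importance criterion
    X_top = selector.transform(X)             # keep only the confirmed, top-ranked features
    model = RandomForestClassifier(n_jobs=-1).fit(X_top, y)   # downstream predictive model
    return selector.support_, model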

Classification - Clustering

Frontmatter
A New Topology-Preserving Distance Metric with Applications to Multi-dimensional Data Clustering

In many cases of high dimensional data analysis, data points may lie on manifolds of very complex shapes/geometries. Thus, the usual Euclidean distance may lead to suboptimal results when utilized in clustering or visualization operations. In this work, we introduce a new distance definition in multi-dimensional spaces that preserves the topology of the data point manifold. The parameters of the proposed distance are discussed and their physical meaning is explored through 2- and 3-dimensional synthetic datasets. A robust method for the parameterization of the algorithm is suggested. Finally, a modification of the well-known k-means clustering algorithm is introduced to exploit the benefits of the proposed distance metric for data clustering. Comparative results, including other established clustering algorithms, are presented in terms of cluster purity and V-measure for a number of well-known datasets.

Konstantinos K. Delibasis
Classification of Incomplete Data Using Autoencoder and Evidential Reasoning

To classify data with missing values, we propose a method exploiting autoencoders and evidence theory. We augment the complete data by deleting each feature once and imputing it using the nearest neighbor to a set of predefined points generated using a new scheme. We train an autoencoder with the complete data set to get a latent space representation of the input. The network is retrained with the augmented data to get a better latent space representation. Then, for each class, we train a support vector machine (SVM) with a one-vs-all strategy using the latent space representation of the complete data set. For an r-class problem, the output of each of the r SVMs is used to define a Basic Probability Assignment (BPA). The BPAs are combined using Dempster’s rule of combination to make the final decision. To classify any test instance with missing values, we make an initial guess of the missing values using the nearest neighbor rule. We take the latent space representation of that imputed instance and pass it through each trained SVM. As done earlier, using each SVM output we generate a BPA, and the r BPAs are aggregated to get a composite BPA. The class label of the test point is then determined using the pignistic probabilities. We have compared the proposed method with four state-of-the-art techniques using three experiments with artificial and real datasets. The proposed method is found to perform better.

Suvra Jyoti Choudhury, Nikhil R. Pal
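
The evidential part of the pipeline can be illustrated with a short sketch: each one-vs-all SVM score, mapped to (0, 1) (e.g. by Platt scaling, which is an assumption here), induces a simple BPA; the r BPAs are combined with Dempster's rule; and the decision is taken from the pignistic probabilities.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs given as {frozenset: mass} dicts over the same frame."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

def classify_with_svm_bpas(svm_scores, classes):
    """svm_scores[i]: confidence of the one-vs-all SVM for classes[i], already in (0, 1)."""
    frame = frozenset(classes)
    bpas = []
    for c, s in zip(classes, svm_scores):
        # Each SVM induces a simple BPA: mass s on its own class, the rest on the whole frame.
        bpas.append({frozenset([c]): s, frame: 1.0 - s})
    m = bpas[0]
    for b in bpas[1:]:
        m = dempster_combine(m, b)
    # Pignistic transform: spread the mass of each focal set uniformly over its members.
    pignistic = {c: 0.0 for c in classes}
    for A, mass in m.items():
        for c in A:
            pignistic[c] += mass / len(A)
    return max(pignistic, key=pignistic.get)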
Dynamic Reliable Voting in Ensemble Learning

The combination of multiple classifiers can produce a better solution than relying on a single learner. However, it is difficult to select reliable learning algorithms when they have contrasting performances. In this paper, a combination of supervised learning algorithms is proposed to provide the best decision. Our method transforms a classifier score on the training data into a reliability score. Then, a set of reliable candidates is determined through static and dynamic selection. Experimental results on eight datasets show that our algorithm gives a better average accuracy score compared to the results of other ensemble methods and the base classifiers.

Agus Budi Raharjo, Mohamed Quafafou
Extracting Action Sensitive Features to Facilitate Weakly-Supervised Action Localization

Weakly-supervised temporal action localization has attracted much attention among researchers in video content analytics, thanks to its relaxed requirement of video-level annotations instead of frame-level labels. However, many current weakly-supervised action localization methods depend heavily on naive feature combination and empirical thresholds to determine temporal action boundaries, which is practically feasible but could still be sub-optimal. Inspired by the momentum term, we propose a general-purpose action recognition criterion that replaces explicit empirical thresholds. Based on this criterion, we analyze different combinations of streams and propose the Action Sensitive Extractor (ASE) that produces action-sensitive features. Our ASE sets the temporal stream as the main stream and extends it with complementary spatial streams. We build our Action Sensitive Network (ASN) and evaluate it on THUMOS14 and ActivityNet1.2 with different selection methods. Our network yields state-of-the-art performance on both datasets.

Zijian Kang, Le Wang, Ziyi Liu, Qilin Zhang, Nanning Zheng
Image Recognition Based on Combined Filters with Pseudoinverse Learning Algorithm

The deep convolutional neural network (CNN) is one of the most popular deep neural networks (DNNs) and has achieved state-of-the-art performance in many computer vision tasks. The most widely used method to train DNNs is a gradient descent-based algorithm such as backpropagation. However, the backpropagation algorithm usually suffers from gradient vanishing or gradient explosion, and it relies on repeated iterations to reach the optimal result. Moreover, with the need to learn many convolutional kernels, the traditional convolutional layer is the main computational bottleneck of deep CNNs. Consequently, current deep CNNs are inefficient in computing resources and computing time. To solve these problems, we propose a method which combines Gabor kernels, random kernels and pseudoinverse kernels, incorporated with the pseudoinverse learning (PIL) algorithm, to speed up DNN training. With the multiple fixed convolution kernels and the pseudoinverse learning algorithm, the proposed method is simple and efficient to use. The performance of the proposed model is tested on the MNIST and CIFAR-10 datasets without using a GPU. Experimental results show that our model is faster than existing benchmark methods while achieving comparable recognition accuracy.

Xiaodan Deng, Xiaoxuan Sun, Ping Guo, Qian Yin
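
A rough sketch of the fixed-kernel idea, assuming OpenCV and NumPy: images are convolved with fixed Gabor and random kernels, and only the output weights are obtained in closed form with the pseudoinverse (PIL/ELM style). Kernel sizes and parameters are illustrative, and the pseudoinverse-kernel branch of the paper is not reproduced.

import numpy as np
import cv2

def gabor_bank(ksize=7, n_orientations=4):
    # cv2.getGaborKernel(ksize, sigma, theta, lambd, gamma)
    return [cv2.getGaborKernel((ksize, ksize), 2.0, theta, 4.0, 0.5)
            for theta in np.linspace(0, np.pi, n_orientations, endpoint=False)]

def fixed_kernel_features(images, rng=np.random.default_rng(0)):
    """Convolve with fixed Gabor and random kernels (no kernel learning) and flatten."""
    kernels = gabor_bank() + [rng.standard_normal((7, 7)).astype(np.float32) for _ in range(4)]
    feats = [np.concatenate([cv2.filter2D(img.astype(np.float32), -1, k).ravel()
                             for k in kernels])
             for img in images]
    return np.vstack(feats)

def pseudoinverse_train(H, Y):
    """Closed-form output weights, W = pinv(H) @ Y, with one-hot targets Y (PIL/ELM style)."""
    return np.linalg.pinv(H) @ Y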

Constraint Programming - Brain Inspired Modeling

Frontmatter
Design-Parameters Optimization of a Deep-Groove Ball Bearing for Different Boundary Dimensions, Employing Amended Differential Evolution Algorithm

Rolling-element bearings are mostly used wherever rotary motion is provided to a shaft in rotating machinery. A deep-groove ball bearing is one type of rolling-element bearing, used to support radial load, axial load or a combination of both. Even after proper installation and maintenance, ball bearings usually fail because of fatigue under normal operating conditions. Therefore, fatigue-life optimization is a prime objective in designing a ball bearing. In the present work, eleven different problems of a deep-groove ball bearing, obtained by changing the boundary dimensions, are optimized to obtain maximum fatigue life. The Amended Differential Evolution Algorithm (ADEA), a modified version of the Differential Evolution (DE) algorithm together with a constraint handling technique, is applied to these eleven problems, and the optimum results in the form of optimal design parameters and fatigue life are reported. The design parameters considered are the bearing pitch diameter, ball diameter, number of balls, and the curvature coefficients of the outer and inner raceway grooves. Further, the optimal results are compared with other researchers’ work and the standard catalogue for the same problems. Better results for fatigue life are obtained using ADEA.

Parthiv B. Rana, Jigar L. Patel, D. I. Lalwani
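
A self-contained toy sketch of the optimization set-up with SciPy's standard differential evolution (not the amended ADEA of the paper): a simplified proportionality C proportional to Z^(2/3) * Db^1.8 and L10 = (C/P)^3 stand in for the full fatigue-life objective, the load, boundary dimensions, bounds and geometric constraint are assumed values, and the number of balls is relaxed to a real number.

import math
from scipy.optimize import differential_evolution

P = 5000.0            # assumed equivalent radial load [N]
D, d = 160.0, 90.0    # assumed outer and bore diameters fixing the boundary dimensions [mm]

def negative_fatigue_life(x):
    Dm, Db, Z = x                              # pitch diameter, ball diameter, number of balls
    C = 40.0 * Z ** (2.0 / 3.0) * Db ** 1.8    # simplified dynamic capacity (placeholder constant)
    life = (C / P) ** 3                        # L10 life in millions of revolutions
    gap = math.pi * Dm - Z * Db                # assumed constraint: balls must fit on the pitch circle
    penalty = 0.0 if gap > 0 else 1e6 * gap ** 2
    return -life + penalty                     # DE minimizes, so the life is negated

bounds = [(0.98 * 0.5 * (D + d), 1.02 * 0.5 * (D + d)),  # pitch diameter near (D + d)/2
          (0.10 * (D - d), 0.45 * (D - d)),              # ball diameter range
          (4, 30)]                                       # number of balls, relaxed to a real value
result = differential_evolution(negative_fatigue_life, bounds, seed=1, maxiter=300)
print(result.x, -result.fun)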
Exploring Brain Effective Connectivity in Visual Perception Using a Hierarchical Correlation Network

Brain-inspired computing is a research hotspot in artificial intelligence (AI). One of the key problems in this field is how to find the bridge between brain connectivity and data correlation in a connection-to-cognition model. Functional magnetic resonance imaging (fMRI) signals provide rich information about brain activities. Existing modeling approaches with fMRI focus on the strength information but neglect structural information. In a previous work, we proposed a monolayer correlation network (CorrNet) to model structural connectivity. In this paper, we extend the monolayer CorrNet to a hierarchical correlation network (HcorrNet) by analysing visual stimuli of natural images and fMRI signals in the entire visual cortex, that is, V1, V2, V3, V4, the fusiform face area (FFA), the lateral occipital complex (LOC) and the parahippocampal place area (PPA). Through the HcorrNet, the effective connectivity of the brain can be inferred layer by layer. Then, the stimulus-sensitive activity mode of voxels can be extracted, and the forward encoding process of visual perception can be modeled. Both of them can guide the decoding process of fMRI signals, including classification and image reconstruction. In the experiments, we improved a dynamic evolving spiking neural network (SNN) as the classifier, and used Generative Adversarial Networks (GANs) to reconstruct images.

Siyu Yu, Nanning Zheng, Hao Wu, Ming Du, Badong Chen
Solving the Talent Scheduling Problem by Parallel Constraint Programming

The Talent Scheduling problem (TS) is a practical problem entailed by devising a schedule for shooting a film, and is a typical constraint optimization problem. The current modeling approaches are limited and not efficient enough. We present a more concise and efficient modeling approach for the problem. Besides, we exploit TS as a case study to explore how to utilize parallel constraint solving to speed up this constraint optimization problem.

Ke Liu, Sven Löffler, Petra Hofstedt

Deep Learning - Convolutional ANN

Frontmatter
A Deep Reinforcement Learning Approach for Automated Cryptocurrency Trading

Nowadays, Artificial Intelligence (AI) is changing our daily life in many application fields. Automatic trading has inspired a large number of field experts and scientists to develop innovative techniques and deploy cutting-edge technologies to trade different markets. In this context, cryptocurrency has given new interest to the application of AI techniques for predicting the future price of a financial asset. In this work, Deep Reinforcement Learning is applied to trade Bitcoin. More precisely, Double and Dueling Double Deep Q-learning Networks are compared over a period of almost four years. Two reward functions are also tested: the Sharpe ratio and profit reward functions. The Double Deep Q-learning trading system based on the Sharpe ratio reward function proved to be the most profitable approach for trading Bitcoin.

Giorgio Lucarelli, Matteo Borrotti
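
The two tested reward functions can be written compactly; the windowing, risk-free rate and position encoding below are assumptions rather than the paper's exact definitions.

import numpy as np

def sharpe_reward(portfolio_returns, risk_free=0.0, eps=1e-8):
    """Sharpe-ratio reward over a window of recent portfolio returns (one possible variant)."""
    excess = np.asarray(portfolio_returns) - risk_free
    return float(excess.mean() / (excess.std() + eps))

def profit_reward(entry_price, exit_price, position):
    """Plain profit reward: position is +1 (long), -1 (short) or 0 (flat)."""
    return position * (exit_price - entry_price) / entry_price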
Capacity Requirements Planning for Production Companies Using Deep Reinforcement Learning
Use Case for Deep Planning Methodology (DPM)

In recent years, deep reinforcement learning has achieved impressive success in the area of games, without explicit knowledge about the rules and strategies of the games themselves, such as Backgammon, Checkers, Go and Atari video games [1]. Deep reinforcement learning combines reinforcement-learning algorithms with deep neural networks. In principle, reinforcement-learning applications learn an appropriate policy automatically, which maximizes an objective function in order to win a game. In this paper, a universal methodology is proposed for systematically creating a deep reinforcement learning application for a business planning process, named Deep Planning Methodology (DPM). This methodology is applied to the business process domain of capacity requirements planning. Therefore, this planning process was designed as a Markov decision process [2]. The proposed deep neural network learns a policy for choosing the best shift schedule, which provides the required capacity for producing orders on time, with high capacity utilization, minimized stock and a short throughput time. The deep learning framework TensorFlowTM [3] was used to implement the capacity requirements planning application for a production company.

Harald Schallner
Comparison of Neural Network Optimizers for Relative Ranking Retention Between Neural Architectures

Autonomous design and optimization of neural networks is gaining increasingly more attention from the research community. The main barrier is the computational resources required to conduct experimental and production projects. Although most researchers focus on new design methodologies, the main computational cost remains the evaluation of candidate architectures. In this paper we investigate the feasibility of using reduced-epoch training, by measuring the rank correlation coefficients between sets of optimizers, given a fixed number of training epochs. We discover ranking correlations of more than 0.75 and up to 0.964 between Adam with 50 training epochs, stochastic gradient descent with Nesterov momentum with 10 training epochs, and Adam with 20 training epochs. Moreover, we show the ability of genetic algorithms to find high-quality solutions of a function by searching in a perturbed search space, given that certain correlation criteria are met.

George Kyriakides, Konstantinos Margaritis
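
The core measurement is a rank correlation between the accuracies that the same candidate architectures reach under different optimizer/epoch budgets; a sketch with SciPy and made-up accuracy values follows.

from scipy.stats import spearmanr

# Accuracy of the same candidate architectures evaluated under two training budgets
# (values are made up for illustration; the paper reports correlations of 0.75-0.964).
acc_adam_50_epochs = [0.91, 0.88, 0.93, 0.85, 0.90, 0.87]
acc_sgd_nesterov_10_epochs = [0.84, 0.80, 0.86, 0.78, 0.83, 0.81]

rho, p_value = spearmanr(acc_adam_50_epochs, acc_sgd_nesterov_10_epochs)
print(f"rank correlation: {rho:.3f} (p={p_value:.3f})")
# A high rho means the cheaper 10-epoch runs preserve the relative ranking of architectures,
# so they can stand in for full training during architecture search.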
Detecting Violent Robberies in CCTV Videos Using Deep Learning

Video surveillance through security cameras has become difficult due to the fact that many systems require manual human inspection for identifying violent or suspicious scenarios, which is practically inefficient. Therefore, the contribution of this paper is twofold: the presentation of a video dataset called UNI-Crime, and the proposal of a violent robbery detection method for CCTV videos using a deep-learning sequence model. Each of the 30 frames of our videos passes through a pre-trained VGG-16 feature extractor; then, the whole sequence of features is processed by two convolutional long short-term memory (convLSTM) layers; finally, the last hidden state passes through a series of fully-connected layers in order to obtain a single classification result. The method is able to detect a variety of violent robberies (i.e., armed robberies involving firearms or knives, or robberies showing different levels of aggressiveness) with an accuracy of 96.69%.

Giorgio Morales, Itamar Salazar-Reque, Joel Telles, Daniel Díaz
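
A sketch of the described architecture in Keras (per-frame VGG-16 features, two ConvLSTM layers, fully-connected head); the filter counts, dense sizes and weights=None (to avoid the ImageNet download) are assumptions, not the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_robbery_detector(frames=30, height=224, width=224):
    """Per-frame VGG-16 features -> 2 ConvLSTM layers -> fully-connected head."""
    vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                      input_shape=(height, width, 3))
    inputs = layers.Input(shape=(frames, height, width, 3))
    x = layers.TimeDistributed(vgg)(inputs)              # frame-wise feature extraction
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=True)(x)
    x = layers.ConvLSTM2D(32, 3, padding="same")(x)      # keep only the last hidden state
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # robbery / no robbery
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model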
Diversity Regularized Adversarial Deep Learning

The two key players in Generative Adversarial Networks (GANs), the discriminator and the generator, are usually parameterized as deep neural networks (DNNs). On many generative tasks, GANs achieve state-of-the-art performance but are often unstable to train and sometimes miss modes. A typical failure mode is the collapse of the generator to a single parameter configuration where its outputs are identical. When this collapse occurs, the gradient of the discriminator may point in similar directions for many similar points. We hypothesize that some of these shortcomings are in part due to primitive and redundant features extracted by the discriminator, which can easily cause training to get stuck. We present a novel approach for regularizing adversarial models by enforcing diverse feature learning. To do this, both the generator and the discriminator are regularized by penalizing negatively and positively correlated features according to their differentiation, based on their relative cosine distances. In addition to the gradient information from the adversarial loss made available by the discriminator, the diversity regularization also ensures that a more stable gradient is provided to update both the generator and the discriminator. Results indicate that our regularizer enforces diverse features, stabilizes training, and improves image synthesis.

Babajide O. Ayinde, Keishin Nishihama, Jacek M. Zurada
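
One way to express such a cosine-distance based diversity penalty, sketched with TensorFlow; the threshold and the symmetric treatment of positively and negatively correlated pairs are simplifying assumptions relative to the paper's regularizer.

import tensorflow as tf

def diversity_penalty(features, tau=0.1):
    """Penalize strongly correlated feature columns by their pairwise cosine similarity.

    'features' is a (batch, n_features) activation matrix; tau and the exact penalty
    shape are assumptions.
    """
    f = tf.math.l2_normalize(features, axis=0)          # unit-norm feature columns
    cos = tf.matmul(f, f, transpose_a=True)             # (n_features, n_features) cosine matrix
    off_diag = cos - tf.linalg.diag(tf.linalg.diag_part(cos))
    # Only pairs whose |cosine| exceeds the threshold contribute to the penalty.
    excess = tf.nn.relu(tf.abs(off_diag) - tau)
    return tf.reduce_sum(tf.square(excess))

# The returned scalar would be added to both the generator and discriminator losses.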
Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation

In recent years, as machine learning research has turned into real products and applications, some of which are critical, it has become recognized that other model evaluation mechanisms are needed. The commonly used metrics, such as accuracy or F-measures, are no longer sufficient in the deployment phase. This fostered the emergence of methods for the interpretability of models. In this work, we discuss an approach to improving the prediction of a model by interpreting what has been learned and using that knowledge in a second phase. As a case study we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with what happens in the human visual system, the experiment performed provides a way to draw more in-depth conclusions about a scene by carefully observing what attracts more attention after a first glance en passant.

Leonardo Nogueira Matos, Mariana Fontainhas Rodrigues, Ricardo Magalhães, Victor Alves, Paulo Novais
Learning and Detecting Stuttering Disorders

Stuttering is a widespread speech disorder involving about 5% of the population and 2.5% of children under the age of 5. Much work in the literature studies causes, mechanisms and epidemiology, and much work is devoted to illustrating treatments, prognosis and how to diagnose stuttering. Relevantly, a stuttering evaluation requires the skills of a multi-dimensional team. An expert speech-language therapist conducts a precise evaluation with a series of tests, observations and interviews. During an evaluation, a speech-language therapist perceives, records and transcribes the number and types of speech disfluencies that a person produces in different situations. Stuttering is very variable in the number of repeated syllables/words and in the secondary aspects that alter the clinical picture. This work aims to help in the difficult task of evaluating stuttering and recognizing the occurrences of disfluency episodes, such as repetitions and prolongations of sounds, syllables, words or phrases, silent pauses, hesitations or blocks before speech. In particular, we propose a deep-learning based approach able to automatically detect disfluent production points in the speech, helping in early classification of the problem by providing the number of disfluencies and the time intervals where they occur. A deep learner is built to preliminarily evaluate audio fragments. However, the scenario at hand contains some peculiarities making the detection challenging. Indeed, (i) fragments that are too short lead to ineffective classification, since a too-short audio fragment is not able to capture the stuttering episode; and (ii) fragments that are too long lead to ineffective classification, since a stuttering episode can have a very small duration and the mostly fluent speech contained in the fragment masks the disfluency. Hence, we design an ad-hoc segment classifier that, exploiting the output of a deep learner working with fragments that are not too short, classifies each small segment composing an audio fragment by estimating the probability of it containing a disfluency.

Fabio Fassetti, Ilaria Fassetti, Simona Nisticò
Localization of Epileptic Foci by Using Convolutional Neural Network Based on iEEG

Epileptic focus localization is a critical factor for the successful surgical resection of epileptogenic tissues. The key challenge of focus localization lies in the accurate classification of focal and non-focal intracranial electroencephalogram (iEEG) signals. In this paper, we introduce a new method based on the short-time Fourier transform (STFT) and convolutional neural networks (CNNs) to improve the classification accuracy. More specifically, STFT is employed to obtain the time-frequency spectrograms of iEEG signals, from which a CNN is applied to extract features and perform classification. The time-frequency spectrograms are normalized with Z-score normalization before being fed into the network. Experimental results show that our method is able to differentiate focal from non-focal iEEG signals with an average classification accuracy of 91.8%.

Linfeng Sui, Xuyang Zhao, Qibin Zhao, Toshihisa Tanaka, Jianting Cao
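
A compact sketch of the STFT + Z-score + CNN pipeline, assuming SciPy and Keras; the sampling rate, STFT window and network sizes are placeholders, not the paper's settings.

import numpy as np
from scipy.signal import stft
import tensorflow as tf
from tensorflow.keras import layers, models

def ieeg_to_spectrogram(signal, fs=512, nperseg=128):
    """Time-frequency spectrogram of one iEEG segment, Z-score normalized."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    S = np.log1p(np.abs(Z))
    return (S - S.mean()) / (S.std() + 1e-8)

def build_cnn(input_shape):
    """Small CNN classifying each spectrogram as focal (1) or non-focal (0)."""
    model = models.Sequential([
        layers.Input(shape=input_shape + (1,)),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model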
Review Spam Detection Using Word Embeddings and Deep Neural Networks

Review spam (fake review) detection is increasingly important given the rapid growth of internet purchases. Therefore, sophisticated spam filters must be designed to tackle the problem. Traditional machine learning algorithms use review content and other features to detect review spam. However, as demonstrated in related studies, the linguistic context of words may be of particular importance for text categorization. In order to enhance the performance of review spam detection, we propose a novel content-based approach that considers both bag-of-words and word context. More precisely, our approach utilizes n-grams and the skip-gram word embedding method to build a vector model. As a result, a high-dimensional feature representation is generated. To handle this representation and classify the review spam accurately, a deep feed-forward neural network is used in the second step. To verify our approach, we use two hotel review datasets, including positive and negative reviews. We show that the proposed detection system outperforms other popular algorithms for review spam detection in terms of accuracy and area under the ROC curve. Importantly, the system provides balanced performance on both classes, legitimate and spam, irrespective of review polarity.

Aliaksandr Barushka, Petr Hajek
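
A sketch of the combined representation, assuming gensim (version 4 or later) for the skip-gram embeddings and scikit-learn for the n-grams and the feed-forward classifier; averaging word vectors per review and the layer sizes are illustrative choices.

import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from scipy.sparse import hstack, csr_matrix

def build_features(reviews):
    """reviews: list of raw review strings; returns bag-of-n-grams + averaged skip-gram vectors."""
    tokenized = [r.lower().split() for r in reviews]
    w2v = Word2Vec(sentences=tokenized, vector_size=100, window=5, sg=1, min_count=1)
    doc_vecs = np.vstack([np.mean([w2v.wv[t] for t in toks], axis=0) for toks in tokenized])
    tfidf = TfidfVectorizer(ngram_range=(1, 2), max_features=20000)
    ngrams = tfidf.fit_transform(reviews)
    return hstack([ngrams, csr_matrix(doc_vecs)])

# Deep feed-forward classifier on the high-dimensional representation:
# clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=50).fit(X, y)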
Tools for Semi-automatic Preparation of Training Data for OCR

This work aims at data preparation for OCR systems based on recurrent neural networks. Precisely annotated data are necessary for training a network as well as for the evaluation of OCR methods. It is possible to synthesize the data; however, such data are not as realistic as real ones. Manual annotation is thus still needed in many cases, especially in the case of the historical documents we are focusing on. Although there are several complex systems for historical document processing, to the best of our knowledge, a simple annotation tool for OCR data is completely missing. Therefore, we propose and implement a set of tools utilizing artificial intelligence that simplify the annotation process. These tools create ground truths for line images that are used for training present-day OCR systems. Another contribution of this paper is making these tools freely available for research purposes.

Ladislav Lenc, Jiří Martínek, Pavel Král
Training Strategies for OCR Systems for Historical Documents

This paper presents an overview of training strategies for the optical character recognition of historical documents. The main issue is the lack of annotated data and its quality. We summarize several ways of preparing synthetic data. The main goal of this paper is to show and compare possible ways to train a convolutional recurrent neural network classifier using the synthetic data and its combination with a real annotated dataset.

Jiří Martínek, Ladislav Lenc, Pavel Král
A Review on the Application of Deep Learning in Legal Domain

The amount of legal information being produced on a daily basis in the law courts is increasing enormously, and nowadays this information is also available in electronic form. The application of various machine learning and deep learning methods to the processing of legal documents has been receiving considerable attention over the last few years. Legal document classification, translation, summarization, contract review, case prediction and information retrieval are some of the tasks that have received concentrated efforts from the research community. In this survey, we have performed a comprehensive study of various deep learning methods applied in the legal domain and classified various legal tasks into three broad categories, viz. legal data search, legal text analytics and legal intelligent interfaces. The study suggests that deep learning models like CNNs, RNNs, LSTMs and GRUs, and multi-task deep learning models are being used actively to solve a wide variety of legal tasks and are giving state-of-the-art performance.

Neha Bansal, Arun Sharma, R. K. Singh
Long-Short Term Memory for an Effective Short-Term Weather Forecasting Model Using Surface Weather Data

Numerical Weather Prediction (NWP) requires considerable computer power to solve complex mathematical equations to obtain a forecast based on current weather conditions. In this article, we propose a lightweight data-driven weather forecasting model by exploring state-of-the-art deep learning techniques based on Artificial Neural Networks (ANNs). Weather information is captured by time-series data; thus, we explore the Long Short-Term Memory (LSTM) architecture, a specialised form of Recurrent Neural Network (RNN), for weather prediction. The aim of this research is to develop and evaluate a short-term weather forecasting model using LSTM and to compare its accuracy against the well-established Weather Research and Forecasting (WRF) NWP model. The proposed deep model consists of stacked LSTM layers that use surface weather parameters over a given period of time for weather forecasting. The model is experimented with different numbers of LSTM layers, optimisers and learning rates, and optimised for effective short-term weather predictions. Our experiments show that the proposed lightweight model produces better results compared to the well-known and complex WRF model, demonstrating its potential for efficient and accurate short-term weather forecasting.

Pradeep Hewage, Ardhendu Behera, Marcello Trovati, Ella Pereira
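
A minimal Keras sketch of a stacked-LSTM forecaster over surface weather parameters; the look-back window, horizon and layer sizes are assumptions, not the tuned configuration reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_weather_lstm(lookback=24, n_features=10, horizon=12):
    """'lookback' time steps of surface parameters in, 'horizon' future steps out."""
    model = models.Sequential([
        layers.Input(shape=(lookback, n_features)),
        layers.LSTM(128, return_sequences=True),   # stacked LSTM layers
        layers.LSTM(64),
        layers.Dense(horizon * n_features),
        layers.Reshape((horizon, n_features)),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model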
Segmentation Methods for Image Classification Using a Convolutional Neural Network on AR-Sandbox

Fields such as early education and motor rehabilitation provide a space for integration with augmented reality devices such as the AR-Sandbox, in order to support these fields by generating feedback on the tasks carried out, through image recognition based on convolutional neural networks. However, the nature of the AR-Sandbox produces a high noise level in the acquired images. For this reason, the present study implements and compares three segmentation methods (Canny edge detection, colour-space and thresholding) for the training and prediction phases of a previously established convolutional neural network model. The study found that the combined model with colour-space segmentation achieves an average performance of 99% for the classification of vowels, as described by the AUC of the ROC curve, making it the best-performing model.

Andres Ovidio Restrepo Rodriguez, Daniel Esteban Casas Mateus, Paulo Alonso Gaona Garcia, Adriana Gomez Acosta, Carlos Enrique Montenegro Marin
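
The three compared preprocessing routes can be sketched with OpenCV; the blur kernel, Canny thresholds and HSV range are illustrative values, not the study's tuned parameters.

import cv2
import numpy as np

def segment(img_bgr, method="color"):
    """Return a segmented image for one of the three compared preprocessing routes."""
    if method == "canny":
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    if method == "threshold":
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask
    # Colour-space segmentation (reported as the best-performing variant, AUC around 0.99).
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array([90, 60, 60]), np.array([130, 255, 255]))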

Fuzzy Modeling

Frontmatter
A Hybrid Model Based on Fuzzy Rules to Act on the Diagnosed of Autism in Adults

Aspects of Autistic Spectrum Disorder (ASD) can be diagnosed, though rarely, in people who are already in adulthood. To aid in the diagnosis of autistic traits, a mobile system was developed with the objective of executing techniques extracted from expert studies to determine an effective diagnosis of the condition. This type of system uses artificial intelligence capabilities and machine learning techniques to assign probabilities to people who take the in-app test. According to the information provided by the authors of the mobile application, future research could address the use of other intelligent models to assist in predicting whether or not the patient has traits of autism. Therefore, this paper proposes the insertion of a hybrid interpretable technique based on the synergy of artificial neural networks and fuzzy systems, trained by an extreme learning machine, to generate fuzzy rules that deal with questions provided by users seeking immediate answers on preliminary diagnoses of autism in adults. The tests performed achieved levels of accuracy superior to the preliminary studies that inspired this research, making it a viable alternative for the efficient diagnosis of autism in adults.

Augusto J. Guimarães, Vinicius J. Silva Araujo, Vanessa S. Araujo, Lucas O. Batista, Paulo V. de Campos Souza
An Unsupervised Fuzzy Rule-Based Method for Structure Preserving Dimensionality Reduction with Prediction Ability

We propose an unsupervised fuzzy rule-based system to learn a structure-preserving data projection. Although the framework is quite general and any structure-preserving measure can be used, we use Sammon’s stress, an extensively used objective function for dimensionality reduction. Unlike Sammon’s method, it can predict the projection for new test points. To extract fuzzy rules, we perform fuzzy c-means clustering on the input data and translate the clusters to the antecedent parts of the rules. Initially, we set the consequent parameters of the rules with random values. We estimate the parameters of the rule base minimizing the Sammon’s stress error function using gradient descent. We explore both Mamdani-Assilian and Takagi-Sugeno type fuzzy rule-based systems. An additional advantage of the proposed system over a neural network based generalization of Sammon’s method is that the proposed system can reject test data that are far from the training data used to design the system. We use both synthetic as well as real-world datasets to validate the proposed scheme.

Suchismita Das, Nikhil R. Pal
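
For reference, Sammon's stress, the objective minimized by gradient descent over the rule consequents, can be computed as follows (a plain NumPy/SciPy sketch, independent of the fuzzy rule base itself).

import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, Y_low, eps=1e-12):
    """Sammon's stress between original pairwise distances d_ij and projected distances D_ij:
    E = (1 / sum d_ij) * sum (d_ij - D_ij)^2 / d_ij."""
    d = pdist(X_high)   # distances in the original space
    D = pdist(Y_low)    # distances in the low-dimensional projection
    return np.sum((d - D) ** 2 / (d + eps)) / np.sum(d)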
Interpretable Fuzzy Rule-Based Systems for Detecting Financial Statement Fraud

Systems for detecting financial statement frauds have attracted considerable interest in computational intelligence research. Diverse classification methods have been employed to perform automatic detection of fraudulent companies. However, previous research has aimed to develop highly accurate detection systems, while neglecting the interpretability of those systems. Here we propose a novel fuzzy rule-based detection system that integrates a feature selection component and rule extraction to achieve a highly interpretable system in terms of rule complexity and granularity. Specifically, we use a genetic feature selection to remove irrelevant attributes and then we perform a comparative analysis of state-of-the-art fuzzy rule-based systems, including FURIA and evolutionary fuzzy rule-based systems. Here, we show that using such systems leads not only to competitive accuracy but also to desirable interpretability. This finding has important implications for auditors and other users of the detection systems of financial statement fraud.

Petr Hajek

Learning Automata - Logic Based Reasoning

Frontmatter
Learning Automata-Based Solutions to the Single Elevator Problem

The field of AI has been a topic of interest for the better part of a century, where the goal is to have computers mimic human behaviour. Researchers have incorporated AI in different problem domains, such as autonomous driving, game playing, diagnosis and security. This paper concentrates on a subfield of AI, i.e., the field of Learning Automata (LA), and uses its tools to tackle a problem that has not been tackled before using AI, namely the optimal scheduling and parking of elevators. In particular, we are concerned with determining the elevators’ optimal “parking” location. In this paper, we specifically work with the Single Elevator Problem (SEP) (we consider the more complicated multi-elevator problem in a forthcoming paper), and show how it can be extended to the solution of Elevator-like Problems (ELPs), which are a family of problems with similar characteristics. Here, the objective is to find the optimal parking floors for the single-elevator scenario so as to minimize the passengers’ Average Waiting Time (AWT). Apart from proposing benchmark solutions, we provide two different novel LA-based solutions for the single-elevator scenario. The first solution is based on the well-known $$L_{RI}$$ scheme, and the second incorporates the Pursuit concept to improve the performance and the convergence speed of the former, leading to the $$PL_{RI}$$ scheme. The simulation results presented demonstrate that our solutions performed much better than those used in modern-day elevators, and provided results that are near-optimal, yielding a performance increase of up to 80%.

O. Ghaleb, B. John Oommen
Optimizing Self-organizing Lists-on-Lists Using Enhanced Object Partitioning

The question of how to store, manage and access data has been central to the field of Computer Science, and is even more pertinent in these days when megabytes of data are being generated every second. This paper considers the problem of minimizing the cost of data retrieval from the most fundamental data structure, i.e., a Singly-Linked List (SLL). We consider a SLL in which the elements are accessed by a Non-stationary Environment (NSE) exhibiting so-called “Locality of Reference”. We propose a solution to the problem by designing an “Adaptive” Data Structure (ADS), which is created by means of a composite of hierarchical data “sub”-structures constituting the overall data structure. In this paper, we design hierarchical Lists-on-Lists (LOLs) by assembling a SLL into a hierarchical scheme that results in a Singly-Linked List on Singly-Linked Lists (SLLs-on-SLLs) comprising an outer-list and sublist context. The goal is that elements that are more likely to be accessed together are grouped within the same sub-context, while the sublists themselves are moved “en masse” towards the head of the list-context so as to minimize the overall access cost. This move is carried out by employing the “de-facto” list re-organization schemes, i.e., the Move-To-Front (MTF) and Transposition (TR) rules. To achieve the clustering of elements within the sublists, we invoke the Object Migration Automaton (OMA) family of reinforcement schemes from the theory of Learning Automata (LA). They are introduced so as to capture the probabilistic dependence of the elements in the data structure as it receives query accesses from the Environment. In this paper, we show that SLLs-on-SLLs augmented with the Enhanced Object Migration Automaton (EOMA) minimize the retrieval cost for elements in NSEs and are superior to the stand-alone MTF and TR schemes, and also superior to the OMA-augmented SLLs-on-SLLs operating in such Environments.

O. Ekaba Bisong, B. John Oommen
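
The two "de-facto" re-organization rules are easy to state in code; the sketch below applies them to a flat list, whereas the paper applies them "en masse" to sublists whose contents are grouped by the EOMA.

def move_to_front(lst, i):
    """MTF rule: the accessed element (at index i) moves to the head of the list."""
    lst.insert(0, lst.pop(i))

def transpose(lst, i):
    """TR rule: the accessed element swaps places with its immediate predecessor."""
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]

queries = ["d", "d", "a", "d", "c", "d"]
sll = ["a", "b", "c", "d"]
for q in queries:
    idx = sll.index(q)
    cost = idx + 1            # retrieval cost = position of the element in the list
    move_to_front(sll, idx)
print(sll)                    # the frequently accessed "d" migrates towards the front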
EduBAI: An Educational Platform for Logic-Based Reasoning

The field of logic-based Knowledge Representation and Reasoning has produced powerful formalisms for modeling commonsense knowledge in Artificial Intelligence. In this paper, we present EduBAI, an educational platform that helps users familiarize themselves with the main tenets of commonsense reasoning in dynamic, causal domains by means of an interactive entertaining environment. We present the design and implementation of the platform, along with the rationale of sample game tactics of diverse modeling complexity.

Dimitrios Arampatzis, Maria Doulgeraki, Michail Giannoulis, Evropi Stefanidi, Theodore Patkos

Machine Learning - Natural Language

Frontmatter
A Machine Learning Tool for Interpreting Differences in Cognition Using Brain Features

Predicting variability in cognition traits is an attractive and challenging area of research, where different approaches and datasets have been implemented with mixed results. Some powerful Machine Learning algorithms employed before are difficult to interpret, while other algorithms are easy to interpret but might not be as powerful. To improve understanding of individual cognitive differences in humans, we make use of the most recent developments in Machine Learning in which powerful prediction models can be interpreted with confidence. We used neuroimaging data and a variety of behavioural, cognitive, affective and health measures from 905 people obtained from the Human Connectome Project (HCP). As a main contribution of this paper, we show how one could interpret the neuroanatomical basis of cognition, with recent methods which we believe are not yet fully explored in the field. By reducing neuroimages to a well characterised set of features generated from surface-based morphometry and cortical myelin estimates, we make the interpretation of such models easier as each feature is self-explanatory. The code used in this tool is available in a public repository: https://github.com/tjiagoM/interpreting-cognition-paper-2019 .

Tiago Azevedo, Luca Passamonti, Pietro Lió, Nicola Toschi
Comparison of the Best Parameter Settings in the Creation and Comparison of Feature Vectors in Distributional Semantic Models Across Multiple Languages

Measuring the semantic similarity and relatedness of words is important for many natural language processing tasks. Although distributional semantic models designed for this task have many different parameters, such as vector similarity measures, weighting schemes and dimensionality reduction techniques, there is no truly comprehensive study simultaneously evaluating these parameters while also analysing the differences in the findings for multiple languages. We would like to address this gap with our systematic study by searching for the best combination of parameter settings in the creation and comparison of feature vectors in distributional semantic models for English, Spanish and Hungarian separately, and then comparing our findings across these languages. During our extensive analysis we test a large number of possible settings for all parameters, with more than a thousand novel variants in the case of some of them. As a result, we were able to find combinations of parameter settings that significantly outperform conventional setting combinations and achieve state-of-the-art results.

András Dobó, János Csirik
Distributed Community Prediction for Social Graphs Based on Louvain Algorithm

Nowadays, the problem of community detection has become more and more challenging. With applications in a wide range of fields such as sociology, digital marketing, bio-informatics, chemical engineering and computer science, the need for scalable and efficient solutions is strongly underlined, especially in the rapidly developing and widespread area of social media, where the size of the corresponding networks exceeds hundreds of millions of vertices in the average case. However, standard sequential algorithms have proven not only practically infeasible but also poorly scalable, due to excessive computation demands and resource prerequisites. Therefore, the introduction of compatible distributed machine learning solutions seems the most promising option for tackling this NP-hard class of problem. The purpose of this work is to propose a novel distributed community detection methodology, based on the supervised community prediction concept, that is extremely scalable, remarkably efficient and circumvents the intrinsic adversities of classic community detection approaches.

Christos Makris, Dionisios Pettas, Georgios Pispirigos
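
The community-prediction idea can be sketched on a toy graph: Louvain labels computed once become training targets for a supervised model over cheap node features, so unseen parts of the graph could be labelled without rerunning detection. The sketch assumes networkx 2.8 or later and scikit-learn; the node features and the (non-distributed) classifier are illustrative only.

import networkx as nx
from sklearn.ensemble import RandomForestClassifier

G = nx.karate_club_graph()
communities = nx.community.louvain_communities(G, seed=42)   # Louvain labels as supervision
label = {n: i for i, comm in enumerate(communities) for n in comm}

def node_features(G, n):
    # Cheap per-node features standing in for the paper's feature set.
    return [G.degree(n), nx.clustering(G, n), nx.average_neighbor_degree(G, nodes=[n])[n]]

X = [node_features(G, n) for n in G.nodes()]
y = [label[n] for n in G.nodes()]
clf = RandomForestClassifier().fit(X, y)   # in the paper this prediction step is distributed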
Iliou Machine Learning Data Preprocessing Method for Suicide Prediction from Family History

As real-world data tend to be incomplete, noisy and inconsistent, data preprocessing is an important issue for data mining. Data preparation includes data cleaning, data integration, data transformation and data reduction. In this paper, the Iliou preprocessing method is compared with Principal Component Analysis for suicide prediction based on family history. The dataset consists of 360 students, aged 18 to 24, who were experiencing family history problems. The performance of the Iliou and Principal Component Analysis data preprocessing methods was evaluated using the 10-fold cross-validation method, assessing ten classification algorithms: IB1, J48, Random Forest, MLP, SMO, JRip, RBF, Naïve Bayes, AdaBoostM1 and HMM. Experimental results illustrate that the Iliou data preprocessing algorithm outperforms the Principal Component Analysis data preprocessing method, achieving 100% against 71.34% classification performance, respectively. According to the classification results, the Iliou preprocessing method is the most suitable for suicide prediction.

Theodoros Iliou, Georgia Konstantopoulou, Christina Lymperopoulou, Konstantinos Anastasopoulos, George Anastassopoulos, Dimitrios Margounakis, Dimitrios Lymberopoulos
Ontology Population Framework of MAGNETO for Instantiating Heterogeneous Forensic Data Modalities

The growth in digital technologies has influenced three characteristics of information, namely volume, modality and frequency. As the amount of information generated by individuals increases, there is a critical need for Law Enforcement Agencies to exploit all available resources to effectively carry out criminal investigations. Addressing the increasing challenges in handling the large amount of diversified media modalities generated at high frequency, the paper outlines a systematic approach adopted for the processing and extraction of semantic concepts formalized to assist criminal investigations. The novelty of the proposed framework relies on the semantic processing of heterogeneous data sources including audio-visual footage, speech-to-text, text mining, and suspect tracking and identification using distinctive regions or patterns. Information extraction from textual data, machine-translated into English from various European languages, uses semantic role labeling. All extracted information is stored in one unifying system based on an ontology developed specifically for this task. The described technologies will be implemented in the Multimedia Analysis and correlation enGine for orgaNised crime prEvention and invesTigatiOn (MAGNETO).

Ernst-Josef Behmer, Krishna Chandramouli, Victor Garrido, Dirk Mühlenberg, Dennis Müller, Wilmuth Müller, Dirk Pallmer, Francisco J. Pérez, Tomas Piatrik, Camilo Vargas
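
The MAGNETO ontology is not reproduced here; the following is a minimal, hypothetical illustration of ontology population with rdflib, in which extracted results from different modalities are stored as typed individuals in a single graph. The namespace, class and property names are invented for illustration.

```python
# Minimal ontology-population sketch with rdflib: heterogeneous analysis
# results (e.g. a tracked suspect and a text-mined event) are stored as
# typed individuals in one graph. Namespace and class names are invented.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/forensic#")   # placeholder namespace

g = Graph()
g.bind("ex", EX)

# Individual extracted from video analysis (hypothetical values).
g.add((EX.suspect_042, RDF.type, EX.Person))
g.add((EX.suspect_042, EX.identifiedBy, Literal("distinctive-pattern match")))

# Individual extracted from machine-translated text via role labelling.
g.add((EX.event_7, RDF.type, EX.Meeting))
g.add((EX.event_7, EX.hasParticipant, EX.suspect_042))

print(g.serialize(format="turtle"))
```
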
Random Forest Surrogate Models to Support Design Space Exploration in Aerospace Use-Case

In engineering, design analyses of complex products rely on computer-simulated experiments. However, high-fidelity simulations can take significant time to compute, so it is impractical to explore the design space by simulation alone. Hence, surrogate modelling is used to approximate the original simulations. Since simulations are expensive to conduct, sample sizes are generally limited in aerospace engineering applications. This limited sample size, together with the non-linearity and high dimensionality of the data, makes it difficult to generate accurate and robust surrogate models. The aim of this paper is to explore the applicability of Random Forests (RF) for constructing surrogate models to support design space exploration. RF generates meta-models, or ensembles of decision trees, and is capable of fitting highly non-linear data from fairly small samples. To investigate this applicability, the paper presents an approach to construct surrogate models using RF, which includes hyperparameter tuning to improve the performance of the RF models, and the extraction of design-parameter importances and if-then rules from the RF models for a better understanding of the design space. To demonstrate the approach, quantitative experiments are conducted on datasets from a turbine rear structure use-case in the aerospace industry, and the results are presented.

Siva Krishna Dasari, Abbas Cheddad, Petter Andersson
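
A rough sketch of the workflow named in the abstract, assuming scikit-learn: fit a Random Forest surrogate on a small sample of design points, tune its hyperparameters, and read off design-parameter importances. The synthetic regression data and the search grid are placeholders, not the turbine rear structure dataset.

```python
# Sketch: Random Forest surrogate of an expensive simulation, with
# hyperparameter tuning and design-parameter importances. Synthetic data
# stands in for the (proprietary) turbine rear structure simulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 6))                   # small sample of design points
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=120)

grid = {"n_estimators": [100, 300], "max_features": [2, 4, 6]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5)
search.fit(X, y)

surrogate = search.best_estimator_
print("best params:", search.best_params_)
print("design-parameter importances:", surrogate.feature_importances_)
# The cheap surrogate can now be queried in place of the simulation:
print("prediction at a new design point:", surrogate.predict(X[:1]))
```
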
Stacking Strong Ensembles of Classifiers

A variety of methods have been developed to tackle classification problems in the field of decision support systems. A hybrid prediction scheme that combines several classifiers, rather than selecting a single robust method, is a good alternative. To address this issue, we provide an ensemble of classifiers to create a hybrid decision support system. The method is based on a stacking-variant methodology that combines strong ensembles to make predictions. The presented hybrid method has been compared with other well-known ensembles. Experiments conducted on several standard benchmark datasets show that the proposed scheme gives promising results in terms of accuracy in most cases.

Stamatios-Aggelos N. Alexandropoulos, Christos K. Aridas, Sotiris B. Kotsiantis, Michael N. Vrahatis
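
As a generic illustration of the idea (not the exact stacking variant proposed in the paper), the sketch below stacks two strong ensembles under a logistic regression meta-learner using scikit-learn.

```python
# Generic stacking sketch: two strong ensembles combined by a logistic
# regression meta-learner. This illustrates the idea only, not the exact
# stacking variant proposed in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=10).mean())
```
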

Multi Agent - IoT

Frontmatter
An Agent-Based Framework for Complex Networks

A large number of research and industrial projects could benefit from module-based development. However, these modules and the communication between them may vary from project to project; therefore, a general middleware is desirable instead of several specialized middlewares for each domain. This paper presents the ACONA Framework (Agent-based Complex Network Architecture), an agent-based middleware with a lightweight and flexible infrastructure that also offers the possibility of evolutionary programming. Its performance is demonstrated in three applications: (i) a cognitive architecture with around 40 interconnected modules; (ii) a stock market simulator with elements of evolutionary programming; and (iii) an Industry 4.0 application featuring a conveyor belt.

Alexander Wendt, Maximilian Götzinger, Thilo Sauter
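
ACONA's actual API is not shown in the abstract; the sketch below is only a generic illustration of agent-based, message-driven modules wired together by a lightweight dispatcher, which is the style of middleware the abstract describes.

```python
# Generic agent/message-passing sketch (not the ACONA API): modules are
# agents with an inbox, wired together by a minimal dispatcher.
from collections import deque

class Agent:
    def __init__(self, name, handler):
        self.name, self.handler, self.inbox = name, handler, deque()

    def step(self, send):
        while self.inbox:
            self.handler(self.inbox.popleft(), send)

class Dispatcher:
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, target, message):
        self.agents[target].inbox.append(message)

    def run(self, steps=1):
        for _ in range(steps):
            for agent in list(self.agents.values()):
                agent.step(self.send)

bus = Dispatcher()
bus.register(Agent("sensor", lambda msg, send: send("controller", msg * 2)))
bus.register(Agent("controller", lambda msg, send: print("controller got", msg)))
bus.send("sensor", 21)
bus.run(steps=2)
```
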
Studying Emotions at Work Using Agent-Based Modeling and Simulation

Emotions in the workplace are a topic that has increasingly attracted the attention of both organizational practitioners and academics, owing to the fundamental role emotions play in shaping human resources' behaviour, performance, productivity, interpersonal relationships and engagement at work. In the current research, a computational social simulation approach is adopted to replicate and study the emotional experiences of employees in organizations. More specifically, an emotional agent-based model of an employee at work is proposed. The developed model is used in the computer simulator WEMOS (Workers EMotions in Organizations Simulator) to conduct analyses of the stimuli most likely to evoke emotions, as well as the emotional content of several work-related stimuli. Simulation results can be employed to gain a deeper understanding of emotions in work life.

Hanen Lejmi-Riahi, Mouna Belhaj, Lamjed Ben Said
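
As a heavily simplified illustration of agent-based modeling of workplace emotions (not the WEMOS model), the sketch below gives each employee agent a valence state that decays toward neutral and is pushed by work-related stimuli with invented appraisal weights.

```python
# Very small agent-based sketch of emotion dynamics at work: each employee
# agent keeps a valence state that decays toward neutral and is pushed by
# work stimuli with invented emotional weights. Not the WEMOS model.
import random

STIMULUS_VALENCE = {"praise": +0.6, "deadline": -0.4, "conflict": -0.7,
                    "achievement": +0.8}        # invented appraisal weights

class Employee:
    def __init__(self, name, decay=0.1):
        self.name, self.valence, self.decay = name, 0.0, decay

    def step(self, stimulus):
        self.valence = (1 - self.decay) * self.valence + STIMULUS_VALENCE[stimulus]
        self.valence = max(-1.0, min(1.0, self.valence))

random.seed(0)
team = [Employee(f"emp{i}") for i in range(3)]
for t in range(10):                              # simple simulation loop
    for emp in team:
        emp.step(random.choice(list(STIMULUS_VALENCE)))
print({emp.name: round(emp.valence, 2) for emp in team})
```
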
Towards an Adaption and Personalisation Solution Based on Multi Agent System Applied on Serious Games

Serious games (SG) have the potential to become one of the most important future e-learning tools. The use of SG in education is a significant departure from common education standards, which are usually based on mass systems of instruction, assessment, grading and reporting of students' knowledge and skills. SG encourage self-directedness and independence of the student, thus providing a framework for self-learning activities. However, the benefits of using SG as a learning tool are maximized in a personalised and adaptive environment. Although it has been suggested in the past that SG can take advantage of Artificial Intelligence (AI) methods for automated adaptation to the learner, there has not been much research in this field. Taking the above into consideration, this paper aims to provide a framework for adaptive and personalised SG using AI methods. Advances in technology have made it possible to trace and collect user-generated data that can capture players' in-game behaviours and track the knowledge or skills acquired by the player while playing. This is essentially a two-step process of "User Identification" and "Content Adaptation" to learners' needs. In the proposed methodology, "User Identification" is implemented from data derived from "User Behaviour" and "System Feedback". These data feed a Learner Agent supported by an Adaption and Personalisation engine, which interacts with both the "Instructional Content" and the "Game Characteristics" in order to achieve the desired adaptation. This paper will be used as a basis for further development of an adaptive and personalised SG.

Spyridon Blatsios, Ioannis Refanidis
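
A minimal, hypothetical sketch of the two-step loop described above: "User Identification" as a skill estimate updated from in-game behaviour, followed by "Content Adaptation" that selects the next challenge. The update rule and thresholds are invented placeholders, not the proposed multi-agent implementation.

```python
# Minimal sketch of the two-step loop: "user identification" (estimating
# learner skill from in-game behaviour) followed by "content adaptation"
# (choosing the next challenge). Update rule and thresholds are invented.
class LearnerAgent:
    def __init__(self):
        self.skill = 0.5                       # estimated mastery in [0, 1]

    def observe(self, success, response_time):
        # User identification: nudge the skill estimate from behaviour data.
        delta = 0.1 if success else -0.1
        delta -= 0.02 if response_time > 10 else 0.0
        self.skill = min(1.0, max(0.0, self.skill + delta))

    def adapt_content(self):
        # Content adaptation: pick difficulty close to the current estimate.
        if self.skill < 0.4:
            return "tutorial level"
        if self.skill < 0.7:
            return "standard level"
        return "challenge level"

agent = LearnerAgent()
for success, rt in [(True, 6), (True, 4), (False, 12), (True, 5)]:
    agent.observe(success, rt)
print("skill estimate:", agent.skill, "->", agent.adapt_content())
```
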

Nature Inspired Flight and Robot Control - Machine Vision

Frontmatter
Constant Angular Velocity Regulation for Visually Guided Terrain Following

Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain-following task that keeps a constant height above undulating ground. To mimic this behaviour in a biologically plausible computational structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts: the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction, and the decoding layer, which combines information from the previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. An angular-velocity-based control scheme is proposed to implement the model in a bee simulated with the game engine Unity. Accurate terrain following above patterned ground and successful flight over irregularly textured terrain show the model's potential for terrain following by micro unmanned aerial vehicles.

Huatian Wang, Qinbing Fu, Hongxin Wang, Jigen Peng, Shigang Yue
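
A highly simplified numerical sketch of the control idea: with forward speed v and height h above the ground, the ventral angular velocity is roughly ω ≈ v/h, and regulating ω to a constant set-point makes the agent climb or descend with the terrain. The gains, set-point and terrain profile are invented, and the paper's neural decoding layers are not modelled.

```python
# Toy sketch of constant-angular-velocity terrain following: the ventral
# angular velocity is approximated as omega = v / h (height above ground),
# and a simple proportional controller adjusts altitude to hold omega at a
# set-point. All constants and the terrain profile are illustrative.
import numpy as np

v = 2.0                 # forward speed (m/s)
omega_ref = 2.0         # desired angular velocity (rad/s) -> height ~ 1 m
k = 0.5                 # proportional gain
dt = 0.05

x, altitude = 0.0, 2.0
terrain = lambda x: 0.3 * np.sin(0.8 * x)        # undulating ground

for _ in range(400):
    h = max(altitude - terrain(x), 0.05)          # height above ground
    omega = v / h                                 # estimated angular velocity
    altitude += k * (omega - omega_ref) * dt      # climb if omega is too high
    x += v * dt

print(f"final height above ground: {altitude - terrain(x):.2f} m")
```
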
Motion Segmentation Based on Structure-Texture Decomposition and Improved Three Frame Differencing

Motion segmentation from video datasets has several important applications, such as traffic monitoring, action recognition, visual object tracking and video surveillance. The proposed technique combines structure-texture decomposition with improved three-frame differencing for motion segmentation. First, the Osher and Vese approach is employed to decompose the video frame into two components, namely structure and texture/noise. To eliminate the noise, only the structure components are used in the subsequent steps. Next, the differences between (i) the current frame and the previous frame and (ii) the current frame and the next frame are computed, and the two difference frames are combined using a pixel-wise maximum operation. The combined difference frame is then partitioned into non-overlapping blocks, and the intensity sum as well as the median of each block is computed. Target objects are then detected with the help of a threshold and the intensity median. Finally, post-processing in the form of morphological operations and connected component analysis is carried out to accurately extract the foreground. Our technique has been formulated, implemented and tested on publicly available standard benchmark datasets, and the performance analysis shows that it yields better results than existing approaches.

Sandeep Singh Sengar
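
The sketch below covers only the frame-differencing part of the pipeline with OpenCV: absolute differences against the previous and next frames, a pixel-wise maximum, thresholding and morphological post-processing. A Gaussian blur stands in for the Osher-Vese structure component, which is a simplification, and the block-wise analysis is omitted.

```python
# Sketch of the three-frame-differencing portion of the pipeline with
# OpenCV. A Gaussian blur stands in for the Osher-Vese structure
# component (a simplification); block-wise analysis is omitted.
import cv2
import numpy as np

def segment_motion(prev_frame, curr_frame, next_frame, thresh=25):
    # Crude "structure" extraction: smooth away fine texture/noise.
    prev_s, curr_s, next_s = (cv2.GaussianBlur(f, (5, 5), 0)
                              for f in (prev_frame, curr_frame, next_frame))
    d1 = cv2.absdiff(curr_s, prev_s)              # current vs previous
    d2 = cv2.absdiff(curr_s, next_s)              # current vs next
    combined = cv2.max(d1, d2)                    # pixel-wise maximum
    _, mask = cv2.threshold(combined, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # post-processing
    n_labels, labels = cv2.connectedComponents(mask)        # foreground blobs
    return mask, n_labels - 1

# Example with synthetic grayscale frames (a bright square moving right).
frames = [np.zeros((120, 160), np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    f[40:60, 40 + 10 * i: 60 + 10 * i] = 255
mask, n_objects = segment_motion(*frames)
print("moving objects detected:", n_objects)
```
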
Using Shallow Neural Network Fitting Technique to Improve Calibration Accuracy of Modeless Robots

This paper describes a technique for position-error estimation and compensation in the calibration of modeless robots and manipulators, based on a shallow neural network fitting method. Unlike traditional model-based robot calibration, modeless robot calibration does not require any modeling or identification processes; only two processes, measurement and compensation, are necessary. By using the shallow neural network fitting technique, the accuracy of the position-error compensation can be greatly improved, which is confirmed by the simulation results given in this paper. Comparisons between popular traditional interpolation methods, such as bilinear and fuzzy interpolation, and the shallow neural network technique are also made via simulation studies. The simulation results show that more accurate compensation can be achieved using the shallow neural network fitting technique than with the bilinear and fuzzy interpolation methods.

Ying Bai, Dali Wang
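
A toy sketch of the compensation idea: position errors measured on a grid of poses are fitted by a shallow neural network, which then predicts the error to subtract at an unmeasured pose. The 2-D error field, network size and training settings are illustrative assumptions.

```python
# Toy sketch: fit measured position errors over a 2-D workspace grid with
# a shallow neural network, then compensate at an unmeasured pose.
# The synthetic error field and network settings are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# "Measured" calibration grid: commanded (x, y) positions and their errors.
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
P = np.column_stack([gx.ravel(), gy.ravel()])
errors = np.column_stack([0.01 * np.sin(3 * P[:, 0]) + 0.002 * P[:, 1],
                          0.01 * np.cos(2 * P[:, 1])])

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(P, errors)                               # shallow fitting network

target = np.array([[0.37, 0.52]])                # unmeasured pose
predicted_error = net.predict(target)
compensated = target - predicted_error           # apply compensation
print("predicted error:", predicted_error, "compensated target:", compensated)
```
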

Recommendation Systems

Frontmatter
Banner Personalization for e-Commerce

Real-time website personalization is a concept that has been discussed for more than a decade, but has only recently been applied in practice, following new marketing trends that emphasize delivering user-specific content based on behaviour and preferences. In this context, banner recommendation in the form of personalized ads is an approach that has attracted a lot of attention. Nevertheless, banner recommendation for e-commerce main-page sliders and static banners remains an underestimated problem, as traditionally only large e-commerce stores deal with it. In this paper we propose an integrated framework for banner personalization in e-commerce that can be applied by small and medium e-retailers. Our approach combines topic models with a neural network in order to recommend and optimally rank the available banners of an e-commerce store for each user separately. We evaluated our framework against a dataset from an active e-commerce store and show that it outperforms other popular approaches.

Ioannis Maniadis, Konstantinos N. Vavliakis, Andreas L. Symeonidis
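
The paper's exact architecture is not reproduced here; the sketch below only illustrates the general combination mentioned in the abstract, with a topic model turning product and banner text into topic vectors and a user profile built from viewed items. A plain dot product stands in for the neural ranking network, and all texts are invented.

```python
# Sketch of topic-model-based banner ranking: LDA turns product/banner text
# into topic vectors; a user profile is the mean topic vector of viewed
# items, and banners are ranked by similarity to it. (A plain dot product
# stands in here for the neural ranking network; all texts are invented.)
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

products = ["running shoes lightweight trail",
            "trail backpack hiking waterproof",
            "espresso machine coffee grinder",
            "coffee beans arabica dark roast"]
banners = {"outdoor_sale": "hiking shoes backpack discount",
           "coffee_week": "coffee espresso beans offer"}

vec = CountVectorizer()
X = vec.fit_transform(products)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def topics(texts):
    return lda.transform(vec.transform(texts))

user_profile = topics(products[:2]).mean(axis=0)   # user viewed outdoor items
scores = {name: float(topics([text])[0] @ user_profile)
          for name, text in banners.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # best banner first
```
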
Hybrid Data Set Optimization in Recommender Systems Using Fuzzy T-Norms

A recommender system uses specific algorithms and techniques to suggest services, goods or other types of recommendations that users could be interested in. Users' preferences or ratings are used as input, and top-N recommendations are produced by the system. The evaluation of the recommendations is usually based on accuracy metrics such as the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE), while Precision and Recall are used to measure the quality of the top-N recommendations. Recommender system research has mainly focused on the development of new recommendation algorithms. However, one of the major problems in modern offline recommender systems is the sparsity of the datasets and the selection of the suitable users Y that could produce the best recommendations for users X. In this paper, we propose an algorithm that uses fuzzy sets and fuzzy norms to evaluate the correlation between users in the dataset, so that the system can select and use only the most relevant users. At the same time, we extend our previous work on the reproduction of experiments in recommender systems by developing new explanations and variables for the proposed algorithm. Our approach has been experimentally evaluated on a real dataset, and the results show that it is efficient and can increase both the accuracy and the quality of recommendations.

Antonios Papaleonidas, Elias Pimenidis, Lazaros Iliadis
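
A minimal sketch of the general idea of scoring user-user relevance with a T-norm over fuzzified ratings: the membership function, the product T-norm and the toy ratings are assumptions, not the paper's exact formulation.

```python
# Toy sketch: fuzzify ratings into a "likes" membership degree and combine
# the degrees of two users on co-rated items with a T-norm (product here)
# to score how relevant user Y is for user X. Membership function, T-norm
# choice and ratings are illustrative, not the paper's exact formulation.
import numpy as np

ratings = {                      # user -> {item: rating on a 1..5 scale}
    "X": {"i1": 5, "i2": 4, "i3": 1},
    "Y": {"i1": 4, "i2": 5, "i4": 2},
    "Z": {"i1": 1, "i3": 5},
}

def likes(r):                    # fuzzy membership of "user likes the item"
    return np.clip((r - 1) / 4, 0.0, 1.0)

def t_norm(a, b):                # product T-norm
    return a * b

def relevance(u, v):
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    degrees = [t_norm(likes(ratings[u][i]), likes(ratings[v][i])) for i in common]
    return float(np.mean(degrees))

for other in ("Y", "Z"):
    print(f"relevance of {other} for X: {relevance('X', other):.2f}")
```
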
MuSIF: A Product Recommendation System Based on Multi-source Implicit Feedback

Collaborative Filtering (CF) is a well-established method in recommendation systems. Recent research also focuses on extracting recommendations from implicitly gathered information. Implicit Feedback (IF) systems present several new challenges that need to be addressed. This paper reports on MuSIF, a product recommendation system based solely on IF. MuSIF combines CF with Matrix Factorization and Association Rule Mining. It implements a hybrid recommendation algorithm in which different methods can be used to increase accuracy. In addition, it is equipped with a new method for increasing the accuracy of matrix factorization algorithms via the initialization of factor vectors, which, as far as we know, is tested for the first time in an implicit model-based CF approach. Moreover, it includes methods for addressing data sparsity, a major issue for many recommendation engines. Evaluation shows that the proposed methodology is promising and can benefit customers and e-shop owners with personalization in real-world scenarios.

Ioannis Schoinas, Christos Tjortjis
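
The sketch below shows only a bare-bones matrix factorization on implicit (binary) feedback with confidence weighting, trained by gradient descent; the hybridization with association rules and the factor-initialization scheme described in the abstract are not reproduced, and all sizes and hyperparameters are invented.

```python
# Bare-bones implicit-feedback matrix factorization (confidence-weighted,
# gradient descent). The paper's hybrid with association rules and its
# factor-initialization scheme are not reproduced; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, alpha, lr, reg = 30, 40, 8, 10.0, 0.01, 0.1

R = (rng.random((n_users, n_items)) < 0.1).astype(float)   # implicit clicks
P = 1.0 * (R > 0)                      # binary preference
C = 1.0 + alpha * R                    # confidence weights

U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))

for epoch in range(200):
    E = C * (P - U @ V.T)              # confidence-weighted residuals
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

scores = U @ V.T
scores[R > 0] = -np.inf                # do not recommend already-seen items
top_n = np.argsort(-scores, axis=1)[:, :5]
print("top-5 items for user 0:", top_n[0])
```
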
On the Invariance of the SELU Activation Function on Algorithm and Hyperparameter Selection in Neural Network Recommenders

In a number of recent studies, the Scaled Exponential Linear Unit (SELU) activation function has been shown to automatically regularize network parameters and to make learning robust due to its self-normalizing properties. In this paper we explore the use of SELU for training different neural network architectures for recommender systems and validate that it indeed outperforms other activation functions for these types of problems. More interestingly, however, we show that SELU also exhibits performance invariance with regard to the selection of the optimization algorithm and its corresponding hyperparameters. This is clearly demonstrated by experiments involving several activation functions and optimization algorithms for training different neural network architectures on standard recommender systems benchmark datasets.

Flora Sakketou, Nicholas Ampazis
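
For reference, the sketch below gives the standard SELU definition with its self-normalizing constants (λ ≈ 1.0507, α ≈ 1.6733) in plain NumPy, plus a quick check of the self-normalizing property; the recommender architectures and optimizers from the experiments are not reproduced.

```python
# Reference implementation of the SELU activation and its derivative in
# NumPy, using the standard self-normalizing constants; the paper's
# recommender architectures and optimizers are not reproduced.
import numpy as np

LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

def selu_grad(x):
    return LAMBDA * np.where(x > 0, 1.0, ALPHA * np.exp(x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("selu(x)  =", selu(x))
print("selu'(x) =", selu_grad(x))

# With standard normal inputs, SELU keeps activations close to zero mean
# and unit variance, which is the self-normalizing property in question.
z = np.random.default_rng(0).normal(size=100000)
print("mean, std after SELU:", round(float(selu(z).mean()), 3),
      round(float(selu(z).std()), 3))
```
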
Backmatter
Metadata
Title
Artificial Intelligence Applications and Innovations
Editors
John MacIntyre
Ilias Maglogiannis
Prof. Lazaros Iliadis
Dr. Elias Pimenidis
Copyright Year
2019
Electronic ISBN
978-3-030-19823-7
Print ISBN
978-3-030-19822-0
DOI
https://doi.org/10.1007/978-3-030-19823-7
