
About This Book

This two-volume set LNCS 10305 and LNCS 10306 constitutes the refereed proceedings of the 14th International Work-Conference on Artificial Neural Networks, IWANN 2017, held in Cadiz, Spain, in June 2017.

The 126 revised full papers presented in this double volume were carefully reviewed and selected from 199 submissions. The papers are organized in topical sections on Bio-inspired Computing; E-Health and Computational Biology; Human Computer Interaction; Image and Signal Processing; Mathematics for Neural Networks; Self-organizing Networks; Spiking Neurons; Artificial Neural Networks in Industry ANNI'17; Computational Intelligence Tools and Techniques for Biomedical Applications; Assistive Rehabilitation Technology; Computational Intelligence Methods for Time Series; Machine Learning Applied to Vision and Robotics; Human Activity Recognition for Health and Well-Being Applications; Software Testing and Intelligent Systems; Real World Applications of BCI Systems; Machine Learning in Imbalanced Domains; Surveillance and Rescue Systems and Algorithms for Unmanned Aerial Vehicles; End-User Development for Social Robotics; Artificial Intelligence and Games; and Supervised, Non-Supervised, Reinforcement and Statistical Algorithms.

Table of Contents

Prediction of Protein Oxidation Sites

Although reactive oxygen species are best known as damaging agents linked to aerobic metabolism, it is now clear that they can also function as messengers in cellular signalling processes. Methionine, one of the two sulphur-containing amino acids in proteins, is liable to be oxidized by a well-known reactive oxygen species: hydrogen peroxide. The awareness that methionine oxidation may provide a mechanism for the modulation of a wide range of protein functions and cellular processes has recently encouraged proteomic approaches. However, these experimental studies are considerably time-consuming, labor-intensive and expensive, which makes the development of in silico methods for predicting methionine oxidation sites highly desirable. In the field of protein phosphorylation, computational prediction of phosphorylation sites has emerged as a popular alternative approach. By contrast, very few in silico studies for methionine oxidation prediction exist in the literature. In the current study we address this issue by developing predictive models based on machine learning strategies (random forests, support vector machines, neural networks and flexible discriminant analysis) aimed at accurate prediction of methionine oxidation sites.

Francisco J. Veredas, Francisco R. Cantón, Juan C. Aledo
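
As a rough illustration of how such site predictors are typically fed, the sketch below extracts fixed-length sequence windows centred on each methionine and one-hot encodes them. The window width, the "X" padding symbol and the encoding are illustrative assumptions, not the features used by the authors.

```python
# Illustrative sketch: windowed sequence features for methionine-site prediction.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def methionine_windows(sequence, half_width=3, pad="X"):
    """Return (position, window) pairs for every methionine (M) in the sequence,
    padding with `pad` when the window runs past either end."""
    windows = []
    for i, residue in enumerate(sequence):
        if residue != "M":
            continue
        left = sequence[max(0, i - half_width):i].rjust(half_width, pad)
        right = sequence[i + 1:i + 1 + half_width].ljust(half_width, pad)
        windows.append((i, left + "M" + right))
    return windows

def one_hot(window):
    """Flat one-hot vector over the standard residues (pad positions stay all-zero)."""
    vec = []
    for residue in window:
        vec.extend(1 if residue == aa else 0 for aa in AMINO_ACIDS)
    return vec

pairs = methionine_windows("ARNMDCEQM")  # two methionines -> two windows
```

The resulting vectors can then be fed to any of the classifiers named in the abstract (random forests, SVMs, neural networks).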

Neuronal Texture Analysis in Murine Model of Down’s Syndrome

An alteration of neuronal morphology is present in cognitive neurological diseases where learning or memory abilities are affected. The quantification of this alteration and of its evolution through the study of microscopic images is essential. However, the use of advanced, automatic image processing techniques is currently very limited, focusing on the analysis of the morphology of isolated neurons. In this article we present a new methodology, based on texture analysis, to characterize the global distribution of different neural patterns in immunofluorescence images of brain tissue sections, where the neurons can be visualized as they are actually distributed. We apply the technique to mouse brain tissue sections divided into two classes: the Ts1Cje Down’s syndrome model and wild type, free of this neurodegenerative disease. Focusing on the CA1 region of the hippocampus, we calculate and compare several state-of-the-art texture descriptors, which are subsequently classified using machine learning techniques. With an accuracy of 95%, the results support the assumption that texture characterization is relevant for globally quantifying morphological alterations in neurons.

Auxiliadora Sarmiento, Miguel Ángel Fernández-Granero, Beatriz Galán, María Luz Montesinos, Irene Fondón

Architecture for Neurological Coordination Tests Implementation

This paper proposes a generic architecture for devising interactive neurological assessment tests, aimed at being implemented on a touchscreen device. The objective is both to provide a set of software primitives that allow the modular implementation of tests, and to contribute to the standardization of test protocols. Although our original goal was the application of machine learning methods to the analysis of test data, it turned out that the construction of such a framework was a prerequisite for collecting enough data with the required levels of accuracy and reproducibility. In the proposed architecture, tests are defined by a set of stimuli, responses, feedback information, and execution control procedures. The presented definition has allowed for the implementation of a particular test, the Finger-Nose-Finger, which will enable the exploitation of data with intelligent techniques.

Michel Velázquez-Mariño, Miguel Atencia, Rodolfo García-Bermúdez, Francisco Sandoval, Daniel Pupo-Ricardo

Adaptation of Deep Convolutional Neural Networks for Cancer Grading from Histopathological Images

The paper addresses the medical challenge of interpreting histopathological slides through expert-independent automated learning, with implicit feature determination and direct grading establishment. Deep convolutional neural networks model the image collection and are able to give timely and accurate support to pathologists, who are more often than not burdened by large amounts of data to be processed. The paradigm is, however, known to be problem-dependent in its parameter settings, so automatic parametrization is also considered. Due to the long runtime required, this is restricted to kernel size optimization in each convolutional layer. As the processing time still remains considerable for five variables, a surrogate model is further constructed. Results support the use of the deep learning methodology for computational assistance in cancer grading from histopathological images.

Stefan Postavaru, Ruxandra Stoean, Catalin Stoean, Gonzalo Joya Caparros

Deep Learning to Analyze RNA-Seq Gene Expression Data

Deep learning models are currently being applied in several areas with great success. However, their application to the analysis of high-throughput sequencing data remains a challenge for the research community, since this family of models is known to work very well on big datasets with many samples available, just the opposite of the scenario typically found in biomedical areas. In this work, a first approximation to the use of deep learning for the analysis of RNA-Seq gene expression profile data is provided. Three public cancer-related databases are analyzed using a regularized linear model (standard LASSO) as the baseline, and two deep learning models that differ in the feature selection technique applied prior to the deep neural net. The results indicate that a straightforward application of the deep net implementations available in public scientific tools, under the conditions described in this work, is not enough to outperform simpler models like LASSO. Therefore, smarter and more complex ways of incorporating prior biological knowledge into the estimation procedure of deep learning models may be necessary in order to obtain better predictive performance.

D. Urda, J. Montes-Torres, F. Moreno, L. Franco, J. M. Jerez

Designing BENECA m-Health APP, A Mobile Health Application to Monitor Diet and Physical Activity in Cancer Survivors

This is the abstract of a proposed mobile health application for assessing and monitoring healthy lifestyles (in terms of diet and physical activity levels) in cancer survivors, to be fully presented at IWANN 2017. The main goal of this mobile health application is to help cancer patients adhere to international health recommendations on diet and physical activity; energy imbalance can increase the risk of recurrence and other associated problems, such as metabolic syndrome and even death. The system, called the BENECA m-Health app, is still in development. It is intended to be a reliable instrument for assessing physical activity and diet in cancer survivors, offering them real-time energy balance feedback and attempting to overcome specific identified barriers, in order to facilitate the inclusion of exercise and a healthy diet in supportive care programs for cancer survivors. The application has been designed to address the needs of breast cancer survivors, reflecting the emerging need to merge new, low-cost treatment options. This m-Health system could be a promising approach for dietary and physical assessment, as well as for intervention programs, and can be used whenever and wherever patients want.

Mario Lozano-Lozano, Jose A. Moral-Munoz, Noelia Galiano-Castillo, Lydia Martín-Martín, Carolina Fernández-Lao, Manuel Arroyo-Morales, Irene Cantarero-Villanueva

Automatic 2D Motion Capture System for Joint Angle Measurement

Joint angles are among the most common measurements for the evaluation of lower limb injury risk. The 2D projections of these angles, such as the Frontal Plane Projection Angle (FPPA), are widely used as estimations of the angle value. Traditional procedures for measuring 2D angles require large time investments, especially when evaluating multiple subjects. This work presents a novel 2D video analysis system aimed at capturing joint angles in a cost- and time-effective way. It employs the Kinect V2 depth sensor to track retro-reflective markers attached to the patient’s joints and provides an automatic estimation of the desired angles. The information registered by the sensor is processed and managed by a computer application that expedites the analysis of the results. The reliability of the system has been studied against traditional procedures, with excellent results. The system is intended to be the starting point of an autonomous injury prediction system based on machine learning techniques.

Carlos Bailon, Miguel Damas, Hector Pomares, Oresti Banos
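
For illustration, a 2D projection angle such as the FPPA can be computed from three tracked marker positions as the angle between the two limb segments meeting at the middle joint. The marker layout below (hip, knee, ankle) is a hypothetical example, not the system's actual interface.

```python
import math

def projection_angle(hip, knee, ankle):
    """Angle in degrees at the knee between the knee->hip and knee->ankle
    segments, measured on their 2D (frontal-plane) projections."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Collinear markers span a straight segment; a bent knee gives a smaller angle.
straight = projection_angle((0, 2), (0, 1), (0, 0))
```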

Mobile Application for Executing Therapies with Robots

While robotic technology is being incorporated into therapies, not enough research has yet been done on how different end-users are willing or able to use robots in their practice. To investigate this issue, a specific study was designed to determine the preferences of end-users who deliver or receive therapies using robots. We applied a participatory design approach that included brainstorming and testing at every stage of the development process. We first determined the preferences of professionals from clinics and schools for children with Autism Spectrum Disorder (ASD). The results indicated that shared (semi-autonomous) control of the robot is preferred in therapies, and that mobile devices, like smartphones and tablets, are the preferred interface for shared robot control. The outcomes of this first stage of research were used as design requirements for the development of a mobile application to serve as an interactive robot control interface. We further developed the application and tested its usability with a broad spectrum of users.

Manuel Martin-Ortiz, Min-Gyu Kim, Emilia I. Barakova

Automated EEG Signals Analysis Using Quantile Graphs

Recently, a map from time series to networks has been proposed [7, 8], allowing the use of network statistics to characterize time series. In this approach, time series quantiles are naturally mapped into nodes of a graph. Networks generated by this method, called Quantile Graphs (QGs), are able to capture and quantify features such as long-range correlations or randomness present in the underlying dynamics of the original signal. Here we apply the QG method to the problem of detecting differences between the electroencephalographic (EEG) time series of healthy and unhealthy subjects. Our main goal is to illustrate how differences in dynamics are reflected in the topology of the corresponding QGs. Results show that the QG method can not only differentiate epileptic from normal data, but also distinguish the different abnormal stages/patterns of a seizure, such as pre-ictal (EEG changes preceding a seizure) and ictal (EEG changes during a seizure).

Andriana S. L. O. Campanharo, Erwin Doescher, Fernando M. Ramos
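
A minimal sketch of the quantile-graph construction described above: values are binned by quantile, each bin becomes a node, and each consecutive-in-time transition between bins contributes to a weighted directed edge. The number of quantiles and the row normalization are generic modelling choices for illustration, not necessarily those of [7, 8].

```python
import numpy as np

def quantile_graph(series, q=4):
    """Map a time series to a q-node directed graph: node i is the i-th
    quantile bin; entry W[i, j] is the empirical probability of moving
    from bin i to bin j at the next time step."""
    series = np.asarray(series, dtype=float)
    # Internal quantile cut points split the values into q equiprobable bins.
    cuts = np.quantile(series, np.linspace(0, 1, q + 1)[1:-1])
    labels = np.digitize(series, cuts)          # bin index 0..q-1 per sample
    counts = np.zeros((q, q))
    for a, b in zip(labels[:-1], labels[1:]):   # count successive transitions
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
```

Network statistics of the resulting matrix (e.g. entropy of the transition rows) can then serve as features to separate healthy from epileptic EEG segments.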

Hybrid Models for Short-Term Load Forecasting Using Clustering and Time Series

Short-term forecasting models at the micro-grid level help guarantee the cost-effective dispatch of available resources and keep shortfalls and surpluses in the spot market to a minimum. In this paper, we introduce two time series models for forecasting the day-ahead total power consumption and the fine-granular 24-hour consumption pattern of individual buildings. The proposed model for predicting the consumption pattern outperforms the state-of-the-art algorithm of Pattern Sequence-based Forecasting (PSF). Our analysis reveals that clustering individual buildings based on their seasonal, weekly, and daily patterns of power consumption improves prediction accuracy and increases time efficiency by reducing the search space.

Wael Alkhatib, Alaa Alhamoud, Doreen Böhnstedt, Ralf Steinmetz

Multi-resolution Time Series Discord Discovery

Discord discovery is a recent approach for anomaly detection in time series that has attracted much research because of the wide variety of real-world applications in monitoring systems. However, finding anomalies at different levels of resolution has received little attention in this research line. In this paper, we introduce a multi-resolution representation based on local trends and mean values of the time series. The level of resolution is required as a parameter, but it can be computed automatically by considering the maximum resolution of the time series. In order to provide a useful representation for discord discovery, we propose dissimilarity measures for achieving highly effective results, and a symbolic representation based on the SAX technique for efficient searches using a multi-resolution indexing scheme. We evaluate our method over a diversity of data domains, achieving better performance than some of the best-known classic techniques.

Heider Sanchez, Benjamin Bustos
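
The symbolic representation mentioned above builds on SAX, which z-normalizes the series, reduces it by piecewise aggregate approximation (PAA), and maps each segment mean to a letter via equiprobable breakpoints of the standard normal. The sketch below, with a 4-letter alphabet and a series length divisible by the segment count, is a generic SAX illustration rather than the paper's multi-resolution scheme.

```python
import numpy as np

# Breakpoints splitting the standard normal into 4 equiprobable regions,
# as tabulated in the SAX literature.
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    """SAX word for `series` (length must be divisible by n_segments)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1) # piecewise aggregate approximation
    return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, v)] for v in paa)
```

Discords can then be searched efficiently by indexing these words, since similar subsequences map to identical or near-identical symbols.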

A Pliant Arithmetic-Based Fuzzy Time Series Model

In this study, a fuzzy arithmetic-based fuzzy time series modeling method is introduced. After input data normalization, the fuzzy c-means clustering algorithm is used for fuzzification and for establishing the antecedents of the fuzzy rules. Each rule consequent is treated as a fuzzy number composed of a left and a right hand side fuzzy set, each of which is given by a sigmoid membership function. The novelty of the proposed method lies in the application of pliant arithmetic to aggregate separately the left and the right hand sides of the individual fuzzy consequents, taking the activation levels of the corresponding antecedents into account. Dombi’s conjunction operator is applied to form the fuzzy output from the aggregates of the left and right hand side sigmoid functions. The introduced defuzzification method does not require any numerical integration and runs in constant time. The output of the pliant arithmetic-based fuzzy time series model is obtained by denormalizing the crisp output produced by the fuzzy inference. Lastly, the modeling capability of the introduced methodology was tested on empirical data. Based on these results, our method may be viewed as a viable alternative prediction technique.

József Dombi, Tamás Jónás, Zsuzsanna Eszter Tóth

Robust Clustering for Time Series Using Spectral Densities and Functional Data Analysis

In this work a robust clustering algorithm for stationary time series is proposed. The algorithm is based on the use of estimated spectral densities, which are considered as functional data, as the basic characteristic of stationary time series for clustering purposes. A robust algorithm for functional data is then applied to the set of spectral densities. Trimming techniques and restrictions on the scatter within groups reduce the effect of noise in the data and help to prevent the identification of spurious clusters. The procedure is tested in a simulation study, and is also applied to a real data set.

Diego Rivera-García, Luis Angel García-Escudero, Agustín Mayo-Iscar, Joaquín Ortega

Introducing a Fuzzy-Pattern Operator in Fuzzy Time Series

In this paper we introduce a fuzzy pattern operator and propose a new weighted fuzzy time series strategy for generating accurate ex-post forecasts. A decision support system is built for managing the weights of the information provided by the historical data, under a fuzzy time series framework. Our procedure analyzes the historical performance of the time series through different experiments, and classifies the characteristics of the series by means of a fuzzy operator, providing a trapezoidal fuzzy number as a one-step-ahead forecast. We also present some numerical results on the predictive performance of our procedure with time series from financial data sets.

Abel Rubio, Enriqueta Vercher, José D. Bermúdez

Scalable Forecasting Techniques Applied to Big Electricity Time Series

This paper presents different scalable methods for predicting time series of very great length, such as time series with a high sampling frequency. The Apache Spark framework for distributed computing is used to achieve the scalability of the methods; namely, the existing MLlib machine learning library from Spark. Since MLlib does not support multivariate regression, the forecasting problem is split into h forecasting subproblems, where h is the number of future values to predict. Representative forecasting methods of different natures have then been chosen: models based on trees, two ensemble techniques (gradient-boosted trees and random forests), and a linear regression as a reference method. Finally, the methodology has been tested on a real-world dataset of Spanish electricity load data with a ten-minute sampling frequency.

Antonio Galicia, José F. Torres, Francisco Martínez-Álvarez, Alicia Troncoso
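
The splitting strategy described above, h single-output subproblems instead of one multivariate regression, can be sketched outside Spark with plain least squares (the paper uses MLlib tree ensembles; the window size and model here are illustrative stand-ins):

```python
import numpy as np

def make_supervised(series, w, h):
    """Sliding windows: each sample uses w past values to predict the next
    h values; each column of Y defines one forecasting subproblem."""
    X, Y = [], []
    for i in range(len(series) - w - h + 1):
        X.append(series[i:i + w])
        Y.append(series[i + w:i + w + h])
    return np.array(X), np.array(Y)

def fit_per_horizon(X, Y):
    """One least-squares model per future step: h independent subproblems."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    return [np.linalg.lstsq(Xb, Y[:, j], rcond=None)[0] for j in range(Y.shape[1])]

def predict(models, x):
    """Assemble the h-step forecast from the h single-output models."""
    xb = np.append(x, 1.0)
    return np.array([xb @ m for m in models])
```

In the paper each subproblem is trained in parallel by Spark; this sketch just shows why h single-output learners reproduce the multivariate target column by column.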

Forecasting Financial Time Series with Multiple Kernel Learning

This paper introduces a forecasting procedure based on multivariate dynamic kernels to re-examine, under a nonlinear framework, the experimental tests reported by Welch and Goyal showing that several variables proposed in the academic literature are of no use for predicting the equity premium under linear regressions. In this approach, kernel functions for time series are used with multiple kernel learning in order to represent the relative importance of each of these variables.

Luis Fábregues, Argimiro Arratia, Lluís A. Belanche

Spatial-Temporal Analysis for Noise Reduction in NDVI Time Series

MODerate resolution Imaging Spectroradiometer (MODIS) data are widely used in multitemporal analysis of various Earth-related phenomena, such as mapping patterns of vegetation phenology and detecting land use/land cover change. NDVI time series are composite mosaics of the best quality pixels over a period of sixteen days. However, it is common to find low quality pixels in the composition that affect the time series analysis, due to atmospheric conditions and errors in data acquisition. We present a filtering methodology that considers the pixel position (location in space) and time (position in the temporal data series) to define a new value for each low quality pixel. The methodology estimates the value of the point of interest based, first, on a linear regression excluding pixels with a low coefficient of determination (R²) and, second, on excluding outliers according to a boxplot analysis. From the remaining group of pixels, a smoothing spline is generated in order to reconstruct the time series. The accuracy of the NDVI values estimated with the spline was higher than that of the Savitzky–Golay method.

Fernanda Carneiro Rola Servián, Julio Cesar de Oliveira
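
The second filtering step, outlier exclusion by the boxplot rule followed by reconstruction of the flagged pixels, might look as follows. Linear interpolation stands in for the smoothing spline, and the 1.5-IQR whisker factor is the conventional default, not necessarily the paper's setting.

```python
import numpy as np

def boxplot_filter(values, k=1.5):
    """Boolean mask keeping values inside [Q1 - k*IQR, Q3 + k*IQR]
    (the boxplot rule used for outlier exclusion)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values >= q1 - k * iqr) & (values <= q3 + k * iqr)

def fill_low_quality(series, good_mask):
    """Replace flagged samples by interpolating over time between the
    remaining good samples (a stand-in for the smoothing spline)."""
    t = np.arange(len(series))
    return np.interp(t, t[good_mask], series[good_mask])
```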

Hidden-Markov Models for Time Series of Continuous Proportions with Excess Zeros

Bounded time series and time series of continuous proportions are often encountered in statistical modeling. Usually, they are addressed either by a logistic transformation of the data, or by specific probability distributions, such as the Beta distribution. Nevertheless, these approaches may become quite tricky when the data show an excess of observations at 0 and/or 1. In these cases, the zero-and/or-one Beta-inflated distributions, $\mathcal{ZOIB}$, are preferred. This manuscript combines $\mathcal{ZOIB}$ distributions with hidden-Markov models and proposes a flexible model, able to capture the several regimes controlling the behavior of a time series of continuous proportions. To illustrate the practical interest of the proposed model, several examples on simulated data are given, as well as a case study on historical data involving the military logistics of the Duchy of Savoy during the 16th and 17th centuries.

Julien Alerini, Marie Cottrell, Madalina Olteanu
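
A zero-and-one inflated Beta density combines point masses at 0 and 1 with a Beta density on the open interval; a generic formulation (the paper's exact parametrization may differ) is sketched below, as it would appear in each hidden state's emission distribution.

```python
import math

def zoib_density(y, p0, p1, a, b):
    """Density of a zero-and-one inflated Beta: point mass p0 at 0,
    point mass p1 at 1, and weight (1 - p0 - p1) on a Beta(a, b)
    density over the open interval (0, 1)."""
    if y == 0.0:
        return p0
    if y == 1.0:
        return p1
    # Beta function B(a, b) via the Gamma function.
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (1.0 - p0 - p1) * y ** (a - 1) * (1.0 - y) ** (b - 1) / beta
```

In the hidden-Markov setting, each regime carries its own (p0, p1, a, b), and the usual forward-backward machinery applies with this emission density.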

Forecasting Univariate Time Series by Input Transformation and Selection of the Suitable Model

Several tasks in science, engineering, or finance involve sequences of values over time (time series). This paper focuses on univariate time series, where unknown future values are predicted from the k previous (known) values. To fit a model between independent variables (present and past values) and dependent variables (future values), Artificial Neural Networks, which are data driven, can achieve good performance. In this work, we present a method to find alternatives to the ANN trained with the raw data. The method is based on transforming the original time series into the time series of differences between consecutive values and the time series of increments (−1, 0, +1) between consecutive values. The three ANNs obtained can be applied individually, or combined to obtain a fourth alternative. The method evaluates the performance of all alternatives on a validation subset and decides which of them could improve, on the test subset, the performance of the ANN trained with raw data.

German Gutierrez, M. Paz Sesmero, Araceli Sanchis
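
The two transformations of the raw series described above, first differences and signed increments, follow directly from the definitions in the abstract:

```python
def to_differences(series):
    """First differences between consecutive values."""
    return [b - a for a, b in zip(series, series[1:])]

def to_increments(series):
    """Direction of change between consecutive values: -1, 0 or +1."""
    return [(d > 0) - (d < 0) for d in to_differences(series)]
```

Each transformed series then trains its own ANN, and the ensemble of the three is the fourth alternative compared on the validation subset.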

Vehicle Classification in Traffic Environments Using the Growing Neural Gas

Traffic monitoring is one of the most popular applications of automated video surveillance. Classification of the vehicles into types is important in order to provide the human traffic controllers with updated information about the characteristics of the traffic flow, which facilitates their decision making process. In this work, a video surveillance system is proposed to carry out such classification. First of all, a feature extraction process is carried out to obtain the most significant features of the detected vehicles. After that, a set of Growing Neural Gas neural networks is employed to determine their types. A qualitative and quantitative assessment of the proposal is carried out on a set of benchmark traffic video sequences, with favorable results.

Miguel A. Molina-Cabello, Rafael Marcos Luque-Baena, Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, Enrique Domínguez, José Muñoz Pérez

Recognizing Pedestrian Direction Using Convolutional Neural Networks

Pedestrian movement direction recognition is an important factor in autonomous driver assistance and security surveillance systems. Pedestrians are the most crucial and fragile moving objects in streets, roads and events where thousands of people may gather on a regular basis. People flow analysis on zebra crossings, in commercial centres and at events such as demonstrations is a key element to improve safety and to enable autonomous cars to drive in real life environments. This paper focuses on deep learning techniques, namely Convolutional Neural Networks (CNN), to achieve good and reliable detection of pedestrians moving in a particular direction. We present a novel input representation that leverages current pedestrian detection techniques to generate a sum of subtracted frames, which is used as input for the proposed CNN. Moreover, we have also created a new dataset for this purpose.

Alex Dominguez-Sanchez, Sergio Orts-Escolano, Miguel Cazorla

XRAY Algorithm for Separable Nonnegative Tensor Factorization

Many computational problems in machine learning can be represented by separable matrix factorization models. In a geometric approach, linear separability means that the whole set of data points can be modeled by a convex combination of a few data points, referred to as the extreme rays. The aim of the XRAY algorithm is to find the extreme rays of the conic hull generated by the observed nonnegative vectors. In this paper, we extend the concept of this algorithm to a multi-linear data representation. Instead of searching a vector space, we attempt to find the equivalent extreme rays in a space of tensors, under the assumption of linear separability of the subtensors ordered along the selected mode. The proposed multi-way XRAY algorithm has been applied to Blind Source Separation (BSS) of natural images. The experiments demonstrate that if the multi-way observations are linearly separable in at least one mode, the proposed algorithms can estimate the latent factors with high Signal-to-Interference Ratio (SIR) performance. The discussed methods may also be useful for analyzing video sequences.

Automatic Learning of Gait Signatures for People Identification

This work targets people identification in video based on the way they walk (i.e. gait). While classical methods typically derive gait signatures from sequences of binary silhouettes, in this work we explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e. optical flow components). We carry out a thorough experimental evaluation of the proposed CNN architecture on the challenging TUM-GAID dataset. The experimental results indicate that using spatio-temporal cuboids of optical flow as input data for the CNN allows state-of-the-art results to be obtained on the gait task, both for identification and gender recognition, with an image resolution eight times lower than previously reported results (i.e. $80\times 60$ pixels).

Francisco Manuel Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Nicolás Pérez de la Blanca

Comprehensive Evaluation of OpenCL-Based CNN Implementations for FPGAs

Deep learning has significantly advanced the state of the art in artificial intelligence, gaining wide popularity from both industry and academia. Special interest surrounds Convolutional Neural Networks (CNN), which take inspiration from the hierarchical structure of the visual cortex to form deep layers of convolutional operations, along with fully connected classifiers. Hardware implementations of these deep CNN architectures are challenged by memory bottlenecks, as the many convolutional and fully-connected layers demand a large amount of communication for parallel computation. Multi-core CPU based solutions have demonstrated their inadequacy for this problem due to the memory wall and low parallelism. Many-core GPU architectures show superior performance, but they consume high power and also have memory constraints due to inconsistencies between cache and main memory. OpenCL is commonly used to describe these architectures for their execution on GPGPUs or FPGAs. FPGA design solutions are also actively being explored, as they allow implementing the memory hierarchy using embedded parallel BlockRAMs. This boosts the parallel use of shared memory elements between multiple processing units, avoiding data replication and inconsistencies, and makes FPGAs potentially powerful solutions for real-time classification with CNNs. In this paper, the OpenCL co-design frameworks adopted by both Altera and Xilinx for pseudo-automatic development are evaluated. A comprehensive evaluation and comparison for a 5-layer deep CNN is presented. Hardware resources, temporal performance and the OpenCL architecture for CNNs are discussed. Xilinx demonstrates faster synthesis, better FPGA resource utilization and more compact boards. Altera provides multi-platform tools, a mature design community and better execution times.

Machine Learning Improves Human-Robot Interaction in Productive Environments: A Review

In the new generation of industries, including all the advances introduced by Industry 4.0, human-robot interaction (HRI), by means of automatic learning and computer vision, becomes an important element to accomplish. HRI makes it possible to create collaborative environments between people and robots while preventing the latter from posing an occupational safety risk. In addition to the automatic systems, interaction by means of automated learning processes provides the information needed to increase productivity and minimize delivery response times, by helping to optimize complex production planning processes. This paper presents a review of the technologies to be considered as basic elements in all processes of Industry 4.0, as a crucial linking element between humans, robots, and intelligent and traditional machines.

Mauricio Zamora, Eldon Caldwell, Jose Garcia-Rodriguez, Jorge Azorin-Lopez, Miguel Cazorla

Machine Learning Methods from Group to Crowd Behaviour Analysis

Human behaviour analysis has been a subject of study in various fields of science (e.g. sociology, psychology, computer science). Specifically, the automated understanding of the behaviour of both individuals and groups remains a very challenging problem, from the sensor systems to the artificial intelligence techniques. Being aware of the extent of the topic, the objective of this paper is to review the state of the art, focusing on machine learning techniques with computer vision as the sensor system. Moreover, a lack of reviews comparing levels of abstraction in terms of activity duration is found in the literature. In this paper, a review of the methods and techniques based on machine learning for classifying group behaviour in sequences of images is presented. The review takes into account the different levels of understanding and the number of people in the group.

Luis Felipe Borja-Borja, Marcelo Saval-Calvo, Jorge Azorin-Lopez

Unsupervised Color Quantization with the Growing Neural Forest

Image processing has become a very common application for artificial intelligence-based algorithms. More precisely, color quantization, which consists of color indexing for minimal perceptual distortion image compression, has become an important issue for the efficient transmission and storage of digital images. Artificial Neural Networks have been consolidated as a powerful tool for unsupervised tasks, and therefore for color quantization purposes. In this work we present a novel color quantization approach based on the Growing Neural Forest (GNF), a Growing Neural Gas (GNG) variant in which a set of trees is learnt instead of a general graph. Its suitability for color quantization is supported by the experimental results obtained, where the GNF outperforms other self-organizing models such as the GNG, GHSOM and SOM. As future work, more datasets and competitive models will be taken into account.

Esteban José Palomo, Jesús Benito-Picazo, Ezequiel López-Rubio, Enrique Domínguez

3D Body Registration from RGB-D Data with Unconstrained Movements and Single Sensor

In this paper, the problem of 3D body registration using a single RGB-D sensor is approached. The work has been guided by three main requirements: low cost, unconstrained movement and accuracy. To meet them, an iterative registration method for accurately aligning data from a single RGB-D sensor is proposed. The data is acquired while a person rotates in front of the camera, without the need for any external marker or constraint on pose. The articulated alignment is carried out in a model-free approach in order to be more consistent with the real data. The iterative method is divided into stages that contribute to each other through the refinement of a specific part of the acquired data. The exploratory results validate the proposed method, which refines its own output at each iteration, progressively improving the final result and achieving the required precision under the conditions of affordability and unconstrained movement acquisition.

Victor Villena-Martinez, Andres Fuster-Guillo, Marcelo Saval-Calvo, Jorge Azorin-Lopez

Posture Transitions Identification Based on a Triaxial Accelerometer and a Barometer Sensor

Posture transitions (PTs) are important movements among the activities performed in the daily life of older adults. Their analysis provides information related to the amount of activity performed by a patient over a day and, furthermore, is useful for assessing symptoms in movement disorders such as Parkinson’s disease. Many research works have attempted to automatically identify PTs by relying on machine learning algorithms and light, small accelerometers, since these can be embedded into wearable systems, being unobtrusive for the users. However, distinguishing PTs through a single sensor results in complex classifiers requiring high computational resources, since some PTs (such as Stand-to-Sit and Sit-to-Stand) may produce very similar acceleration signals. In this paper, we propose using a barometer sensor to complement the information provided by accelerometers. In addition, a hierarchical algorithm based on Support Vector Machines is presented to detect PTs, including falls and Lying-to-Stand transitions, through a single sensor device. Results on 14 users show that the barometer sensor enables the hierarchical algorithm to distinguish Sit-to-Stand from Stand-to-Sit transitions, and falls from Lying-to-Stand transitions, with accuracies over 99%.
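The contribution of the barometer can be illustrated with a toy disambiguation rule: standing up raises a waist-worn sensor, so atmospheric pressure drops slightly, while sitting down lowers the sensor and pressure rises. This is a hypothetical sketch of the idea only, not the paper's SVM-based hierarchy, and the noise threshold value is our own assumption:

```python
def classify_transition(pressure_before, pressure_after, threshold=0.05):
    """Disambiguate an already-detected posture transition using the sign of
    the barometric pressure change (hPa). `threshold` is a hypothetical
    noise margin; values inside it are left undecided."""
    delta = pressure_after - pressure_before
    if delta < -threshold:
        return "Sit-to-Stand"   # sensor rose, pressure dropped
    if delta > threshold:
        return "Stand-to-Sit"   # sensor lowered, pressure rose
    return "Unknown"
```

In the paper this pressure cue complements accelerometer features inside the hierarchical classifier rather than acting as a standalone rule.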

Daniel Rodríguez-Martín, Albert Samà, Carlos Pérez-López, Andreu Català

Deep Learning for Detecting Freezing of Gait Episodes in Parkinson’s Disease Based on Accelerometers

Freezing of gait (FOG) is one of the most incapacitating symptoms among the motor alterations of Parkinson’s disease (PD). FOG episodes reduce patients’ quality of life and their autonomy to perform daily living activities, and may provoke falls. Accurate ambulatory FOG assessment would enable non-pharmacologic support based on cues and would provide relevant information to neurologists on the evolution of the disease. This paper presents a method for FOG detection based on deep learning and signal processing techniques. This is, to the best of our knowledge, the first time that FOG detection has been addressed with deep learning. The model was evaluated on data from 15 PD patients who manifested FOG. An inertial measurement unit placed at the left side of the waist recorded tri-axial accelerometer, gyroscope and magnetometer signals. Our approach achieved results comparable to the state of the art, reaching validation performances of 88.6% sensitivity and 78% specificity.

Julià Camps, Albert Samà, Mario Martín, Daniel Rodríguez-Martín, Carlos Pérez-López, Sheila Alcaine, Berta Mestre, Anna Prats, M. Cruz Crespo, Joan Cabestany, Àngels Bayés, Andreu Català

Presenting a Real-Time Activity-Based Bidirectional Framework for Improving Social Connectedness

New research on ambient displays within ambient assisted living (AAL) environments demonstrates solid potential for the application of bidirectional activity-based context-aware systems to promote social connectedness between the elderly and their caregivers. Using visual, auditory or tactile modalities, such systems can reveal subtle information concerning health and well-being and stimulate co-presence between the pair. In this paper, we present the design and development of an activity-based framework aimed at enabling the real-time viewing of bidirectional activity states between the elderly and their caregivers. This framework seeks to overcome the limitations of existing ambient displays deployed in AAL settings, which are in most cases unidirectional and confined to the homes of their users. Our bidirectional activity-based framework is based on an extensive literature review, expert advice and user feedback, which informed the design decisions about the product features and functionality. The system exploits a highly accurate activity recognition model to facilitate real-time activity awareness and an “always connected” service through portable interactive devices for stimulating social connectedness within the AAL domain.

Kadian Davis, Evans Owusu, Geert van den Boomen, Henk Apeldoorn, Lucio Marcenaro, Carlo Regazzoni, Loe Feijs, Jun Hu

Using Ants to Fight Wildfire

The control of fire spreading is a research challenge. The impact of fire on the environment makes the study and analysis of fire spread essential for designing new tools that help to mitigate wildfire expansion and, as a consequence, its effects. In this work we introduce a platform that deploys an algorithm, based on Ant Colony Optimization, to determine the best plan for attacking fire foci. The framework is based on a theoretical model that allows us to represent the main elements of the environment in which the fire evolves. The tool provides a visualisation component to model realistic landscapes.

Pablo C. Cañizares, Mercedes G. Merayo, Alberto Núñez

Using Evolutionary Computation to Improve Mutation Testing

Work on mutation testing has attracted a lot of attention during the last decades. Mutation testing is a powerful mechanism to improve the quality of test suites, based on the injection of syntactic changes into the code of the original program. Several studies have focused on reducing the high computational cost of applying this technique and increasing its efficiency, but only some of them have tried to do so through the application of genetic algorithms. Genetic algorithms can guide the generation of a reduced subset of mutants without significant loss of information. In this paper, we analyse recent advances in mutation testing that contribute to reducing the cost associated with this technique, and propose applying them to address current drawbacks in Evolutionary Mutation Testing (EMT), a genetic-algorithm-based technique with promising experimental results so far.

Towards Deterministic and Stochastic Computations with the Izhikevich Spiking-Neuron Model

In this paper we analyze simple computations with spiking neural networks (SNNs), laying the foundation for more sophisticated calculations. We consider both a deterministic and a stochastic computation framework with SNNs, utilizing the Izhikevich neuron model in various simulated experiments. Within the deterministic-computation framework, we design and implement fundamental mathematical operators such as addition, subtraction, multiplexing and multiplication. We show that cross-inhibition of groups of neurons in a winner-takes-all (WTA) network configuration produces considerable computation power and results in the generation of selective behavior that can be exploited in various robotic control tasks. In the stochastic-computation framework, we discuss an alternative computation paradigm to the classic von Neumann architecture, which supports information storage and decision making. This paradigm uses the experimentally verified property of networks of randomly connected spiking neurons of storing information as a stationary probability distribution in each sub-network of the SNN. We reproduce this property by simulating the behavior of a toy network of randomly connected stochastic Izhikevich neurons.
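The Izhikevich model used in these experiments is governed by v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d when v reaches 30 mV. A minimal Euler-integration sketch with the standard regular-spiking parameters from Izhikevich (2003) is given below; it is not the authors' simulation code:

```python
def izhikevich(I, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Simulate one Izhikevich neuron under constant input current I
    for `steps` Euler steps of size dt (ms); return the spike count."""
    v, u = c, b * c          # start at the reset potential
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: reset membrane and recovery variables
            v, u = c, u + d
            spikes += 1
    return spikes

# Without input the neuron settles at rest; with enough current it fires.
silent = izhikevich(0.0)
active = izhikevich(10.0)
```

Networks of such units, coupled with inhibitory cross-connections, yield the WTA selectivity described in the abstract.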

Ramin M. Hasani, Guodong Wang, Radu Grosu

A Formal Framework to Specify and Test Systems with Fuzzy-Time Information

We specify the behavior of a sensor network with different sensor stations distributed across the region of Andalusia (southern Spain). The main goal of this network is to measure air quality, taking into account the maximum levels of certain pollutants. The problem that we try to solve with this formalization is the management of time inaccuracies between the components of a network, with the aim of avoiding the malfunctions derived from them. We present the formal syntax and semantics of our variant of fuzzy-timed automata and define all the automata corresponding to the different parts of the network.

Juan Boubeta-Puig, Azahara Camacho, Luis Llana, Manuel Núñez

Intelligent Transportation System to Control Air Pollution in Cities Using Complex Event Processing and Colored Petri Nets

Pollution due to road traffic in big cities is an important problem in our society, with consequences for human health. In this paper we deal with this problem, proposing an Intelligent Transportation System (ITS) model based on Complex Event Processing (CEP) technologies and Petri nets that takes into account the levels of environmental pollution according to the air quality levels accepted by international recommendations. We are thus tackling a rather common problem in big cities nowadays, where traffic restrictions must be applied due to pollution. Petri nets are used in this paper as a tool to make decisions about traffic regulations, so as to reduce pollution levels.

Gregorio Díaz, Hermenegilda Macià, Valentín Valero, Fernando Cuartero

Heuristics for ROSA’s LTS Searching

The authors aim to reduce the computational cost of searching the Labeled Transition Systems (LTSs) generated by process algebras. In particular, we follow the idea of moving the order of the computational cost required to find/reach a desired node/state from the exponential cost of the classical Breadth-First Search to the polynomial cost produced either by Depth-First Search or by the $$A^{*}$$ algorithm [6]. As usual, both take the branching factor of the LTS as the size of the problem. This paper first presents the normal-formed ROSA processes required to, second, define a sound topological structure over this process algebra. The notion of distance underlying this topology can be taken as the heuristic that guides the search, by means of an $$A^{*}$$ algorithm, for whatever node whose reachability is to be studied.

Fernando López Pelayo, Fernando Cuartero Gomez, Diego Cazorla, Pedro Valero-Lara, Mercedes Garcia Merayo

Suitable Number of Visual Stimuli for SSVEP-Based BCI Spelling Applications

Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) provide a pathway for re-establishing communication for people with severe disabilities. In the presented study, we compared the accuracy and speed of three SSVEP-based BCI spelling applications in order to investigate the influence of the number of visual stimuli on BCI performance. Three systems with four, six and 28 stimulating frequencies were tested. Ten subjects (one female) participated in this study. The highest ITR achieved in the experiment was 51.77 bpm. Interestingly, it was achieved with the system based on six flickering targets. Our results confirm that the number of stimuli has a high impact on the classification accuracy and BCI literacy of SSVEP-based BCIs.
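ITR figures reported in bits per minute are conventionally computed with Wolpaw's formula, B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits per selection, scaled by the selection rate. A sketch follows; the selection rate used in the example is illustrative, not a value from the study:

```python
from math import log2

def itr_bpm(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits per minute for a speller
    with `n_targets` classes and classification `accuracy` in (0, 1]."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = log2(n)                       # perfect accuracy: full log2(N)
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * selections_per_min

# A six-target speller at perfect accuracy, 20 selections/min (illustrative).
perfect = itr_bpm(6, 1.0, 20.0)
```

Note how more targets raise the per-selection payload but usually lower the accuracy, which is exactly the trade-off the study investigates.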

Felix Gembler, Piotr Stawicki, Ivan Volosyak

A Binary Bees Algorithm for P300-Based Brain-Computer Interfaces Channel Selection

Brain-computer interface (BCI) systems need to work in real time with large amounts of data, which makes channel selection procedures essential to reduce over-fitting and to increase users’ comfort. In that sense, metaheuristics based on swarm intelligence (SI) have demonstrated excellent performance in solving complex optimization problems and, to the best of our knowledge, they have not been fully exploited in P300-BCI systems. In this study, we propose a modified SI method, called the binary bees algorithm (b-BA), that allows users to select the most relevant channels in an evolutionary way. This method has been compared to particle swarm optimization (PSO) and tested with dataset II of the ‘III BCI Competition 2005’. Results show that b-BA is suitable for use in this kind of system, reaching higher accuracies (mean of 96.0 ± 0.0%) than PSO (mean of 93.5 ± 2.1%) and the original ones (mean of 94.0 ± 2.8%) while using less than half of the initial channels.
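As a rough illustration of evolutionary channel selection over a binary mask (1 = keep the channel), the sketch below performs a deterministic single-bit-flip hill climb. It is a toy stand-in for the b-BA neighbourhood search, not the authors' algorithm, and `fitness` would in practice be a cross-validated classification score penalized by channel count:

```python
def local_search(mask, fitness, iters=10):
    """Greedy bit-flip search over binary channel masks: repeatedly accept
    any single-channel flip that improves `fitness`, until none does."""
    best, best_f = mask, fitness(mask)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            cand = best[:i] + [1 - best[i]] + best[i + 1:]
            f = fitness(cand)
            if f > best_f:
                best, best_f, improved = cand, f, True
        if not improved:
            break                 # local optimum reached
    return best
```

Swarm methods such as b-BA or PSO explore many such masks in parallel instead of a single greedy trajectory.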

Víctor Martínez-Cagigal, Roberto Hornero

A Comparison of a Brain-Computer Interface and an Eye Tracker: Is There a More Appropriate Technology for Controlling a Virtual Keyboard in an ALS Patient?

The ability of people affected by amyotrophic lateral sclerosis (ALS), muscular dystrophy or spinal cord injuries to physically interact with the environment is usually reduced. In some cases, these patients suffer from a syndrome known as locked-in syndrome (LIS), defined by the patient’s inability to make any movement except blinks and eye movements. Communication systems available for people in LIS are very limited, those based on eye tracking and brain-computer interfaces (BCIs) being the most useful for these patients. A comparative study between both technologies in an ALS patient is carried out: an eye tracker and a visual P300-based BCI. The purpose of the study presented in this paper is to show that the choice of technology may depend on the user’s preference. The evaluation of performance, workload and other subjective measures allows us to determine the usability of the systems. The obtained results suggest that, even if the BCI technology is more appropriate for this patient, the technology should always be tested and adapted for each user.

Liliana García, Ricardo Ron-Angevin, Bertrand Loubière, Loïc Renault, Gwendal Le Masson, Véronique Lespinet-Najib, Jean Marc André

SSVEP-Based BCI in a Smart Home Scenario

Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can provide hands-free human interaction with the environment. In the presented study, visual stimuli were displayed on Epson Moverio BT-200 augmented reality glasses, which can easily be used in smart homes. QR codes were used to identify the devices to be controlled with the BCI. In order to simulate a real-life scenario, participants were instructed to go out of the lab to get a coffee. During this task, light switches, an elevator and a coffee machine were controlled by focusing on SSVEP stimuli displayed on the smart glasses. An average accuracy of 85.70% was achieved, which suggests that augmented reality may be used together with SSVEPs to control external devices.

Abdul Saboor, Aya Rezeika, Piotr Stawicki, Felix Gembler, Mihaly Benda, Thomas Grunenberg, Ivan Volosyak

How to Reduce Classification Error in ERP-Based BCI: Maximum Relative Areas as a Feature for P300 Detection

Currently, one of the challenges in brain-computer interface (BCI) technologies is the improvement of real-time event-related potential (ERP) detection. Variability and a low signal-to-noise ratio (SNR) impair detection methods. We hypothesized that if, in a P300-based BCI, we find the electrodes with the maximum relative voltage area (“maximum relative” refers to the area within each trial, not between trials) where a P300 can be located, we will improve the performance of a classifier and reduce the number of trials necessary to achieve 100% success. We propose a method that successively calculates the maximum relative voltage areas in the P300 region of the EEG signal for each stimulus. In this way, differences between a target and a non-target stimulus are maximized. The method was tested with a linear discriminant analysis (LDA) classifier, known for its good performance and low computational cost. We observed that a single electrode with the maximum relative voltage area in the P300 region can give more information than the traditional four-electrode measurement. The preliminary results show that by detecting appropriate characteristics in the EEG signal, we can reduce both the error per trial and the number of electrodes. The detection of the maximum relative voltage area across the EEG electrodes is a characteristic that can help increase the SNR and decrease the prediction error with the smallest number of trials in P300-based BCI systems. Methods of this type, which seek specific characteristics in the signals, can also help manage the variability present in BCI systems. The method can be used for both online and offline analysis.
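One possible reading of the electrode-selection rule is sketched below: compute, per electrode, the voltage area inside the P300 window relative to the trial baseline, and keep the electrode with the largest area. The function names and the whole-trial-mean baseline are our own assumptions, not the paper's exact definition:

```python
def relative_area(signal, window):
    """Sum of baseline-corrected voltages inside `window` (start, end sample
    indices); the baseline is the mean of the whole trial, so the area is
    'relative' within the trial rather than comparable across trials."""
    start, end = window
    baseline = sum(signal) / len(signal)
    return sum(v - baseline for v in signal[start:end])

def best_electrode(trials, window):
    """Index of the electrode whose trial shows the largest relative area
    in the assumed P300 window."""
    areas = [relative_area(s, window) for s in trials]
    return max(range(len(areas)), key=lambda i: areas[i])

# Electrode 1 has a clear positive deflection inside samples 3..5.
trials = [[0] * 10, [0, 0, 0, 5, 5, 5, 0, 0, 0, 0]]
```

Feeding only such selected features to an LDA classifier is what the abstract credits with reducing both trials and electrodes.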

Vinicio Changoluisa, Pablo Varona, Francisco B. Rodriguez

Deep Fisher Discriminant Analysis

Fisher Discriminant Analysis’ linear nature and the usual eigen-analysis approach to its solution have limited the application of its underlying elegant idea. In this work we take advantage of some recent, partially equivalent formulations based on standard least squares regression to develop a simple Deep Neural Network (DNN) extension of Fisher’s analysis that greatly improves on its ability to cluster sample projections around their class means while keeping these apart. This is shown by the much better accuracies and g scores achieved by class-mean classifiers when applied to the features provided by simple DNN architectures than those achievable using Fisher’s linear features.

David Díaz-Vico, Adil Omari, Alberto Torres-Barrán, José Ramón Dorronsoro

An Iterated Greedy Algorithm for Improving the Generation of Synthetic Patterns in Imbalanced Learning

Real-world classification datasets often present a skewed distribution of patterns, where one or more classes are under-represented with respect to the rest. One of the most successful approaches for alleviating this problem is the generation of synthetic minority samples by convex combination of available ones. Within this framework, adaptive synthetic (ADASYN) sampling is a relatively new method which imposes weights on minority examples according to their learning complexity, in such a way that difficult examples are more prone to be over-sampled. This paper proposes an improvement of the ADASYN method, where the learning complexity of these patterns is also used to decide which sample of the neighbourhood is selected. Moreover, to avoid suboptimal results when performing the random convex combination, this paper explores the application of an iterative greedy algorithm which refines the synthetic patterns by repeatedly replacing a part of them. For the experiments, six binary datasets and four over-sampling methods are considered. The results show that the new version of ADASYN leads to more robust results and that the application of the iterative greedy metaheuristic significantly improves the quality of the generated patterns, presenting a positive effect on the final classification model.
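The convex-combination step common to ADASYN-style over-samplers can be sketched as follows (a hypothetical minimal version; the paper's contribution lies in how the neighbour is selected by learning complexity and how the iterated greedy step replaces patterns, neither of which is reproduced here):

```python
import random

def synthetic_sample(x, neighbour, rng=random.random):
    """Generate a synthetic minority pattern on the segment between a
    minority sample `x` and one of its minority neighbours:
    x + lam * (neighbour - x), with lam drawn uniformly from [0, 1]."""
    lam = rng()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbour)]
```

Because lam is in [0, 1], every synthetic pattern lies inside the convex hull of the existing minority samples, which is both the strength and the known limitation of this family of methods.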

Francisco Javier Maestre-García, Carlos García-Martínez, María Pérez-Ortiz, Pedro Antonio Gutiérrez

Fine-to-Coarse Ranking in Ordinal and Imbalanced Domains: An Application to Liver Transplantation

Nowadays imbalanced learning represents one of the most vividly discussed challenges in machine learning. In these scenarios, one or some of the classes in the problem have a significantly lower a priori probability, usually leading to trivial or non-desirable classifiers. Because of this, imbalanced learning has been researched to a great extent by means of different approaches. Recently, the focus has switched from binary classification to other paradigms where imbalanced data also arise, such as ordinal classification. This paper tests the application of learning pairwise ranking with multiple granularity levels in an ordinal and imbalanced classification problem where the aim is to construct an accurate model for donor-recipient allocation in liver transplantation. Our experiments show that approaching the problem as ranking solves the imbalance issue and leads to a competitive performance.

María Pérez-Ortiz, Kelwin Fernandes, Ricardo Cruz, Jaime S. Cardoso, Javier Briceño, César Hervás-Martínez

Combining Ranking with Traditional Methods for Ordinal Class Imbalance

In classification problems, a dataset is said to be imbalanced when the distribution of the target variable is very unequal. Classes contribute unequally to the decision boundary, and special metrics are used to evaluate these datasets. In previous work, we presented pairwise ranking as a method for binary imbalanced classification, and extended it to the ordinal case using weights. In this work, we extend ordinal classification using traditional balancing methods. A comparison is made against traditional and ordinal SVMs, in which the proposed ranking adaptation is found to be competitive.

Ricardo Cruz, Kelwin Fernandes, Joaquim F. Pinto Costa, María Pérez Ortiz, Jaime S. Cardoso

Constraining Type II Error: Building Intentionally Biased Classifiers

In many applications, false positives (type I errors) and false negatives (type II errors) have different impacts. In medicine, falsely diagnosing someone healthy as sick (a false positive) is not considered as bad as diagnosing someone sick as healthy (a false negative). But we are also willing to accept some rate of false negatives in order to make the classification task possible at all. Where the line is drawn is subjective and prone to controversy. Usually, this compromise is given by a cost matrix in which an exchange rate between errors is defined. For many reasons, however, it might not be natural to think of this trade-off in terms of relative costs. We explore novel learning paradigms where the trade-off is given as the amount of false negatives we are willing to tolerate. The classifier then tries to minimize false positives while keeping false negatives within the acceptable bound. Here we consider classifiers based on kernel density estimation, modifications of gradient descent, and thresholding of classification and ranking scores.
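The thresholding variant can be illustrated with a small sketch: given the classifier's scores on the positive class, pick the largest decision threshold whose false-negative rate stays within the tolerated bound (larger thresholds then only reduce false positives). This is our own toy formulation of the idea, not the paper's algorithm:

```python
def threshold_for_fn_bound(pos_scores, max_fn_rate):
    """Largest threshold keeping the false-negative rate within bound,
    assuming positives score high and we predict positive when
    score >= threshold. Misses only the `allowed_fn` lowest positives."""
    allowed_fn = int(max_fn_rate * len(pos_scores))
    ordered = sorted(pos_scores)
    return ordered[allowed_fn]
```

With the false-negative budget fixed this way, the remaining freedom of the model can be spent entirely on driving down false positives, which is exactly the constrained objective the abstract describes.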

Ricardo Cruz, Kelwin Fernandes, Joaquim F. Pinto Costa, Jaime S. Cardoso

Pedestrian Detection for UAVs Using Cascade Classifiers and Saliency Maps

In this paper, we propose an algorithm and a dataset for pedestrian detection focused on applications with micro multi-rotor UAVs (Unmanned Aerial Vehicles). For the training dataset, we captured images from surveillance cameras at different angles and altitudes. We propose a method based on HAAR-LBP (Local Binary Patterns) cascade classifiers with AdaBoost (Adaptive Boosting) training and, additionally, we combine the cascade classifiers with saliency maps to improve the performance of the pedestrian detector. We evaluate our dataset through an implementation of the HOG (Histogram of Oriented Gradients) algorithm with AdaBoost training and, finally, compare the algorithm's performance with other approaches from the state of the art. The results show that our dataset is better suited for pedestrian detection from UAVs, that HAAR-LBP features have better characteristics than HAAR-like features, and that the use of saliency maps improves the performance of the detectors by eliminating false positives in the image.

Wilbert G. Aguilar, Marco A. Luna, Julio F. Moya, Vanessa Abad, Hugo Ruiz, Humberto Parra, Cecilio Angulo

Obstacle Avoidance for Flight Safety on Unmanned Aerial Vehicles

In this paper, we propose an obstacle avoidance system for UAVs using a monocular camera. To detect obstacles, the system compares the image obtained in real time from the UAV with a database of obstacles that must be avoided. In our proposal, we include the Speeded-Up Robust Features (SURF) point detector for fast obstacle detection and a control law with a defined obstacle as target. The system was tested in real time on a micro aerial vehicle (MAV) to detect and avoid obstacles in an unknown environment, and compared with related works.

Wilbert G. Aguilar, Verónica P. Casaliglla, José L. Pólit, Vanessa Abad, Hugo Ruiz

RRT* GL Based Optimal Path Planning for Real-Time Navigation of UAVs

In this paper, we propose a path planning system for the autonomous navigation of unmanned aerial vehicles based on Rapidly-exploring Random Trees (RRT), combining RRT* Goal and RRT* Limit. The system includes a point cloud obtained from the vehicle workspace with an RGB-D sensor, an identification module for interest regions and obstacles of the environment, and a collision-free path planner based on RRT for the safe and optimal navigation of vehicles in 3D spaces.
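The basic RRT expansion step on which the RRT* variants build, finding the nearest tree node to a random sample and steering toward it by a bounded step, can be sketched as follows (illustrative only; goal biasing, rewiring and collision checking are omitted):

```python
import math

def nearest(tree, q):
    """Tree node closest (Euclidean) to the sampled configuration q."""
    return min(tree, key=lambda n: math.dist(n, q))

def steer(q_near, q_rand, step):
    """Move from q_near towards q_rand, covering at most `step` distance;
    the resulting point becomes a new tree node."""
    d = math.dist(q_near, q_rand)
    if d <= step:
        return q_rand
    t = step / d
    return tuple(a + t * (b - a) for a, b in zip(q_near, q_rand))
```

RRT* additionally rewires nearby nodes after each insertion so that path costs converge toward the optimum, which is the property the proposed RRT* Goal/Limit combination exploits.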

Wilbert G. Aguilar, Stephanie Morales, Hugo Ruiz, Vanessa Abad

Visual SLAM with a RGB-D Camera on a Quadrotor UAV Using on-Board Processing

In this article, we present a high-accuracy system for real-time localization and mapping using an RGB-D camera. Using the Microsoft Kinect RGB-D sensor and the small but powerful Intel Stick computer with a Core M3 processor, our system can run the computation and sensing required for SLAM on board the UAV, removing the dependence on unreliable wireless communication. We use visual odometry, loop closure and graph optimization to achieve this purpose. Our approach is able to perform accurate and efficient on-board SLAM, as shown by analyzing the data and maps generated in several tests of the system.

Wilbert G. Aguilar, Guillermo A. Rodríguez, Leandro Álvarez, Sebastián Sandoval, Fernando Quisaguano, Alex Limaico

An End-User Interface to Generate Homeostatic Behavior for NAO Robot in Robot-Assisted Social Therapies

Homeostatic drive theory is a popular approach to robot behavior decision-making in social robotics research, and it can potentially be used in social therapies. To increase the involvement of end-users in the robot's control, we present an end-user interface that allows therapists to generate homeostatic behavior for the NAO robot in social skills training for children. We demonstrate the system with two interactions in which the robot's homeostatic behavior adapts to the children's behavior. The results show that the system provides a practical solution for therapists to implement interaction scenarios as robot behavior.

Hoang-Long Cao, Albert De Beir, Pablo Gómez Esteban, Ramona Simut, Greet Van de Perre, Dirk Lefeber, Bram Vanderborght

Graphical Programming Interface for Enabling Non-technical Professionals to Program Robots and Internet-of-Things Devices

This paper presents a graphical programming interface that enables non-technical professionals to program robots. Increasingly, robots are used by non-technical persons, such as service workers, therapists or marketers, and graphical programming enables them to adjust robots to situational needs in intuitive but expressive ways. We present our implementation of a graphical environment for programming a set of Internet-of-Things (IoT) devices and robots for autism therapists. It is based on the Robot Operating System (ROS) and Snap, and is called Robokol. Compared to previous solutions, our system is easily extensible to new devices, has an interface enabling plug-and-play device discoverability, and uses non-proprietary, well-known tools. We detail two use cases of our interface: one where autism therapists create sense-act loops for sensory therapy employing robot-like or IoT devices, and a second where Robokol is used to create interfaces for Wizard-of-Oz scenarios with robots.

Igor Zubrycki, Marcin Kolesiński, Grzegorz Granosik

Biologically inspired design consists in creating technological systems using biological systems as a starting or inspirational point. Indeed, it has been widely used in robotics in different areas, such as mechanics, coordination or navigation. In robot navigation, for example, biomimetic algorithms can be especially useful in certain circumstances, such as when a robot needs to interact closely with users. Using biomimetic navigation, robot movements are more similar to human ones while maintaining basic navigation factors such as safety. This is important in assistive systems, in which human and robot control can be switched or combined, depending on the kind of system, to obtain the final command. In these systems the interaction is very close to the user, so it is advisable to make robot commands as similar as possible to the user's. Otherwise, the user could even reject the robot's assistance, depending on the disagreement between user and robot commands to reach a destination. This disagreement provokes frustration and stress in the user and, in the extreme, rejection of the assistive system. In this paper we propose a biomimetic navigation algorithm based on Case-Based Reasoning that learns from real traces, performed by volunteers, in order to achieve robot navigation as close as possible to human navigation.

Jose Manuel Peula, Joaquín Ballesteros, Cristina Urdiales, Francisco Sandoval

A Pseudo-3D Vision-Based Dual Approach for Machine-Awareness in Indoor Environment Combining Multi-resolution Visual Information

In this paper we describe a pseudo-3D vision-based dual approach for machine awareness in indoor environments. The so-called duality is provided by the color and depth cameras of the Kinect system, which presents an appealing potential for 3D robot vision. Placing human-machine (including human-robot) interaction as a primary outcome of the intended visual machine awareness, we aim to give the machine autonomous awareness of its surrounding environment. Combining pseudo-3D vision and salient object detection algorithms, the investigated approach seeks the autonomous detection of relevant items in a 3D environment. The pseudo-3D perception reduces the computational complexity inherent to 3D vision by processing 3D visual information within a 2D-image framework. The statistical foundation of the investigated approach gives it a solid and comprehensive theoretical basis, and its bottom-up nature makes the resulting system unconstrained by prior hypotheses. We provide experimental results validating the proposed system.

Hossam Fraihat, Kurosh Madani, Christophe Sabourin

Analysis of the Protocols Used to Assess Virtual Players in Multi-player Computer Games

Recently, the development of believable agents has gained a lot of interest and many solutions have been proposed by the research community to implement such bots. However, in order to make advances in this field, a generic and rigorous evaluation that would allow the comparison of new systems against existing ones is needed. This paper provides a summary of the existing believability assessments. Seven features characterising the protocols are identified. After a comprehensive analysis, recommendations and prospects for improvement are provided.

Cindy Even, Anne-Gwenn Bosser, Cédric Buche

The Long Path of Frustration: A Case Study with Dead by Daylight

Playability is a key factor in video games. From a narrative standpoint, the play process is usually designed as sequences of episodes triggered by the player’s motivations, which unfold along a sense of suspense and relief. Suspense, as a factor in engagement, has a strong impact on the narrative of video games: when it decreases, so does the engagement. This is a common pattern when players are aware that losing is unavoidable. As we point out, many players disconnect from the game in this situation. In this paper we evaluate how suspense affects playability, analysing how the lack of uncertainty due to knowledge of the rules may degrade the experience of Dead by Daylight players when they are bound to fail. We have observed that players who acknowledge that there is no chance of winning tend to leave the game. The results also reveal that suspense is modulated by the player’s knowledge of the game.

Pablo Delatorre, Carlos León, Alberto Salguero, Cristina Mateo-Gil

Optimising Humanness: Designing the Best Human-Like Bot for Unreal Tournament 2004

This paper presents multiple hybridizations of the two best bots of the BotPrize 2014 competition, which sought the best human-like bot playing the First Person Shooter game Unreal Tournament 2004. To this aim, the participants were evaluated using a Turing test in the game. The work considers the code of MirrorBot (the winner) and NizorBot (the runner-up) and combines them in two different approaches, aiming to obtain a bot able to show the best overall behaviour. There is also an evolutionary version of MirrorBot, optimized by means of a Genetic Algorithm. The new and the original bots have been tested in a new, open, public Turing test whose results show that the evolutionary version of MirrorBot apparently improves on the original bot, and that one of the novel approaches achieves a good humanness level.

Antonio M. Mora, Álvaro Gutiérrez-Rodríguez, Antonio J. Fernández-Leiva

Combining Neural Networks for Controlling Non-player Characters in Games

Creating the behavior of non-player characters in video games is a complex task that requires collaboration between programmers and game designers. Usually game designers are only allowed to change certain parameters of the behavior, while programmers write new code whenever the behavior intended by the designers cannot be achieved by parameter tweaking alone. This becomes a time-consuming process requiring several iterations of designers testing the solution provided by programmers, followed by additional changes to the requirements that programmers must again re-implement. In this paper, we present an approach for creating the behavior of non-player characters in video games that gives more power to the game designer by combining programming by demonstration and behavior trees. Our approach is able to build some parts of a behavior tree from data observed during a prior training phase.

Ismael Sagredo-Olivenza, Pedro Pablo Gómez-Martín, Marco Antonio Gómez-Martín, Pedro Antonio González-Calero
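The abstract above builds on behavior trees but gives no implementation details. As a minimal, hypothetical sketch of the behavior-tree side only (all class names, the blackboard keys, and the flee/attack example are invented for illustration and are not from the paper), the two standard composite nodes can be expressed as:

```python
# Minimal behavior-tree primitives. SUCCESS/FAILURE mirror the usual
# tick-based semantics; real game engines also handle a RUNNING state.
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping a callable that returns SUCCESS or FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

class Sequence:
    """Ticks children in order; fails on the first failing child."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Ticks children in order; succeeds on the first succeeding child."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

def set_action(name):
    """Leaf that writes the chosen action onto the blackboard."""
    def act(bb):
        bb["action"] = name
        return SUCCESS
    return Action(act)

# Hypothetical NPC behavior: flee when health is low, otherwise attack.
tree = Selector(
    Sequence(Action(lambda bb: SUCCESS if bb["health"] < 30 else FAILURE),
             set_action("flee")),
    set_action("attack"),
)
```

In a programming-by-demonstration setting, subtrees like the flee branch are what would be learned from designer traces rather than hand-coded.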

A Classification System to Assess Low Back Muscle Endurance and Activity Using mHealth Technologies

Low back pain remains a major cause of absenteeism worldwide. In addition to its socio-economic impact, the age at which the first symptoms appear is decreasing. Consequently, a growing number of experts are incorporating prevention plans for the lumbar area into their work routines. Moreover, the continued market growth of wearable sensors and the potential opened up by wearable technology allow experts to obtain precise daily feedback on their patients' improvement. For this reason, this work continues the development and verification of the usefulness of mDurance, a novel mobile health system aimed at supporting specialists in the functional assessment of trunk endurance and muscle activity using wearable and mobile devices. This work presents an extension of the system to classify low back muscle activity. mDurance has been tested with a professional football team: clustering and data mining are applied to a new dataset of endurance and muscle activity data collected through mDurance, and the results are cross-related with a questionnaire created to evaluate how the football players perceive themselves physically and mentally. The results show a clear correlation between the perception participants have of their low back endurance and the objective measurements obtained through mDurance. In this first approach to building a classifier to assess low back muscle endurance and activity with the mDurance system, the measurements and the players' answers yield an accuracy of 68.3% and a specificity of 83.8%.

Ignacio Diaz-Reyes, Miguel Damas, Jose Antonio Moral-Munoz, Oresti Banos

Probabilistic Leverage Scores for Parallelized Unsupervised Feature Selection

Dimensionality reduction is often crucial for the application of machine learning and data mining. Feature selection methods can be employed for this purpose, with the advantage of preserving interpretability. There exist unsupervised feature selection methods based on matrix factorization algorithms, which can help choose the most informative features in terms of approximation error. Randomized methods have been proposed recently that provide better theoretical guarantees and better approximation errors than their deterministic counterparts, but their computational costs can be significant when dealing with big, high-dimensional data sets. Some existing randomized and deterministic approaches require the computation of the singular value decomposition in $$O(mn\min (m,n))$$ time (for m samples and n features) to provide leverage scores. This compromises their applicability to domains of even moderately high dimensionality. In this paper we propose the use of Probabilistic PCA to compute the leverage scores in O(mnk) time, enabling the application of some of these randomized methods to large, high-dimensional data sets. We show that this approach rapidly provides an approximation of the leverage scores that works well in this context. In addition, we offer a parallelized version built on the Resilient Distributed Dataset (RDD) abstraction of Apache Spark, making it horizontally scalable to enormous numbers of data instances. We validate the performance of our approach on several data sets comprising real-world and synthetic data.

Bruno Ordozgoiti, Sandra Gómez Canaval, Alberto Mozo
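The paper above computes leverage scores via Probabilistic PCA; its method is not reproduced here. As a rough illustrative sketch of what feature-wise leverage scores are (using a generic randomized range finder as a stand-in for the authors' PPCA step, with an assumed oversampling parameter), the score of feature j is the squared norm of row j of the top-k right singular vectors:

```python
import numpy as np

def leverage_scores(X, k, oversample=10, seed=0):
    """Approximate the leverage scores of the n features of X (m x n) from
    the top-k right singular vectors, using a randomized range finder so
    the cost stays near O(m*n*(k+oversample)) instead of a full SVD."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    # Sketch the row space of X with (k + oversample) random projections.
    omega = rng.normal(size=(m, k + oversample))
    Y = X.T @ omega                 # n x (k+p): spans approx. top right singular directions
    Q, _ = np.linalg.qr(Y)          # orthonormal basis of that subspace
    # A small SVD on the projected matrix recovers the top-k directions.
    B = X @ Q                       # m x (k+p)
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    V_k = Q @ Vt[:k].T              # n x k: approx. top-k right singular vectors
    return np.sum(V_k**2, axis=1)   # leverage of feature j = ||V_k[j, :]||^2
```

Since the columns of `V_k` are orthonormal, the scores are non-negative and sum to k, which is the sanity check usually applied to leverage-score approximations.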

General Noise SVRs and Uncertainty Intervals

Building uncertainty estimates is still an open problem for most machine learning regression models. On the other hand, general noise-dependent cost functions have been proposed recently for Support Vector Regression (SVR); these should be more effective when applied to regression problems whose underlying noise distribution matches the one assumed by the cost function. Taking this into account, we first propose a framework that combines general noise SVR models, trained with the Naive Online R Minimization Algorithm (NORMA), with uncertainty interval estimates for their predictions. We then provide the theoretical details required to implement this framework for several noise distributions and carry out experiments. The results show an improvement over those obtained by classical $$\epsilon$$-SVR and also support the hypothesis that the model and error intervals built with the noise distribution assumption closest to the real one yield the best results. Finally, in accordance with the principle of reproducible research, we make the implementations developed and the datasets employed in the experiments publicly and easily available.
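The abstract pairs point predictions with noise-model-based error intervals but gives no formulas. As a hedged sketch of the interval-building step alone (assuming, purely for illustration, zero-mean Laplace residual noise fitted by maximum likelihood on held-out residuals; the paper's general-noise SVR training itself is not reproduced), the interval half-width follows from the Laplace quantile function:

```python
import numpy as np

def laplace_interval(residuals, coverage=0.95):
    """Half-width of a symmetric prediction interval under a zero-mean
    Laplace noise model fitted to held-out residuals.
    The MLE of the Laplace scale b is the mean absolute residual, and
    P(|e| <= t) = 1 - exp(-t/b), so t = -b * ln(1 - coverage)."""
    b = np.mean(np.abs(residuals))      # MLE of the scale parameter
    return -b * np.log(1.0 - coverage)  # quantile of the assumed noise law
```

A prediction interval for a new point is then `y_hat +/- laplace_interval(residuals)`; the paper's hypothesis is precisely that intervals built from the noise distribution closest to the true one achieve the best coverage.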

Towards Visual Training Set Generation Framework

The performance of trained computer vision algorithms is largely dependent on the amount of data on which they are trained. Creating large labeled datasets is very expensive, and therefore many researchers use synthetically generated images with automatic annotations. To this end we have created a general framework that allows researchers to generate a practically unlimited number of images from a set of 3D models, textures and material settings. We leverage the Voxel Cone Tracing technology implemented by NVIDIA to render photorealistic images in real time without any kind of precomputation. We have built this framework with two use cases in mind: (i) real-world applications, where a database of synthetically generated images can compensate for small or non-existent datasets, and (ii) empirical testing of theoretical ideas by creating training sets with a known inner structure.

Jan Hůla, Irina Perfilieva, Ali Ahsan Muhummad Muzaheed

Backmatter

Further Information