
About this Book

The two-volume set LNCS 11486 and 11487 constitutes the proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2019, held in Almería, Spain, in June 2019. The total of 103 contributions was carefully reviewed and selected from 190 submissions during two rounds of reviewing and improvement. The papers are organized in two volumes. The first is devoted to understanding brain function and emotions, addressing topics such as new tools for analyzing neural data, detecting emotional states, and interfacing with physical systems. The second volume deals with bioinspired systems and biomedical applications of machine learning, and contains papers related to bioinspired programming strategies as well as contributions oriented to computational solutions of engineering problems in different application domains, such as biomedical systems or big data solutions.





Towards a General Method for Logical Rule Extraction from Time Series

Extracting rules from time series is a well-established temporal data mining technique. The current literature contains a number of different algorithms and experiments that allow one to abstract time series and, later, extract meaningful rules from them. In this paper, we approach this problem in a rather general way, without resorting, as many other methods do, to expert knowledge and ad-hoc solutions. Our very simple temporal abstraction method allows us to transform time series into timelines, which can then be used for logical temporal rule extraction using an existing temporal adaptation of the APRIORI algorithm. We have tested this approach on real data, obtaining promising results.

Guido Sciavicco, Ionel Eduard Stan, Alessandro Vaccari
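The paper's own abstraction procedure is not spelled out in the abstract; as a minimal sketch of the general idea, the hypothetical helper below maps a numeric series onto a three-symbol timeline using two assumed value thresholds, producing the kind of interval representation that temporal APRIORI-style rule miners consume.

```python
def abstract_series(values, low, high):
    """Map a numeric time series onto a symbolic timeline.

    Each value becomes a symbol (L/M/H by threshold), and consecutive
    identical symbols are merged into intervals (start, end, symbol).
    The thresholds `low` and `high` are illustrative parameters.
    """
    def symbol(v):
        if v < low:
            return "L"
        if v > high:
            return "H"
        return "M"

    timeline = []
    for t, v in enumerate(values):
        s = symbol(v)
        if timeline and timeline[-1][2] == s:
            start, _, _ = timeline[-1]
            timeline[-1] = (start, t, s)  # extend the current interval
        else:
            timeline.append((t, t, s))    # open a new interval
    return timeline

print(abstract_series([1, 2, 8, 9, 4], low=3, high=7))
# [(0, 1, 'L'), (2, 3, 'H'), (4, 4, 'M')]
```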

A Principled Two-Step Method for Example-Dependent Cost Binary Classification

This paper presents a principled two-step method for example-dependent cost binary classification problems. The first step obtains a consistent estimate of the posterior probabilities by training a Multi-Layer Perceptron with a Bregman surrogate cost. The second step uses these estimates in a Bayesian decision rule. When working with imbalanced datasets, neutral re-balancing yields better estimates of the posterior probabilities. Experiments with real datasets show the good performance of the proposed method in comparison with other procedures.

Javier Mediavilla-Relaño, Aitor Gutiérrez-López, Marcelino Lázaro, Aníbal R. Figueiras-Vidal
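The Bayesian decision rule of the second step can be illustrated with a small sketch; the function name and cost values below are illustrative, not taken from the paper. Given an estimated posterior p = P(y = 1 | x) and example-dependent costs for false positives and false negatives, the rule predicts the positive class whenever its expected cost is lower.

```python
def bayes_decision(p, c_fp, c_fn):
    """Example-dependent cost decision rule: predict the positive class when
    the expected cost of a false negative, p * c_fn, exceeds the expected
    cost of a false positive, (1 - p) * c_fp."""
    return 1 if p * c_fn > (1.0 - p) * c_fp else 0

# The same posterior estimate leads to different decisions when costs differ:
print(bayes_decision(0.3, c_fp=1.0, c_fn=1.0))   # 0: symmetric costs
print(bayes_decision(0.3, c_fp=1.0, c_fn=10.0))  # 1: missing a positive is costly
```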

Symbiotic Autonomous Systems with Consciousness Using Digital Twins

The IEEE work-group for Symbiotic Autonomous Systems defined a Digital Twin as a digital representation or virtual model of any characteristics of a real entity (system, process or service), including human beings. The described characteristics are a subset of the overall characteristics of the real entity; the choice of which characteristics are considered depends on the purpose of the digital twin. This paper introduces the concept of the Associative Cognitive Digital Twin as a real-time, goal-oriented, augmented virtual description that explicitly includes the external relationships of the considered entity for the considered purpose. The corresponding graph data model of the involved world supports artificial consciousness and allows an efficient understanding of the involved ecosystems and related higher-level cognitive activities. The cognitive architecture defined for Symbiotic Autonomous Systems is mainly based on the consciousness framework developed. As a specific application example, an architecture for safety-critical systems is shown.

Felipe Fernández, Ángel Sánchez, José F. Vélez, A. Belén Moreno

Deep Support Vector Classification and Regression

Support Vector Machines (SVMs) are one of the most popular machine learning models for supervised problems and have proved to achieve great performance in a broad range of prediction tasks. However, they can suffer from scalability issues when working with large sample sizes, a common situation in the big data era. Deep Neural Networks (DNNs), on the other hand, can handle large datasets with greater ease, and in this paper we propose Deep SVM models that combine the highly non-linear feature processing of DNNs with SVM loss functions. As we will show, these models can achieve performances similar to those of standard SVMs while offering greater sample scalability.

David Díaz-Vico, Jesús Prada, Adil Omari, José R. Dorronsoro
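As a rough sketch of the loss side of a Deep SVM (the deep feature extractor is omitted; a plain linear scorer, toy separable data and the learning rate below stand in for it), the following shows the SVM hinge loss and a few subgradient steps:

```python
def hinge_loss(w, b, X, y):
    """SVM hinge loss max(0, 1 - y f(x)) for a linear scorer f(x) = w.x + b;
    in a Deep SVM the same loss sits on top of features produced by a DNN."""
    total = 0.0
    for x, t in zip(X, y):  # labels t are in {-1, +1}
        f = sum(wi * xi for wi, xi in zip(w, x)) + b
        total += max(0.0, 1.0 - t * f)
    return total / len(X)

# A few subgradient steps on a toy separable problem:
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, t in zip(X, y):
        f = sum(wi * xi for wi, xi in zip(w, x)) + b
        if t * f < 1.0:  # margin violated: step along the subgradient
            w = [wi + lr * t * xi for wi, xi in zip(w, x)]
            b += lr * t
print(hinge_loss(w, b, X, y))  # 0.0 once every margin is satisfied
```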

An Experimental Study on the Relationships Among Neural Codes and the Computational Properties of Neural Networks

Biological neural networks (BNNs) have inspired the creation of artificial neural networks (ANNs) [19]. One of the properties of BNNs is computational robustness, but this property is often overlooked in computer science because ANNs are usually virtualizations executed in a physical machine that lacks computational robustness. However, it was recently proposed that computational robustness could be a key feature that drives the selection of the computational model in the evolution of animals [20]. Until now, only energetic cost and processing time had been considered as the features that drove the evolution of the nervous system. The new standpoint leads us to consider whether computational robustness could have driven the evolution of not only the computational model but also other nervous system traits in animals through the process of natural selection. Because an important feature of an animal's nervous system is its neural code, we tested the relationship between the computational properties of feed-forward neural networks and their neural codes through in silico experiments. We found two main results: a relationship exists between the number of epochs needed to train a feed-forward neural network using back-propagation and the neural code of the network, and a relationship exists between the computational robustness and the neural code of a feed-forward neural network. The first result is important to ANNs and the second to BNNs.

Sergio Miguel-Tomé

Uninformed Methods to Build Optimal Choice-Based Ensembles

The paper explores uninformed methods to build ensembles using aggregations of single choice models. The research aims at developing new models that combine the performance of ensembles with the transparency of choice models. The dataset used to fit the models included rational, emotional and attentional features that were used as explanatory variables of the users' choices. The results point out the superior performance of bagging methods for building optimal choice-based ensembles.

Ameed Almomani, Eduardo Sánchez



Design and Implementation of a Robotics Learning Environment to Teach Physics in Secondary Schools

Robotics has proved to be a very attractive tool for students, especially in STEM areas that involve active exploration. Nevertheless, learning activities with robotics kits are usually isolated from the official curriculum, and no evaluation of students' learning outcomes is provided. In this paper, we present IDEE, an integrated learning environment which uses robotics as a learning tool for a physics laboratory. Experiments in IDEE are proposed following a scientific experimental procedure. Students in IDEE have to achieve certain learning goals or skills when solving physics problems. The skills accomplished by a student while using IDEE are stored in student models; to this end, students' performance data is stored in a database. On the basis of students' skills, IDEE shows certain hints to help students. This information can also be shown to teachers to help them support students' learning process.

Samantha Orlando, Félix de la Paz López, Elena Gaudioso

Multi-robot User Interface for Cooperative Transportation Tasks

In this research we attempt to build a user interface for controlling a group of omnidirectional robots to carry out the transportation of objects with convex edges. Our method provides manual guidance of the robots to their initial positions, initializes the collective grasping/lifting process and, finally, gives the user high-level control over the velocity of the load during transportation to the required destination. The hardware and software structure of the system is described and a simulation is performed to convey the data from the robots' sensors.

Majd Kassawat, Enric Cervera, Angel P. del Pobil

Precise Positioning and Heading for Autonomous Scouting Robots in a Harsh Environment

This document describes the design and verification of the GreenPatrol localization subsystem. GreenPatrol is an autonomous robot system intended to operate in light indoor environments, such as greenhouses, detecting and treating pests in high-value crops such as tomato and pepper. High-accuracy positioning is required to perform this task in a trustworthy and safe manner. The proposed localization solution is described hereafter, and tests have been carried out in a real greenhouse environment. The proposed localization subsystem consists of two differentiated parts: (1) an absolute localization module which uses precise-positioning GNSS techniques in combination with the robot's proprioceptive sensors (i.e. inertial sensors and odometry), with an estimated position error lower than 30 cm, and (2) a relative localization module that takes the absolute solution as input and combines it with the robot's range readings to generate a model of the environment and to estimate the robot's position and heading inside it. From the analysis of the test results it follows that the absolute localization is able to provide a heading solution with an accuracy of 5 $$^\circ $$ more than 85% of the time. The relative localization algorithm, on the other hand, gives an estimation of the robot's position inside the map which does not significantly improve the absolute solution, but it is able to refine the heading estimation and to absorb transitory error peaks. This paper is organized as follows: first, an introduction describing the proposed robot localization system and the state of the art of the involved technologies; second, a description of the main subsystems involved and the associated problems; then, the tests carried out in a real scenario and the results obtained to check the behavior of each subsystem; and finally, conclusions and future work.

David Obregón, Raúl Arnau, María Campo-Cossio, Juan G. Arroyo-Parras, Michael Pattinson, Smita Tiwari, Iker Lluvia, Oscar Rey, Jeroen Verschoore, Libor Lenza, Joaquin Reyes



Gesture Control Wearables for Human-Machine Interaction in Industry 4.0

The deployment of Industry 4.0 will achieve great aims regarding production rate, control, data analysis, cost, energy consumption and flexibility. However, one of the most significant aspects is the human factor. The robots, machinery and knowledge required could lead to a social problem for those operators who are not prepared to face such large technological challenges. This could cause a technological gap resulting in rejection or disapproval of beneficial technology. To preserve this emerging paradigm's balance, researchers and developers must consider using intelligent human-machine interaction capabilities before building novel industrial deployments. This paper introduces a smart gesture control system that facilitates movements of a robotic arm with the aid of two wearable devices. By using this kind of control system, any worker should fit into the new paradigm in which robots take over certain precise, hazardous or heavy tasks. Furthermore, this proposal is suited to industry scenarios, since it fulfills fundamental requirements regarding success rate and real-time control as well as high flexibility and scalability, which are key factors in Industry 4.0.

Luis Roda-Sanchez, Teresa Olivares, Celia Garrido-Hidalgo, Antonio Fernández-Caballero

Computing the Missing Lexicon in Students Using Bayesian Networks

The lexicon available to a person usually grows according to their needs throughout their life. This is especially important during the early stages of students' education; in every class, one of the objectives is to make students capable of using an extensive vocabulary appropriate to the different topics in which they are involved. We use an online platform, Lexmath, which contains data (the latent lexicon) of a significant number of students in a specific geographic region of Chile. This work introduces a software application which uses data from Lexmath to determine the missing lexicon in students by means of Bayesian networks. The goal of this development is to make students' lexical weaknesses available to teachers, in order to generate recommendations to improve the available lexicon.

Pedro Salcedo L., M. Angélica Pinninghoff J., Ricardo Contreras A.

Control of Transitory Take-Off Regime in the Transportation of a Pendulum by a Quadrotor

In this article, a Q-learning control strategy is proposed for a system consisting of a UAV lifting a damped pendulum from the ground. This dynamic system is highly nonlinear and, therefore, obtaining a smooth and precise behaviour is a challenging task. Aerial transportation of a pendulum in a stable way is a step forward in the state of the art, which makes it possible to study the delivery of different deformable linear objects.

Julián Estévez, Jose Manuel López-Guede

Improving Scheduling Performance of a Real-Time System by Incorporation of an Artificial Intelligence Planner

Scheduling is one of the classic problems in real-time systems. In real-time adaptive applications, the implementation of some sort of run-time intelligence is required in order to build real-time intelligent systems capable of operating adequately in dynamic and complex environments. The incorporation of artificial intelligence planning techniques into a real-time architecture allows the on-line reaction to external and internal unexpected events. In this work, a layered architecture integrating real-time scheduling and artificial intelligence planning techniques has been designed, in order to implement a real-time scheduler capable of performing effectively in these scenarios. This multi-level scheduler has been implemented and evaluated in a simulated information access system designed to broadcast information to mobile users. Results show that the incorporation of artificial intelligence into the real-time scheduler improves the performance, adaptiveness and responsiveness of the system.

Jesus Fernandez-Conde, Pedro Cuenca-Jimenez, Rafael Toledo-Moreo

Convolutional Neural Networks for Olive Oil Classification

The analysis of olive oil quality is a task with a lot of impact nowadays due to the large-scale fraud that has been observed in the olive oil market. To address this problem we have trained a Convolutional Neural Network (CNN) to classify 701 images obtained using the GC-IMS methodology (gas chromatography coupled to ion mobility spectrometry). The aim of this study is to show that Deep Learning techniques can be a great alternative both to traditional oil classification methods based on the subjectivity of the standardized sensory analysis according to the panel test method, and to novel techniques provided by the chemical field, such as chemometric markers, a technique that is quite expensive since the markers are manually extracted by an expert. The analyzed data includes instances belonging to two different crops, the first covering the years 2014–2015 and the second 2015–2016. Both harvests have instances classified into the three existing oil categories: extra virgin olive oil (EVOO), virgin olive oil (VOO) and lampante olive oil (LOO). The results demonstrate that Deep Learning techniques in combination with chemical techniques are a good alternative to the panel test method, achieving even better accuracy than the results obtained in previous work.

Belén Vega-Márquez, Andrea Carminati, Natividad Jurado-Campos, Andrés Martín-Gómez, Lourdes Arce-Jiménez, Cristina Rubio-Escudero, Isabel A. Nepomuceno-Chamorro

An Indoor Illuminance Prediction Model Based on Neural Networks for Visual Comfort and Energy Efficiency Optimization Purposes

Energy and comfort management are becoming increasingly relevant topics in building operation, for example when looking for trade-off solutions that maintain adequate comfort conditions within an energy-efficient framework by means of appropriate control and optimization techniques. Moreover, these strategies can take advantage of predictions of the involved variables. In this regard, visual comfort conditions are a key aspect to consider. Hence, in this paper an indoor illuminance prediction model is proposed, based on a divide-and-rule strategy which makes use of Artificial Neural Networks and polynomial interpolation. This model has been trained, validated and tested using real data gathered in a bioclimatic building. As a result, an acceptable forecast of the indoor illuminance level was obtained, with a mean absolute error equal to 8.9 lx and a relative error lower than $$2\%$$ .

M. Martell, M. Castilla, F. Rodríguez, M. Berenguel

Using Probabilistic Context Awareness in a Deliberative Planner System

When a Social Robot is deployed in a service environment it has to manage highly dynamic scenarios that present a set of unknown circumstances: objects in different places and humans walking around. These conditions are challenging for an autonomous robot that needs to accomplish assistive tasks. Such partially known scenarios have negative effects on hybrid architectures with deliberative planning systems, adding extra sub-tasks to the main goal or causing continuous re-planning with deadlocks. This paper proposes the use of a probabilistic Context Awareness System that provides a set of belief states of the environment to a symbolic planner, enabling PDDL metrics. The Context Awareness System is composed of a Deep Learning classifier that processes audio input from the environment and a probabilistic inference module that generates symbolic knowledge. This approach delivers a method to generate correct plans efficiently. The solution presented in this paper is being successfully applied in a robot running the Robot Operating System (ROS) in two experimental scenarios that illustrate the utility of the technique, showing a reduction in execution time.

Jonatan Gines Clavero, Francisco J. Rodriguez, Francisco Martín Rico, Angel Manuel Guerrero, Vicente Matellán

Combining Data-Driven and Domain Knowledge Components in an Intelligent Assistant to Build Personalized Menus

In this paper, some new components that have been integrated into the Diet4You system for the generation of nutritional plans are introduced. Negative user preferences have been modelled and introduced into the system. Furthermore, the cultural eating styles originating from the location where the user lives have been taken into account by dividing the original menu plan into sub-plans. Each sub-plan is in charge of optimizing one of the meals of one day in the user's personal menu. The main underlying reasoning mechanism used is case-based reasoning, which reuses previous menu configurations according to the nutritional plan, the corresponding hard constraints and the user preferences to produce a personalized menu recommendation for a given user. It uses the cognitive analogical reasoning technique in addition to ontologies, nutritional databases and expert knowledge. The preliminary results with some application examples used to test the new contextual components have been very satisfactory according to the evaluation of the experts.

Miquel Sànchez-Marrè, Karina Gibert, Beatriz Sevilla-Villaneva

Robust Heading Estimation in Mobile Phones

Nowadays, mobile phones are used more and more for purposes that have nothing to do with phone calls or simple data transfers. One example is indoor inertial navigation. Within this task, a central problem is to obtain a good estimation of the user heading, robust to magnetic interference and changes in the position of the mobile device with respect to the user. In this paper we propose a method able to provide a robust user heading as a result of detecting the relative position of the mobile phone with respect to the user, together with a heuristic computation of the heading from different Euler representations. We have performed an experimental validation of our proposal comparing it with the Android default compass. The results confirm the good performance of our method.

Fernando E. Casado, Adrián Nieto, Roberto Iglesias, Carlos V. Regueiro, Senén Barro
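The abstract's heuristic over several Euler representations is not reproduced here; as a minimal related sketch, the standard extraction of yaw (heading) from a unit quaternion, the orientation representation that Android's rotation-vector sensor exposes, looks like this:

```python
import math

def yaw_from_quaternion(w, x, y, z):
    """Heading (yaw, rotation about the vertical axis) from a unit
    quaternion (w, x, y, z), using the standard ZYX Euler convention."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# A 90-degree rotation about the z axis:
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(math.degrees(yaw_from_quaternion(*q)))  # approximately 90.0
```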

Bioinspired Systems


Crowding Differential Evolution for Protein Structure Prediction

A hybrid combination of differential evolution and a local refinement of protein structures provided by fragment replacements was applied to protein structure prediction, using the coarse-grained protein conformation representation of the Rosetta environment. Given the deceptiveness of the Rosetta energy model, an evolutionary computing niching method, crowding, was incorporated into the evolutionary algorithm with the aim of obtaining optimized solutions that at the same time provide a set of diverse protein folds. Thus, the probability of obtaining optimized conformations close to the native structure is increased.

Daniel Varela, José Santos
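A generic sketch of crowding-based differential evolution (not the Rosetta-specific algorithm of the paper; the population size, bounds and toy bimodal objective below are illustrative). The defining trait of crowding is that a trial vector competes against its *nearest* population member rather than its parent, which lets distinct basins survive in the same population:

```python
import random

def crowding_de(f, dim, pop_size=20, iters=200, F=0.5, CR=0.9, seed=1):
    """Differential evolution with crowding replacement: each trial vector
    replaces the nearest population member only if it improves on it,
    preserving several distinct basins (niches) at once."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [a[k] + F * (b[k] - c[k])
                     if rng.random() < CR or k == j_rand else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            near = min(range(pop_size), key=lambda j: dist(trial, pop[j]))
            if ft < fit[near]:  # crowding: replace the nearest, not the parent
                pop[near], fit[near] = trial, ft
    return pop, fit

# Bimodal objective with two global minima, near x = -2 and x = +2:
pop, fit = crowding_de(lambda x: (x[0] ** 2 - 4.0) ** 2, dim=1)
print(min(fit))
```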

Bacterial Resistance Algorithm. An Application to CVRP

This work considers an approach called the Bacterial Antibiotic Resistance Algorithm (BARA), in which a bacterial colony represents a set of candidate solutions subjected to the presence of an antibiotic as a pressure factor separating good from bad solutions. In our terms, this classification yields two groups: resistant and non-resistant bacteria. Then, by using genetic variation mechanisms (conjugation, transformation, and mutation), it is expected that non-resistant bacteria may improve their defense capability and thus enhance their probability of survival. The proposed algorithm is implemented and evaluated on instances of the Capacitated Vehicle Routing Problem (CVRP). Results are comparable to those obtained by similar approaches.

M. Angélica Pinninghoff J., José Orellana M., Ricardo Contreras A.

Conceptual Description of Nature-Inspired Cognitive Cities: Properties and Challenges

Smart cities result from the wide adoption of information and communication technologies (ICT) aimed at addressing challenges arising from overpopulation and resource shortages. Despite their important and fundamental contributions, ICT alone can hardly cope with all the challenges posed by the growing demands of overpopulated cities. Hence, novel approaches based on innovative paradigms are needed. In this article we revisit the concept of the Cognitive City, founded on Siemens' Connectivism and understood as the evolution of current smart cities augmented with artificial intelligence, the internet of things, and ubiquitous computing. We present the concept of the cognitive city as a complex system of systems resembling complex adaptive systems with natural resilience capabilities. We build and propose our model upon the principles of decentralized control, stigmergy and locality, multi-directional networking, randomness, specialization and redundancy. We also show a realistic application of our model. With the aim of setting the ground for further research, we summarize the main challenges that remain open, from both a societal and a technical perspective. Hence, the goal of this article is not to provide a solution to those challenges but to raise awareness of the problem and foster further research in the multiple lines that remain open.

Juvenal Machin, Agusti Solanas

Genetic Algorithm to Evolve Ensembles of Rules for On-Line Scheduling on Single Machine with Variable Capacity

On-line scheduling is often required in real-life situations. This is the case for the one-machine scheduling problem with variable capacity and tardiness minimization, denoted $$(1,Cap(t)||\sum T_i)$$ . This problem arose from a charging station where the charging periods for large fleets of electric vehicles (EVs) must be scheduled under limited power and other technological constraints. The control system of the charging station requires solving many instances of this problem on-line, and the characteristics of these instances are strongly dependent on the load and restrictions of the charging station at a given time. In this paper, the goal is to evolve small ensembles of priority rules such that, for any instance of the problem, at least one of the rules in the ensemble has a high chance of producing a good solution. To do so, we propose a Genetic Algorithm (GA) that evolves ensembles of rules from a large set of rules previously calculated by a Genetic Program (GP). We conducted an experimental study showing that the GA is able to evolve ensembles of rules covering instances with different characteristics, so they outperform ensembles of both classic priority rules and the best rules obtained by the GP.

Francisco J. Gil-Gala, Ramiro Varela
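The "at least one rule per instance" property can be illustrated with a toy sketch: two classic priority rules dispatching jobs on a single machine with unit capacity (the variable-capacity constraint of the paper is omitted; the rule names and job data are illustrative), where the ensemble keeps whichever rule yields the lower total tardiness.

```python
def schedule_tardiness(jobs, rule):
    """Dispatch jobs one at a time on a single machine using the given
    priority rule; return total tardiness. A job is (processing_time, due_date)."""
    t, tardiness, pending = 0, 0, list(jobs)
    while pending:
        job = min(pending, key=lambda j: rule(j, t))  # lowest value = highest priority
        pending.remove(job)
        t += job[0]
        tardiness += max(0, t - job[1])
    return tardiness

# Two classic rules; the ensemble keeps whichever does best on each instance.
edd = lambda job, t: job[1]   # earliest due date
spt = lambda job, t: job[0]   # shortest processing time
jobs = [(4, 4), (1, 9), (2, 3), (3, 12)]
best = min(schedule_tardiness(jobs, r) for r in (edd, spt))
print(best)  # 2 (EDD wins on this instance)
```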

Multivariate Approach to Alcohol Detection in Drivers by Sensors and Artificial Vision

This work presents a system for detecting excess alcohol in drivers in order to reduce road traffic accidents. To do so, criteria such as the alcohol concentration in the environment, the facial temperature of the driver and the width of the pupil are considered. To measure the corresponding variables, the data acquisition procedure uses sensors and artificial vision. Subsequently, data analysis is performed in stages for prototype selection and supervised classification. Accordingly, the acquired data can be stored and processed in a system with low computational resources. As a remarkable result, the number of training samples is significantly reduced while an admissible classification performance is achieved, thus reaching settings suitable for the given device's conditions.

Paul D. Rosero-Montalvo, Vivian F. López-Batista, Diego H. Peluffo-Ordóñez, Vanessa C. Erazo-Chamorro, Ricardo P. Arciniega-Rocha

Optimization of Bridges Reinforcements with Tied-Arch Using Moth Search Algorithm

The deterioration of bridges that cross watercourses is a situation that must be resolved in a timely manner to avoid the collapse of the structure. Their repair can mean a high cost as well as road and environmental disruption. An effective solution, which minimizes this impact, is the installation of a superstructure in the form of an arch that covers the entire length of the bridge and which, by means of hangers anchored to the deck of the bridge, allows the arch to support the weight. This structure must try to maintain the original properties of the bridge, so the calculation of the tension magnitudes of the hangers and the order in which they are applied must not cause damage to the structure. In this document, we propose to optimize the process of calculating the hanger magnitudes and the order in which they must be applied using the moth search algorithm, in order to obtain one or several satisfactory solutions. Finally, we present the results obtained for an arch bridge with three hangers and thus evaluate the efficiency and effectiveness of the process in comparison with the Black Hole Algorithm.

Óscar Carrasco, Broderick Crawford, Ricardo Soto, José Lemus-Romani, Gino Astorga, Agustín Salas-Fernández

Repairing Infeasibility in Scheduling via Genetic Algorithms

Scheduling problems arise in an ever-increasing number of application domains. Although efficient algorithms exist for a variety of such problems, it is sometimes necessary to satisfy hard constraints that make the problem infeasible. In this situation, identifying possible ways of repairing infeasibility is a task of utmost interest. We consider this scenario in the context of job shop scheduling with a hard makespan constraint and address the problem of finding the largest possible subset of jobs that can be scheduled within that constraint. To this aim, we develop a genetic algorithm that looks for solutions in the search space defined by an efficient solution builder, also proposed in the paper. Experimental results show the suitability of our approach.

Raúl Mencía, Carlos Mencía, Ramiro Varela
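For intuition only: on a single machine, the largest feasible subset of jobs under a hard makespan can be built greedily by taking jobs shortest-first; in the job shop setting of the paper, such a builder only defines the search space that the genetic algorithm then explores. The function below is an illustrative sketch, not the paper's solution builder.

```python
def max_jobs_within_makespan(durations, makespan):
    """Greedy builder: select jobs shortest-first while the running total
    stays within the hard makespan; return the chosen job indices.
    Optimal on a single machine, only a heuristic seed in a job shop."""
    chosen, t = [], 0
    for i, d in sorted(enumerate(durations), key=lambda x: x[1]):
        if t + d <= makespan:
            chosen.append(i)
            t += d
    return sorted(chosen)

print(max_jobs_within_makespan([5, 2, 8, 1, 4], makespan=8))
# [1, 3, 4]: jobs of length 2, 1 and 4 fit; the rest would exceed 8
```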

Application of Koniocortex-Like Networks to Cardiac Arrhythmias Classification

The KLN (Koniocortex-Like Network) is a novel bioinspired Artificial Neural Network that models relevant biological properties of neurons, such as synaptic directionality, long-term potentiation, long-term depression, metaplasticity and intrinsic plasticity, together with the natural normalization of sensory inputs and Winner-Take-All competitive learning. As a result, the KLN performs deeper learning on datasets, exhibiting several high-order properties of biological brains such as associative memory, scalability and even continuous learning. KLN learning is originally unsupervised, and its architecture is inspired by the koniocortex, the first cortical layer receiving sensory inputs, where map reorganization and feature extraction have been identified, as is the case in the visual cortex. This new model has shown great potential on synthetic inputs, and research now focuses on its performance in complex problems involving real data, in comparison with state-of-the-art supervised and unsupervised techniques. In this paper we apply the KLN to explore its capabilities on one of the biggest problems facing today's society and the medical community: the early detection of cardiovascular disease, the world's number one killer, with 17.9 million deaths every year. Results of the KLN on the classification of cardiac arrhythmias from the well-known MIT-BIH cardiac arrhythmias database are reported.

Santiago Torres-Alegre, Yasmine Benchaib, José Manuel Ferrández Vicente, Diego Andina

Machine Learning for Big Data and Visualization


Content Based Image Retrieval by Convolutional Neural Networks

In this paper, we present a Convolutional Neural Network (CNN) for feature extraction in Content-Based Image Retrieval (CBIR). The proposed CNN aims at reducing the semantic gap between low-level and high-level features, thus improving retrieval results. Our CNN is the result of a transfer learning technique using the pretrained AlexNet network. It learns how to extract representative features from a learning database and then uses this knowledge for query feature extraction. Experiments performed on the Wang (Corel 1K) database show a significant improvement in terms of precision over classic state-of-the-art approaches.

Safa Hamreras, Rafaela Benítez-Rochel, Bachir Boucheham, Miguel A. Molina-Cabello, Ezequiel López-Rubio

Deep Learning Networks with p-norm Loss Layers for Spatial Resolution Enhancement of 3D Medical Images

Nowadays, obtaining high-quality magnetic resonance (MR) images is a complex problem due to several acquisition factors, but it is crucial in order to perform good diagnostics. Resolution enhancement is a typical procedure applied after image generation. State-of-the-art works gather a large variety of methods for super-resolution (SR), among which deep learning has become very popular in recent years. Most SR deep-learning methods are based on the minimization of the residuals through the use of Euclidean loss layers. In this paper, we propose an SR model based on the use of a p-norm loss layer to improve the learning process and obtain a better high-resolution (HR) image. This method was implemented using a three-dimensional convolutional neural network (CNN) and tested for several norms in order to determine the most robust fit. The proposed methodology was trained and tested with sets of structural T1-weighted MR images and showed better outcomes quantitatively, in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), with both the restored images and the calculated residual images showing better CNN outputs.

Karl Thurnhofer-Hemsi, Ezequiel López-Rubio, Núria Roé-Vellvé, Miguel A. Molina-Cabello
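The core idea above — replacing the Euclidean (p = 2) loss with a general p-norm on the residuals — can be illustrated with a minimal NumPy sketch. This is not the authors' 3D CNN implementation; the function names and the toy residual vector are illustrative only:

```python
import numpy as np

def p_norm_loss(residual, p):
    """Mean p-norm loss over a batch of residuals (predicted - target)."""
    return np.mean(np.abs(residual) ** p) / p

def p_norm_grad(residual, p):
    """Gradient of the loss with respect to the residuals."""
    return np.sign(residual) * np.abs(residual) ** (p - 1) / residual.size

# For p < 2, large residuals (outliers) are penalized less than in the
# Euclidean case, which can make the fit more robust.
residual = np.array([0.1, -0.2, 5.0])   # one large (outlier) residual
loss_l2 = p_norm_loss(residual, p=2.0)
loss_l1 = p_norm_loss(residual, p=1.0)
```

In a deep learning framework, `p_norm_grad` would be what backpropagation sends into the network from such a loss layer.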

Analysis of Dogs' Abandonment Problem Using Georeferenced Multi-agent Systems

This paper evaluates the social and public health impact of the abandonment of dogs in one of the tourist sectors of the city of Quito, “El Panecillo”. Some of the consequences of this abandonment are analyzed: overpopulation, traffic accidents and the spread of diseases, among others. Through georeferenced multi-agent modeling, the different factors involved in this problem are analyzed, and a simulation is generated that visualizes the growth of the abandoned canine population and the consequences it generates over time. The GIS-layer-based environment allows a realistic analysis of terrain conditions and geographical limitations, giving the simulation greater realism under a complex-systems approach.

Zoila Ruiz-Chavez, Jaime Salvador-Meneses, Cristina Mejía-Astudillo, Soledad Diaz-Quilachamin

Background Modeling by Shifted Tilings of Stacked Denoising Autoencoders

The effective processing of visual data without interruption is currently of supreme importance. For that purpose, the analysis system must adapt to events that may affect the data quality and maintain its performance level over time. A methodology for background modeling and foreground detection, whose main characteristic is its robustness against stationary noise, is presented in this paper. The system is based on a stacked denoising autoencoder which extracts a set of significant features for each patch of several shifted tilings of the video frame. A probabilistic model is learned for each patch, and the distinct patches which include a particular pixel are all considered for the classification of that pixel. The experiments show that classical methods in the literature suffer drastic performance drops when noise is present in the video sequences, whereas the proposed one is only slightly affected. This corroborates the robustness of our proposal, in addition to its usefulness for the processing and analysis of continuous data during uninterrupted periods of time.

Jorge García-González, Juan M. Ortiz-de-Lazcano-Lobato, Rafael M. Luque-Baena, Ezequiel López-Rubio

Deep Learning-Based Security System Powered by Low Cost Hardware and Panoramic Cameras

Automatic video surveillance systems are usually designed to detect anomalous objects present in a scene or behaving dangerously. In order to perform adequately, they must incorporate models able to achieve accurate pattern recognition in an image, and deep learning neural networks excel at this task. However, an exhaustive scan of the full image yields many image blocks or windows to analyze, which can make the time performance of the system very poor when implemented on low-cost devices. This paper presents a system which attempts to detect abnormal moving objects within an area covered by a 360° camera. The decision about which block of the image to analyze is based on a mixture distribution with two components: a uniform probability distribution, which represents blind random selection, and a mixture of Gaussian probability distributions. The Gaussian distributions represent windows in the image where anomalous objects were previously detected, and they bias the next window to analyze toward those windows of interest. The system is implemented on a Raspberry Pi single-board computer, which enables the design and implementation of a low-cost monitoring system able to perform image processing.

Jesus Benito-Picazo, Enrique Domínguez, Esteban J. Palomo, Ezequiel López-Rubio
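The window-selection strategy described above — a uniform "exploration" component mixed with Gaussians centered on past detections — can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code; the mixing weight `eps`, the spread `sigma` and the frame size are hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_window(detections, frame_shape, eps=0.3, sigma=20.0):
    """Sample the center of the next image window to analyze.

    With probability eps (or when nothing has been detected yet) the
    center is drawn uniformly over the frame (blind random selection);
    otherwise it is drawn from a Gaussian centered on a previously
    detected anomalous object, so analysis concentrates near past hits.
    """
    h, w = frame_shape
    if not detections or rng.random() < eps:
        return rng.uniform([0.0, 0.0], [h, w])      # uniform component
    center = detections[rng.integers(len(detections))]
    sample = rng.normal(center, sigma)              # Gaussian component
    return np.clip(sample, [0, 0], [h - 1, w - 1])  # keep inside the frame

detections = [np.array([100.0, 200.0])]             # one past detection
windows = np.array([next_window(detections, (480, 640)) for _ in range(1000)])
```

Most sampled windows end up clustered around the past detection, while the uniform component keeps occasionally probing the rest of the frame.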

Biomedical Applications


Neuroacoustical Stimulation of Parkinson’s Disease Patients: A Case Study

It is a well-known fact that Parkinson’s Disease (PD) patients present important alterations in speech and phonation. Recent studies have shown that neurostimulation using binaural beats has an effect on the neuromotor and cognitive conditions of patients suffering from PD, at least temporarily after stimulation. The present study aims to test whether this phenomenon has any observable effect on phonation as a manifestation of the patient’s neuromotor condition. With this aim in mind, an experimental framework has been set up, consisting of the stimulation of PD patients with two types of signals: a supposedly active signal (binaural beats and pink noise) and an inert signal (pink noise only), recording specific sustained vowels and read text before and after each stimulation process. The sustained vowels were analyzed in further depth to estimate phonation features associated with instability (jitter, shimmer, biomechanical unbalances, and tremor in different bands). A specific case study is presented, for which the analysis shows statistically significant changes in phonation before and after active neurostimulation, whereas these changes were not detected when inert neurostimulation was used. This effect could open the possibility of developing neuroacoustic rehabilitative therapies based on low-cost portable acoustical stimulation devices.

Gerardo Gálvez-García, Andrés Gómez-Rodellar, Daniel Palacios-Alonso, Guillermo de Arcas-Castro, Pedro Gómez-Vilda
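Two of the instability features named above have simple, standard definitions: local jitter is the mean cycle-to-cycle variation of the glottal period, and local shimmer is the analogous measure on cycle peak amplitudes. A minimal NumPy sketch, with illustrative (not measured) period values, assuming the periods have already been extracted from the sustained vowel:

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive glottal
    periods, relative to the mean period, in percent."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """Local shimmer: the analogous measure on cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

stable = [5.0, 5.01, 4.99, 5.0, 5.02]   # periods in ms, nearly periodic voice
unstable = [5.0, 5.4, 4.7, 5.5, 4.6]    # periods in ms, perturbed phonation
```

Higher jitter and shimmer values indicate less stable phonation, which is why before/after comparisons of these features can reveal a stimulation effect.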

Evaluating Instability on Phonation in Parkinson’s Disease and Aging Speech

Speech is controlled by axial neuromotor systems that are highly sensitive to certain neurodegenerative illnesses such as Parkinson’s Disease (PD). Patients suffering from PD present important alterations in speech, which manifest in phonation, articulation, prosody and fluency. Usually, phonation and articulation alterations are estimated using different statistical frameworks and methods. The present study introduces a new paradigm based on Information Theory fundamentals to use common statistical tools to differentiate and score PD speech on phonation and articulation estimates. A study is presented describing the performance of a methodology based on this common framework on a database including 16 PD patients, 16 age-paired healthy controls (HC) and 16 mid-age normative subjects (NS). The results point to a clear separation of both PD patients and HC subjects from NS, but an unclear differentiation between PD and HC. The most important conclusion is that a special effort is needed to establish features differentiating PD and organic laryngeal speech from aging speech.

Andrés Gómez-Rodellar, Daniel Palacios-Alonso, José Manuel Ferrández Vicente, J. Mekyska, Agustín Álvarez Marquina, Pedro Gómez-Vilda

Differentiation Between Ischemic and Heart Rate Related Events Using the Continuous Wavelet Transform

Cardiovascular diseases are one of the main causes of death in the world; as a result, much effort has been devoted to the early detection of ischemia. Traditionally, changes produced in the ST or STT segments of the heartbeat are analyzed. The main difficulty lies in alterations of the ST or STT segment produced by non-ischemic events, such as changes in the heart rate, the ventricular conduction or the cardiac electrical axis. The aim of this work is to differentiate between ischemic and heart-rate-related events using the information provided by the continuous wavelet transform of the electrocardiogram. To evaluate the performance of the classifier, the Long Term ST Database was used, which contains ischemic and non-ischemic events annotated by specialists. The analysis was performed over 77 events (52 ischemic and 25 heart-rate related), obtaining a sensitivity and positive predictivity of 86.64% for both indicators.

Carolina Fernández Biscay, Pedro David Arini, Anderson Iván Rincón Soler, María Paula Bonomini

Automatic Measurement of ISNT and CDR on Retinal Images by Means of a Fast and Efficient Method Based on Mathematical Morphology and Active Contours

This paper describes a fast and efficient method to automatically measure the ISNT and CDR in retinal images. The method is based on a robust detection of the optic disk and excavation in an enhanced retinal image by means of morphological operators. Using this coarse segmentation as initialization, two parametric active contours implemented in the frequency domain perform a fine segmentation of the optic disk and excavation. The resulting curves allow the automatic calculation of the ISNT and CDR values, which are important features in the early detection of glaucoma. The accuracy and precision of the method have been tested and compared with the evaluation of two ophthalmologists on a preliminary set of images.

Rafael Verdú-Monedero, Juan Morales-Sánchez, Rafael Berenguer-Vidal, Inmaculada Sellés-Navarro, Ana Palazón-Cabanes
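Once the disk and excavation contours are available, the two indicators themselves are simple measurements: the CDR is the cup-to-disc diameter ratio, and the ISNT rule states that in a healthy eye the neuroretinal rim width decreases in the order Inferior ≥ Superior ≥ Nasal ≥ Temporal. A small sketch with hypothetical rim-width and diameter values (all function names and numbers are illustrative, not from the paper):

```python
def cup_to_disc_ratio(cup_diameter, disc_diameter):
    """Cup-to-disc ratio; larger values are suspicious for glaucoma."""
    return cup_diameter / disc_diameter

def isnt_rule_holds(rim_widths):
    """ISNT rule: rim width should satisfy Inferior >= Superior >=
    Nasal >= Temporal in a healthy eye."""
    i, s, n, t = (rim_widths[k] for k in ("I", "S", "N", "T"))
    return i >= s >= n >= t

# Illustrative rim widths (mm) measured from the segmented contours.
healthy_rim = {"I": 0.42, "S": 0.38, "N": 0.31, "T": 0.25}
cdr = cup_to_disc_ratio(cup_diameter=0.55, disc_diameter=1.60)
```

A violated ISNT rule or an elevated CDR would flag the image for closer ophthalmological review.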

Bihemispheric Beta Desynchronization During an Upper-Limb Motor Task in Chronic Stroke Survivors

For patients with severe motor paralysis, most rehabilitation strategies require residual movements which, however, are lacking in up to 30–50% of stroke survivors. In these patients, motor-imagery-based BCI systems might play a substantial role in rehabilitation strategies. 11 severely motor-impaired stroke patients and 6 healthy participants took part in this study. During a unilateral upper-hand motor task, stroke patients showed significant modulation of sensorimotor rhythms in both hemispheres, showing that EEG signals from both hemispheres can be used for the control of BMI systems. The main findings were that the ERD amplitude was reduced in the affected hemisphere, and that when the affected hand was used the ERD was lateralized to, and more marked in, the ipsilateral (unaffected) hemisphere. Significant activation differences between the healthy and the affected hemisphere were found, suggesting the participation of different physiological mechanisms in each one, which might be explored in future experimentation to improve the design and implementation of EEG-based BMI systems and their use in the neurorehabilitation of stroke.

Santiago Ezquerro, Juan A. Barios, Arturo Bertomeu-Motos, Jorge Diez, Jose M. Sanchez-Aparicio, Luis Donis-Barber, Eduardo Fernández-Jover, N. Garcia-Aracil

Modeling and Estimation of Non-functional Properties: Leveraging the Power of QoS Metrics

Non-Functional Properties (e.g., safety, dependability or resource consumption, just to name a few) play a key role in most software systems. The RoQME Integrated Technical Project, funded by the EU H2020 RobMoSys Project, aims at contributing a model-driven tool-chain for dealing with system-level non-functional properties through the specification of global quality-of-service (QoS) metrics. The estimation of these metrics at runtime, in terms of the available contextual information, can then be used for different purposes, such as dynamic software adaptation or benchmarking. This paper describes the advances achieved in RoQME and presents one of the pilot experiments that showcase the tool-chain developed as part of the project.

Cristina Vicente-Chicote, Daniel García-Pérez, Pablo García-Ojeda, Juan F. Inglés-Romero, Adrián Romero-Garcés, Jesús Martínez

Machine-Health Application Based on Machine Learning Techniques for Prediction of Valve Wear in a Manufacturing Plant

The wear of mechanical components and their eventual failure in manufacturing plants result in companies spending time and resources that, if not scheduled with predictive or preventive maintenance, can lead to production deviation or loss with dire consequences. Nonetheless, modern plants are frequently highly monitored and automated, generating great quantities of data from a variety of sensors and actuators. Using this raw data, Machine Learning (ML) techniques can be applied to achieve predictive maintenance. In this work, a method to predict and estimate the wear of a valve is proposed, using data related to an opening valve at the Iberian Lube Base Oils Company, S.A. (ILBOC). The dataset has been built from sensor data in the plant and formatted for use with the TensorFlow package in Python. A Multi-Layer Perceptron (MLP) neural network is then used to predict and estimate the ideal behavior of the valve without wear, and a Recurrent Neural Network (RNN) to predict the real behavior of the valve with wear. Comparing both predictions yields an estimate of the valve wear. Finally, the work closes with a discussion of an early-alert system to schedule and plan the replacement of the valve, conclusions and future research.

María-Elena Fernández-García, Jorge Larrey-Ruiz, Antonio Ros-Ros, Aníbal R. Figueiras-Vidal, José-Luis Sancho-Gómez
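The wear estimate obtained by comparing the two model outputs can be realized in many ways; one simple possibility is a smoothed absolute deviation between the ideal (wear-free) and real predictions. A NumPy sketch with synthetic stand-ins for the MLP and RNN outputs (the drift term and window size are hypothetical, not from the paper):

```python
import numpy as np

def wear_estimate(ideal_pred, real_pred, window=5):
    """Wear indicator: moving average of the absolute deviation between
    the predicted ideal (wear-free) and the predicted real behavior."""
    deviation = np.abs(np.asarray(real_pred) - np.asarray(ideal_pred))
    kernel = np.ones(window) / window
    return np.convolve(deviation, kernel, mode="valid")

# Toy signals: the real behavior drifts away from the ideal one over
# time, mimicking progressive wear.
t = np.linspace(0.0, 1.0, 100)
ideal = np.sin(2 * np.pi * 5 * t)       # stands in for the MLP output
real = ideal + 0.5 * t                  # stands in for the RNN output
wear = wear_estimate(ideal, real)
```

A rising `wear` curve crossing a chosen threshold is the kind of signal an early-alert system could use to schedule the valve replacement.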

Deep Learning


Artificial Semantic Memory with Autonomous Learning Applied to Social Robots

Semantic memory stores knowledge about the meanings of words and the relationships between these meanings. In recent years, Artificial Intelligence, in particular Deep Learning, has successfully solved the identification of classes of elements in images, and even of instances of a class, providing a basic form of semantic memory. Unfortunately, incorporating new instances of a class requires a complex and long process of labeling and offline training. We are convinced that the combination of convolutional networks and statistical classifiers allows us to create a long-term semantic memory that is capable of learning online. To validate this hypothesis, we have implemented a long-term semantic memory in a social robot. The robot initially only recognizes people but, after interacting with different people, is able to distinguish them from each other. The advantage of our approach is that the process of long-term memorization is carried out autonomously, without the need for offline processing.

Francisco Martin-Rico, Francisco Gomez-Donoso, Felix Escalona, Miguel Cazorla, Jose Garcia-Rodriguez

A Showcase of the Use of Autoencoders in Feature Learning Applications

Autoencoders are techniques for data representation learning based on artificial neural networks. Unlike other feature learning methods, which may be focused on finding specific transformations of the feature space, they can be adapted to fulfill many purposes, such as data visualization, denoising, anomaly detection and semantic hashing. This work presents these applications and provides details on how autoencoders can perform them, including code samples making use of ruta, an R package with an easy-to-use interface for autoencoder design and training. Along the way, explanations of how each learning task is achieved are provided, with the aim of helping readers design their own autoencoders for these or other objectives.

David Charte, Francisco Charte, María J. del Jesus, Francisco Herrera
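The paper's code samples use the R package ruta; as a language-neutral illustration of one of the listed applications, here is a minimal NumPy sketch of a denoising autoencoder — masking noise is applied to the input while the clean input remains the reconstruction target. This is a toy single-hidden-layer model with hand-written backpropagation, not the package's implementation, and all sizes and learning rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions lying near a 2-D subspace.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis

n_in, n_hidden, lr = 8, 2, 0.1
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights

losses = []
for _ in range(500):
    mask = rng.random(X.shape) > 0.3      # masking noise: drop ~30% of inputs
    H = np.tanh((X * mask) @ W1)          # encoder on the corrupted input
    X_hat = H @ W2                        # linear decoder
    losses.append(np.mean((X_hat - X) ** 2))   # clean input is the target
    # Backpropagation of the reconstruction error.
    dX_hat = 2 * (X_hat - X) / X.size
    dW2 = H.T @ dX_hat
    dH = dX_hat @ W2.T * (1 - H ** 2)
    dW1 = (X * mask).T @ dH
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Because the network must reconstruct clean data from corrupted inputs, the hidden layer is pushed to capture the underlying structure rather than memorize individual values — the property that also makes autoencoders useful for anomaly detection.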

Automatic Image-Based Waste Classification

The management of solid waste in large urban environments has become a complex problem due to the increasing amount of waste generated every day by citizens and companies. Current Computer Vision and Deep Learning techniques can help in the automatic detection and classification of waste types for further recycling tasks. In this work, we use the TrashNet dataset to train and compare different deep learning architectures for the automatic classification of garbage types. In particular, several Convolutional Neural Network (CNN) architectures were compared: VGG, Inception and ResNet. The best classification results were obtained with a combined Inception-ResNet model, which achieved an accuracy of 88.6%. These are the best results obtained on the considered dataset.

Victoria Ruiz, Ángel Sánchez, José F. Vélez, Bogdan Raducanu

Propositional Rules Generated at the Top Layers of a CNN

So far, many rule extraction techniques have been proposed to explain the classifications of shallow Multi-Layer Perceptrons (MLPs), but very few methods have been introduced for Convolutional Neural Networks (CNNs). To fill this gap, this work presents a new technique applied to a CNN architecture including two convolutional layers. The neural network is trained on the MNIST dataset of digit images. Rule extraction is performed at the first fully connected layer by means of the Discretized Interpretable Multi-Layer Perceptron (DIMLP). This transparent MLP architecture allows us to generate symbolic rules by precisely locating axis-parallel hyperplanes. The antecedents of the extracted rules represent responses of convolutional filters, which makes it possible to determine the samples covered by each rule. Hence, we can visualize the centroid of each rule, which gives us some insight into how the network works. This represents a first step towards the explanation of CNN responses, since the final explanation would be obtained in a further processing step by generating propositional rules with respect to the input layer. In the experiments we illustrate a generated ruleset and its characteristics in terms of accuracy, complexity and fidelity, the latter being the degree of matching between CNN classifications and rule classifications. Overall, the rules reach very high fidelity. Finally, several examples of rules are visualized and discussed.

Guido Bologna

Deep Ordinal Classification Based on the Proportional Odds Model

This paper proposes a deep neural network model for ordinal regression problems based on the use of a probabilistic ordinal link function in the output layer. This link function reproduces the Proportional Odds Model (POM), a statistical linear model which projects each pattern into a one-dimensional space. In our case, the projection is estimated by a non-linear deep neural network. After that, patterns are classified using a set of ordered thresholds. In order to further improve the results, we combine this link function with a loss function that takes the distance between classes into account, based on the weighted Kappa index. The experiments are based on two ordinal classification problems, and the statistical tests confirm that our ordinal network outperforms the nominal version and other proposals considered in the literature.

Víctor Manuel Vargas, Pedro Antonio Gutiérrez, César Hervás
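The POM link function described above has a compact closed form: with a scalar projection f(x) and increasing thresholds t_1 < … < t_{K-1}, the cumulative probability of the first k classes is the sigmoid of t_k − f(x), and class probabilities are the differences of consecutive cumulative values. A NumPy sketch, where the scalar `projection` stands in for the deep network's output and the threshold values are illustrative (the weighted-Kappa loss from the paper is not reproduced here):

```python
import numpy as np

def pom_probabilities(projection, thresholds):
    """Class probabilities under the Proportional Odds Model.

    `projection` is the scalar output f(x) of the network; `thresholds`
    is an increasing vector t_1 < ... < t_{K-1}. The cumulative
    probability of the first k classes is sigmoid(t_k - f(x)), and the
    per-class probabilities are the differences of the cumulative ones.
    """
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - projection)))
    cum = np.concatenate([[0.0], cum, [1.0]])
    return np.diff(cum)          # P(y = k) for k = 1..K

probs = pom_probabilities(projection=0.3, thresholds=[-1.0, 0.0, 1.0])
```

Because all classes share one projection and only the thresholds differ, the ordering of the classes is built into the output layer by construction.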

Data Preprocessing for Automatic WMH Segmentation with FCNNs

Automatic segmentation of brain white matter hyperintensities (WMH) is a challenging problem. Recently, proposals based on Fully Convolutional Neural Networks (FCNN) have been giving very good results, as demonstrated by the top architectures of the WMH challenge. However, the problem is not yet completely solved. In this paper we analyze the influence of the preprocessing stages of the input data on a fully convolutional network (FCNN) based on the U-Net architecture. The results demonstrate that standardization, skull stripping and contrast enhancement significantly influence the segmentation results.

P. Duque, J. M. Cuadra, E. Jiménez, Mariano Rincón-Zamorano

FER in Primary School Children for Affective Robot Tutors

In the last few years, robotics has attracted much interest as a tool to support education through social interaction. Since Social-Emotional Learning (SEL) influences academic success, affective robot tutors have great potential within education. In this article we report on our research in the recognition of facial emotional expressions, aimed at improving ARTIE, an integrated environment for the development of affective robot tutors. A Fully Convolutional Neural Network (FCNN) model has been trained with the FER2013 dataset and then validated with another dataset containing facial images of primary school children, compiled during computing lab sessions. Our first prototype recognizes the facial emotional expressions of primary school children with 69.15% accuracy. As future work we intend to further refine the ARTIE Emotional Component with a view to integrating the main singularities of primary school children's emotional expression.

Luis-Eduardo Imbernón Cuadrado, Ángeles Manjarrés Riesco, Félix de la Paz López
