
About this Book

This volume constitutes the refereed proceedings of the Third International Conference on Optimization and Learning, OLA 2020, held in Cádiz, Spain, in February 2020.
The 23 full papers were carefully reviewed and selected from 55 submissions. The papers in the volume focus on the future challenges of optimization and learning methods, identifying and exploiting their synergies, and analyzing their applications in different fields, such as health, Industry 4.0, games, and logistics.



Optimization and Learning


Multi-Agent Reinforcement Learning Tool for Job Shop Scheduling Problems

The emergence of Industry 4.0 allows for new approaches to industrial problems such as the Job Shop Scheduling Problem. Multi-Agent Reinforcement Learning approaches have been shown to be highly promising for handling complex scheduling scenarios. In this work we propose a user-friendly Multi-Agent Reinforcement Learning tool that is more appealing to industry. It allows users to interact with the learning algorithms in such a way that all the constraints on the production floor are carefully included and the objectives can be adapted to real-world scenarios. The user can either keep the best schedule obtained by a Q-Learning algorithm or adjust it by fixing some operations in order to meet certain constraints; the tool then optimizes the modified solution, respecting the user's preferences, using two possible alternatives. These alternatives are validated on OR-Library benchmarks, and the experiments show that the modified Q-Learning algorithm obtains the best results.
Yailen Martínez Jiménez, Jessica Coto Palacio, Ann Nowé
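As an illustration of the kind of learning rule behind such a tool (a generic sketch, not the authors' implementation; the state, action and reward definitions here are hypothetical):

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.2):
    """One tabular Q-Learning step: pick an action epsilon-greedily,
    observe reward and successor state, and update Q[(state, action)]."""
    if random.random() < epsilon:
        action = random.choice(actions)          # explore
    else:                                        # exploit current estimates
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    reward = reward_fn(state, action)
    nxt = next_state_fn(state, action)
    best_next = max((Q.get((nxt, a), 0.0) for a in actions), default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return action, nxt
```

In a job-shop setting, states would encode the partial schedule, actions the dispatchable operations, and the reward would penalize makespan; the abstract's user interaction of fixing some operations amounts to restricting the action set.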

Evolving a Deep Neural Network Training Time Estimator

We present a procedure for the design of a Deep Neural Network (DNN) that estimates the per-batch execution time for training a deep neural network on GPU accelerators. The estimator is intended to be embedded in the scheduler of a shared GPU infrastructure, where it provides estimated training times for a wide range of network architectures when a user submits a training job. To this end, a very short and simple representation of a given DNN is chosen. To compensate for the limited descriptive power of this basic representation, a novel co-evolutionary approach is taken to fit the estimator. The training set for the estimator, i.e. DNNs, is evolved by an evolutionary algorithm that optimizes the accuracy of the estimator. In the process, the genetic algorithm evolves DNNs, generates Python-Keras programs and projects them onto the simple representation. The genetic operators are dynamic: they change with the estimator's accuracy in order to balance accuracy with generalization. Results show that despite the low degree of information in the representation and the simple initial design of the predictor, co-evolving the training set performs better than a near-randomly generated population of DNNs.
Frédéric Pinel, Jian-xiong Yin, Christian Hundt, Emmanuel Kieffer, Sébastien Varrette, Pascal Bouvry, Simon See

Automatic Structural Search for Multi-task Learning VALPs

The neural network research field keeps producing novel and improved models which continuously outperform their predecessors. However, a large portion of the best-performing architectures are still fully hand-engineered by experts. Recently, methods that automate the search for optimal structures have started to reach the level of state-of-the-art hand-crafted structures. Nevertheless, replacing expert knowledge requires high efficiency from the search algorithm and flexibility on the part of the model concept. This work proposes a set of structure-modifying operators designed specifically for the VALP, a recently introduced multi-network model for heterogeneous multi-task problems. These modifiers are employed in a greedy multi-objective search algorithm with a non-dominance-based acceptance criterion in order to test the viability of a structure-exploring method built on the operators. The results of the experiments carried out in this work indicate that the modifiers can indeed form part of intelligent searches over the space of VALP structures, which encourages further research in this direction.
Unai Garciarena, Alexander Mendiburu, Roberto Santana

Optimizing the Performance of an Unpredictable UAV Swarm for Intruder Detection

In this paper we present the parameterisation and optimisation of the CACOC (Chaotic Ant Colony Optimisation for Coverage) mobility model applied to Unmanned Aerial Vehicles (UAVs) in order to perform surveillance tasks. The use of unpredictable routes based on the chaotic solutions of a dynamical system, as well as pheromone trails, improves the area coverage achieved by a swarm of UAVs. We propose this new application of CACOC to detect intruders entering an area under surveillance. Having identified several parameters to be optimised with the aim of increasing the intruder detection rate, we address the optimisation of this model using a Cooperative Coevolutionary Genetic Algorithm (CCGA). Twelve case studies (120 scenarios in total) have been optimised by performing 30 independent runs (360 in total) of our algorithm. Finally, we tested our proposal on 100 unseen scenarios of each case study (1200 in total) to find out how robust it is against unexpected intruders.
Daniel H. Stolfi, Matthias R. Brust, Grégoire Danoy, Pascal Bouvry

Demystifying Batch Normalization: Analysis of Normalizing Layer Inputs in Neural Networks

Batch normalization was introduced as a novel solution to help with training fully-connected feed-forward deep neural networks. It normalizes each training batch in order to alleviate the problem caused by internal covariate shift. The original method claimed that, for optimal results, Batch Normalization must be performed before the ReLU activation during training. However, a second practice has since gained ground which stresses the importance of performing BN after the ReLU activation in order to maximize performance. In fact, in the PyTorch source code, common architectures such as VGG16, ResNet and DenseNet place the Batch Normalization layer after the ReLU activation layer. Our work is the first to demystify this debate and offer a comprehensive answer as to the proper order of Batch Normalization in neural network training. We demonstrate that for convolutional neural networks (CNNs) without skip connections, it is optimal to apply the ReLU activation before Batch Normalization, as this yields higher gradient flow. In residual networks with skip connections, the order affects neither the performance nor the gradient flow between the layers.
Dinko D. Franceschi, Jun Hyek Jang
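The two orderings under debate can be made concrete with a small NumPy sketch (an illustration of the operations only; the paper's experiments use full CNNs, and the learned scale/shift parameters are omitted here):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # normalize each feature over the batch dimension (no learned scale/shift)
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([[1.0, -2.0],
              [3.0,  4.0]])             # a toy batch of layer inputs
bn_then_relu = relu(batch_norm(x))      # original BN-before-ReLU ordering
relu_then_bn = batch_norm(relu(x))      # the BN-after-ReLU alternative
```

Note that the second ordering normalizes an input that is already non-negative, so its batch statistics, and hence the gradients flowing through the normalization, differ.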

Learning Variables Structure Using Evolutionary Algorithms to Improve Predictive Performance

Several previous works have shown how using prior knowledge within machine learning models helps to overcome the curse of dimensionality in high-dimensional settings. However, most of these works are based on simple linear models (or variations thereof) or assume that a pre-defined variable grouping structure is known in advance, which is not always possible. This paper presents a hybrid genetic algorithm and machine learning approach which aims to learn the variable grouping structure during the model estimation process, thus gaining the benefits of models based on problem-specific information without requiring any a priori information about the variable structure. This approach has been tested on four synthetic datasets and its performance has been compared against two well-known reference models (LASSO and Group-LASSO). The results show that the proposed approach, called GAGL, considerably outperformed LASSO and performed as well as Group-LASSO in high-dimensional settings, with the added benefit of learning the variable grouping structure from the data instead of requiring this information a priori, before estimating the model.
Damián Nimo, Bernabé Dorronsoro, Ignacio J. Turias, Daniel Urda

Container Demand Forecasting at Border Posts of Ports: A Hybrid SARIMA-SOM-SVR Approach

An accurate forecast of freight demand at the sanitary facilities of ports is one of the key challenges for transport policymakers to better allocate resources and improve planning operations. This paper proposes a combined hybrid approach to predict the short-term volume of containers passing through the sanitary facilities of a maritime port. The proposed methodology is a three-stage process. First, the time series is decomposed into smaller, similar regions that are easier to predict, using self-organizing map (SOM) clustering. Then, a seasonal auto-regressive integrated moving average (SARIMA) model is fitted to each cluster, obtaining predicted values and residuals for each cluster. Finally, a support vector regression (SVR) model is applied in each cluster using the clustered historical data and the predicted variables from the SARIMA step, testing different hybrid configurations. The experimental results demonstrate that the proposed model outperforms other SVR-based methodologies. The proposed model can be used as an automatic decision-making tool by seaport or airport management due to its capacity to plan resources in advance.
Juan Jesús Ruiz-Aguilar, Daniel Urda, José Antonio Moscoso-López, Javier González-Enrique, Ignacio J. Turias



Intelligent Electric Drive Management for Plug-in Hybrid Buses

Plug-in hybrid (PH) buses offer the range and operating flexibility of buses with conventional internal combustion engines. When frequently charged, they also provide the environmental and societal benefits (emissions- and noise-related) associated with electric buses. Thanks to geofencing, pure electric drive of PH buses can be assigned to specific locations via a back-office system. As a result, PH buses can not only comply with zero-emission (ZE) zones set by city authorities, but can also minimize total energy use through the selection of locations that favour electric drive from an energy perspective. Such location-controlled behaviour allows executing targeted air-quality improvement and noise-reduction strategies as well as reducing energy consumption. However, current ZE zone assignment strategies used by PH buses are static: they are based on a first-come-first-served rule and do not consider traffic conditions. In this article, we propose a novel recommendation system, based on artificial intelligence, that allows PH buses to operate efficiently in a dynamic environment, making the best use of the available resources so that emission- and noise-pollution levels are minimized.
Patricia Ruiz, Aarón Arias, Renzo Massobrio, Juan Carlos de la Torre, Marcin Seredynski, Bernabé Dorronsoro

A Heuristic Algorithm for the Set k-Cover Problem

The set k-cover problem (SkCP) is an extension of the classical set cover problem (SCP), in which each row must be covered by at least k columns while the coverage cost is minimized. The case \(k=1\) is the classical SCP. SkCP has many applications, including in computational biology. We develop a simple and effective heuristic for both the weighted and unweighted SkCP. In the weighted SkCP there is a cost associated with each column, while in the unweighted variant all columns have the same cost. The proposed heuristic first generates a lower bound and then builds a feasible solution from it. We improve the feasible solution through several procedures, including a removal local search. We consider three different values of k and test the heuristic on 45 benchmark SCP instances from the OR-Library, solving 135 instances in total. Over the solved instances, we show that our proposed heuristic obtains quality solutions.
Amir Salehipour
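For orientation, a standard greedy baseline for the weighted set k-cover (not the lower-bound-based heuristic of the paper) repeatedly picks the column with the best cost per unit of still-needed coverage:

```python
def greedy_set_k_cover(rows, columns, costs, k):
    """Greedy baseline for weighted set k-cover: each row in `rows` must be
    covered by at least k chosen columns; `columns` maps a column id to the
    set of rows it covers, `costs` maps a column id to its cost."""
    need = {r: k for r in rows}              # remaining coverage per row
    available = dict(columns)                # columns not yet chosen
    chosen, total = [], 0.0
    while any(v > 0 for v in need.values()):
        def gain(c):                         # rows this column still helps
            return sum(1 for r in available[c] if need[r] > 0)
        c = min((c for c in available if gain(c) > 0),
                key=lambda c: costs[c] / gain(c), default=None)
        if c is None:
            raise ValueError("instance infeasible for this k")
        for r in available[c]:
            need[r] = max(0, need[r] - 1)
        total += costs[c]
        chosen.append(c)
        del available[c]                     # a column can be used once
    return chosen, total
```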



Computational Intelligence for Evaluating the Air Quality in the Center of Madrid, Spain

This article presents the application of data analysis and computational intelligence techniques for evaluating the air quality in the center of Madrid, Spain. Polynomial regression and deep learning methods are applied to analyze the time series of nitrogen dioxide concentration, in order to evaluate the effectiveness of Madrid Central, a set of road traffic limitation measures applied in downtown Madrid. According to the reported results, Madrid Central was able to significantly reduce the nitrogen dioxide concentration, thus effectively improving air quality.
Jamal Toutouh, Irene Lebrusán, Sergio Nesmachnow
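The polynomial-regression part of such an analysis can be sketched in a few lines (with synthetic NO2 values standing in for the Madrid measurements, which are not reproduced here):

```python
import numpy as np

# synthetic daily NO2 concentrations (µg/m³) with a downward trend,
# standing in for measurements after a traffic-limitation measure
rng = np.random.default_rng(1)
days = np.arange(30)
no2 = 50.0 - 0.5 * days + rng.normal(0.0, 2.0, days.size)

coeffs = np.polyfit(days, no2, deg=2)   # fit a quadratic trend
trend = np.polyval(coeffs, days)        # smoothed concentration estimate
```

Comparing such fitted trends before and after the intervention is one simple way to quantify a reduction in concentration.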

Intelligent System for the Reduction of Injuries in Archery

Archery is one of those sports in which athletes repeat the same body postures over and over again, which means that tiny bad habits can cause serious long-term injuries. Consequently, learning a correct shooting technique is very important for both beginner archers and elite athletes. In this work, we present a system that uses machine learning to automatically detect anomalous postures and return a shooting score, giving the archer feedback on his or her own body configuration. We use a neural network to analyze images of archers during the shot and return the positions of their body joints. With this information, the system can detect wrong postures that might lead to injuries. This feedback is very important to the archer when learning the shooting technique. In addition, the system is not intrusive, so the archer can shoot arrows freely. Preliminary results show the usefulness of the system, which is able to detect 4 spine misalignments and 4 raised elbows analyzing only 9 shots.
Christian Cintrano, Javier Ferrer, Enrique Alba

Learning Patterns for Complex Event Detection in Robot Sensor Data

We present an approach for learning patterns for Complex Event Processing (CEP) in robot sensor data. While the robot executes a certain task, sensor data is recorded. The recordings are classified in terms of events or outcomes that characterize the task. These classified recordings are then used to learn rules that describe the events in a simple, domain-specific language, in a human-readable and interpretable way.
Bernhard G. Humm, Marco Hutter



Comparison Between Stochastic Gradient Descent and VLE Metaheuristic for Optimizing Matrix Factorization

Matrix factorization is used by recommender systems in collaborative filtering to build prediction models based on a pair of matrices. These models are usually generated by the stochastic gradient descent algorithm, which learns the model by minimizing the prediction error. The obtained models are then validated according to an error criterion by predicting on test data. Since model generation can be tackled as an optimization problem with a huge set of possible solutions, we propose metaheuristics as alternative solving methods for matrix factorization. In this work we applied a novel metaheuristic for continuous optimization inspired by the vapour-liquid equilibrium. We considered a particular case where matrix factorization is applied: the student performance prediction problem. The obtained results thoroughly surpassed the accuracy provided by stochastic gradient descent.
Juan A. Gómez-Pulido, Enrique Cortés-Toro, Arturo Durán-Domínguez, José M. Lanza-Gutiérrez, Broderick Crawford, Ricardo Soto
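The stochastic gradient descent baseline that the metaheuristic is compared against can be sketched as follows (a generic matrix factorization trainer; the rank, learning rate and regularization values are illustrative, not those of the paper):

```python
import numpy as np

def mf_sgd(R, mask, k=2, lr=0.01, reg=0.02, epochs=1000, seed=0):
    """Factorize R ≈ P @ Q.T over the observed entries (mask != 0)
    by stochastic gradient descent on the squared prediction error."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    P = rng.normal(scale=0.5, size=(n, k))   # row (e.g. student) factors
    Q = rng.normal(scale=0.5, size=(m, k))   # column (e.g. task) factors
    observed = list(zip(*np.nonzero(mask)))
    for _ in range(epochs):
        for i, j in observed:
            err = R[i, j] - P[i] @ Q[j]
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    return P, Q
```

A metaheuristic such as the vapour-liquid-equilibrium method would instead treat the entries of P and Q as one continuous search vector and optimize the same error criterion.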

A Capacity-Enhanced Local Search for the 5G Cell Switch-off Problem

Network densification through the deployment of many small base stations (SBSs) is a key enabling technology for fifth generation (5G) cellular networks, yet it is clearly in conflict with one of the target design requirements of 5G systems: a 90% reduction in power consumption. To address this issue, switching off a number of SBSs in periods of low traffic demand has been standardized as a recognized strategy to save energy. But this poses a challenging NP-complete optimization problem to system designers, who also have to provide users with maximum capacity. This is a multi-objective optimization problem that has been tackled with multi-objective evolutionary algorithms (MOEAs). In particular, a problem-specific search operator incorporating problem-domain information has been devised in order to engineer hybrid MOEAs. It is based on promoting solutions that activate SBSs which may serve users with higher data rates, while deactivating those not serving any user at all; that is, it tries to improve the two problem objectives simultaneously. The resulting hybrid algorithms have been shown to reach better approximations of the Pareto fronts than the canonical algorithms over a set of nine scenarios with increasing diversity in SBSs and users.
Francisco Luna, Pablo H. Zapata-Cano, Ángel Palomares-Caballero, Juan F. Valenzuela-Valdés

Clustering a 2d Pareto Front: P-center Problems Are Solvable in Polynomial Time

Given the many non-dominated solutions of bi-objective optimization problems, this paper aims to cluster the Pareto front using Euclidean distances. The p-center problems, in both their discrete and continuous versions, become solvable with a dynamic programming algorithm. Given N points, clustering requires \(O(KN\log N)\) (resp. \(O(KN\log ^2 N)\)) time and O(N) memory space for the continuous (resp. discrete) K-center problem for \(K\geqslant 3\), and \(O(N\log N)\) time for the corresponding 2-center problems. Furthermore, parallel implementations allow quasi-linear speed-up for practical applications.
Nicolas Dupin, Frank Nielsen, El-Ghazali Talbi
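The key structural property exploited is that on a 2D Pareto front (first coordinate increasing, second decreasing), optimal clusters are contiguous intervals of the sorted front, and the farthest pair inside an interval is its two endpoints. For the continuous 2-center case this already yields a simple scan over split points (a sketch of the idea only, not the paper's general K-center dynamic program):

```python
import math

def two_center_pareto(points):
    """Continuous 2-center on a 2D Pareto front: split the sorted front
    into two intervals; each interval's covering radius is half the
    distance between its endpoint pair (the farthest pair in a monotone
    point set). Returns (optimal radius, last index of first cluster)."""
    pts = sorted(points)                 # x increasing, y decreasing
    best = None
    for t in range(len(pts) - 1):
        radius = max(math.dist(pts[0], pts[t]),
                     math.dist(pts[t + 1], pts[-1])) / 2.0
        if best is None or radius < best[0]:
            best = (radius, t)
    return best
```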

Security and Games


A Distributed Digital Object Architecture to Support Secure IoT Ecosystems

Security is one of the most challenging issues facing the Internet of Things. One of the most common architectures for IoT ecosystems has three layers (Acquisition, Network and Application) and provides security to the different elements of the ecosystem through specific technologies or techniques available in the different layers. However, deploying security technology at each layer complicates the management and maintainability of the security credentials, increasing the risk of information leaks, requiring greater manual intervention, and making it harder to keep sensitive data consistent. In this paper we propose a new architecture model in which a fourth, security layer is added, containing all the security technology traditionally delegated to the other layers and removing it from them. This new model is supported by the widespread use of Digital Objects, covering all aspects including physical components, processes and sensed data.
Angel Ruiz-Zafra, Roberto Magán-Carrión

Checking the Difficulty of Evolutionary-Generated Maps in a N-Body Inspired Mobile Game

This paper presents the design and development of an Android application (using Unreal Engine 4) called GravityVolve, a two-dimensional game based on the N-Body problem previously presented by some of the authors. In order to complete a map, the player has to push the particle from its initial position until it reaches the circumference's position. Thus, the maps of GravityVolve are made up of a particle, a circumference and a set of planets. These maps are procedurally generated by an evolutionary algorithm and assigned a difficulty level ('Easy', 'Medium', 'Hard'). When a player completes a map, he/she is taken to a selection screen where he/she chooses the difficulty level he/she considers appropriate. The objectives of this study are thus twofold: first, to gather a considerable number of votes from players on their perception of the difficulty of every map; and second, to compare the players' perceived difficulty with the difficulty assigned by the algorithm, in order to check their correlation and draw conclusions about the quality of the proposed method.
Carlos López-Rodríguez, Antonio J. Fernández-Leiva, Raúl Lara-Cabrera, Antonio M. Mora, Pablo García-Sánchez

A Framework to Create Conversational Agents for the Development of Video Games by End-Users

Video game development is still a difficult task today, requiring strong programming skills and knowledge of multiple technologies. To tackle this problem, visual tools such as Unity or Unreal have appeared. These tools are effective and easy to use, but they are not entirely aimed at end-users with little knowledge of software engineering. Currently, there is a resurgence in the use of chatbots thanks to recent advances in fields such as artificial intelligence and language processing. However, there is no evidence of the use of conversational agents for developing video games with domain-specific languages (DSLs). This work states two hypotheses: (i) conversational agents based on natural language can be used to work with a DSL for the creation of video games; (ii) these conversational agents can be created automatically by extracting the concepts, properties and relationships from the DSL's abstract syntax. To demonstrate these hypotheses, we propose a framework to work with DSLs through a chatbot, detail its implementation, and present a systematic method to automate its construction. This approach could also be suitable for other disciplines besides video game development.
Rubén Baena-Perez, Iván Ruiz-Rube, Juan Manuel Dodero, Miguel Angel Bolivar



Breast Cancer Diagnostic Tool Using Deep Feedforward Neural Network and Mother Tree Optimization

Automatic diagnostic tools have been extensively implemented in the medical diagnosis of different diseases. In this regard, breast cancer diagnosis is particularly important, as breast cancer has become one of the most dangerous diseases for women. Regular and preemptive screening for breast cancer can help initiate treatment earlier and more effectively, so hospitals and clinics need a robust diagnostic tool that provides reliable results. The accuracy of a diagnostic tool is an important factor to take into consideration when designing a new system. This has motivated us to develop an automatic diagnostic system combining two methodologies: Deep Feedforward Neural Networks (DFNNs) and swarm intelligence algorithms. The swarm intelligence techniques are based on Particle Swarm Optimization (PSO) as well as the Mother Tree Optimization (MTO) algorithm we proposed in the past. In order to assess the performance, in terms of accuracy, of the proposed system, we conducted several experiments using the Wisconsin Breast Cancer Dataset (WBCD). The results show that the DFNN combined with a variant of our MTO attains a high classification performance, reaching 100% precision.
Wael Korani, Malek Mouhoub

A Mathematical Model for Three-Dimensional Open Dimension Packing Problem with Product Stability Constraints

This paper presents a logistical study using a mathematical model based on the Three-Dimensional Open Dimension Rectangular Packing Problem (3D-ODRPP) to optimize the arrangement of products in a package under the practical constraint of product stability. The proposed model seeks the minimal-volume rectangular bounding box for a set of rectangular products. Our model deals with orthogonal rotation, static stability and overhang constraints of products, three of the most important real-world conditions that ensure the feasibility of a solution. Literature test instances demonstrate that the proposed method can find the feasible global optimum of a 3D-ODRPP. Experimental results show the improvement in solution quality, in terms of box volume and packaging stability, compared to existing models in the literature.
Cong-Tan-Trinh Truong, Lionel Amodeo, Farouk Yalaoui

Designing a Flexible Evaluation of Container Loading Using Physics Simulation

In this work, an optimization method for the 3D container loading problem with multiple constraints is proposed. The method consists of a genetic algorithm that generates an arrangement of cargoes and a fitness evaluation using physics simulation. The fitness function considers not only the maximization of container density or value but also several constraints, such as the stability and fragility of the cargoes during transportation. We employ a container-shaking simulation to include the effect of these constraints in the fitness evaluation. We verified that the proposed method successfully provides the optimal cargo arrangement for a small-scale problem with 10 cargoes.
Shuhei Nishiyama, Chonho Lee, Tomohiro Mashita

Preventing Overloading Incidents on Smart Grids: A Multiobjective Combinatorial Optimization Approach

Cable overloading is one of the most critical disturbances that may occur in smart grids, as it can damage the distribution power lines. Therefore, the circuits are protected by fuses so that an overload trips the fuse, opening the circuit and stopping the flow and heating. However, sustained overloads, even when below the safety limits, can also damage the wires. To prevent overloads, smart grid operators can switch the fuses on or off to protect the circuits, or remotely curtail over-producing/over-consuming users. Nevertheless, making the most appropriate decision is a daunting task, notably due to contractual and technical obligations. In this paper, we define and formulate the overloading prevention problem as a Multiobjective Mixed Integer Quadratically Constrained Program. We also suggest a solution method using a combinatorial optimization approach with a state-of-the-art exact solver. We evaluate this approach on this real-world problem together with Creos Luxembourg S.A., the leading grid operator in Luxembourg, and show that our method can suggest optimal countermeasures to operators facing potential overloading incidents.
Nikolaos Antoniadis, Maxime Cordy, Angelo Sifaleras, Yves Le Traon

Analysis of Data Generated by an Automated Platform for Aggregation of Distributed Energy Resources

The emergence of Distributed Energy Resources (DER) in the power system gives rise to new scenarios in which domestic consumers (end-users) participate, in aggregated form, in energy markets, acting as prosumers. Among the different possible scenarios, this work focuses on analyzing the results of a case study comprising 40 homes equipped with energy generation units and storage ranging from Li-Ion batteries, HESS and second-life vehicle batteries to hydrogen storage. Software tools have been developed and deployed in the pilot to allow the domestic prosumers to participate in wholesale energy markets, so that operations are aggregated (all DERs acting as a single instance), optimal (optimizing profit and reducing penalties) and smartly managed (helping operators in the decision-making process). Participating in energy markets is not trivial, due to the different technical requirements every participant must comply with. Among the existing markets, this paper focuses on participation in the day-ahead market and on grid operation during the following day to reduce penalties and comply with the committed energy profile. The paper presents an analysis of the data generated during the pilot operation deployed in a real environment; this analysis is developed in Sect. 4 (Results), from which important conclusions are drawn. Netfficient is a project funded by the European Union's Horizon 2020 research and innovation program, with the main objective of deploying and testing heterogeneous storage at different levels of the grid on the German island of Borkum.
Juan Aguilar, Alicia Arce

