2017 | Book

Learning and Intelligent Optimization

11th International Conference, LION 11, Nizhny Novgorod, Russia, June 19-21, 2017, Revised Selected Papers

About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 11th International Conference on Learning and Intelligent Optimization, LION 11, held in Nizhny Novgorod, Russia, in June 2017.

The 20 full papers (including one GENOPT paper) and 15 short papers presented were carefully reviewed and selected from 73 submissions. The papers explore advanced research developments in such interconnected fields as mathematical programming, global optimization, machine learning, and artificial intelligence. Special focus is given to advanced ideas, technologies, methods, and applications in optimization and machine learning.

Table of Contents

Frontmatter

Long Papers

Frontmatter
An Importance Sampling Approach to the Estimation of Algorithm Performance in Automated Algorithm Design

In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant general approach involves executing each algorithm instance multiple times and computing independent estimates based on the performance observations made for each of them. A single execution might be expensive, making this a time-consuming process. We show how an algorithm in general can be viewed as a distribution over executions, and its performance as the expectation of some measure of desirability of an execution over this distribution. Subsequently, we describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a proof of concept and validate it experimentally.

Steven Adriaensen, Filip Moons, Ann Nowé
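
As a rough illustration of the importance-sampling idea sketched in the abstract above, the following Python fragment estimates the expected performance of a target algorithm from executions sampled by a related source algorithm; the execution-probability functions and the desirability measure are hypothetical placeholders, not the authors' implementation.

import math

def is_performance_estimate(executions, desirability, log_p_source, log_p_target):
    """Estimate E_target[desirability] from executions drawn under the source algorithm.

    executions   -- execution traces sampled by running the source algorithm
    desirability -- maps an execution to a scalar performance measure
    log_p_source -- log-probability of an execution under the source algorithm
    log_p_target -- log-probability of the same execution under the target algorithm
    """
    # Likelihood ratios re-weight observations made under the source distribution.
    weights = [math.exp(log_p_target(e) - log_p_source(e)) for e in executions]
    # Self-normalised importance-sampling estimate of the expectation.
    return sum(w * desirability(e) for w, e in zip(weights, executions)) / sum(weights)
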
Test Problems for Parallel Algorithms of Constrained Global Optimization

This work considers the problem of building a class of test problems for global optimization algorithms. The authors present an approach to building multidimensional multiextremal problems which can clearly demonstrate the nature of the best current approximation, regardless of the problem's dimensionality. As part of this approach, the objective function and constraints arise in the process of solving an auxiliary approximation problem. The proposed generator allows the problem to be simplified or complicated, which results in changes to its dimensionality and changes in the feasible domain. The generator was tested by building and solving 100 problems using a parallel global optimization index algorithm. The algorithm's results are presented for different numbers of computing cores, which clearly demonstrate its acceleration and non-redundancy.

Konstantin Barkalov, Roman Strongin
Automatic Configuration of Kernel-Based Clustering: An Optimization Approach

This paper generalizes a method originally developed by the authors to perform data-driven localization of leakages in urban Water Distribution Networks. The method is based on clustering to perform exploratory analysis and a pool of Support Vector Machines to process online sensor readings. The performance depends on certain hyperparameters, which have been considered as decision variables in a sequential model-based optimization process. The objective function is related to clustering performance, computed through an external validity index defined according to the leakage localization goal. Thus, as usual in hyperparameter tuning of machine learning algorithms, the objective function is a black box. In this paper it is shown that a Bayesian framework offers not only good performance but also the flexibility to include the automatic configuration of the algorithm in the optimization loop. Both Gaussian Processes and Random Forests have been considered to fit the surrogate model of the objective function, while results from a simple grid search have been considered as a baseline.

Antonio Candelieri, Ilaria Giordani, Francesco Archetti
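
For context, a generic sequential model-based optimization loop of the kind referred to above can be sketched as follows: a minimal illustration with a Gaussian Process surrogate and a lower-confidence-bound criterion, assuming numpy and scikit-learn; the authors' actual clustering/SVM pipeline is not reproduced here.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def smbo_minimise(objective, candidates, n_init=5, n_iter=20, kappa=2.0, seed=0):
    """Minimise a black-box objective over a finite 2-D array of candidate hyperparameter vectors."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n_init, replace=False)
    X = candidates[idx]
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                       # refit the surrogate model
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmin(mu - kappa * sigma)]  # lower confidence bound
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]
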
Solution of the Convergecast Scheduling Problem on a Square Unit Grid When the Transmission Range is 2

In this paper, a conflict-free data aggregation problem, known as the Convergecast Scheduling Problem, is considered. It is NP-hard in an arbitrary wireless network. The paper deals with a special case of the problem in which the communication graph is a square grid with unit cells and the transmission range is 2 (in the $L_1$ metric). For the case under consideration, we earlier proposed a polynomial-time algorithm with a guaranteed accuracy bound. In this paper we show that the proposed algorithm constructs an optimal solution to the problem.

Adil Erzin
A GRASP for the Minimum Cost SAT Problem

A substantial connection exists between supervised learning from data represented in logic form and the solution of the Minimum Cost Satisfiability Problem (MinCostSAT). Methods based on this connection have been developed and successfully applied in many contexts. The deployment of such methods to large-scale learning problems is often hindered by the computational challenge of solving MinCostSAT, a problem well known to be NP-complete. In this paper, we propose a GRASP-based metaheuristic designed for this problem, which proves successful in leveraging the very distinctive structure of the MinCostSAT problems arising in supervised learning. The algorithm is equipped with an original stopping criterion based on probabilistic assumptions, which proves very effective in deciding when the search space has been explored enough. Although the proposed solver can handle MinCostSAT problems in general form, in this paper we limit our analysis to instances created from artificial supervised learning problems, and show that our method outperforms well-established general-purpose solvers.

Giovanni Felici, Daniele Ferone, Paola Festa, Antonio Napoletano, Tommaso Pastore
A New Local Search for the p-Center Problem Based on the Critical Vertex Concept

For the p-center problem, we propose a new smart local search based on the critical vertex concept and embed it in a GRASP framework. Experimental results attest to the robustness of the proposed search procedure and confirm that, on benchmark instances, it converges to optimal or near-optimal solutions faster than the best known state-of-the-art local search.

Daniele Ferone, Paola Festa, Antonio Napoletano, Mauricio G. C. Resende
An Iterated Local Search Framework with Adaptive Operator Selection for Nurse Rostering

Considerable attention has been paid to selective hyper-heuristic frameworks for addressing computationally hard scheduling problems. By using selective hyper-heuristics, we can derive benefits from the strength of low-level heuristics and their components at different stages of the heuristic search. In this paper, a simple, general and effective selective hyper-heuristic is presented. We introduce an iterated local search based hyper-heuristic framework that incorporates an adaptive operator selection scheme to learn throughout the search process. The iterative approach employs an action selection model to decide the perturbation strategy to apply in each step and a credit assignment module to score its performance. The designed framework allows us to employ any action selection model and credit assignment mechanism from the literature. Empirical results and an analysis of six different action selection models against state-of-the-art approaches, across 39 problem instances, highlight the significant potential of the proposed selection hyper-heuristics. Further analysis of the adaptive behavior of the model suggests that two of the six models are able to learn the best performing perturbation strategy, resulting in significant performance gains.

Angeliki Gretsista, Edmund K. Burke
Learning a Reactive Restart Strategy to Improve Stochastic Search

Building on the recent success of bet-and-run approaches for restarted local search solvers, we introduce the idea of learning online adaptive restart strategies. Universal restart strategies deploy a fixed schedule that runs with utter disregard of the characteristics that each individual run exhibits. Whether a run looks promising or abysmal, it gets run exactly until the predetermined limit is reached. Bet-and-run strategies are at least slightly less ignorant as they decide which trial to use for a long run based on the performance achieved so far. We introduce the idea of learning fully adaptive restart strategies for black-box solvers, whereby the learning is performed by a parameter tuner. Numerical results show that adaptive strategies can be learned effectively and that these significantly outperform bet-and-run strategies.

Serdar Kadioglu, Meinolf Sellmann, Markus Wagner
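
To make the bet-and-run baseline mentioned above concrete, a minimal sketch follows; the `solver` object with `start`/`run` methods and a `best_cost` field is a hypothetical interface for a resumable stochastic local search, and the learned adaptive strategies of the paper are not shown.

def bet_and_run(solver, total_budget, k=10, phase1_fraction=0.1):
    """Run k short trials, then bet the remaining budget on the most promising one."""
    phase1 = int(total_budget * phase1_fraction) // k   # budget per short trial
    trials = []
    for seed in range(k):
        state = solver.start(seed=seed)                 # independent restart
        state = solver.run(state, budget=phase1)
        trials.append(state)
    best = min(trials, key=lambda s: s.best_cost)       # most promising trial so far
    best = solver.run(best, budget=total_budget - k * phase1)
    return best.best_cost
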
Efficient Adaptive Implementation of the Serial Schedule Generation Scheme Using Preprocessing and Bloom Filters

The majority of scheduling metaheuristics use an indirect representation of solutions as a way to efficiently explore the search space. A crucial part of such metaheuristics is therefore the “schedule generation scheme”, a procedure translating the indirect solution representation into a schedule. The schedule generation scheme is invoked every time a new candidate solution needs to be evaluated. Being relatively slow, it eats up most of the running time of the metaheuristic, and thus its speed plays a significant role in the metaheuristic's performance. Despite its importance, little attention has been paid in the literature to efficient implementation of schedule generation schemes. We give a detailed description of the serial schedule generation scheme, including new improvements, and propose a new approach to speeding it up using Bloom filters. The results are further strengthened by automated control of parameters. Finally, we employ online algorithm selection to dynamically choose which of the two implementations to use. This hybrid approach significantly outperforms the conventional implementation on a wide range of instances.

Daniel Karapetyan, Alexei Vernitski
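
As background, a Bloom filter of the kind mentioned in the abstract above can be sketched in a few lines of Python; how it is actually wired into the serial schedule generation scheme is specific to the paper and not reproduced here.

import hashlib

class BloomFilter:
    """Compact set-membership test with false positives but no false negatives."""

    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted hashes of the item.
        for salt in range(self.n_hashes):
            digest = hashlib.blake2b(f"{salt}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(digest, "little") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))
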
Interior Point and Newton Methods in Solving High Dimensional Flow Distribution Problems for Pipe Networks

In this paper, the optimal flow distribution problem in a pipe network is considered. The investigated problem is a convex sparse optimization problem with linear equality and inequality constraints. The Newton method is applied to the problem with equality constraints only and yields an approximate solution, which may not satisfy the inequality constraints. The Dikin interior point method then starts from this approximate solution and finds an optimal one. For high-dimensional problems, sparse matrix methods, namely the conjugate gradient method and the Cholesky method with nested dissection, are applied. Since the Dikin interior point method works much slower than the Newton method on large matrices, this approach allows us to obtain a good starting point for it using the comparatively fast Newton method. Results of numerical experiments are presented.

Oleg O. Khamisov, Valery A. Stennikov
Hierarchical Clustering and Multilevel Refinement for the Bike-Sharing Station Planning Problem

We investigate the Bike-Sharing Station Planning Problem (BSSPP). A bike-sharing system consists of a set of rental stations, each with a certain number of parking slots, distributed over a geographical region. Customers can rent available bikes at any station and return them at any other station with free parking slots. The initial decision process of where to build stations and of which size, or how to extend an existing system with new stations and/or changed station configurations, is crucial, as it essentially determines the satisfiable customer demand, the costs, and the rebalancing effort arising from the need to regularly move bikes from stations tending to run full to stations tending to run empty. We consider as objective the maximization of the satisfied customer demand under budget constraints for fixed and variable costs, including the costs for rebalancing. As bike-sharing stations are usually deployed within larger cities and the potential station locations are manifold, practical instances of the underlying optimization problem are rather large, which makes a manual decision process hardly manageable and computational optimization very challenging. We therefore propose to state the BSSPP on the basis of a hierarchical clustering of the underlying geographical cells with potential customers and possible stations. In this way the estimated demand can be expressed more compactly by a relatively sparse weighted graph instead of a complete matrix with mostly small non-zero entries. For this advanced problem formulation we describe an efficient linear programming approach for evaluating candidate solutions and, for solving the problem, a first multilevel refinement heuristic based on mixed integer linear programming. Our experiments show that it is possible to approach instances with up to 2000 geographical cells in reasonable computation times.

Christian Kloimüllner, Günther R. Raidl
Decomposition Descent Method for Limit Optimization Problems

We consider a general limit optimization problem whose goal function need not be smooth and is known only through approximation sequences rather than exact values. We suggest applying a two-level approach in which approximate solutions of a sequence of mixed variational inequality problems are inserted into the iterative scheme of a selective decomposition descent method. Its convergence is attained under coercivity-type conditions.

Igor Konnov
RAMBO: Resource-Aware Model-Based Optimization with Scheduling for Heterogeneous Runtimes and a Comparison with Asynchronous Model-Based Optimization

Sequential model-based optimization is a popular technique for the global optimization of expensive black-box functions. It uses a regression model to approximate the objective function and iteratively proposes new interesting points. Deviating from the original formulation, it is often indispensable to apply parallelization to speed up the computation. This is usually achieved by evaluating as many points per iteration as there are workers available. However, if the runtimes of the objective function are heterogeneous, resources might be wasted by idle workers. Our new knapsack-based scheduling approach aims at increasing the effectiveness of parallel optimization through efficient resource utilization. Using runtime predictions of point evaluations derived from an extra regression model, we efficiently map evaluations to workers and reduce idling. We compare our approach to five established parallelization strategies on a set of continuous functions with heterogeneous runtimes. Our benchmark covers comparisons of synchronous and asynchronous model-based approaches and investigates scalability.

Helena Kotthaus, Jakob Richter, Andreas Lang, Janek Thomas, Bernd Bischl, Peter Marwedel, Jörg Rahnenführer, Michel Lang
A New Constructive Heuristic for the No-Wait Flowshop Scheduling Problem

Constructive heuristics are of great interest as they find solutions of relatively good quality in a very short time. Such solutions may be used, for example, as initial solutions for metaheuristics. In this work, we propose a new efficient constructive heuristic for the No-Wait Flowshop Scheduling Problem. The proposed heuristic is based on observations of the structure of the best solutions of small instances, as well as on analyses of the principles behind efficient constructive heuristics from the literature. Experiments have been conducted, and the results show the efficiency of the proposed heuristic compared to those from the literature.

Lucien Mousin, Marie-Eléonore Kessaci, Clarisse Dhaenens
Sharp Penalty Mappings for Variational Inequality Problems

First, this paper introduces the notion of a sharp penalty mapping, which can replace the more common exact penalty function for convex feasibility problems. Second, it uses this mapping for the solution of variational inequalities with monotone operators, or pseudo-variational inequalities with oriented operators. Appropriately scaled, the sharp penalty mapping can be used as an exact penalty in variational inequalities to turn them into fixed-point problems, which can then be approximately solved by a simple iteration method.

Evgeni Nurminski
A Nonconvex Optimization Approach to Quadratic Bilevel Problems

This paper addresses one of the classes of bilevel optimization problems in their optimistic statement. The reduction of the bilevel problem to a series of nonconvex mathematical optimization problems, together with the specialized Global Search Theory, is used to develop methods of local and global search for finding optimistic solutions. Illustrative examples show that the proposed approach is promising and performs well.

Andrei Orlov
An Experimental Study of Adaptive Capping in irace

The irace package is widely used for automatic algorithm configuration and implements various iterated racing procedures. The original irace was designed for the optimisation of the solution quality reached within a given running time, a situation frequently arising when configuring algorithms such as stochastic local search procedures. However, when applied to configuration scenarios that involve minimising the running time of a given target algorithm, irace falls short of reaching the performance of other general-purpose configuration approaches, since it tends to spend too much time evaluating poor configurations. In this article, we improve the efficacy of irace in running time minimisation by integrating an adaptive capping mechanism into irace, inspired by the one used by ParamILS. We demonstrate that the resulting irace_cap reaches performance levels competitive with those of state-of-the-art algorithm configurators that have been designed to perform well on running time minimisation scenarios. We also investigate the behaviour of irace_cap in detail and contrast different ways of integrating adaptive capping.

Leslie Pérez Cáceres, Manuel López-Ibáñez, Holger Hoos, Thomas Stützle
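
A simplified illustration of the capping idea (in the spirit of ParamILS-style adaptive capping, not the actual irace_cap implementation) is given below; `run_target` is a hypothetical wrapper that runs a configuration on an instance with a time cutoff and returns the time consumed.

def evaluate_with_capping(config, instances, run_target, incumbent_total, slack=1.3):
    """Evaluate a challenger configuration, aborting once it cannot beat the incumbent."""
    bound = incumbent_total * slack        # total running time the challenger may use
    total = 0.0
    for inst in instances:
        remaining = bound - total
        if remaining <= 0:
            return float("inf")            # capped: challenger is rejected early
        spent = run_target(config, inst, cutoff=remaining)
        total += min(spent, remaining)
    return total
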
Duality Gap Analysis of Weak Relaxed Greedy Algorithms

Many problems in machine learning can be stated as convex optimization problems with a loss function as the objective. The paper examines two weak relaxed greedy algorithms for finding solutions of convex optimization problems over convex hulls of atomic sets. Such problems arise as the natural convex relaxations of cardinality-type constrained problems, many of which are well known to be NP-hard. Both algorithms utilize one atom from a dictionary per iteration and therefore guarantee the designed sparsity of the approximate solutions. The algorithms employ the so-called ‘gradient greedy step’, which maximizes a linear functional using gradient information from the element obtained in the previous iteration. Both algorithms are ‘weak’ in the sense that they solve the linear subproblems at the gradient greedy step only approximately. Moreover, the second algorithm employs an approximate solution at the line-search step. Following the ideas of [5], we put forward the notion of the duality gap, whose values are computed at the gradient greedy step of the algorithms on each iteration and therefore serve as inherent upper bounds on the primal errors, i.e., the differences between the objective values at the current and optimal points at each step. We obtain dual convergence estimates for the weak relaxed greedy algorithms.

Sergei P. Sidorov, Sergei V. Mironov
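
For reference, the duality gap the abstract refers to can be written in the standard form used for greedy and Frank–Wolfe-type methods (generic notation, not necessarily the paper's): for a convex, differentiable objective $f$ minimised over $D = \mathrm{conv}(A)$, the convex hull of an atomic set $A$, the gap at a point $x$ is
\[
  g(x) \;=\; \max_{s \in D} \langle \nabla f(x),\, x - s \rangle \;=\; \max_{a \in A} \langle \nabla f(x),\, x - a \rangle ,
\]
and by convexity, for any minimiser $x^* \in D$,
\[
  f(x) - f(x^*) \;\le\; \langle \nabla f(x),\, x - x^* \rangle \;\le\; g(x),
\]
so the gap computed at the gradient greedy step bounds the primal error from above.
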
Controlling Some Statistical Properties of Business Rules Programs

Business Rules programs encode decision-making processes using “if-then” constructs in a way that is easy for non-programmers to manipulate. A common example is the process of automatic validation of a loan request for a bank. The decision process is defined by bank managers relying on the bank strategy and their own experience. Bank-side, such processes are often required to meet goals of a statistical nature, such as having at most some given percentage of rejected loans, or having the distribution of requests that are accepted, rejected, and flagged for examination by a bank manager be as uniform as possible. We propose a mathematical programming-based formulation for the cases where the goals involve constraining or comparing values from the quantized output distribution. We then examine a simulation for the specific goals of (1) a maximum percentage for a given output interval and (2) an almost uniform distribution of the quantized output. The proposed methodology rests on solving mathematical programs encoding a statistically supervised machine learning process where known labels are an encoding of the required distribution.

Olivier Wang, Leo Liberti

GENOPT Paper

Frontmatter
Hybridization and Discretization Techniques to Speed Up Genetic Algorithm and Solve GENOPT Problems

One of the challenges in global optimization is to use heuristic techniques to improve the behaviour of algorithms on a wide spectrum of problems. With the aim of reducing the probabilistic component and performing a broader and more orderly search of the feasible domain, this paper presents how discretization techniques can significantly enhance the behaviour of a genetic algorithm (GA). Moreover, hybridizing the GA with local searches has been shown to improve convergence toward better values of the objective function. The resulting algorithm's performance has been evaluated during the Generalization-based Contest in Global Optimization (GENOPT 2017), on a test suite of 1800 multidimensional problems.

Francesco Romito

Short Papers

Frontmatter
Identification of Discontinuous Thermal Conductivity Coefficient Using Fast Automatic Differentiation

The problem of determining the thermal conductivity coefficient that depends on the temperature is studied. The consideration is based on the Dirichlet boundary value problem for the one-dimensional unsteady-state heat equation. The root-mean-square deviation of the temperature distribution field and the heat flux from the empirical data on the left boundary of the domain is used as the objective functional. An algorithm for the numerical solution of the problem based on the modern approach of Fast Automatic Differentiation is proposed. The case of a discontinuous thermal conductivity coefficient is considered. Examples of solving the problem are discussed.

Alla F. Albu, Yury G. Evtushenko, Vladimir I. Zubov
Comparing Two Approaches for Solving Constrained Global Optimization Problems

In the present study, a method for solving multiextremal problems with non-convex constraints without the use of penalty or barrier functions is considered. This index method features separate accounting for each constraint. The check of constraint fulfillment, performed sequentially at every iteration point, is terminated as soon as the first constraint violation occurs. The results of a numerical comparison of the index method with the penalty function method are presented. The comparison has been carried out by numerically solving several hundred randomly generated multidimensional multiextremal problems with non-convex constraints.

Konstantin Barkalov, Ilya Lebedev
Towards a Universal Modeller of Chaotic Systems

This paper proposes a Machine Learning (ML) algorithm and examines its dynamic properties when trained on chaotic time series. It is demonstrated that the output of the ML system is itself more chaotic if it is trained on chaotic input than if it is trained on non-chaotic input. The proposed ML system builds on the Parameter-Less Self-Organising Map 2 (PLSOM2) and introduces new developments.

Erik Berglund
An Approach for Generating Test Problems of Constrained Global Optimization

In the present paper, a novel approach to constructing test global optimization problems with non-convex constraints is considered. The proposed approach features the capability to construct sets of such problems for carrying out multiple computational experiments in order to obtain a reliable evaluation of the efficiency of the optimization algorithms. When generating the test problems, the necessary number of constraints and the desired fraction of the feasible domain relative to the whole global search domain can be specified. The locations of the global minimizers in the generated problems are known a priori, which essentially simplifies the evaluation of the results of the computational experiments. A demonstration of the developed approach applied to the well-known index method for solving complex multiextremal optimization problems with non-convex constraints is presented.

Victor Gergel
Global Optimization Using Numerical Approximations of Derivatives

This paper presents an efficient method for solving global optimization problems. Unlike previous methods, the new method was developed based on numerical estimates of derivative values. The effect of using such numerical estimates was studied, and the results of computational experiments demonstrate the potential of this approach.

Victor Gergel, Alexey Goryachih
Global Optimization Challenges in Structured Low Rank Approximation

In this paper, we investigate the complexity of the numerical construction of the so-called Hankel structured low-rank approximation (HSLRA). Briefly, HSLRA is the problem of finding a rank r approximation of a given Hankel matrix, which is also of Hankel structure.

Jonathan Gillard, Anatoly Zhigljavsky
A D.C. Programming Approach to Fractional Problems

This paper addresses a rather general fractional optimization problem. There are two ways to reduce the original problem. The first is to solve an equation involving the optimal value of an auxiliary d.c. optimization problem with a vector parameter. The second is to solve a second auxiliary problem with nonlinear inequality constraints. Both auxiliary problems turn out to be d.c. optimization problems, which allows us to apply Global Optimization Theory [11, 12] and develop two corresponding global search algorithms, which have been tested on a number of test problems from recent publications.

Tatiana Gruzdeva, Alexander Strekalovsky
Objective Function Decomposition in Global Optimization

In this paper we consider global optimization problems in which objective functions are explicitly given and can be represented as compositions of some other functions. We discuss an approach of reducing the complexity of the objective by introducing new variables and adding new constraints.

Oleg V. Khamisov
Projection Approach Versus Gradient Descent for Network’s Flows Assignment Problem

The paper is devoted to a comparison of two methodologically different mathematical techniques for coping with the network flow assignment problem. Gradient descent and a projection approach are applied to a simple network of parallel routes (no two routes share a common arc). Gradient descent demonstrates zig-zagging behavior in some cases, while the projection algorithm converges quadratically under the same conditions. A methodological interpretation of this phenomenon is given.

Alexander Yu. Krylatov, Anastasiya P. Shirokolobova
An Approximation Algorithm for Preemptive Speed Scaling Scheduling of Parallel Jobs with Migration

In this paper, we consider the problem of preemptive scheduling of rigid parallel jobs on speed-scalable processors. Each job is specified by its release date, its deadline, its processing volume, and the number of processors required for its execution. We propose a new strongly polynomial approximation algorithm for the energy minimization problem with migration of jobs.

Alexander Kononov, Yulia Kovalenko
Learning and Intelligent Optimization for Material Design Innovation

Learning and intelligent optimization (LION) techniques enable problem-specific solvers with vast potential applications in industry and business. This paper explores such potential for material design innovation and presents a review of the state of the art together with a proposal of a method for using LION in this context. Research on material design innovation is crucial for the long-lasting success of any technological sector and industry; it is a rapidly evolving field of challenges and opportunities, aiming at the development and application of multi-scale methods to simulate, predict and select innovative materials with high accuracy. The LION way is proposed as an adaptive solver toolbox for the virtual optimal design and simulation of innovative materials, modeling the fundamental properties and behavior of a wide range of multi-scale material design problems.

Amir Mosavi, Timon Rabczuk
Statistical Estimation in Global Random Search Algorithms in Case of Large Dimensions

We study asymptotic properties of optimal statistical estimators in global random search algorithms when the dimension of the feasible domain is large. The results obtained can be helpful in deciding what sample size is required for achieving a given accuracy of estimation.

Andrey Pepelyshev, Vladimir Kornikov, Anatoly Zhigljavsky
A Model of FPGA Massively Parallel Calculations for Hard Problem of Scheduling in Transportation Systems

FPGA computing hardware is evaluated in terms of its performance in solving a hard nonlinear discrete optimization problem of scheduling in transportation systems. A dynamic programming algorithm for a large-scale problem is considered, and a model of its decomposition into a set of smaller problems is investigated.

Mikhail Reznikov, Yuri Fedosenko
Accelerating Gradient Descent with Projective Response Surface Methodology

We present a new modification of the gradient descent algorithm based on surrogate optimization with projection into a low-dimensional space. It iteratively builds an approximation of the target function in the low-dimensional space and takes the optimum of the approximation, mapped back to the original parameter space, as the next parameter estimate. An additional projection step is used to fight the curse of dimensionality. A major advantage of the proposed modification is that it does not change the gradient descent iterations; thus it may be used with almost any zero- or first-order iterative method. We give a theoretical motivation for the proposed algorithm and experimentally illustrate its properties on modelled data.

Alexander Senov
Emmental-Type GKLS-Based Multiextremal Smooth Test Problems with Non-linear Constraints

In this paper, multidimensional test problems for methods solving constrained Lipschitz global optimization problems are proposed. A new class of GKLS-based multidimensional test problems with continuously differentiable multiextremal objective functions and non-linear constraints is described. In these constrained problems, the global minimizer does not coincide with the global minimizer of the respective unconstrained test problem and is always located on the boundary of the admissible region. Two types of constraints are introduced. It is possible to choose the difficulty of the admissible region.

Ya. D. Sergeyev, D. E. Kvasov, M. S. Mukhametzhanov
Backmatter
Metadata
Title
Learning and Intelligent Optimization
Edited by
Roberto Battiti
Dmitri E. Kvasov
Yaroslav D. Sergeyev
Copyright Year
2017
Electronic ISBN
978-3-319-69404-7
Print ISBN
978-3-319-69403-0
DOI
https://doi.org/10.1007/978-3-319-69404-7