
About this Book

Recent developments in theory, algorithms, and applications in optimization and control are discussed in these proceedings, based on selected talks from the ‘Optimization Control and Applications in the Information Age’ conference, organized in honor of Panos Pardalos’s 60th birthday. This volume contains numerous applications to optimal decision making in energy production and fuel management, data mining, logistics, supply chain management, market network analysis, risk analysis, and community network analysis. In addition, a short biography is included, describing Dr. Pardalos’s path from a shepherd village in the high mountains of Thessaly to academic success.

Due to the wide range of topics contained in this publication, such as global optimization, combinatorial optimization, game theory, and stochastic programming, scientists, researchers, and students in optimization, operations research, analytics, mathematics, and computer science will be interested in this volume.



Modular Lipschitzian and Contractive Maps

In the context of metric modular spaces, introduced recently by the author, we define the notion of modular Lipschitzian maps between modular spaces, as an extension of the notion of Lipschitzian maps between metric spaces, and address a modular version of Banach’s Fixed Point Theorem for modular contractive maps. We show that the assumptions in our fixed point theorem are sharp and that it guarantees the existence of fixed points in cases when Banach’s Theorem is inapplicable.
Vyacheslav V. Chistyakov
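As a minimal illustration of the classical metric-space result that this chapter generalizes, the sketch below iterates Banach's fixed-point scheme for the contraction g(x) = cos(x) on [0, 1]; the modular version itself is not reproduced here, and the tolerance and starting point are arbitrary choices.

```python
# Classical Banach fixed-point iteration, the metric-space special case that
# the chapter's modular version extends. The map g(x) = cos(x) is a
# contraction on [0, 1] (sup |g'(x)| = sin(1) < 1 there), so the iteration
# converges to the unique fixed point x* with cos(x*) = x*.
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(math.cos, 0.5)
print(x_star)  # ~0.7390851332151607 (the Dottie number)
```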

A Taxonomy for the Flexible Job Shop Scheduling Problem

This chapter aims at developing a taxonomic framework to classify the studies on the flexible job shop scheduling problem (FJSP). The FJSP is a generalization of the classical job shop scheduling problem (JSP), which is one of the oldest NP-hard problems. Although various solution methodologies have been developed to obtain good solutions in reasonable time for FJSPs with different objective functions and constraints, no study that systematically reviews the FJSP literature has been found. In the proposed taxonomy, the type of study, type of problem, objective, methodology, data characteristics, and benchmarking are the main categories. In order to verify the proposed taxonomy, a variety of papers from the literature are classified. Using this classification, several inferences are drawn and gaps in the FJSP literature are identified. With the proposed taxonomy, the aim is to develop a framework for a broad view of the FJSP literature and to construct a basis for future studies.
Didem Cinar, Y. Ilker Topcu, José António Oliveira

Sensitivity Analysis of Welfare, Equity, and Acceptability Level of Transport Policies

Transport planners face a major challenge in devising policies to meet multiple expectations and objectives. While we know that transport networks are complex, multi-modal, and spatially distributed systems, there is now a long history of mathematical tools which assist planners in understanding travel movements. However, the objectives that planners are asked to achieve do not always admit such a quantification, and so there is a potential mismatch between seemingly qualitatively driven objectives and quantitatively expressed models of the transport system. In the present chapter we address this mismatch by focusing on three objectives that we believe represent the typical interests of a planner, namely: is the policy economically justifiable (efficient), is it “fair” (equitable), and is it justifiable to a democratic society (acceptable)? We provide mathematical representations of these three objectives and link them to the mathematical theory of transport networks, in which we may explore the sensitivity of travel behaviour (and hence the objectives) to various multi-modal transport policies. The detailed steps for representing the policy objectives and sensitivities in the network are set out, and the results of a case study are reported in which road tolls, road capacities, and bus fares are the policy variables. Overall, the chapter sets out a systematic method for planners to choose between multi-modal policies based on these three objectives.
R. Connors, M. Patriksson, C. Rydergren, A. Sumalee, D. Watling

Calibration in Survey Sampling as an Optimization Problem

Calibration is a technique for adjusting sample weights that is routinely used in sample surveys. In this chapter, we consider calibration as an optimization problem and show that the choice of optimization function has an effect on the calibrated weights. We propose a class of functions with several desirable properties, including the satisfaction of necessary range restrictions on the weights, and explore the effect these new functions have on the calibrated weights.
Gareth Davies, Jonathan Gillard, Anatoly Zhigljavsky
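As a sketch of the classical case only: with the chi-square (quadratic) distance function, calibration has a closed-form solution (the GREG weights); the chapter's proposed function class, which additionally enforces range restrictions, is not reproduced here. The design weights, auxiliary variables, and population totals below are hypothetical.

```python
import numpy as np

def calibrate_chi2(d, X, t):
    """Chi-square (linear) calibration: minimize sum((w_i - d_i)^2 / d_i)
    subject to X.T @ w == t. This special case has the closed-form
    GREG solution w = d + diag(d) X (X' diag(d) X)^{-1} (t - X' d)."""
    DX = d[:, None] * X                      # diag(d) @ X
    lam = np.linalg.solve(X.T @ DX, t - X.T @ d)
    return d + DX @ lam

rng = np.random.default_rng(0)
n = 50
d = np.full(n, 4.0)                          # equal design weights (hypothetical)
X = np.column_stack([np.ones(n), rng.uniform(20, 60, n)])  # intercept + "age"
t = np.array([210.0, X[:, 1] @ d * 1.05])    # assumed known population totals
w = calibrate_chi2(d, X, t)
print(np.allclose(X.T @ w, t))               # calibration constraints hold: True
```

Note that chi-square calibration can produce negative or extreme weights, which is precisely the motivation for distance functions with built-in range restrictions.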

On the Sensitivity of Least Squares Data Fitting by Nonnegative Second Divided Differences

Let measurements of a real function of one variable be given. If the function is convex but convexity has been lost due to errors of measurement, then we make the least sum of squares change to the data so that the second divided differences of the smoothed values are nonnegative. The underlying calculation is a quadratic programming algorithm, and the piecewise linear interpolant to the solution components is a convex curve. Problems of this structure arise in various contexts in research and applications in science, engineering, and the social sciences. The sensitivity of the solution is investigated when the data are slightly altered, a question that arises in the utilization of the method. First some theory is presented, and then an illustrative example shows the effect of specific as well as random changes of the data on the solution. As an application to real data, an experiment on the sensitivity of the convex estimate of the Gini coefficient in the USA for the period 1947–1996 is presented. The measurements of the Gini coefficient are considered uncertain, with a uniform probability distribution over a certain interval. Some consequences of this uncertainty are investigated with the aid of a simulation technique.
Ioannis C. Demetriou
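A minimal sketch of the underlying quadratic program: the least-squares change is the Euclidean projection of the data onto the cone of sequences with nonnegative second differences. For equally spaced data each constraint is a halfspace, so the projection can be computed with Dykstra's alternating-projection algorithm; this is an illustrative substitute for the specialized QP algorithm the chapter relies on, and the data vector is hypothetical.

```python
import numpy as np

def convex_fit(y, sweeps=2000):
    """Least-squares change to y so that all second differences are
    nonnegative (equally spaced abscissae): the projection of y onto the
    cone {z : z[i] - 2 z[i+1] + z[i+2] >= 0}, computed with Dykstra's
    alternating projections (each constraint is a halfspace)."""
    n = len(y)
    rows = []
    for i in range(n - 2):
        a = np.zeros(n)
        a[i:i + 3] = [1.0, -2.0, 1.0]        # second-difference row
        rows.append(a)
    z = y.astype(float).copy()
    corr = [np.zeros(n) for _ in rows]       # Dykstra correction terms
    for _ in range(sweeps):
        for k, a in enumerate(rows):
            w = z + corr[k]
            viol = a @ w
            # Projection onto the halfspace {x : a @ x >= 0}
            z = w if viol >= 0 else w - (viol / (a @ a)) * a
            corr[k] = w - z
    return z

y = np.array([0.0, 1.2, 0.7, 1.0, 2.3, 4.1])   # convexity lost to noise
z = convex_fit(y)
d2 = z[:-2] - 2 * z[1:-1] + z[2:]
print(np.round(z, 4))
```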

Modeling and Solving Vehicle Routing Problems with Many Available Vehicle Types

Vehicle routing problems (VRPs) involving the selection of vehicles from a large set of vehicle types are hitherto not well studied in the literature. Such problems arise at Volvo Group Trucks Technology, which faces an immense set of possible vehicle configurations, of which an optimal set needs to be chosen for each specific combination of transport missions. Another property of real-world VRPs that is often neglected in the literature is that the fuel resources required to drive a vehicle along a route are highly dependent on the actual load of the vehicle.
We define the fleet size and mix VRP with many available vehicle types, called many-FSMVRP, and suggest an extended set-partitioning model of this computationally demanding combinatorial optimization problem. To solve the extended model, we have developed a method based on Benders decomposition, the subproblems of which are solved using column generation, with the column generation subproblems in turn solved using dynamic programming; the method is implemented with a so-called projection-of-routes procedure. The resulting method is compared with a column generation approach for the standard set-partitioning model. Our method for the extended model performs on par with column generation applied to the standard model for instances in which the two models are equivalent. In addition, the utility of the extended model for instances with very many available vehicle types is demonstrated. Our method is also shown to efficiently handle cases in which the costs depend on the load of the vehicle.
Computational tests on a set of extended standard test instances show that our method, based on Benders’ algorithm, is able to determine combinations of vehicles and routes that are optimal for a relaxation (w.r.t. the route decision variables) of the extended model. Our exact implementation of Benders’ algorithm appears, however, to be too slow when the number of customers grows. To improve its performance, we suggest solving relaxed versions of the column generation subproblems and replacing the set-partitioning model with a set-covering model.
Sandra Eriksson Barman, Peter Lindroth, Ann-Brith Strömberg

A Genetic Algorithm for Scheduling Alternative Tasks Subject to Technical Failure

Nowadays, organizations are often faced with the development of complex and innovative projects. Such projects often involve performing tasks that are subject to failure. Thus, in many such projects, several alternative actions are considered and performed simultaneously. Each alternative is characterized by its cost, duration, and probability of technical success. The cost of each alternative is paid at its start, and the project payoff is obtained once an alternative has been completed successfully. For this problem, one wishes to find the optimal schedule, i.e., the starting time of each alternative, such that the expected net present value is maximized. This problem was recently proposed in Ranjbar (Int Trans Oper Res 20(2):251–266, 2013), where a branch-and-bound approach is reported. Since the problem is NP-hard, here we propose to solve the problem using genetic algorithms.
Dalila B. M. M. Fontes, José Fernando Gonçalves
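A heavily simplified sketch of the approach, under hypothetical data and a simplified expected-NPV model (costs are assumed to be paid unconditionally at each alternative's start, which is not Ranjbar's exact formulation): chromosomes are vectors of start times, and a plain elitist GA searches over them.

```python
import random

# Hypothetical data: each alternative = (cost, duration, success probability).
ALTS = [(10.0, 3, 0.4), (14.0, 2, 0.5), (6.0, 5, 0.3)]
PAYOFF, BETA, HORIZON = 60.0, 0.95, 8   # payoff, per-period discount, latest start

def expected_npv(starts):
    """Simplified eNPV: costs are discounted at the start times; the payoff
    is earned by the first alternative (in completion order) that succeeds."""
    npv = sum(-c * BETA ** s for (c, _, _), s in zip(ALTS, starts))
    fail_so_far = 1.0
    for (_c, d, p), s in sorted(zip(ALTS, starts), key=lambda t: t[0][1] + t[1]):
        npv += PAYOFF * BETA ** (s + d) * p * fail_so_far
        fail_so_far *= 1.0 - p
    return npv

def ga(pop_size=40, gens=60, pm=0.2, seed=1):
    """Elitist GA over start-time vectors; the all-zeros schedule is seeded."""
    rng = random.Random(seed)
    pop = [[0] * len(ALTS)] + [[rng.randint(0, HORIZON) for _ in ALTS]
                               for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=expected_npv, reverse=True)
        nxt = [s[:] for s in pop[:2]]                   # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)              # parents from the elite
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < pm:                       # mutate one start time
                child[rng.randrange(len(child))] = rng.randint(0, HORIZON)
            nxt.append(child)
        pop = nxt
    return max(pop, key=expected_npv)

best = ga()
print(best, expected_npv(best))
```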

Discrete Competitive Facility Location: Modeling and Optimization Approaches

Competitive facility location problems are concerned with the following situation: a firm wants to locate a predefined number of facilities to serve customers located in some region where other firms offering the same service already exist (or will enter). Both the new and the existing firms compete to optimize their market share and profit. A discrete version of such problems arises when it is assumed that there is a (rather small) finite number of candidate locations and the markets consist of point demands. We review modeling and optimization approaches for this type of problem, and we emphasize and develop the bi-level programming methodology.
Athanasia Karakitsiou

On Nash Equilibria in Stochastic Positional Games with Average Payoffs

We consider a class of stochastic positional games that extends deterministic positional games with average payoffs. The class of games we formulate and study applies the game-theoretical concept to finite state space Markov decision processes with an average cost optimization criterion. Necessary and sufficient conditions for the existence of Nash equilibria in stochastic positional games with average payoffs are proven, and some approaches for determining the optimal stationary strategies of the players are analyzed. For antagonistic positional games, iterative algorithms for determining the saddle points are proposed. Additionally, we show that the obtained results can be used to study the existence of Nash equilibria in Shapley stochastic games with average payoffs.
Dmitrii Lozovanu, Stefan Pickl

Adaptive Tuning of All Parameters in a Multi-Swarm Particle Swarm Optimization Algorithm: An Application to the Probabilistic Traveling Salesman Problem

One of the main issues in the application of a particle swarm optimization (PSO) algorithm, and of every evolutionary optimization algorithm, is finding suitable parameters for the algorithm. Usually a trial-and-error procedure is used, but a number of other procedures have also been applied in the past. In this chapter, we use a new adaptive version of a PSO algorithm in which random values are assigned at the initialization of the algorithm and, then, during the iterations, the parameters are optimized together and simultaneously with the optimization of the objective function of the problem. This idea is used for the solution of the probabilistic traveling salesman problem (PTSP). The algorithm is tested on a number of benchmark instances and is compared with a number of algorithms from the literature.
Yannis Marinakis, Magdalene Marinaki, Athanasios Migdalas
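A sketch of the adaptive idea on a continuous test function rather than the PTSP: each particle's position is extended with its own (w, c1, c2) triple, so the PSO parameters evolve alongside the solution. The parameter ranges, clamps, and velocity limit below are assumptions for this sketch, not the authors' settings.

```python
import random

def adaptive_pso(f, dim, iters=200, swarm=20, seed=0):
    """PSO in which each particle carries its own (w, c1, c2) appended to
    the position vector, so parameters are adapted during the search."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] +
           [rng.uniform(0.4, 0.9), rng.uniform(1.0, 2.5), rng.uniform(1.0, 2.5)]
           for _ in range(swarm)]
    vel = [[0.0] * (dim + 3) for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: f(p[:dim]))[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            w, c1, c2 = p[dim], p[dim + 1], p[dim + 2]
            for j in range(dim + 3):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - p[j])
                             + c2 * r2 * (gbest[j] - p[j]))
                vel[i][j] = max(-2.0, min(2.0, vel[i][j]))   # velocity clamp
                p[j] += vel[i][j]
            p[dim] = min(max(p[dim], 0.3), 0.95)             # keep w sane
            p[dim + 1] = min(max(p[dim + 1], 0.5), 2.5)      # keep c1 sane
            p[dim + 2] = min(max(p[dim + 2], 0.5), 2.5)      # keep c2 sane
            if f(p[:dim]) < f(pbest[i][:dim]):
                pbest[i] = p[:]
                if f(p[:dim]) < f(gbest[:dim]):
                    gbest = p[:]
    return gbest[:dim], f(gbest[:dim])

sphere = lambda x: sum(v * v for v in x)
x, fx = adaptive_pso(sphere, dim=3)
print(fx)  # close to 0
```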

Eigendecomposition of the Mean-Variance Portfolio Optimization Model

We provide new insights into the mean-variance portfolio optimization problem, based on performing eigendecomposition of the covariance matrix. The result of this decomposition can be given an interpretation in terms of uncorrelated eigenportfolios. When only some of the eigenvalues and eigenvectors are used, the resulting mean-variance problem is an approximation of the original one. A solution to the approximation yields lower and upper bounds on the original mean-variance problem; these bounds are tight if sufficiently many eigenvalues and eigenvectors are used in the approximation. Even tighter bounds are obtained through the use of a linearized error term of the unused eigenvalues and eigenvectors.
We provide theoretical results for the upper bounding quality of the approximate problem and the cardinality of the portfolio obtained, and also numerical illustrations of these results. Finally, we propose an ad hoc linear transformation of the mean-variance problem, which in practice significantly strengthens the bounds obtained from the approximate mean-variance problem.
Fred Mayambala, Elina Rönnberg, Torbjörn Larsson
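The decomposition can be sketched directly: writing Σ = VΛVᵀ, a portfolio's variance is Σᵢ λᵢ(vᵢᵀw)², so truncating to the k largest eigenpairs yields a lower bound on the variance, which becomes exact when all pairs are kept. The return data below are synthetic, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(500, 6))            # synthetic return series, 6 assets
Sigma = np.cov(R, rowvar=False)

lam, V = np.linalg.eigh(Sigma)           # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]           # sort descending

w = np.full(6, 1 / 6)                    # equally weighted portfolio
loads = V.T @ w                          # exposures to the eigenportfolios
full_var = w @ Sigma @ w
k = 3
approx_var = np.sum(lam[:k] * loads[:k] ** 2)   # top-k eigenpairs only

print(approx_var <= full_var + 1e-12)    # truncation gives a lower bound: True
print(np.isclose(np.sum(lam * loads ** 2), full_var))  # exact with all pairs
```

The lower-bound property holds because a covariance matrix is positive semidefinite, so every discarded term λᵢ(vᵢᵀw)² is nonnegative.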

Three Aspects of the Research Impact by a Scientist: Measurement Methods and an Empirical Evaluation

Three different approaches for evaluating the research impact of a scientist are considered. Two of them are conventional, scoring the impact over (a) citation metrics and (b) merit metrics. The third relates to the level of results. It involves a taxonomy of the research field, that is, a hierarchy representing its composition. The impact is evaluated according to the taxonomy ranks of the subjects that have emerged or have been crucially transformed due to the results of the scientist under consideration (Mirkin, Control Large Syst Spec Issue 44:292–307, 2013). To aggregate criteria in approaches (a) and (b), we use an in-house automated criteria weighting method oriented towards as tight a representation of the strata as possible (Orlov, Bus Inf, 2014). To compare the approaches empirically, we use publicly available data on about 30 scientists in the areas of data analysis and machine learning. As our taxonomy of the field, we invoke the corresponding part of the ACM Computing Classification System 2012 and slightly modify it to better reflect the results of the scientists in our sample. The obtained ABC stratifications are rather far from each other. This supports the view that the three approaches (citations, merits, taxonomic rank) capture different aspects, and, therefore, a good method for scoring research impact should involve all three.
Boris Mirkin, Michael Orlov

SVM Classification of Uncertain Data Using Robust Multi-Kernel Methods

In this study, we develop a robust Support Vector Machine (SVM) scheme for classifying uncertain data. In conventional SVM classification, data uncertainty is not addressed efficiently. Furthermore, while traditional SVM methods use a single kernel for learning, multiple kernel schemes are being developed to better capture all the data features. We combine multiple kernel learning methods with robust optimization concepts to formulate the SVM classification problem as a semi-definite programming (SDP) problem and develop its robust counterparts under bounded data uncertainties. We present some preliminary experimental results on known datasets with noise introduced into the data. Initial analysis shows that the robust SDP-SVM model improves classification accuracy for uncertain data.
Raghav Pant, Theodore B. Trafalis

Multi-Objective Optimization and Multi-Attribute Decision Making for a Novel Batch Scheduling Problem Based on Mould Capabilities

This chapter investigates a novel multi-objective model of a batch scheduling problem with mould capability constraints, where the objective is to minimize both the total completion time of the jobs and the total cost of the moulds. It is extremely difficult to obtain an optimal solution to this type of complex problem in a reasonable computational time. In view of this, the chapter presents a new multi-objective algorithm, based on the features of the Gravitational Search Algorithm, to find Pareto optimal solutions for the given problem. In the proposed algorithm, a novel Pareto frontier adjustment strategy is designed and proven to improve the convergence of solutions and increase convergence speed. We examined a set of test problems to validate the high efficiency of the proposed multi-objective gravitational search algorithm using a variety of metrics. Finally, a multi-attribute decision making method is employed to select among the trade-off solutions derived from the Pareto optimal set and thus solve the problem optimally.
Jun Pei, Athanasios Migdalas, Wenjuan Fan, Xinbao Liu

A Time-Indexed Generalized Vehicle Routing Model and Stabilized Column Generation for Military Aircraft Mission Planning

We introduce a time-indexed mixed-integer linear programming model for a military aircraft mission planning problem, where a fleet of cooperating aircraft should attack a number of ground targets so that the total expected effect is maximized. The model is a rich vehicle routing problem and the direct application of a general solver is practical only for scenarios of very moderate sizes. We propose a Dantzig–Wolfe reformulation and column generation approach. A column here represents a specific sequence of tasks at certain times for an aircraft, and to generate columns a longest path problem with side constraints is solved. We compare the column generation approach with the time-indexed model with respect to upper bounding quality of their linear programming relaxations and conclude that the former provides a much stronger formulation of the problem.
Nils-Hassan Quttineh, Torbjörn Larsson, Jorne Van den Bergh, Jeroen Beliën

On Deterministic Diagonal Methods for Solving Global Optimization Problems with Lipschitz Gradients

In this chapter, a global optimization problem is considered where both the objective function f(x) and its gradient ∇f(x) are multidimensional black-box functions. It is supposed that ∇f(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. Different techniques for estimating K are presented and their advantages and disadvantages are emphasized. Regarding the exploration of the multidimensional search domain, several adaptive partitioning strategies that can be applied in Lipschitz global optimization methods are discussed: (1) one-point-based algorithms evaluating the objective function and its gradient at one point within each subregion; (2) diagonal partitions, where f(x) and ∇f(x) are evaluated at two points within each subregion; (3) more complex partitions based, for instance, on simplices or auxiliary functions of various natures. This chapter deals with diagonal deterministic methods that show a promising performance both in tests and applications. Several geometric methods based on diagonal partitions and auxiliary functions are presented and compared on eight hundred differentiable problems randomly produced by the GKLS-generator of classes of test functions.
Yaroslav D. Sergeyev, Dmitri E. Kvasov
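One of the simplest techniques for estimating K, sampling pairs of points and taking the largest difference quotient of the gradient, can be sketched as follows; it yields a lower estimate of K. The quadratic test function and the sampling scheme are illustrative choices, not the chapter's methods.

```python
import numpy as np

def estimate_grad_lipschitz(grad, lo, hi, n_pairs=2000, seed=0):
    """Lower estimate of the Lipschitz constant of the gradient from sampled
    point pairs: K >= ||grad(x) - grad(y)|| / ||x - y|| for all x, y."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        x = rng.uniform(lo, hi)
        y = rng.uniform(lo, hi)
        dist = np.linalg.norm(x - y)
        if dist > 1e-12:
            best = max(best, np.linalg.norm(grad(x) - grad(y)) / dist)
    return best

# Quadratic test function f(x) = x' A x / 2 with gradient A x; its gradient
# is Lipschitz with K equal to the largest eigenvalue of A (here 3.0).
A = np.diag([1.0, 3.0])
K_hat = estimate_grad_lipschitz(lambda x: A @ x, np.array([-2.0, -2.0]),
                                np.array([2.0, 2.0]))
print(K_hat)  # approaches 3.0 from below
```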

Optimization of Design Parameters for Active Control of Smart Piezoelectric Structures

The objective of this work is to design an optimal controller for plate structures to control their response under the influence of external excitation. The finite element method based on the Mindlin–Reissner plate theory has been extended to incorporate the piezoelectric effects. A genetic algorithm is applied to find the optimal placement of piezoelectric actuators and input voltages for static shape control. The objective function is the error in transverse displacements between the desired and the achieved shape. In addition, the optimal placement of actuators and sensors for vibration control of laminated plates is studied. The objective taken into consideration is the controllability index, based on the singular value decomposition of a control matrix, as found in the literature. The index measures the input energy required to achieve the desired structural control using piezoelectric actuators. Finally, linear quadratic regulator (LQR) closed-loop control is applied to study the control effectiveness. A comparison is made between the optimal locations of piezoactuators obtained through the controllability index and a nonoptimal case.
Georgios Stavroulakis, Georgia Foutsitzi, Christos Gogos
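A minimal sketch of the LQR step alone, applied to a hypothetical two-state model (one mode with position and velocity states) rather than the chapter's piezoelectric plate model: the discrete-time gain is obtained by backward Riccati iteration.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration:
    P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA,  K = (R + B'PB)^{-1} B'PA."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical discretized double-integrator mode: states (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

eigs = np.linalg.eigvals(A - B @ K)       # closed-loop dynamics
print(np.all(np.abs(eigs) < 1.0))         # stabilized: True
```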

Stable EEG Features

The aim of this chapter is to identify stable points and stationary wavelets in EEG signals. Generally, an EEG signal is a very complex nonstationary signal, and it is very difficult to recognize specific EEG features such as biometric patterns and pathological changes. Using a repeated autocorrelation procedure and symmetry features of real EEG time series data, we experimentally investigate stable points in EEG signals. We also investigate standing-wave shafts around these stable points, which reveal the existence of stationary wavelets in EEG signals.
V. Stefanidis, G. Anogiannakis, A. Evangelou, M. Poulos

Deriving Pandemic Disease Mitigation Strategies by Mining Social Contact Networks

In this chapter we propose a robust approach to deriving disease mitigation strategies from contact networks that are generated from publicly available census data. The goal is to provide public policy makers additional information concerning the type of people they should aim to target vaccination, quarantine, or isolation measures towards. We focus on pandemic disease mitigation, but the approach can be applied to other domains, such as bioterrorism. The approach begins by constructing a representative contact network for the geographic area (we use the Greater Toronto Area of ≈ 5.5 million individuals) from census information. Then, network centrality measures are employed to ascertain the importance of each individual to the proper topological functioning of the network. The top-ranked individuals’ characteristics, as defined by census information, are then used as input to decision tree classifiers. The resulting output is a set of high-level rules that identify potential types of individuals to target in order to mitigate disease spread. Experimental evidence for the efficacy of the approach is also provided.
M. Ventresca, A. Szatan, B. Say, D. Aleman
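The centrality-ranking step can be sketched on a toy contact network; the chapter's networks are built from census data and may use other centrality measures, so closeness centrality on hypothetical edges is used here purely for illustration.

```python
from collections import deque

# Toy contact network (hypothetical edges between individuals).
EDGES = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "f"), ("b", "d")]

def build_adj(edges):
    """Undirected adjacency structure from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def closeness(adj, s):
    """Closeness centrality of s: (n - 1) / (sum of BFS distances to others)."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (len(adj) - 1) / sum(dist.values())

adj = build_adj(EDGES)
ranked = sorted(adj, key=lambda v: closeness(adj, v), reverse=True)
print(ranked[0])  # 'b' sits most centrally in this toy network
```

The top-ranked individuals' census attributes would then feed the decision tree classifiers that produce the targeting rules.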

On an Asymptotic Property of a Simplicial Statistical Model of Global Optimization

A homogeneous isotropic Gaussian random field is accepted as a statistical model of objective functions, with the aim of constructing global optimization algorithms. The asymptotics of the conditional mean and variance are considered, assuming that the random field values are known at the vertices of a simplex and that the latter is contracting. The obtained result theoretically substantiates the construction of the recently proposed bi-variate global optimization algorithm, which has attracted interest due to its good performance in testing experiments and its established convergence rate. The result also strengthens the motivation to extend the aforementioned algorithm to higher dimensions.
Antanas Žilinskas, Gražina Gimbutienė

Advanced Statistical Tools for Modelling of Composition and Processing Parameters for Alloy Development

The paper presents new statistical approaches for modelling highly variable mechanical properties and for screening specimens in the development of new materials. In particular, for steels, the Charpy V-Notch (CVN) value exhibits substantial scatter, which complicates the prediction of impact toughness. The paper proposes using Conditional Value-at-Risk (CVaR) for screening specimens with respect to CVN. Two approaches to the estimation of CVaR are discussed. The first is based on linear regression coming from the Mixed-Quantile Quadrangle; the second builds the CVN distribution with percentile regression and then directly calculates CVaR. The accuracy of the estimated CVaR is assessed with a variant of the coefficient of multiple determination. We estimated the discrepancy between the estimates derived by the two approaches using the Mean Absolute Percentage Error. We compared the VaR and CVaR risk measures in the screening process, and we propose a modified procedure for ranking specimens that takes into account the uncertainty in the estimates of CVaR.
Greg Zrazhevsky, Alex Golodnikov, Stan Uryasev, Alex Zrazhevsky
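The direct (empirical) CVaR calculation in the second approach reduces to averaging the worst tail of a sample. A minimal sketch with synthetic CVN data (real CVN values would come from Charpy tests; the distribution below is an assumption):

```python
import numpy as np

def cvar(sample, alpha=0.9):
    """Empirical CVaR (expected shortfall) at level alpha for a LOSS sample:
    the mean of the worst (1 - alpha) fraction of outcomes."""
    x = np.sort(np.asarray(sample))
    k = int(np.ceil(alpha * len(x)))
    return x[k:].mean()

# For screening low-CVN (brittle) specimens, treat the negated CVN as a loss
# so the tail captures the weakest impact-toughness outcomes.
rng = np.random.default_rng(42)
cvn = rng.normal(60.0, 12.0, size=1000)      # synthetic CVN measurements (J)
loss = -cvn
var = np.quantile(loss, 0.9)                  # VaR: 90th percentile of loss
es = cvar(loss, 0.9)
print(es >= var)                              # CVaR dominates VaR: True
```

CVaR never falls below VaR at the same level, which is one reason it is preferred for ranking specimens by tail risk.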