
2019 | Book

Approximation and Optimization

Algorithms, Complexity and Applications


About this book

This book focuses on the development of approximation-related algorithms and their relevant applications. Individual contributions are written by leading experts and reflect emerging directions and connections in data approximation and optimization. Chapters discuss state-of-the-art topics with highly relevant applications throughout science, engineering, technology, and the social sciences. Academics, researchers, data science practitioners, business analysts, social science investigators, and graduate students will find the many illustrations, applications, and examples provided useful.

This volume is based on the conference Approximation and Optimization: Algorithms, Complexity, and Applications, held at the National and Kapodistrian University of Athens, Greece, June 29–30, 2017. The mix of survey and research content includes topics in approximation of discrete noisy data; binary sequences; design of networks and energy systems; fuzzy control; large-scale optimization; data-dependent approximation; networked control systems; machine learning; optimal design; the no free lunch theorem; nonlinearly constrained optimization; and spectroscopy.

Table of Contents

Frontmatter
Introduction
Abstract
A brief survey is given of the papers in this volume, which explore various aspects of approximation and optimization.
Ioannis C. Demetriou, Panos M. Pardalos
Evaluation Complexity Bounds for Smooth Constrained Nonlinear Optimization Using Scaled KKT Conditions and High-Order Models
Abstract
Evaluation complexity for convexly constrained optimization is considered, and it is shown first that the complexity bound of \(O(\epsilon^{-3/2})\) proved by Cartis et al. (IMA J Numer Anal 32:1662–1695, 2012) for computing an \(\epsilon\)-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are used, resulting in a bound of \(O(\epsilon^{-(p+1)/p})\) evaluations whenever derivatives of order p are available. It is also shown that the bound of \(O(\epsilon_{\mathrm{P}}^{-1/2}\epsilon_{\mathrm{D}}^{-3/2})\) evaluations (\(\epsilon_{\mathrm{P}}\) and \(\epsilon_{\mathrm{D}}\) being primal and dual accuracy thresholds) suggested by Cartis et al. (SIAM J Numer Anal 53:836–851, 2015) for the general nonconvex case involving both equality and inequality constraints can be generalized to yield a bound of \(O(\epsilon_{\mathrm{P}}^{-1/p}\epsilon_{\mathrm{D}}^{-(p+1)/p})\) evaluations under similarly weakened assumptions.
Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
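For concreteness, the generalized bound interpolates the familiar first- and second-order cases; the following specializations simply instantiate the abstract's \(O(\epsilon^{-(p+1)/p})\) expression at small p (a restatement, not a new result):

```latex
% Evaluations needed to reach an \epsilon-approximate first-order
% critical point, given derivatives up to order p:
%   p = 1:  O(\epsilon^{-2})      % classical first-order rate
%   p = 2:  O(\epsilon^{-3/2})    % recovers the Cartis et al. (2012) bound
%   p = 3:  O(\epsilon^{-4/3})
O\!\left(\epsilon^{-(p+1)/p}\right), \qquad p \ge 1.
```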
Data-Dependent Approximation in Social Computing
Abstract
Data-dependent approximation is a new approach to the study of nonsubmodular optimization problems. It has attracted considerable research attention, especially in the area of social computing, where nonsubmodular combinatorial optimization problems have recently been formulated. In this chapter, we present some theoretical results on the data-dependent approximation approach and discuss some related open problems.
Weili Wu, Yi Li, Panos M. Pardalos, Ding-Zhu Du
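As background for the nonsubmodular setting the chapter addresses, the defining property being violated is diminishing returns. A minimal sketch (hypothetical set functions for illustration, not taken from the chapter) that exhaustively tests submodularity on a small ground set:

```python
from itertools import chain, combinations

def is_submodular(f, ground):
    """Check f(A + x) - f(A) >= f(B + x) - f(B) for all A <= B, x not in B."""
    subsets = list(chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1)))
    for A in map(frozenset, subsets):
        for B in map(frozenset, subsets):
            if not A <= B:
                continue
            for x in ground - B:
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False   # marginal gain grew on the larger set
    return True

# Coverage function: submodular (classic diminishing returns).
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c'}}
f_cov = lambda S: len(set().union(set(), *(sets[i] for i in S)))
print(is_submodular(f_cov, frozenset(sets)))   # True

# Threshold function rewarding pairs: nonsubmodular.
f_thr = lambda S: 1 if len(S) >= 2 else 0
print(is_submodular(f_thr, frozenset(sets)))   # False
```

For nonsubmodular objectives like `f_thr`, worst-case approximation guarantees fail, which is exactly what motivates bounds computed from the given data instance.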
Multi-Objective Evolutionary Optimization Algorithms for Machine Learning: A Recent Survey
Abstract
Machine learning algorithms exploit a given dataset in order to build an efficient predictive or descriptive model. Multi-objective evolutionary optimization assists machine learning algorithms in optimizing their hyper-parameters, usually under conflicting performance objectives, and in selecting the best model for a given task. In this paper, recent multi-objective evolutionary approaches to four major data mining and machine learning tasks are surveyed, namely: (a) data preprocessing, (b) classification, (c) clustering, and (d) association rules.
Stamatios-Aggelos N. Alexandropoulos, Christos K. Aridas, Sotiris B. Kotsiantis, Michael N. Vrahatis
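The core primitive behind all such multi-objective methods is Pareto dominance. A minimal sketch (minimization of every objective assumed; the candidate numbers are illustrative only) that extracts the non-dominated front from a set of objective vectors:

```python
def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and \
           any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the points not dominated by any other point (minimization)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical trade-off: (validation error, model complexity).
candidates = [(0.10, 9), (0.20, 4), (0.15, 4), (0.30, 2), (0.35, 3)]
print(pareto_front(candidates))   # [(0.1, 9), (0.15, 4), (0.3, 2)]
```

An evolutionary algorithm such as NSGA-II repeatedly applies this dominance test to rank a population, then presents the surviving front as the set of best trade-off models.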
No Free Lunch Theorem: A Review
Abstract
The “No Free Lunch” theorem states that, averaged over all optimization problems and without re-sampling, all optimization algorithms perform equally well. Optimization, search, and supervised learning are the areas that have benefited most from this important theoretical concept. The formulation of the initial No Free Lunch theorem soon gave rise to a number of research works that resulted in a suite of theorems defining an entire research field, with significant results in other scientific areas where successfully exploring a search space is an essential and critical task. The objective of this paper is to review the main research efforts that contributed to this field, reveal the main issues, and disclose those points that are helpful in understanding the hypotheses, the restrictions, or even the inapplicability of No Free Lunch theorems.
Stavros P. Adam, Stamatios-Aggelos N. Alexandropoulos, Panos M. Pardalos, Michael N. Vrahatis
Piecewise Convex–Concave Approximation in the Minimax Norm
Abstract
Suppose that \(\mathbf {f} \in \mathbb {R}^{n}\) is a vector of n error-contaminated measurements of n smooth function values measured at distinct, strictly ascending abscissæ. The following projective technique is proposed for obtaining a vector of smooth approximations to these values. Find y minimizing \(\|\mathbf{y}-\mathbf{f}\|_{\infty}\) subject to the constraints that the consecutive second-order divided differences of the components of y change sign at most q times. This optimization problem (which is also of general geometrical interest) does not suffer from the existence of purely local minima, and a solution can be constructed in only \(O(nq \log n)\) operations. A new algorithm for doing this is developed and its effectiveness is proved. Some results of applying it to undulating and peaky data are presented, showing that it is fast and can give very good results, particularly for large, densely packed data, even when the errors are quite large.
Michael P. Cullinan
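To make the constraint concrete: second-order differences measure local convexity, and q bounds how often their sign may alternate along the data. A minimal sketch (uniform abscissae assumed for simplicity, so divided differences reduce to plain second differences; the chapter's algorithm itself is far more involved):

```python
def second_differences(y):
    """Second differences y[i-1] - 2*y[i] + y[i+1], unit spacing assumed."""
    return [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]

def sign_changes(d, tol=0.0):
    """Count sign alternations, ignoring entries within tol of zero."""
    signs = [1 if x > 0 else -1 for x in d if abs(x) > tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# A convex stretch followed by a concave one: a single sign change,
# i.e. this vector is feasible for the constraint with q = 1.
y = [4, 1, 0, 1, 4, 5, 4, 1]
d = second_differences(y)
print(d)                # [2, 2, 2, -2, -2, -2]
print(sign_changes(d))  # 1
```

The optimization then asks for the smallest uniform perturbation of the data for which such a feasibility count holds.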
A Decomposition Theorem for the Least Squares Piecewise Monotonic Data Approximation Problem
Abstract
We consider the problem of calculating the least squares approximation to n measurements of function values that contain random errors, subject to the condition that the first differences of the approximated values have at most k − 1 sign changes, where k is a given positive integer. The positions of the sign changes are integer variables whose optimal values are to be determined automatically. Since the number of trials of all possible combinations of positions needed to find an optimal one is of magnitude \(n^{k-1}\), it would not be practicable to consider each one separately. We give a characterization theorem which shows that the problem reduces to partitioning the data into at most k disjoint sets of adjacent data and solving a k = 1 (monotonic approximation) problem for each set. The important computational consequence of this theorem is that it allows dynamic programming to be applied, obtaining the partition and solving the whole problem in only a quadratic number of operations. Even shorter computation times in practice are confirmed by our numerical results. Further, an example is given which shows that the time required by the dynamic programming method to optimally locate peaks with k = 50 in an NMR spectrum consisting of about 110,000 data points is less than a minute, whereas the number of trials of all possible combinations would be of magnitude \(10^{250}\).
Ioannis C. Demetriou
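The k = 1 (monotonic) subproblem that the decomposition reduces to can be solved by the classical pool-adjacent-violators algorithm. A minimal sketch (standard textbook PAVA, not the chapter's own implementation):

```python
def pava(y):
    """Least-squares nondecreasing fit by pool-adjacent-violators."""
    # Each block holds (sum, count); merge blocks while order is violated.
    blocks = []
    for v in y:
        s, n = float(v), 1
        while blocks and blocks[-1][0] / blocks[-1][1] > s / n:
            ps, pn = blocks.pop()
            s, n = s + ps, n + pn
        blocks.append((s, n))
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)   # each block maps to its mean
    return fit

print(pava([1, 3, 2, 4]))   # [1.0, 2.5, 2.5, 4.0]
```

In the full piecewise monotonic problem, dynamic programming chooses where the monotonic sections join, and a fit of this kind is computed on each section.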
Recent Progress in Optimization of Multiband Electrical Filters
Abstract
The best uniform rational approximation of the sign function on two intervals was explicitly found by the Russian mathematician E.I. Zolotarëv in 1877. The progress in mathematics eventually led to progress in technology: half a century later, the German electrical engineer and physicist W. Cauer used this solution to invent the low- and high-pass electrical filters known today as elliptic or Cauer–Zolotarëv filters, which are of unmatched quality. We discuss a recently developed approach to the optimization problem that naturally arises in the synthesis of multi-band (analogue, digital, or microwave) electrical filters. The approach is based on techniques from algebraic geometry and generalizes the effective representation of the Zolotarëv fraction.
Andrei Bogatyrëv
Impact of Error in Parameter Estimations on Large Scale Portfolio Optimization
Abstract
Portfolio selection is the construction of portfolios that maximize the expected return on investment while keeping the involved risk low. One fundamental approach to quantifying the risk–return trade-off of assets is mean–variance analysis, in which it is crucial to estimate the model parameters accurately. We examine how estimation errors in the means and covariance matrix of stock returns may affect the selected portfolios. One of the main contributions of this research is a set of experiments, using both real data from different stock markets and generated samples, that compare the out-of-sample performance of the estimators and show how estimation error affects portfolio selection. A surprising new phenomenon is observed in large-scale portfolio optimization: the efficiency of the obtained optimal portfolio is biased with respect to the true optimal portfolio. Different aspects of this phenomenon and possible ways to reduce its negative effect are discussed.
Valery A. Kalyagin, Sergey V. Slashchinin
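The bias phenomenon can be illustrated in miniature: optimizing over noisy estimates systematically overstates the quality of the chosen portfolio, because selection favors assets whose errors happen to be positive. A toy simulation (hypothetical numbers; i.i.d. Gaussian estimation noise assumed, with asset selection standing in for full mean–variance optimization):

```python
import random

random.seed(0)
true_means = [0.05] * 20          # twenty assets, identical true mean return
noise_sd, trials = 0.03, 2000

gap = 0.0
for _ in range(trials):
    # Noisy estimates of each asset's mean return.
    estimates = [m + random.gauss(0, noise_sd) for m in true_means]
    best = max(range(20), key=lambda i: estimates[i])
    # Estimated quality of the chosen asset vs. its true quality.
    gap += estimates[best] - true_means[best]

print(gap / trials > 0)   # True: selection inflates the apparent optimum
```

Here every asset is truly identical, yet the selected asset's estimated return exceeds its true return on average; with more assets the inflation grows, matching the large-scale bias the chapter studies.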
Optimal Design of Smart Composites
Abstract
In the present chapter, optimal design problems related to smart composites are investigated. First, the mechanical properties of a smart composite can be tailored to meet required specifications. Beyond classical shape and layout optimization of the layers of a composite, pointwise optimization leading to functionally graded composites, or even topology optimization, can be applied. A cantilever beam with two materials is briefly presented. Furthermore, the control subsystem has several parameters to be optimized: the number and positions of sensors and actuators, as well as the parameters of the controller. Here, some basic techniques for soft control based on fuzzy and neuro-fuzzy strategies are presented, along with optimization options and methods that can be used for fine-tuning the parameters of the system. The aim of the present chapter is to provide a stimulus to those who deal with design, optimization, and control issues in smart structures.
Georgios K. Tairidis, Georgia Foutsitzi, Georgios E. Stavroulakis
Tax Evasion as an Optimal Solution to a Partially Observable Markov Decision Process
Abstract
Motivated by the persistent phenomenon of tax evasion and the challenge of tax collection during economic crises, we explore the behavior of a risk-neutral, self-interested firm that may engage in tax evasion to maximize its profits. The firm evolves in a tax system which includes many “standard” features, such as audits, penalties, and occasional tax amnesties, and may be uncertain as to its tax status (not knowing, for example, whether a tax amnesty may be imminent). We show that the firm’s dynamics can be expressed via a partially observable Markov decision process and use that model to compute the firm’s optimal behavior and expected long-term discounted rewards in a variety of scenarios of practical interest. Going beyond previous work, we are able to investigate the effect of “leaks” or “pre-announcements” of any tax amnesties on the firm’s behavior (and thus on tax revenues). We also compute the effect on firm behavior of any extensions of the statute of limitations within which the firm’s tax filings can be audited, and show that such extensions can be a significant deterrent against tax evasion.
Paraskevi Papadopoulou, Dimitrios Hristu-Varsakelis
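At the heart of any such POMDP model is the Bayesian belief update over the hidden state. A minimal sketch (hypothetical two-state example: amnesty imminent or not, observed through a noisy "leak" signal; all probabilities are illustrative, not from the chapter):

```python
def belief_update(belief, obs, obs_model):
    """Bayes rule for a POMDP belief: P(s | o) proportional to P(o | s) b(s)."""
    posterior = {s: obs_model[s][obs] * b for s, b in belief.items()}
    z = sum(posterior.values())          # normalizing constant
    return {s: p / z for s, p in posterior.items()}

# Hidden state: will an amnesty occur soon?  Observation: "leak" or "silence".
obs_model = {
    "amnesty":    {"leak": 0.6, "silence": 0.4},
    "no_amnesty": {"leak": 0.1, "silence": 0.9},
}
belief = {"amnesty": 0.2, "no_amnesty": 0.8}   # prior
belief = belief_update(belief, "leak", obs_model)
print(round(belief["amnesty"], 3))   # 0.6
```

A leak thus triples the firm's belief that an amnesty is imminent, which in turn changes its optimal filing strategy; the full model also folds in state transitions, audits, and discounted rewards.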
Metadata
Title
Approximation and Optimization
Editors
Prof. Ioannis C. Demetriou
Prof. Panos M. Pardalos
Copyright Year
2019
Electronic ISBN
978-3-030-12767-1
Print ISBN
978-3-030-12766-4
DOI
https://doi.org/10.1007/978-3-030-12767-1
