2022 | Book

High-Dimensional Optimization and Probability

With a View Towards Data Science

Edited by: Ashkan Nikeghbali, Panos M. Pardalos, Andrei M. Raigorodskii, Michael Th. Rassias

Publisher: Springer International Publishing

Book series: Springer Optimization and Its Applications


About this book

This volume presents extensive research devoted to a broad spectrum of mathematics, with emphasis on interdisciplinary aspects of Optimization and Probability. Chapters also emphasize applications to Data Science, a timely field with high impact on our modern society. The discussion presents modern, state-of-the-art research results and advances in areas including non-convex optimization, decentralized distributed convex optimization, surrogate-based reduced-dimension global optimization in process systems engineering, the projection of a point onto a convex set, optimal sampling for learning sparse approximations in high dimensions, the split feasibility problem, higher-order embeddings, codifferentials and quasidifferentials of the expectation of nonsmooth random integrands, adjoint circuit chains associated with a random walk, analysis of the trade-off between sample size and precision in truncated ordinary least squares, spatial deep learning, efficient location-based tracking for IoT devices using compressive sensing and machine learning techniques, and nonsmooth mathematical programs with vanishing constraints in Banach spaces.

The book is a valuable source for graduate students as well as researchers working on Optimization, Probability and their various interconnections with a variety of other areas.

Chapter 12 is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of Contents

Frontmatter
Projection of a Point onto a Convex Set via Charged Balls Method
Abstract
Finding the projection of a point onto a convex set is a problem of computational geometry that plays an essential role in mathematical programming and nondifferentiable optimization. The recently proposed Charged Balls Method allows one to solve this problem when the boundary of the set is given by a smooth function. In this chapter, we give an overview of the Charged Balls Method and show that the method is also applicable in the nonsmooth case. More specifically, we consider a set defined by the maximum of a finite number of smooth functions. The obtained results show that even for a set with nonsmooth boundary, the Charged Balls Method solves the problem competitively in comparison with other algorithms. This is confirmed by the results of numerical experiments.
Majid E. Abbasov
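
To make the projection problem concrete, here is a minimal Python sketch of a naive baseline, not the Charged Balls Method itself: it approximates the projection onto a set given by a maximum of smooth functions via gradient descent on a quadratic penalty. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def project_penalty(p, gs, grads, lam=50.0, lr=1e-4, iters=20_000):
    """Approximate the projection of p onto {x : max_i g_i(x) <= 0} by
    minimizing ||x - p||^2 + lam * max(0, max_i g_i(x))^2 with plain
    gradient descent. A crude baseline, not the Charged Balls Method."""
    x = p.astype(float).copy()
    for _ in range(iters):
        vals = [g(x) for g in gs]
        i = int(np.argmax(vals))          # active piece of the max-function
        viol = max(vals[i], 0.0)
        grad = 2.0 * (x - p) + 2.0 * lam * viol * grads[i](x)
        x = x - lr * grad
    return x

# Example: project (2, 2) onto the unit disk {x : ||x||^2 - 1 <= 0}.
p = np.array([2.0, 2.0])
x_star = project_penalty(p, gs=[lambda x: x @ x - 1.0],
                         grads=[lambda x: 2.0 * x])
print(x_star)  # ~ (0.71, 0.71); exact projection is p/||p|| ~ (0.707, 0.707)
```
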
Towards Optimal Sampling for Learning Sparse Approximations in High Dimensions
Abstract
In this chapter, we discuss recent work on learning sparse approximations to high-dimensional functions from data, where the target functions may be scalar-, vector- or even Hilbert space-valued. Our main objective is to study how the sampling strategy affects the sample complexity—that is, the number of samples that suffice for accurate and stable recovery—and to use this insight to obtain optimal or near-optimal sampling procedures. We consider two settings. First, the case in which a target sparse representation is known, for which we present a near-complete answer based on drawing independent random samples from carefully designed probability measures. Second, the more challenging scenario in which such a representation is unknown. In this case, while not giving a full answer, we describe a general construction of sampling measures that improves over standard Monte Carlo sampling. We present examples using algebraic and trigonometric polynomials, and for the former, we also introduce a new procedure for function approximation on irregular (i.e. nontensorial) domains. The effectiveness of this procedure is shown through numerical examples. Finally, we discuss a number of structured sparsity models and how they may lead to better approximations.
Ben Adcock, Juan M. Cardenas, Nick Dexter, Sebastian Moraga
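
As a toy illustration of this setting (not the chapter's algorithms), the sketch below recovers a sparse Chebyshev expansion from few random samples via l1-minimization; drawing samples from the Chebyshev (arcsine) measure is one classical example of a sampling measure that improves over uniform Monte Carlo. All names and values are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval, chebvander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Ground truth: a sparse Chebyshev expansion (4 active terms out of 51).
c_true = np.zeros(51)
c_true[[1, 7, 18, 40]] = [1.0, -0.5, 0.25, 0.8]
f = lambda x: chebval(x, c_true)

# Draw m << 51 samples from the Chebyshev (arcsine) measure, a classical
# alternative to uniform Monte Carlo sampling for Chebyshev expansions.
m = 35
x = np.cos(np.pi * rng.random(m))
A = chebvander(x, 50)                 # m x 51 measurement matrix
y = f(x)

# l1-regularized least squares recovers the sparse coefficient vector.
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50_000)
lasso.fit(A, y)
print(f"max coefficient error: {np.max(np.abs(lasso.coef_ - c_true)):.3e}")
```
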
Recent Theoretical Advances in Non-Convex Optimization
Abstract
Motivated by recent increased interest in optimization algorithms for non-convex optimization, in applications to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in a reasonable time. Then we give a list of problems that can be solved efficiently to find the global minimizer by exploiting the structure of the problem as much as possible. Another way to deal with non-convexity is to relax the goal from finding the global minimum to finding a stationary point or a local minimum. For this setting, we first present known results for the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as the minimization of α-weakly quasi-convex functions and of functions satisfying the Polyak–Łojasiewicz condition, for which theoretical convergence guarantees of first-order methods can still be obtained. We then consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
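
For readers unfamiliar with the Polyak–Łojasiewicz (PL) condition mentioned above, the standard one-line argument shows why it yields linear convergence of gradient descent without any convexity assumption (assuming L-smoothness of f):

```latex
% Gradient descent under L-smoothness and the Polyak--Lojasiewicz (PL)
% condition: linear convergence without convexity.
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr)
  \qquad\text{(PL condition, $\mu > 0$),}
\]
\[
  x_{k+1} = x_k - \tfrac{1}{L}\,\nabla f(x_k)
  \;\;\Longrightarrow\;\;
  f(x_{k+1}) - f^*
  \;\le\; f(x_k) - f^* - \tfrac{1}{2L}\,\|\nabla f(x_k)\|^2
  \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr),
\]
\[
  \text{so } f(x_k) - f^* \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\bigl(f(x_0) - f^*\bigr).
\]
```
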
Higher Order Embeddings for the Composition of the Harmonic Projection and Homotopy Operators
Abstract
In this chapter, the higher order embedding estimates for the composition of the homotopy and harmonic projection operators on differential forms are constructed, the higher regularity of this composition is discussed, and some applications of the main results are presented.
Shusen Ding, Guannan Shi, Donna Sylvester
Codifferentials and Quasidifferentials of the Expectation of Nonsmooth Random Integrands and Two-Stage Stochastic Programming
Abstract
This work is devoted to an analysis of exact penalty functions and optimality conditions for nonsmooth two-stage stochastic programming problems. To this end, we first study the co-/quasidifferentiability of the expectation of nonsmooth random integrands and obtain explicit formulae for its codifferential and quasidifferential under some natural assumptions on the integrand. Then, we analyse exact penalty functions for a variational reformulation of two-stage stochastic programming problems and obtain sufficient conditions for the global exactness of these functions with two different penalty terms. At the end of the chapter, we combine our results on the co-/quasidifferentiability of the expectation of nonsmooth random integrands and on exact penalty functions to derive optimality conditions for nonsmooth two-stage stochastic programming problems in terms of codifferentials.
M. V. Dolgopolik
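
For orientation, the generic notion of a globally exact penalty function, in simplified notation that may differ from the chapter's, is:

```latex
% Generic exact penalty setup (simplified notation): constrained problem
%   min f(x)  s.t.  x in A,  phi(x) = 0,  with  phi >= 0 measuring violation,
% and its penalization
\[
  F_\lambda(x) \;=\; f(x) + \lambda\,\varphi(x).
\]
% F_lambda is called globally exact if there exists lambda* >= 0 such that,
% for every lambda > lambda*, the global minimizers of F_lambda on A coincide
% with the globally optimal solutions of the constrained problem.
```
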
On the Expected Extinction Time for the Adjoint Circuit Chains Associated with a Random Walk with Jumps in Random Environments
Abstract
We study appropriate expressions of the expected extinction time for the “adjoint” Markov chains that uniquely describe a nonhomogeneous random walk with jumps (with steps −1 or +1, or remaining in the same position, and with a right elastic barrier at 0) through their unique representations by directed circuits and weights (circuit chains) in random environments.
Chrysoula Ganatsiou
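
As a rough numerical companion to this setting, the sketch below estimates by Monte Carlo the expected time for a fixed-environment walk with steps −1, +1, or stay to reach 0; it is an illustration only and uses neither the chapter's circuit-chain representation nor random environments.

```python
import numpy as np

rng = np.random.default_rng(1)

def hitting_time(start, p_up, p_down, max_steps=100_000):
    """One sample of the time for a walk with steps +1 / -1 / stay
    to reach state 0 (a crude stand-in for 'extinction')."""
    x, t = start, 0
    while x > 0 and t < max_steps:
        u = rng.random()
        if u < p_up:
            x += 1
        elif u < p_up + p_down:
            x -= 1
        # else: stay in the same position
        t += 1
    return t

# Downward-biased walk from state 5: estimate E[extinction time].
samples = [hitting_time(5, p_up=0.3, p_down=0.5) for _ in range(2000)]
print(f"estimated expected extinction time: {np.mean(samples):.1f}")
# Drift per step is 0.3 - 0.5 = -0.2, so E[T] = 5 / 0.2 = 25 by Wald's identity.
```
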
A Statistical Learning Theory Approach for the Analysis of the Trade-off Between Sample Size and Precision in Truncated Ordinary Least Squares
Abstract
This chapter deals with linear regression problems for which one has the possibility of varying the supervision cost per example by controlling the conditional variance of the output given the feature vector. For a fixed upper bound on the total available supervision cost, the trade-off between the number of training examples and their precision of supervision is investigated, using a nonasymptotic, data-independent bound from the statistical learning theory literature. This bound is related to the truncated output of the ordinary least squares regression algorithm. The results of the analysis are also compared theoretically with those obtained in a previous work, based on a large-sample approximation of the untruncated output of ordinary least squares. Advantages and disadvantages of the investigated approach are discussed.
Giorgio Gnecco, Fabio Raciti, Daniela Selvi
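
The sketch below illustrates the trade-off empirically for plain (untruncated) OLS, the baseline the chapter compares against, under the illustrative assumption that the supervision cost per example is proportional to 1/σ², so a fixed budget forces a choice between many noisy examples and few precise ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 5, np.ones(5)

def ols_risk(n, sigma2, trials=300):
    """Average squared parameter error of OLS with n examples whose
    outputs have conditional variance sigma2."""
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
        bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs.append(np.sum((bhat - beta) ** 2))
    return np.mean(errs)

# Fixed total budget: cost per example ~ 1/sigma2, so n / sigma2 = 100.
for sigma2 in [0.25, 1.0, 4.0]:
    n = int(100 * sigma2)
    print(f"n={n:4d}, sigma2={sigma2:4.2f} -> risk {ols_risk(n, sigma2):.4f}")
```
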
Recent Theoretical Advances in Decentralized Distributed Convex Optimization
Abstract
In the last few years, the theory of decentralized distributed convex optimization has made significant progress. Lower bounds on communication rounds and oracle calls have appeared, as well as methods that reach both of these bounds. In this paper, we focus on how these results can be explained based on optimal algorithms for the non-distributed setup. In particular, we provide our recent results that have not been published yet and that previously could be found in detail only in arXiv preprints.
Eduard Gorbunov, Alexander Rogozin, Aleksandr Beznosikov, Darina Dvinskikh, Alexander Gasnikov
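
As background, here is a minimal sketch of the classical decentralized gradient descent (DGD) baseline that this literature improves upon; the network, step size, and local objectives are illustrative assumptions.

```python
import numpy as np

# Decentralized gradient descent (DGD) in its simplest form: n agents on a
# ring network each hold a local objective f_i(x) = 0.5 * (x - b_i)^2 and
# alternate a gossip-averaging step with a local gradient step.
n, T, gamma = 6, 600, 0.02
b = np.arange(n, dtype=float)      # local minimizers; global optimum = b.mean()

# Doubly stochastic mixing matrix for the ring (average with both neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                    # one scalar estimate per agent
for _ in range(T):
    x = W @ x - gamma * (x - b)    # gossip step + local gradient step
print(x, "vs optimum", b.mean())   # estimates within an O(gamma) bias of 2.5
```
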
On Training Set Selection in Spatial Deep Learning
Abstract
The careful design of experiments in spatial statistics aims at estimating models in an accurate way. In spatial deep learning for classifying spatial observations, the training set used to calibrate a model or network is usually chosen at random in order to obtain a representative sample. This chapter illustrates, with examples, that this is not necessarily the best way to proceed. Moreover, because windows are sometimes used to smooth signals, overlap may occur in the spatial data. On the one hand, this implies auto-correlation within the training set and, on the other hand, correlation between pixels used for training and for testing. Our question is how to measure such overlap and how to steer the selection of training sets. We describe an optimization problem to model and minimize the auto-correlation. A simple example is used to capture the concepts of design of experiments versus training set selection and the measurement of the overlap.
Eligius M. T. Hendrix, Mercedes Paoletti, Juan Mario Haut
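
A small sketch of the idea, under the illustrative assumption of an exponentially decaying correlogram (not the chapter's specific overlap measure): a spread-out max-min-distance design yields lower average auto-correlation in the training set than random selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pixels on a 20x20 grid; assume correlation decays as exp(-d / rho)
# (an illustrative correlogram, not the chapter's overlap measure).
coords = np.array([(i, j) for i in range(20) for j in range(20)], float)
rho = 3.0

def mean_autocorr(idx):
    """Mean pairwise (off-diagonal) correlation within a selection."""
    pts = coords[idx]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    corr = np.exp(-d / rho)
    n = len(idx)
    return (corr.sum() - n) / (n * (n - 1))

def greedy_spread(k):
    """Greedy max-min-distance design: add the pixel farthest from the set."""
    idx = [0]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(coords[:, None] - coords[idx][None],
                                  axis=-1), axis=1)
        idx.append(int(np.argmax(d)))
    return np.array(idx)

k = 30
rand_idx = rng.choice(len(coords), k, replace=False)
print(f"random selection   : {mean_autocorr(rand_idx):.3f}")
print(f"max-min-dist design: {mean_autocorr(greedy_spread(k)):.3f}")
```
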
Surrogate-Based Reduced-Dimension Global Optimization in Process Systems Engineering
Abstract
High-dimensional global optimization problems arise frequently in process systems engineering, as a result of the complex mechanistic relationships that describe process systems and/or their large-scale nature. High-dimensional optimization problems can often be solved more easily by instead solving a sequence of reduced-dimension subproblems. Surrogate models allow the formulation of such subproblems by approximating the key features of the original model. Surrogate-based optimization (SBO) uses surrogate modeling to solve a sequence of approximate reduced-dimension subproblems in order to converge to a high-quality solution of the original high-dimensional problem. Here we review the key characteristics of SBO frameworks and their application to process systems optimization problems.
Kody Kazda, Xiang Li
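
A minimal sketch of a generic SBO loop (illustrative only: it uses a cheap polynomial surrogate in one dimension rather than the reduced-dimension formulations reviewed in the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_f(x):
    """Stand-in for a costly process-systems model (1-D for illustration)."""
    return np.sin(3 * x) + 0.5 * x ** 2

# Minimal surrogate-based optimization loop: fit a cheap polynomial surrogate
# to all evaluated points, minimize the surrogate on a grid, evaluate the true
# model there, and refit; explore randomly once the surrogate minimum repeats.
X = list(np.linspace(-2.0, 2.0, 5))
Y = [expensive_f(x) for x in X]
grid = np.linspace(-2.0, 2.0, 401)
for _ in range(15):
    c = np.polyfit(X, Y, deg=4)                 # cheap quartic surrogate
    x_new = grid[np.argmin(np.polyval(c, grid))]
    if min(abs(x_new - x) for x in X) < 1e-3:   # surrogate minimum converged:
        x_new = rng.uniform(-2.0, 2.0)          # spend the budget exploring
    X.append(x_new)
    Y.append(expensive_f(x_new))
best = int(np.argmin(Y))
print(f"best point found: x = {X[best]:.3f}, f(x) = {Y[best]:.3f}")
```
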
A Viscosity Iterative Method with Alternated Inertial Terms for Solving the Split Feasibility Problem
Abstract
In this paper, we propose a viscosity iterative algorithm with an alternated inertial extrapolation step to solve the split feasibility problem, where a self-adaptive stepsize is used. Under appropriate conditions, the proposed algorithm is proved to converge to a solution of the split feasibility problem, which is also the unique solution of a variational inequality problem. Finally, we demonstrate the effectiveness of the algorithm with a numerical example.
Lulu Liu, Qiao-Li Dong, Shen Wang, Michael Th. Rassias
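
For orientation, below is the classical CQ iteration for the split feasibility problem, without the viscosity and alternated-inertial refinements the chapter introduces; the sets, matrix, and stepsize here are illustrative.

```python
import numpy as np

# Split feasibility problem: find x in C with A x in Q.
# Here C = unit ball in R^3, Q = box [0, 1]^2; the classical CQ iteration is
#   x_{k+1} = P_C( x_k - gamma * A^T (A x_k - P_Q(A x_k)) ).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

def P_C(x):                               # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def P_Q(y):                               # projection onto the box [0, 1]^2
    return np.clip(y, 0.0, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # stepsize in (0, 2 / ||A||^2)
x = np.array([5.0, -3.0, 2.0])
for _ in range(500):
    r = A @ x - P_Q(A @ x)                # residual of the 'Ax in Q' constraint
    x = P_C(x - gamma * A.T @ r)
print(x, A @ x)                           # x in C and A x (approximately) in Q
```
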

Open Access

Efficient Location-Based Tracking for IoT Devices Using Compressive Sensing and Machine Learning Techniques
Abstract
In this chapter, a scheme based on compressive sensing (CS) for the sparse reconstruction of down-sampled location data is presented for the first time. The underlying sparsity properties of the location data are explored, and two algorithms based on LASSO regression and neural networks are shown to be able to efficiently reconstruct paths from only ∼20% of the GPS receiver's samples. An implementation for iOS devices is discussed, and results from it are shown as a proof of concept of the applicability of CS to location-based tracking for Internet of Things (IoT) devices.
Ramy Aboushelbaya, Taimir Aguacil, Qiuting Huang, Peter A. Norreys
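
A minimal stand-in for such a scheme (illustrative assumptions throughout: a synthetic one-coordinate path, a DCT sparsity basis, and LASSO as the reconstruction algorithm):

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic 1-D 'path' (e.g. one GPS coordinate over time): smooth signals
# are approximately sparse in the DCT basis.
N = 256
t = np.linspace(0, 1, N)
path = 0.6 * np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)

# Keep only ~20% of the samples at random (down-sampled GPS readings).
m = N // 5
keep = np.sort(rng.choice(N, m, replace=False))

# Measurement matrix = selected rows of the inverse-DCT synthesis operator.
Psi = idct(np.eye(N), axis=0, norm='ortho')   # columns = DCT basis vectors
A = Psi[keep, :]

# LASSO recovers the sparse DCT coefficients from the kept 20% of samples.
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100_000)
lasso.fit(A, path[keep])
recon = Psi @ lasso.coef_
print(f"relative reconstruction error: "
      f"{np.linalg.norm(recon - path) / np.linalg.norm(path):.3e}")
```
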
Nonsmooth Mathematical Programs with Vanishing Constraints in Banach Spaces
Abstract
In this chapter, we study optimization problems with equality, inequality, and vanishing constraints in a Banach space, where the objective function and the binding constraints are either differentiable at the optimal solution or Lipschitz near it. We derive nonsmooth Karush–Kuhn–Tucker (KKT) type necessary optimality conditions for this problem, using Fréchet (or Gâteaux or Hadamard) derivatives for the differentiable functions and Michel–Penot (M–P) subdifferentials for the Lipschitz continuous functions. We also introduce, in terms of the M–P subdifferentials, several modifications of known constraint qualifications, such as the Abadie, Cottle, Slater, Mangasarian–Fromovitz, and linear independence constraint qualifications, for the above-mentioned problem, which is called a nonsmooth mathematical program with vanishing constraints (NMPVC), and establish relationships among them.
Vivek Laha, Vinay Singh, Yogendra Pandey, S. K. Mishra
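
For readers unfamiliar with vanishing constraints, the standard finite-dimensional prototype of such a program (here in simplified notation) reads:

```latex
% Standard form of a mathematical program with vanishing constraints (MPVC):
\[
  \min_{x \in \mathbb{R}^n} f(x)
  \quad \text{s.t.} \quad
  g_i(x) \le 0, \qquad h_j(x) = 0,
\]
\[
  H_k(x) \ge 0, \qquad G_k(x)\,H_k(x) \le 0, \qquad k = 1, \dots, l.
\]
% The constraint G_k(x) <= 0 effectively "vanishes" wherever H_k(x) = 0,
% which is what breaks the standard constraint qualifications at such points.
```
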
Metadata
Title
High-Dimensional Optimization and Probability
Edited by
Ashkan Nikeghbali
Panos M. Pardalos
Andrei M. Raigorodskii
Michael Th. Rassias
Copyright Year
2022
Electronic ISBN
978-3-031-00832-0
Print ISBN
978-3-031-00831-3
DOI
https://doi.org/10.1007/978-3-031-00832-0
