2017 | Book

Modeling and Optimization: Theory and Applications

MOPTA, Bethlehem, PA, USA, August 2016 Selected Contributions

About this book

This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications (MOPTA) conference held at Lehigh University in Bethlehem, Pennsylvania, USA, on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners working on both theoretical and practical aspects of continuous and discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, as well as applications of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

Table of contents

Frontmatter
Stochastic Decision Problems with Multiple Risk-Averse Agents
Abstract
We consider a stochastic decision problem, with dynamic risk measures, in which multiple risk-averse agents make their decisions to minimize their individual accumulated risk-costs over a finite-time horizon. Specifically, we introduce multi-structure dynamic risk measures induced from conditional g-expectations, where the latter are associated with the generator functionals of certain BSDEs that implicitly take into account the risk-cost functionals of the risk-averse agents. Here, we also assume that the solutions for such BSDEs almost surely satisfy a stochastic viability property w.r.t. a certain given closed convex set. Using a result similar to that of the Arrow–Barankin–Blackwell theorem, we establish the existence of consistent optimal decisions for the risk-averse agents, when the set of all Pareto optimal solutions, in the sense of viscosity solutions, for the associated dynamic programming equations is dense in the given closed convex set. Finally, we comment on the characteristics of acceptable risks w.r.t. some uncertain future outcomes or costs, where results from the dynamic risk analysis are part of the information used in the risk-averse decision criteria.
Getachew K. Befekadu, Alexander Veremyev, Vladimir Boginski, Eduardo L. Pasiliao
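For readers unfamiliar with conditional g-expectations, the standard background construction (a sketch only; the chapter's multi-agent generators and viability constraint are more specific) defines them through the solution of a backward stochastic differential equation:

```latex
% Conditional g-expectation via a BSDE (standard background form):
% given a terminal cost \xi, solve for (Y, Z)
Y_t \;=\; \xi \;+\; \int_t^T g(s, Y_s, Z_s)\,ds \;-\; \int_t^T Z_s\,dW_s ,
\qquad
\mathcal{E}_g[\xi \mid \mathcal{F}_t] \;:=\; Y_t .
```

Here W is a Brownian motion and the generator g encodes the risk attitude; the chapter couples several such generators, one per risk-averse agent, and requires the BSDE solutions to satisfy a stochastic viability property with respect to a given closed convex set.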
Optimal Packing of General Ellipses in a Circle
Abstract
Our objective is to find the optimal non-overlapping packing of a collection of general (non-identical) ellipses with respect to a container circle of minimal radius. Following a review of selected topical literature, we introduce a model development approach based on embedded Lagrange multipliers. Our optimization model has been implemented in the computing system Mathematica. We present illustrative numerical results using the LGO nonlinear (global and local) optimization software package linked to Mathematica. Our study shows that the Lagrangian modeling approach, combined with nonlinear optimization tools, can effectively handle challenging ellipse packing problems with hundreds of decision variables and non-convex constraints.
Frank J. Kampas, János D. Pintér, Ignacio Castillo
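For orientation, a generic statement of the packing problem treated above (a sketch of the standard formulation; the chapter's embedded Lagrange multiplier model handles the geometric conditions differently):

```latex
% Pack ellipses E_1, ..., E_n with given semi-axes into a circle of minimal radius r.
% Decision variables: the radius r, the centers c_i, and the rotation angles \theta_i.
\min_{r,\; c_i,\; \theta_i} \; r
\quad \text{s.t.} \quad
E_i(c_i, \theta_i) \subseteq B(0, r), \qquad i = 1, \dots, n ,
\qquad
\operatorname{int} E_i(c_i, \theta_i) \,\cap\, \operatorname{int} E_j(c_j, \theta_j) = \emptyset, \qquad 1 \le i < j \le n .
```

Both the containment and the pairwise non-overlap conditions are non-convex, which is why a global-local solver such as LGO is employed.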
Column Generation Approach to the Convex Recoloring Problem on a Tree
Abstract
The convex recoloring (CR) problem is to recolor the nodes of a colored graph with the minimum number of color changes so that each color class induces a connected subgraph. We adapt the column generation framework developed by Johnson et al. (Math Program 62:133–151, 1993) to the convex recoloring problem. For the convex recoloring problem on a tree, the pricing subproblem that generates columns can be solved in polynomial time by a dynamic programming algorithm. The column generation framework solves the convex recoloring problem on a tree with a large number of colors extremely fast.
Sunil Chopra, Ergin Erdem, Eunseok Kim, Sangho Shim
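As background, a hedged sketch of the generic column generation scheme being adapted: a restricted master problem over a subset of columns is solved, its dual prices are passed to a pricing subproblem (here solvable in polynomial time by dynamic programming on the tree), and columns of negative reduced cost are added until none remain.

```latex
% Restricted master problem over a column subset C' (generic set-partitioning form;
% the chapter's exact master for convex recoloring may differ in detail):
\min \; \sum_{C \in \mathcal{C}'} c_C\, x_C
\quad \text{s.t.} \quad
\sum_{C \in \mathcal{C}'} a_{vC}\, x_C = 1 \;\; \text{for every node } v, \qquad x \ge 0 .
% Pricing with dual prices \pi: find a column C minimizing the reduced cost
\bar{c}_C \;=\; c_C - \sum_{v} \pi_v\, a_{vC} ,
% and stop when the minimum reduced cost is nonnegative.
```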
A Variational Inequality Formulation of a Migration Model with Random Data
Abstract
In this note, we consider a simple model of population distribution based on utility function theory. The novelty of our approach is the use of the recent theory of random variational inequalities to refine a previous model by allowing random fluctuations in the data of the problem. We first present the random equilibrium conditions and prove their equivalence to a parametric random variational inequality. Then, we provide a formulation of the problem in a Lebesgue space with a probability measure. Finally, we work out a simple example, which can be solved exactly and allows us to test an approximation procedure.
Baasansuren Jadamba, Fabio Raciti
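For reference, the generic format of a parametric random variational inequality (the chapter's specific operator and constraint set come from the migration model):

```latex
% For P-almost every realization \omega, find x(\omega) \in K such that
\langle F(\omega, x(\omega)),\, y - x(\omega) \rangle \;\ge\; 0
\qquad \text{for all } y \in K .
```

The pointwise conditions are then lifted to a Lebesgue space over the probability space, which is the formulation used to justify the approximation procedure tested in the example.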
Identification in Mixed Variational Problems by Adjoint Methods with Applications
Abstract
This work develops a fast and reliable computational framework for the inverse problem of identifying variable parameters in abstract mixed variational problems. One of the main contributions is a thorough derivation of efficient computational schemes for evaluating the gradient and the Hessian of the output least-squares (OLS) functional. All formulas are derived in both the continuous and the discrete setting. Detailed numerical results for different classes of problems are presented.
M. Cho, B. Jadamba, A. A. Khan, A. A. Oberai, M. Sama
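To fix ideas, here is the standard adjoint recipe for the gradient of an output least-squares functional in a plain (non-mixed) variational setting; it is a background sketch only, while the chapter derives the analogous gradient and Hessian formulas for mixed variational problems.

```latex
% State problem for parameter q: find u = u(q) with  a(q)(u, v) = \ell(v)  for all test functions v.
% OLS functional:
J(q) \;=\; \tfrac{1}{2}\, \| u(q) - z \|^2 .
% Adjoint problem: find p with  a(q)(v, p) = ( u(q) - z,\, v )  for all v.  Then
DJ(q)\,\delta q \;=\; -\, a'(q)(\delta q)\,(u(q), p) ,
% so one state solve and one adjoint solve yield the full gradient,
% independently of the number of parameters.
```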
Minimization of the L^p-Norm, p ≥ 1, of Dirichlet-Type Boundary Controls for the 1D Wave Equation
Abstract
This paper presents a modified method for solving the problem of optimal boundary control by the displacements μ(t) and ν(t) at the two ends of a string over a large time interval T = 2ln, where n = 1, 2, 3, …. The minimization is carried out in the space L^p with p ≥ 1. In addition, the derivatives of the above-mentioned control functions are shown to be 2l-periodic.
Ilya Smirnov, Anastasia Dmitrieva
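A hedged sketch of the setting (unit wave speed is assumed here for consistency with the horizon T = 2ln; the precise initial/terminal data and objective are given in the chapter): the displacement u(x, t) of a string of length l obeys the one-dimensional wave equation and is driven only through Dirichlet data at the two ends.

```latex
% 1D wave equation with Dirichlet boundary controls \mu and \nu (unit wave speed assumed):
u_{tt}(x, t) \;=\; u_{xx}(x, t), \qquad 0 < x < l, \;\; 0 < t < T = 2 l n ,
u(0, t) \;=\; \mu(t), \qquad u(l, t) \;=\; \nu(t) ,
% with the controls chosen, among those steering the string between the prescribed states,
% to minimize an L^p(0, T) norm criterion with p >= 1.
```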
Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme Under Weak Strong Convexity Assumption
Abstract
We propose a projected semi-stochastic gradient descent method with mini-batches (PS2GD) that improves both the theoretical complexity and the practical performance of the general stochastic gradient descent (SGD) method. We prove linear convergence under a weak strong convexity assumption; that is, no strong convexity is required for minimizing a sum of smooth convex functions over a compact polyhedral set, a setting that remains popular in the machine learning community. PS2GD preserves the low cost per iteration and high optimization accuracy of variance-reduced stochastic gradient techniques and admits a simple parallel implementation with mini-batches. Moreover, PS2GD is also applicable to the dual problem of SVM with hinge loss.
Jie Liu, Martin Takáč
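For intuition, the following is a minimal, self-contained sketch of a projected variance-reduced mini-batch gradient method in the spirit described above, applied to a toy box-constrained least-squares problem. The step size, loop lengths, sampling scheme, and the box constraint are illustrative assumptions, not the chapter's exact PS2GD parameters.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d (a simple stand-in for a polyhedral set)."""
    return np.clip(x, lo, hi)

def projected_svrg(A, b, step=0.05, outer=30, inner=200, batch=5, seed=0):
    """Minimize (1/2n)||Ax - b||^2 over a box with a projected, variance-reduced
    mini-batch gradient method (an SVRG-style sketch, not the authors' exact PS2GD)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(outer):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n          # full gradient at the snapshot point
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            Ai, bi = A[idx], b[idx]
            # variance-reduced mini-batch gradient estimate
            g = (Ai.T @ (Ai @ x - bi) - Ai.T @ (Ai @ x_snap - bi)) / batch + full_grad
            x = project_box(x - step * g)               # gradient step followed by projection
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.uniform(-0.5, 0.5, size=10)            # ground truth inside the box
    b = A @ x_true
    x_hat = projected_svrg(A, b)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```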
Exact Separation of k-Projection Polytope Constraints
Abstract
A critical step of any cutting plane algorithm is to find valid inequalities, or more generally valid constraints, that improve the current relaxation of the integer-constrained problem. We consider k-projection polytope constraints, a family of constraints based on an inner description of the cut polytope of size k that are applied to the k × k principal minors of the matrix variable of a semidefinite optimization relaxation. We propose a bilevel second-order cone optimization approach to find the maximally violated k-projection polytope constraint according to a specific depth measure, and we reformulate the bilevel problem as a single-level mixed-binary second-order cone optimization problem. We report computational results obtained with the proposed approach within a cutting plane algorithm on max-cut instances with 500 and 600 nodes.
Elspeth Adams, Miguel F. Anjos
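For orientation, the relaxation these cuts strengthen is the standard semidefinite relaxation of max-cut; the definition of the k-projection polytope constraints and the bilevel separation problem are in the chapter.

```latex
% Standard SDP relaxation of max-cut on a graph with Laplacian L:
\max \; \tfrac{1}{4} \langle L, X \rangle
\quad \text{s.t.} \quad
\operatorname{diag}(X) = e, \qquad X \succeq 0 .
% The proposed constraints restrict k x k principal submatrices of X using an inner
% description of the cut polytope of size k.
```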
Univariate Polynomial Optimization with Sum-of-Squares Interpolants
Abstract
One of the most common tools in polynomial optimization is the approximation of the cone of nonnegative polynomials by the cone of sum-of-squares polynomials. This leads to polynomial-time solvable approximations of many NP-hard optimization problems using semidefinite programming (SDP). While theoretically satisfactory, the translation of optimization problems involving sum-of-squares polynomials into SDPs is not always practical. First, in the common SDP formulation, the dual variables are semidefinite matrices whose condition numbers grow exponentially with the degree of the polynomials involved, which is detrimental for a floating-point implementation. Second, the SDP representation of sum-of-squares polynomials roughly squares the number of optimization variables, increasing the time and memory complexity of the solution algorithms by several orders of magnitude. In this paper we focus on the first, numerical, issue. We show that a reformulation of the sum-of-squares SDP using polynomial interpolants yields a substantial improvement over the standard formulation: problems involving sum-of-squares interpolants of degree in the hundreds can be handled without difficulty by commonly used semidefinite programming solvers. Preliminary numerical results on semi-infinite optimization problems align with the theoretical predictions. In all problems considered, available memory is the only factor limiting the degree of the polynomials.
Dávid Papp
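As a reference point, the standard correspondence between sum-of-squares polynomials and semidefinite matrices, stated here in a generic basis; the chapter's reformulation amounts to choosing an interpolant basis, which improves the conditioning of the resulting SDP.

```latex
% A polynomial p of degree 2d is a sum of squares iff it admits a Gram matrix Q with
p(x) \;=\; v(x)^{\mathsf T} Q\, v(x), \qquad Q \succeq 0 ,
% where v(x) collects a basis of polynomials of degree at most d (monomials in the
% standard SDP formulation, interpolants in the formulation studied in the chapter).
```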
Metadata
Title
Modeling and Optimization: Theory and Applications
Edited by
Martin Takáč
Tamás Terlaky
Copyright year
2017
Electronic ISBN
978-3-319-66616-7
Print ISBN
978-3-319-66615-0
DOI
https://doi.org/10.1007/978-3-319-66616-7