1996 | Book

Stochastic Decomposition

A Statistical Method for Large Scale Stochastic Linear Programming

Authors: Julia L. Higle, Suvrajeet Sen

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications


About this book

Stochastic Linear Programming (SLP) with recourse represents one of the more widely applicable models for incorporating uncertainty within optimization models. There are several arenas in which the SLP model is appropriate, and such models have found applications in airline yield management, capacity planning, electric power generation planning, financial planning, logistics, telecommunications network planning, and many more. In some of these applications, modelers represent uncertainty in terms of only a few scenarios and formulate a large scale linear program which is then solved using LP software. However, there are many applications, such as the telecommunications planning problem discussed in this book, where a handful of scenarios do not capture variability well enough to provide a reasonable model of the actual decision-making problem. Problems of this type easily exceed the capabilities of LP software by several orders of magnitude. Their solution requires the use of algorithmic methods that exploit the structure of the SLP model in a manner that will accommodate large scale applications.

Table of contents

Frontmatter
Chapter 1. Two Stage Stochastic Linear Programs
Abstract
Over the past several decades, linear programming (LP) has established itself as one of the most fundamental tools for planning. Its applications have become routine in several disciplines, including engineering, business, economics, environmental studies, and many others. One may attribute this widespread acceptance to: (a) an understanding of the power and scope of LP among practitioners, (b) good algorithms, and (c) widely available and reliable software. Furthermore, research on specialized problems (e.g., assignment, transportation, networks, etc.) has made LP methodology indispensable to numerous industries, including transportation, energy, manufacturing, and telecommunications, to name a few. Notwithstanding its success, we note that traditional LP models are deterministic models. That is, all objective function and constraint coefficients are assumed to be known with precision. The assumption that all model parameters are known with certainty serves to limit the usefulness of the approach when planning under uncertainty.
Julia L. Higle, Suvrajeet Sen
Chapter 2. Sampling Within Stochastic Linear Programming
Abstract
We are faced with the task of solving two stage stochastic linear programs with recourse (SLP), such as those discussed in Chapter 1. As noted in Theorem 1.1, problems in this class are convex programs, and in principle any of a number of convex programming algorithms (e.g., subgradient methods, cutting plane methods, Lagrangian based methods, etc.) can be used to solve SLPs. From the discussions in Chapter 1, it is clear that in most cases the stochastic nature of the problem precludes the precise determination of subgradients and objective function values. When the presence of random variables prevents the precise determination of such quantities, it is natural to use random samples in the development of statistical estimates of these quantities. Until recently, only subgradient methods were incorporated within a sampling framework, and the resulting methods became known as stochastic quasi-gradient (SQG) methods (see Ermoliev [1988] for a survey). While SQG methods are applicable to very general stochastic convex programs, they suffer from many of the drawbacks of deterministic subgradient methods. In particular, the choice of effective step lengths is often problem dependent. In addition, the incorporation of optimality criteria within these algorithms remains elusive. Nevertheless, because of the incorporation of sampling within the algorithm, SQG methods are able to address SLPs with a large number of outcomes, as well as problems with continuous random variables. In developing the Stochastic Decomposition (SD) method, our goal is to bestow these advantages on cutting plane algorithms, which have remained the mainstay for SLPs for several decades (Van Slyke and Wets [1969], Ruszczyński [1986], Birge and Louveaux [1988], Gassmann [1990], etc.). The randomization of cutting plane methods has provided the capability of solving truly large scale stochastic programs such as SSN and STORM, presented in Chapter 1.
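As a purely illustrative companion to the sampling idea described in this abstract, the sketch below estimates the expected recourse cost at a fixed first-stage decision by drawing random outcomes, solving a small second-stage linear program for each one with scipy.optimize.linprog, and averaging. The problem data (q, W, T), the fixed decision x, and the sampling distribution are hypothetical placeholders; this is not the SD algorithm itself, only the statistical estimation step that sampling-based methods build upon.

```python
# A minimal sketch of sample-based estimation of the expected recourse cost
# E[h(x, omega)] at a fixed first-stage decision x.  All data below are
# hypothetical placeholders, not taken from the book.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

q = np.array([1.0, 2.0])         # second-stage cost vector
W = np.array([[1.0, 1.0]])       # recourse matrix (fixed across outcomes)
T = np.array([[1.0, 0.5]])       # technology matrix
x = np.array([0.3, 0.4])         # fixed first-stage decision

def sample_rhs():
    """Draw one observation of the random right-hand side r(omega)."""
    return np.array([rng.uniform(0.5, 1.5)])

def recourse_cost(x, r):
    """h(x, omega) = min { q'y : W y >= r - T x, y >= 0 }."""
    rhs = r - T @ x
    # linprog expects A_ub y <= b_ub, so negate both sides of W y >= rhs.
    res = linprog(q, A_ub=-W, b_ub=-rhs, bounds=[(0, None)] * len(q),
                  method="highs")
    return res.fun

# Sample-mean estimate of the expected recourse cost at x.
N = 200
estimate = np.mean([recourse_cost(x, sample_rhs()) for _ in range(N)])
print(f"estimated E[h(x, omega)] from {N} samples: {estimate:.4f}")
```

An estimated subgradient could be assembled from the optimal dual multipliers of the same subproblems; the contribution of SD, developed in the following chapters, is to organize such sampled information into cutting plane approximations that can be updated cheaply from one iteration to the next.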
Julia L. Higle, Suvrajeet Sen
Chapter 3. Foundations of Stochastic Decomposition
Abstract
In Chapter 2, we explored the Successive Sample Mean Optimization method, SSMO, whereby the stochastic linear program (P) could be solved using a sequence of randomly generated observations of \(\tilde \omega \), \(\{\omega^t\}\). This method yields a sequence of iterates, all accumulation points of which are optimal solutions (with probability one). In our discussions of SSMO, we identified two major characteristics of SSMO that prevent it from being computationally efficient. First, it becomes increasingly difficult to update the sample mean objective function from one iteration to the next. In addition, as the sample size increases, the evaluation of the sample mean function and its subgradients becomes increasingly difficult. Given the number of such evaluations typically required in cutting plane methods, the prospects for SSMO as an efficient solution procedure are limited.
Julia L. Higle, Suvrajeet Sen
Chapter 4. Stabilizing Stochastic Decomposition
Abstract
The principles developed in Chapter 3 lay the foundation for Stochastic Decomposition (SD) algorithms. We note that although the objective function approximations developed by an SD algorithm are considerably less accurate than the sample mean function, the SD approximations are sufficiently accurate to ensure asymptotic optimality for a subsequence of iterates. Moreover, the manner in which the objective function approximation is updated as additional observations of \(\tilde \omega \) are obtained streamlines the computational requirements of the method. Example 3.2, which is depicted in Figure 3.4, illustrates the impact of the procedure for updating the cutting planes. Specifically, while this mechanism ensures that the objective function approximation is asymptotically accurate near the iterates, it also ensures that any given cutting plane will eventually become redundant.
Julia L. Higle, Suvrajeet Sen
Chapter 5. Stopping Rules for Stochastic Decomposition
Abstract
In our development thus far, we have concentrated on objective function approximations within a Stochastic Decomposition algorithm and on establishing asymptotic optimality of the solutions provided by SD. While such asymptotic properties provide the necessary analytic foundation for the algorithm, any practical computer implementation requires effective stopping criteria. It is important to recognize that when using sampled data to solve a problem, standard deterministic stopping rules are inadequate. We will develop specialized optimality tests that take advantage of the information generated during the course of the SD algorithm.
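As a rough, purely illustrative sketch (not the optimality tests developed in this chapter), the following fragment shows why a deterministic stopping rule is unreliable when objective values are only available as sample-mean estimates: two estimates taken at the very same point differ by an amount driven entirely by sampling noise, so an observed change is only meaningful when judged against its standard error. The objective and the distribution below are hypothetical.

```python
# Purely illustrative: why a deterministic stopping rule misreads sampled data.
# Two sample-mean estimates of the objective taken at the SAME point differ by
# an amount driven entirely by sampling noise, so a change in the estimate is
# only meaningful relative to its standard error.  The objective E[(x - omega)^2]
# and the distribution of omega are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def estimate_objective(x, n_samples=50):
    """Sample-mean estimate of E[(x - omega)^2], omega ~ N(0, 1),
    together with the standard error of the estimate."""
    omega = rng.normal(0.0, 1.0, size=n_samples)
    samples = (x - omega) ** 2
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

x = 0.0                                  # the true minimizer of E[(x - omega)^2]
f_prev, se_prev = estimate_objective(x)
f_curr, se_curr = estimate_objective(x)  # same point, a fresh sample

change = abs(f_prev - f_curr)
noise = np.hypot(se_prev, se_curr)       # std. error of the observed difference

print(f"observed change in the estimate: {change:.4f} (true change is 0)")
print(f"std. error of that difference:   {noise:.4f}")

# A deterministic rule compares the raw change against a tolerance and will
# typically refuse to stop here; a noise-aware rule treats a change within a
# couple of standard errors as indistinguishable from zero.
tol = 1e-3
print("deterministic rule stops:", change < tol)
print("noise-aware rule stops:  ", change <= 2.0 * noise)
```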
Julia L. Higle, Suvrajeet Sen
Chapter 6. Guidelines for Computer Implementation
Abstract
The Stochastic Decomposition algorithms presented in previous chapters are designed to take computational advantage of the special structure of a two stage stochastic linear program with recourse. This structure resides in the large “core” of subproblem data that is common to all realizations of the random variable \(\tilde \omega \). To see this, recall that a two stage SLP may be stated as follows.
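The statement referred to above is not reproduced on this page. In the form commonly used for two stage SLPs with recourse (the notation below is generic rather than quoted from the book), the problem reads
\[
\min_{x}\; c^{\top}x + E\big[h(x,\tilde{\omega})\big] \quad \text{subject to} \quad Ax \le b,\ x \ge 0,
\]
where, for a realization \(\omega\) of \(\tilde{\omega}\), the recourse function is
\[
h(x,\omega) \;=\; \min_{y}\;\big\{\, g^{\top}y \;:\; Wy \ge r(\omega) - T(\omega)x,\ y \ge 0 \,\big\}.
\]
In this generic notation, the "core" mentioned in the abstract corresponds to the subproblem data \((g, W)\) that remain fixed across realizations, while \(r(\omega)\) and \(T(\omega)\) carry the randomness.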
Julia L. Higle, Suvrajeet Sen
Chapter 7. Illustrative Computational Experiments
Abstract
Throughout this book, we have presented the Stochastic Decomposition algorithm and a number of features that have been designed to enhance its computational effectiveness. In this chapter we illustrate SD’s computational viability through the results of various computational experiments that have been conducted. As one might expect, the SD computer programs have evolved over time, and the results reported in this chapter were obtained from various generations of the program. To facilitate comparisons, we have attempted to report in a manner that is not dependent upon the particular implementation used. We note that the tasks in each iteration are well defined (i.e., solve a subproblem, execute the argmax procedure, update the cut coefficients, etc.), and the time required for each task depends critically upon the care with which it is implemented, as well as the machine on which the program is run. In this chapter, we focus primarily on measures related to the number of iterations required. Additionally, because SD uses sampled data in its quest for an optimal solution, it is important to review the quality of the solutions produced. For this reason, we report the extent to which the objective value associated with the SD solution deviates from the optimal objective value whenever possible. Finally, we explore run time characteristics, such as solution times and memory requirements, for our large scale implementation as well.
Julia L. Higle, Suvrajeet Sen
Backmatter
Metadata
Title
Stochastic Decomposition
Authors
Julia L. Higle
Suvrajeet Sen
Copyright year
1996
Publisher
Springer US
Electronic ISBN
978-1-4615-4115-8
Print ISBN
978-1-4613-6845-8
DOI
https://doi.org/10.1007/978-1-4615-4115-8