
1982 | Book

Evaluating Mathematical Programming Techniques

Proceedings of a Conference Held at the National Bureau of Standards, Boulder, Colorado, January 5–6, 1981

Edited by: Prof. John M. Mulvey

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Economics and Mathematical Systems


Table of Contents

Frontmatter

Opening Address

Opening Address

During the past decade, algorithmic research has focused on the design and development of efficient digital computer optimization software. From this research, an important interface between mathematics and computer science, called computer implementation technology, has evolved. While the origins of this technology date back to 1952 and the implementation of the stepping-stone method on the National Bureau of Standards’ Eastern Automatic Computer (Hoffman [4]), it has only been recently recognized as a major discipline in its own right.

Darwin Klingman

Design and Use of Problem Generators and Hand Selected Test Cases

Test Problems for Computational Experiments -- Issues and Techniques

This paper compares and contrasts real world and random generated test problems as sources of standard tests for mathematical programming algorithms. Real problems are viewed as realizations from a test population of interest, and random problems are treated as models for the population. Methodological advantages and difficulties inherent in the alternatives are highlighted, and methods for dealing with the limitations discussed.

Ronald L. Rardin, Benjamin W. Lin
Netgen-II: A System for Generating Structured Network-Based Mathematical Programming Test Problems

The increased importance of designing and implementing algorithms to solve particular management problems has created the need for more robust test problem generators that can match the overall structure and parameter values of these problems. Of particular interest are management problems that can be modeled using a network structure. This paper discusses the design of a system for generating network-based mathematical programming test problems that conform to user-supplied structural and parameter characteristics.

Joyce J. Elam, Darwin Klingman
The Definition and Generation of Geometrically Random Linear Constraint Sets

The conventional procedure for generating random linear constraint sets independently and uniformly selects the constraint coefficients. Structure is usually imposed through some kind of rejection technique. Recent work of Van Dam and Telgen indicates that this type of generator tends to produce geometrically atypical polytopes. We define and construct a generator that produces geometrically random constraint sets; that is, their probability measure is invariant to the choice of coordinate system used. Moreover, an extremely efficient technique is presented for making an unbiased selection of a feasible polytope. Conventional approaches often guarantee feasibility by implicitly selecting that randomly generated polytope covering the origin. Such a procedure biases the selection in favor of large polytopes.

Jerrold H. May, Robert L. Smith
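The conventional procedure the abstract critiques can be sketched in a few lines. The following Python fragment is an illustrative sketch, not code from the book: it selects coefficients independently and uniformly, and guarantees feasibility by rejecting any draw whose polytope fails to cover the origin, which is the implicit selection the authors argue biases toward large polytopes.

```python
import random

def conventional_constraint_set(m, n, seed=None):
    """Draw an m-constraint system Ax <= b with coefficients selected
    independently and uniformly, rejecting draws until the origin is
    feasible (all b_i >= 0), i.e. until the polytope covers the origin."""
    rng = random.Random(seed)
    while True:
        A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]
        b = [rng.uniform(-1.0, 1.0) for _ in range(m)]
        if all(bi >= 0.0 for bi in b):  # x = 0 satisfies every row
            return A, b

A, b = conventional_constraint_set(m=5, n=3, seed=1)
assert all(bi >= 0.0 for bi in b)
```

Since each draw is accepted with probability 2^-m, the rejection loop is cheap only for modest m; the point here is the bias, not efficiency.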
Construction of Nonlinear Programming Test Problems with Known Solution Characteristics

The ability to supply realistic-looking test problems with known characteristics is essential for testing the efficiency and robustness of nonlinear programming algorithms and codes. This paper deals with methods of constructing (random) nonlinear programming test problems with known solutions and specified characteristics which usually affect the success of algorithms and their related software. In particular, it is shown how test problems with given multiple stationary points can be constructed, allowing for programs with several local optima, thus providing for more rigorous testing.

Gideon Lidor
A Comparison of Real-World Linear Programs and Their Randomly Generated Analogs

The intent of the study is to determine the ability of randomly-generated problems to simulate real-world problems for the purposes of testing, benchmarking and comparing software implementations of solution algorithms, and to determine if the “degree of randomness” is related to the difficulty in obtaining a solution. Randomly-generated analogs are defined to be problems created with some of the characteristics of a real-world (actual application) problem, but containing data with random elements. These analogs fall into classes that can be characterized by “nearness” to the real-world problem. The first class is obtained by randomly perturbing the problem data. The second class is obtained by randomizing the ones in the Boolean image of the problem data. The third class consists of problems obtained from a software generator that accepts the problem characteristics as input. The random elements are generated from uniform and normal distributions. The measures of difficulty include central processor time, iterations and bumps in the optimal basis. Several optimizers are used in the study. In general, the results show that the difficulty increases with the “degree of randomness” and the difficulty difference increases as a function of size.

Richard P. O’Neill

Nonlinear Optimization Codes and Empirical Tests

Evidence of Fundamental Difficulties in Nonlinear Optimization Code Comparisons

Several nonlinear optimization code comparisons have been published, providing data for picking a code for a given application or developing new codes. The common performance measures (number of function evaluations, standardized computer time, number of problems solved) are intuitively machine independent, which encourages such use. Unfortunately, the relative performance of optimization codes does depend on the computer and compiler used for testing, and this dependence is evident regardless of the performance measure. In addition, relative performance measured on a single machine may depend significantly on the desired degree of accuracy, the choice of test problem(s), the chosen performance measure, and even the time of day (machine workload) when the tests are run. Numerical evidence of these difficulties is presented, based on tests of the same problem and algorithm decks on several different computers, with various compilers, problem sets, accuracy levels, and performance measures.

E. D. Eason
A Statistical Review of the Sandgren-Ragsdell Comparative Study

A statistical analysis of the solution times of the algorithms in the Sandgren-Ragsdell study is conducted. An analysis of variance is performed to demonstrate that there is statistical evidence that selected codes are superior to others on the basis of their relative solution times. A logarithmic transformation is used to produce a seminormal distribution of the solution times with a variance assumed to be equal for all of the algorithms. A paired comparison is then conducted on the differences in the mean logarithmic solution times for each of the algorithms over the entire test problem set. The selected confidence level for all comparisons was fixed at 95%. The factors contributing to the success of this analysis are discussed as well as the additional data which would be required to conduct this type of analysis in general.

Eric Sandgren
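As a concrete illustration of the transformation the abstract describes, the short Python fragment below (using hypothetical timing data, not figures from the study) log-transforms paired solution times for two codes and computes the mean per-problem difference of log times, the quantity on which such a paired comparison is based.

```python
import math
from statistics import mean

# Hypothetical solution times (seconds) for two codes on the same problems.
times_a = [1.2, 0.8, 5.1, 2.4, 0.9]
times_b = [1.9, 1.1, 8.3, 3.0, 1.6]

# Logarithmic transformation of the solution times.
log_a = [math.log10(t) for t in times_a]
log_b = [math.log10(t) for t in times_b]

# Paired comparison on per-problem differences of the log times;
# a positive mean difference favors code A (its times are smaller).
diffs = [lb - la for la, lb in zip(log_a, log_b)]
print(f"mean log10 ratio (B/A): {mean(diffs):.3f}")
```

In the study proper this mean difference would be accompanied by an analysis of variance and a 95% confidence level; the fragment shows only the transformation step.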
A Methodological Approach to Testing of NLP-Software

Prior to detailed testing of software there should be: (1) knowledge of the software and the underlying algorithms; (2) knowledge of the class of problems to be solved by this software; and (3) goals of testing. We present a methodological setup for testing nonlinear programming software which is based on prior knowledge. This leads to a discussion of the following topics: representativity of test problems, use of test problem generators, performance measures, presentation of test results, and presentation of final conclusions. The concepts will be illustrated with a project which has been performed to evaluate a set of Newton-like methods for solving systems of nonlinear equations.

Jacques C. P. Bus

Integer Programming and Combinatorial Optimization

A Computational Comparison of Five Heuristic Algorithms for the Euclidean Traveling Salesman Problem

This paper presents a computational comparison of five heuristic algorithms for the traveling salesman problem (TSP). Each of these algorithms is a composite algorithm consisting of a simple tour construction algorithm and a post-processor. The five tour construction procedures considered are all insertion algorithms which may be easily implemented. The post-processor used in the five composite algorithms is a modified 3-opt procedure that is shown to outperform both the 3-opt and 2-opt branch exchange procedures. The final result of the computational tests is the conclusion that there are simple and easily implemented heuristic procedures that will produce high quality solutions to the TSP in a moderate amount of computer time.

William R. Stewart Jr.
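The modified 3-opt post-processor itself is not reproduced in this abstract, but the simpler 2-opt branch exchange it is compared against can be sketched briefly. The Python fragment below is an illustrative implementation on hypothetical coordinates, not code from the paper: it repeatedly reverses a tour segment whenever the reversal shortens a Euclidean tour, stopping at a 2-opt local optimum.

```python
import math
from itertools import combinations

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour over point indices."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse a tour segment whenever doing so shortens the tour,
    until no improving 2-exchange remains (first-improvement rule)."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(1, len(tour) + 1), 2):
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate, pts) < tour_length(tour, pts) - 1e-12:
                tour, improved = candidate, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # corners of a unit square
tour = two_opt([0, 1, 2, 3], pts)       # start tour crosses itself
print(tour_length(tour, pts))           # prints 4.0
```

On the unit square the starting tour has two crossing edges (length 2 + 2√2); one exchange removes the crossing and yields the optimal perimeter of 4.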
Implementing an Algorithm: Performance Considerations and a Case Study

After a general discussion of program performance and its influence factors we consider a case study. A specific version of TOYODA’s heuristic algorithms for solving multidimensional knapsack problems was implemented in three different FORTRAN programs. Starting from a straightforward program, we introduce increasingly sophisticated data structures to improve the virtual execution time of the programs. All programs were compiled with the enhanced version of the FORTRAN H compiler available on IBM/370, 43xx, 303x machines, using its four optimization options. A computational study was performed with a series of randomly generated test problems with up to 3200 variables and constraints and about 70000 nonzeros. The purpose of this study was to measure virtual central processor times and working set sizes as a function of problem size and to study the influence of code optimization performed by the compiler. Since all three programs are representations of the same algorithm, conclusions can be drawn as to the relative importance of the influence factors on program performance. As is also demonstrated, the potential performance of a representation of an abstract algorithm may be difficult to project without a careful implementation.

Uwe Suhl
Which Options Provide the Quickest Solutions

Frequently algorithm users can select their solution strategy by choosing from among various options for each of several algorithm factors. If the algorithm will always eventually find a solution, the important question is which combination of options is likely to be “best”. A general statistical approach to answering this question is illustrated in the context of a new integer linear programming algorithm where “best” is quickest.

William J. Riley, Robert L. Sielken Jr.
An Integer Programming Test Problem Generator

In this paper we present a methodology for evaluating new integer programming algorithms -- an area of mathematical programming which is not yet well understood (see [Lin and Rardin, 1979]). We present our procedure not so much as a recommendation for others to follow but as a basis for discussion and further work.

Michael G. Chang, Fred Shepardson

Comparative Computational Studies in Mathematical Programming

Comparative Computational Studies in Mathematical Programming

This session is concerned with identifying ideas that will guide future studies on comparisons of algorithms and codes. To do so we have selected three recent computational studies with differing viewpoints. The authors of these studies will make a brief presentation outlining the methodology they used, their views on the good and bad points in the study, and their suggestions for future studies. Following these presentations members of the panel, who have been selected so as to cover a wide range of opinions on the subject, will comment on the points that have been made. The discussion will then be open to the general audience.

Ron S. Dembo
Remarks on the Evaluation of Nonlinear Programming Algorithms

What do we expect from a nonlinear programming code that implements a known mathematical algorithm? Our primary interest seems to lie not in the design of the code and its structures but in the execution of the code since the latter will determine the efficiency, reliability, and robustness of the code. While the details of the algorithm can be read from the literature and the code can be examined from a (hopefully) well-documented listing of the programs, the author of a code should also explain the rationale behind the chosen strategy and the various choices which have been made. It is the choices rather than the decisions that represent the essential information.

D. M. Himmelblau
Comments on Evaluating Algorithms and Codes for Mathematical Programming

In this paper we comment on comparative computational studies for mathematical programming, in particular the papers by Miele and Gonzalez [3], Sandgren and Ragsdell [5], and Schittkowski [7], and give some of our views on this subject. We suggest the papers by Gill and Murray [2] and Moré [4], and the book they are contained in (Fosdick [1]), as other important reading on this subject.

Robert B. Schnabel
Some Comments on Recent Computational Testing in Mathematical Programming

The scope of these comments is too limited for extensive discussion concerning studies (1), (2) and (3). We shall restrict ourselves to comments concerning (1).

Jacques C. P. Bus
Remarks on the Comparative Experiments of Miele, Sandgren, and Schittkowski

I will in this brief discussion offer critical remarks on the recent comparative experiments of Miele [1], Sandgren [2] and Schittkowski [3]. Because of my closeness to the work of my former doctoral student, Eric Sandgren, it is difficult for me to be completely objective in evaluating his work. I will comment on his work in spite of this obvious bias.

K. M. Ragsdell

Testing Methodologies

In Pursuit of a Methodology for Testing Mathematical Programming Software

To resolve the often conflicting and confusing results of computational testing of mathematical software reported in the literature, the Committee on Algorithms of the Mathematical Programming Society has developed guidelines which present minimum standards to which all papers reporting such efforts should conform. Although guidelines now exist, the development of sound methodologies for this testing is at the embryonic stage. This paper surveys the results to date of computational test efforts in each of the major fields of mathematical programming and reviews to what extent these efforts have advanced the goal of developing methodologies for testing MP software. Directions for future research are presented.

K. L. Hoffman, R. H. F. Jackson
Nonlinear Programming Methods with Linear Least Squares Subproblems

This paper presents the results of an extensive comparative study of nonlinear optimization algorithms, cf. [8]. This study indicates that quadratic approximation methods, which are characterized by solving a sequence of quadratic subproblems recursively, belong to the most efficient and reliable nonlinear programming algorithms available at present. The purpose of the paper is to investigate their numerical performance in more detail. In particular, the dependence of the overall performance on alternative quadratic subproblem strategies is tested. The paper indicates how the efficiency of quadratic approximation methods can be improved.

Klaus Schittkowski
An Outline for Comparison Testing of Mathematical Software -- Illustrated by Comparison Testings of Software Which Solves Systems of Nonlinear Equations

This paper presents an outline for doing comparison testing of mathematical software. The outline is illustrated with examples from two comparison testings. One comparison tested software that solves nonlinear least squares problems, while the other tested software that solves square systems of equations.

K. L. Hiebert
A Portable Package for Testing Minimization Algorithms

A package of routines designed to aid in the process of testing minimization (or other types of) algorithms will be discussed. The intention is to provide a vehicle for evaluating codes which is portable, versatile and simple to use. Ease of use should encourage algorithm designers to use the package, versatility will ensure that it is applicable to testing many different routines, and portability will mean that it is available to anyone. This last point should contribute to consistency in the testing of algorithms. The emphasis is not on actual testing, but is on the presentation of some ideas of how one can design a useful testing mechanism.

A. Buckley

Approaches to Software Testing from Other Disciplines

Transportable Test Procedures for Elementary Function Software

This paper explores the principles and numerics behind the use of identities to test the accuracy of elementary function software. The success of this approach depends critically on proper matching of identity and test interval with the purpose of the test, and on careful implementation based on an understanding of computer arithmetic systems and inherent accuracy limitations for computational software.

W. J. Cody
Testing and Evaluation of Statistical Software

Data sets analyzed by statisticians are likely to be ill-conditioned for the kinds of computations ordinarily performed on them. For this reason, most of the activity in testing and validation of statistical software has centered on numerical error analysis. The most generally effective testing and validation method has been test data generation. Once the parameters of numerical condition are identified, methods for systematically generating test data sets can be developed. In the case of least squares computations, for example, collinearity and stiffness seem to be the major components of condition. Test data sets with prespecified collinearity and stiffness are discussed. The problem of software validation goes beyond the tests of the kernels that perform numerical computations. A program may involve the use of several computational modules, and errors in the program often occur in the setup stages in moving from one computational module to another. A stochastic model for errors remaining in a program after a sequence of tests is presented and discussed. There are many statistical computations for which perturbation methods can be used easily to assess correctness. Perturbation methods can be employed by the user so as to avoid the bigger question of testing the software; the test is for the performance of the software on the specific problem of interest.

James E. Gentle
Toolpack — An Integrated System of Tools for Mathematical Software Development

This paper describes the approach being taken in configuring a set of tool capabilities whose goal is the support of mathematical software development. The TOOLPACK tool set is being designed to support such activities as editing, testing, analysis, formatting, transformation, documentation and porting of code. Tools for realizing most of these functional capabilities already exist, yet TOOLPACK aims to do far more than simply bring them together as a collection of side-by-side individual tools. TOOLPACK seeks to merge these capabilities into a system which is smoothly integrated both internally and from a user’s external point of view. The internal integration approach involves the decomposition of all tools into a more or less standard set of modular “tool fragments”. The external integration approach involves the creation of a command language and a set of conceptual entities which is close to the conceptual set used by mathematical software writers in the process of creating their software. This paper describes both of these integration approaches as well as the rather considerable and novel software support needed to make them work.

Leon Osterweil
Overview of Testing of Numerical Software

The purposes of program testing are to expose errors; to insure that standards for portability, robustness, etc. are met; and to evaluate performance. A selection of work in these three applications of program testing is presented here. A variety of tools and techniques for exposing errors in programs have been proposed. They range from measurements of testing thoroughness, to models for predicting errors, to attempts to prove a limited form of correctness. For various reasons, including cost, availability of tools, and interpretation of results, testing of numerical software for the purpose of exposing errors has made only limited use of these tools and techniques. The problem of insuring that standards are met is easier to deal with, and one tool in particular, the PFORT verifier, has been widely used to insure portability. Tools for transforming programs to meet certain requirements have also been used in the production of numerical software, as for example in the production of LINPACK. Program testing for performance evaluation has relied primarily on the use of selected test problems. Collections of test problems for several subject areas — ordinary differential equations, linear equations, and optimization — exist, and have received extensive use. Performance profiles have been used to present results of performance testing in a succinct and meaningful way. Modeling has been used to evaluate the performance of an algorithmic idea without incurring the expense of actually executing all of the supporting software.

Lloyd D. Fosdick
The Application of Halstead’s Software Science Difficulty Measure to a Set of Programming Projects

The difficulty measure, as defined by Halstead [1], is shown to have useful applications in the measurement of certain properties in code implementations. This difficulty measure reflects the effort required to code an algorithm, to test it, to inspect and review it, and to understand it later when the code needs alterations. This paper explains how the difficulty metric reveals insights into the structure of a program module and also into some possible code impurities within the module. It also shows that assembler language programs are significantly more difficult than higher-level PL/S language programs. The author proposes that a maximum level (or threshold) of difficulty should be established to help manage the complexity of programs.

Charles P. Smith

Special Topics

Mathematical Programming Algorithms in APL

The APL programming language offers a way of succinctly expressing data processing algorithms. We will introduce and demonstrate those APL concepts which we find useful for designing, implementing, and testing mathematical programming procedures.

Harlan Crowder

Advances in Networks

Solution Strategies and Algorithm Behavior in Large-Scale Network Codes

This paper surveys results of a recent set of experiments testing different solution strategy parameters and implementation schemes for the U.S. Department of the Treasury’s in/out-of-core, primal-simplex-based solution system for optimally merging microdata files. Numerous runs were implemented to study the effects of pricing and pivoting rules, data storage technique, compiler, data page size, and problem density on algorithm behavior. Improvements over previous performance levels are noted.

Richard S. Barr
Recursive Piecewise-Linear Approximation Methods for Nonlinear Networks

The use of recursive piecewise-linear approximations in network optimization problems with convex separable objectives allows the utilization of fast solution methods for the corresponding linear network subproblems. Piecewise-linear approximations also enjoy many advantages over other types of approximations, including the ability to utilize simultaneously information from infeasible as well as feasible points (so that results of Lagrangian relaxations may be directly employed in the approximation), guaranteed decrease in the objective without the need for any line search, and easily computed and tight bounds on the optimal value. The speed of this approach will be exemplified by the presentation of computational results for large-scale nonlinear networks arising from econometric and water supply system applications.

R. R. Meyer
Computational Testing of Assignment Algorithms

The methods used in an empirical evaluation of several recently proposed assignment algorithms are discussed. Brief descriptions of the algorithms tested are also included.

Michael Engquist

On Establishing a Group for Testing Mathematical Programs

“Establishing a Group for Testing Mathematical Programs”
A Panel Discussion

Provided here is a summary of the comments made during this panel discussion. While every attempt was made to insure accuracy, J. Mulvey, who functioned as chairman during this discussion, claims full responsibility for any inaccuracies or misrepresentations which may persist. Names and addresses of all contributors to the discussion appear in the Appendix.

John M. Mulvey

Appendix

Appendix

A description of the conference program and a list of participants are contained in the Appendix. The conference brought together MP algorithmic researchers, computer scientists, MP users and consultants, and software developers from industry to examine how mathematical programming techniques ought to be evaluated.

John M. Mulvey
COAL Conference Participants
John M. Mulvey
A Model for the Performance Evaluation in Comparative Studies

The paper describes a model for the performance evaluation in an arbitrary comparative study of mathematical algorithms. Distinctions will be made between “static” testing, i.e. evaluation of the design of the corresponding programs, and “dynamic” testing, i.e. evaluation of the performance of these programs when implemented and run on a digital computer. For both levels, relevant criteria for choosing among codes are described. Advantages and disadvantages of real life, hand selected, and randomly generated test problems, respectively, are outlined. It is furthermore shown how a decision maker can combine a variety of performance measures into a final score for each major criterion and program.

Klaus Schittkowski
Remarks on the Comparative Evaluation of Algorithms for Mathematical Programming Problems

In this presentation, we consider the minimization of a function f = f(x), where f is a scalar and x is an n-vector whose components are unconstrained. We consider the comparative evaluation of algorithms for unconstrained minimization. We are concerned with the measurement of the computational speed and examine critically the concept of the equivalent number of function evaluations Ne, which is defined by
$$N_e = N_0 + n N_1 + m N_2.$$
Here, N0 is the number of function evaluations; N1 is the number of gradient evaluations; N2 is the number of Hessian evaluations; n is the dimension of the vector x; and m = n(n+1)/2 is the number of elements of the Hessian matrix above and including the principal diagonal. We ask the following question: Does the use of the quantity Ne constitute a fair way of comparing different algorithms?

A. Miele
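The equivalent-evaluation count defined in the abstract translates directly into code. The following sketch uses hypothetical evaluation counts, not data from the paper, to show the arithmetic of Ne = N0 + n·N1 + m·N2 with m = n(n+1)/2.

```python
def equivalent_evaluations(n0, n1, n2, n):
    """Equivalent number of function evaluations, as defined above:
    N_e = N_0 + n*N_1 + m*N_2, where m = n(n+1)/2 counts the Hessian
    elements on and above the principal diagonal."""
    m = n * (n + 1) // 2
    return n0 + n * n1 + m * n2

# Hypothetical run: 40 function, 12 gradient, 3 Hessian evaluations, n = 5
print(equivalent_evaluations(40, 12, 3, 5))  # prints 145 (m = 15)
```

Weighting a gradient as n function evaluations and a Hessian as m assumes finite-difference costs; the paper questions whether that weighting compares algorithms fairly.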
Remarks on a Testing Center

My first reaction to the idea of a test center was on the positive side. However, the more I thought about the operational details, the more I cooled to the idea. After sober reflection, my conclusions were negative, for the reasons indicated below.

Angelo Miele
Systematic Approach for Comparing the Computational Speed of Unconstrained Minimization Algorithms

The comparison of mathematical programming algorithms is inherently difficult, especially when deriving general conclusions about the relative usefulness, applicability, and efficiency of different algorithms. The problem is complicated by the variety of approaches used to compare algorithms. Most often, some approaches ignore essential aspects of the comparison or fail to provide sufficient information about the following items: (a) clarification of the objectives of the comparison; (b) clear and complete description of the algorithms being compared; (c) specification of the memory requirements; (d) use of the same experimental conditions for all of the algorithms being compared; (e) sufficient information about the experimental conditions and the numerical results, so as to make them easily reproducible; (f) use of enough performance indexes to ensure the fulfillment of the objectives of the comparison; (g) use of a reasonably large set of test problems having different characteristics; (h) clarification of the way in which the derivatives are computed; (i) measurement of the computational speed; (j) measurement of the effect of the stopping conditions; (k) measurement of the sensitivity to scaling; (l) presentation of the convergence rates; (m) use of several nominal points for each test problem; and (n) use of a standard format to present the result of the comparison.

S. Gonzalez
The Evaluation of Optimization Software for Engineering Design

The design and implementation of two major comparative experiments is reviewed with particular emphasis on testing methodology. These studies took place at Purdue University. The first, an investigation of the merits of general purpose nonlinear programming codes with major funding from the National Science Foundation, was conducted over the period 1973 to 1977. The second, an investigation of the relative merits of various geometric programming strategies and their code implementations with funding from the Office of Naval Research, was conducted over the period 1974 to 1978. The various major decisions associated with such studies are discussed, such as the selection and collection of problems and codes, the nature of data to be collected, evaluation criteria, ranking schemes, presentation and distribution of results, and the technical design of the experiment itself. The statistical implications of the results in light of the experiment design are examined, as are the effects of various experiment parameters such as number of variables, number of constraints, degree of nonlinearity in the objective and constraints, and starting point placement. Finally, recommendations for future experiments are given.

K. M. Ragsdell
Backmatter
Metadata
Title
Evaluating Mathematical Programming Techniques
Edited by
Prof. John M. Mulvey
Copyright year
1982
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-95406-1
Print ISBN
978-3-540-11495-6
DOI
https://doi.org/10.1007/978-3-642-95406-1