
2005 | Book

The Next Wave in Computing, Optimization, and Decision Technologies


About this book

Computer Science and Operations Research continue to have a synergistic relationship, and this book represents the results of the cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state of the art in the interface between OR/MS and CS/AI and of the high-caliber research being conducted by members of the INFORMS Computing Society.

Table of Contents

Frontmatter

Network

Frontmatter
On the Complexity of Delaying an Adversary’s Project
Abstract
A “project manager” wishes to complete a project (e.g., a weapons-development program) as quickly as possible. Using a limited interdiction budget, an “interdictor” wishes to delay the project’s overall completion time by interdicting and thereby delaying some of the project’s component tasks. We explore a variety of PERT-based interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard. We suggest methods for solving the problems that are easier than worst-case complexity implies.
Gerald G. Brown, W. Matthew Carlyle, Johannes O. Royset, R. Kevin Wood
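The interdiction setting this abstract describes can be made concrete with a toy model. The sketch below is ours, not the authors' PERT-based formulations: project completion time is the longest-path finish time in an activity-on-node network, and a budget-constrained interdictor chooses a subset of tasks to delay so as to maximize that completion time. The brute-force search and all task data are hypothetical and suitable only for tiny instances.

```python
# Toy project-interdiction sketch (illustrative only, not the paper's models).
from itertools import combinations

def completion_time(duration, preds, delay):
    """Longest-path finish time of an acyclic activity-on-node project network."""
    finish = {}
    def f(t):
        if t not in finish:
            start = max((f(p) for p in preds.get(t, [])), default=0.0)
            finish[t] = start + duration[t] + delay.get(t, 0.0)
        return finish[t]
    return max(f(t) for t in duration)

def best_interdiction(duration, preds, cost, added_delay, budget):
    """Brute force over interdicted task subsets; fine only for tiny instances."""
    tasks, best = list(duration), (0.0, frozenset())
    for k in range(len(tasks) + 1):
        for subset in combinations(tasks, k):
            if sum(cost[t] for t in subset) > budget:
                continue
            t_done = completion_time(duration, preds,
                                     {t: added_delay[t] for t in subset})
            best = max(best, (t_done, frozenset(subset)))
    return best

# Hypothetical 4-task project, unit interdiction costs, budget of 2.
duration = {"a": 3, "b": 2, "c": 4, "d": 1}
preds = {"c": ["a", "b"], "d": ["c"]}
cost = {t: 1 for t in duration}
added_delay = {t: 2 for t in duration}
print(best_interdiction(duration, preds, cost, added_delay, budget=2))
```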
A Note on Eswaran and Tarjan’s Algorithm for the Strong Connectivity Augmentation Problem
Abstract
In a seminal paper Eswaran and Tarjan [1] introduced several augmentation problems and presented linear time algorithms for them. This paper points out an error in Eswaran and Tarjan’s algorithm for the strong connectivity augmentation problem. Consequently, the application of their algorithm can result in a network that is not strongly connected. Luckily, the error can be fixed fairly easily, and this note points out the remedy yielding a “corrected” linear time algorithm for the strong connectivity augmentation problem.
S. Raghavan
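As a hedged illustration of the problem setting only (not the correction described in this note), the sketch below condenses a digraph into its DAG of strongly connected components and reports the classical Eswaran-Tarjan lower bound max(#sources, #sinks) on how many arcs must be added to make the graph strongly connected. It relies on networkx for the condensation; the example graph is hypothetical.

```python
# Strong connectivity augmentation: lower bound from the condensation DAG.
import networkx as nx

def augmentation_lower_bound(g: nx.DiGraph) -> int:
    cond = nx.condensation(g)                 # DAG of strongly connected components
    if cond.number_of_nodes() <= 1:
        return 0                              # already strongly connected
    sources = sum(1 for v in cond if cond.in_degree(v) == 0)
    sinks = sum(1 for v in cond if cond.out_degree(v) == 0)
    return max(sources, sinks)                # classical Eswaran-Tarjan bound

g = nx.DiGraph([(1, 2), (2, 1), (2, 3), (4, 3)])
print(augmentation_lower_bound(g))            # 2 arcs needed for this example
```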

Integer and Mixed Integer Programming

Frontmatter
Generating Set Partitioning Test Problems with Known Optimal Integer Solutions
Abstract
In this work, we investigate methods for generating set partitioning test problems with known integer solutions. The problems are generated with various cost structures so that their solution by well-known integer programming methods can be shown to be difficult. Computational results are obtained using the branch-and-bound methods of the CPLEX solver. Possible extensions to the area of cardinality probing of the solutions are considered.
Edward K. Baker, Anito Joseph, Brenda Rayco
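The planting idea behind such generators can be sketched as follows. This is our greatly simplified illustration, not the paper's generator: a partition of the ground set is chosen as the known optimal columns, and distractor columns are priced high enough that the planted partition remains optimal.

```python
# Simplified "planted optimum" set partitioning generator (illustrative only).
import random

def generate_instance(n_elements=9, block_size=3, n_extra=20, seed=0):
    rng = random.Random(seed)
    elements = list(range(n_elements))
    rng.shuffle(elements)
    planted = [frozenset(elements[i:i + block_size])
               for i in range(0, n_elements, block_size)]
    columns = [(col, 1.0) for col in planted]          # cheap planted columns
    while len(columns) < len(planted) + n_extra:
        size = rng.randint(2, 4)
        col = frozenset(rng.sample(range(n_elements), size))
        if col not in planted:
            # distractors cost at least 1 + 0.5 per element, so the planted
            # partition (total cost = number of planted blocks) stays optimal
            columns.append((col, 1.0 + size * rng.uniform(0.5, 1.5)))
    return columns, planted

columns, planted = generate_instance()
print(len(columns), "columns; planted optimum uses", len(planted), "columns")
```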
Computational Aspects of Controlled Tabular Adjustment: Algorithm and Analysis
Abstract
Statistical agencies have used complementary cell suppression to limit statistical disclosure in tabular data for four decades. Cell suppression results in significant information loss and reduces the usefulness of published data, significantly so for the unsophisticated user. Furthermore, computing optimal complementary cell suppressions is known to be an NP-hard problem. In this paper, we explore a recent method for limiting disclosure in tabular data, controlled tabular adjustment (CTA). Based on a mixed integer-programming model for CTA, we present a procedure that provides a lower bound on the objective, which is demonstrated to decrease the computational effort required to solve the model. We perform experiments to examine heuristics for CTA proposed elsewhere that can be used to convert the MIP problem into linear programming problems while preserving essential information.
Lawrence H. Cox, James P. Kelly, Rahul J. Patil
The Symphony Callable Library for Mixed Integer Programming
Abstract
SYMPHONY is a customizable, open-source library for solving mixed-integer linear programs (MILP) by branch, cut, and price. With its large assortment of parameter settings, user callback functions, and compile-time options, SYMPHONY can be configured as a generic MILP solver or an engine for solving difficult MILPs by means of a fully customized algorithm. SYMPHONY can run on a variety of architectures, including single-processor, distributed-memory parallel, and shared-memory parallel architectures under MS Windows, Linux, and other Unix operating systems. The latest version is implemented as a callable library that can be accessed either through calls to the native C application program interface, or through a C++ interface class derived from the COIN-OR Open Solver Interface. Among its new features are the ability to solve bicriteria MILPs, the ability to stop and warm start MILP computations after modifying parameters or problem data, the ability to create persistent cut pools, and the ability to perform rudimentary sensitivity analysis on MILPs.
Ted K. Ralphs, Menal Güzelsoy

Heuristic Search

Frontmatter
Hybrid Graph Heuristics within a Hyper-Heuristic Approach to Exam Timetabling Problems
Abstract
This paper is concerned with the hybridization of two graph coloring heuristics (Saturation Degree and Largest Degree), and their application within a hyper-heuristic for exam timetabling problems. Hyper-heuristics can be seen as algorithms which intelligently select appropriate algorithms/heuristics for solving a problem. We developed a Tabu Search based hyper-heuristic to search for heuristic lists (of graph heuristics) for solving problems and investigated the heuristic lists found by employing knowledge discovery techniques. Two hybrid approaches (involving Saturation Degree and Largest Degree) including one which employs Case Based Reasoning are presented and discussed. Both the Tabu Search based hyper-heuristic and the hybrid approaches are tested on random and real-world exam timetabling problems. Experimental results are comparable with the best state-of-the-art approaches (as measured against established benchmark problems). The results also demonstrate an increased level of generality in our approach.
Edmund Burke, Moshe Dror, Sanja Petrovic, Rong Qu
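For readers unfamiliar with the two low-level heuristics being hybridized, the following sketch applies Largest Degree and Saturation Degree orderings to a toy exam conflict graph. It illustrates only the underlying graph-colouring heuristics; the Tabu Search hyper-heuristic and Case Based Reasoning layers of the paper are not shown, and the conflict data are made up.

```python
# Largest Degree vs. Saturation Degree ordering on a toy exam conflict graph.
def timetable(conflicts, n_slots, heuristic="saturation"):
    exams = list(conflicts)
    slot_of = {}
    while len(slot_of) < len(exams):
        unscheduled = [e for e in exams if e not in slot_of]
        if heuristic == "largest":
            # Largest Degree: most conflicting exam first (static ordering)
            nxt = max(unscheduled, key=lambda e: len(conflicts[e]))
        else:
            # Saturation Degree: exam whose conflicting exams already occupy
            # the largest number of distinct timeslots (dynamic ordering)
            nxt = max(unscheduled,
                      key=lambda e: len({slot_of[c] for c in conflicts[e] if c in slot_of}))
        used = {slot_of[c] for c in conflicts[nxt] if c in slot_of}
        free = [s for s in range(n_slots) if s not in used]
        if not free:
            return None                       # toy version: no backtracking
        slot_of[nxt] = free[0]
    return slot_of

conflicts = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(timetable(conflicts, n_slots=3))
```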
Metaheuristics Comparison for the Minimum Labelling Spanning Tree Problem
Abstract
We study the Minimum Labelling Spanning Tree Problem: given a graph G with a color (label) assigned to each edge (not necessarily properly), we look for a spanning tree of G with the minimum number of different colors. The problem has several applications in telecommunication networks, electric networks, and multimodal transportation networks, among others, where one aims to ensure connectivity by means of homogeneous connections. For this NP-hard problem, very few heuristics that give good-quality solutions have been presented in the literature. In this paper we apply several metaheuristic approaches to solve the problem. These approaches are able to improve over the existing heuristics presented in the literature. Furthermore, a comparison with the results provided by an exact approach from the literature shows that we may quite easily obtain optimal or close-to-optimal solutions.
Raffaele Cerulli, Andreas Fink, Monica Gentili, Stefan Voß
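To make the problem concrete, the following sketch implements an MVCA-style greedy construction (repeatedly add the label that most reduces the number of connected components until the chosen-label subgraph is connected), which is the kind of baseline heuristic the paper's metaheuristics improve upon. This is a hedged illustration using networkx; the edge data are hypothetical.

```python
# MVCA-style greedy heuristic for the Minimum Labelling Spanning Tree problem.
import networkx as nx

def greedy_label_set(n_nodes, labelled_edges):
    """labelled_edges: list of (u, v, label). Returns a connecting set of labels."""
    chosen, labels = set(), {lab for _, _, lab in labelled_edges}

    def components(label_set):
        g = nx.Graph()
        g.add_nodes_from(range(n_nodes))
        g.add_edges_from((u, v) for u, v, lab in labelled_edges if lab in label_set)
        return nx.number_connected_components(g)

    while components(chosen) > 1:
        best = min(labels - chosen, key=lambda lab: components(chosen | {lab}))
        chosen.add(best)
    return chosen    # any spanning tree of the chosen-label subgraph uses only these labels

edges = [(0, 1, "red"), (1, 2, "red"), (2, 3, "blue"), (0, 3, "green"), (1, 3, "blue")]
print(greedy_label_set(4, edges))    # e.g. a 2-label set such as {'red', 'blue'}
```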
A New Tabu Search Heuristic for the Site-Dependent Vehicle Routing Problem
Abstract
The site-dependent vehicle routing problem takes into account some real-life applications of the basic vehicle routing problem in which there are compatibility dependencies between customers (sites) and vehicle types. Every customer is associated with a set of allowable vehicle types and has to select exactly one of them. A series of basic vehicle routing problems is then solved over the customers that select the same vehicle type. The objective is to minimize the total distance traveled (or the total travel cost incurred) by the fleet, while all constraints of the basic vehicle routing problem as well as the site-dependency constraints must be satisfied. In this paper, we present a new heuristic method that combines tabu search with the deviation concept of the deterministic annealing method, carrying out intensification and diversification by varying the deviation values within two different ranges. We test the method on a set of 23 benchmark problems taken from the literature, and the computational results show that the new method can solve the problem quite effectively.
I-Ming Chao, Tian-Shy Liou
A Heuristic Method to Solve the Size Assortment Problem
Abstract
This paper considers the size assortment problem, in which a large number of size distributions (e.g., for retail stores) need to be aggregated into a relatively small number of groups in an optimal fashion. All stores within a group are then allocated merchandise according to their common size distribution. A neighborhood search heuristic is developed to produce near-optimal solutions. We investigate the use of both random and “intelligent” starting solutions to initiate the heuristic. The intelligent starting solutions are based on efficiently solving one-dimensional versions of the original problem and then combining these into consensus solutions. Computational results are reported for some small specially structured test problems, as well as some large test problems obtained from an industrial client.
Kenneth W. Flowers, Beth A. Novick, Douglas R. Shier
Heuristic Methods for Solving Euclidean Non-Uniform Steiner Tree Problems
Abstract
We consider a variation of the Euclidean Steiner Tree Problem in which the space underlying the set of nodes has a specified non-uniform cost structure. This problem is significant in many practical situations, such as laying cable networks, where the cost for laying a cable can be highly dependent on the location and the nature of the area through which it is to be laid. We empirically test the performance of a genetic-algorithm-based procedure on a variety of test cases of this problem. We also consider the impact on solutions of charging an additional fee per Steiner node. This can be important when Steiner nodes represent junctions that require the installation of additional hardware. In addition, we present a novel way of visualizing the performance and robustness of the genetic algorithm.
Ian Frommer, Bruce Golden, Guruprasad Pundoor
Modeling and Solving a Selection and Assignment Problem
Abstract
In this paper, we first provide an MIP formulation of a selection/assignment problem and then we discuss a solution method based both on the use of a commercial general-purpose scatter-search and a simple implementation of tabu search. This optimization problem is related to a research project supported by the Office of Naval Research where sailors need to be selected to perform a set of jobs that require specific skill levels. The results of our computational experiments indicate the usefulness of the software system for workforce planning that we have developed.
Manuel Laguna, Terry Wubbena
Solving the Time Dependent Traveling Salesman Problem
Abstract
In the standard version of the traveling salesman problem (TSP), we are given a set of customers located in and around a city and the distances between each pair of customers, and need to find the shortest tour that visits each customer exactly once. Suppose that some of the customers are located in the center of the city. Within a window of time, center city becomes congested so that the time to travel between customers takes longer. Clearly, we would like to construct a tour that avoids visiting customers when the center of the city is congested. This variant of the TSP is known as the time dependent TSP (TDTSP). We review the literature on the TDTSP, develop two solution algorithms, and report computational experience with our algorithms.
Feiyue Li, Bruce Golden, Edward Wasil
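A toy illustration of the time-dependent objective (not the authors' solution algorithms): travel times between center-city customers are inflated during a congestion window, so the cost of a tour depends on when each leg is driven. All distances, the congestion window, and the multiplier below are hypothetical.

```python
# Time-dependent TSP toy: tour cost depends on when each leg is driven.
from itertools import permutations

def tour_time(tour, base_time, in_center, window=(20.0, 40.0), factor=2.0):
    t = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):         # return to the start
        leg = base_time[a][b]
        if in_center[a] and in_center[b] and window[0] <= t < window[1]:
            leg *= factor                                # congestion surcharge
        t += leg
    return t

base_time = [[0, 10, 15, 12], [10, 0, 9, 14], [15, 9, 0, 8], [12, 14, 8, 0]]
in_center = [False, True, True, False]
best = min((tour_time((0,) + p, base_time, in_center), (0,) + p)
           for p in permutations(range(1, 4)))
print(best)   # here the best tour drives the center-to-center leg before the window opens
```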
The Maximal Multiple-Representation Species Problem Solved Using Heuristic Concentration
Abstract
The Maximal Multiple-Representation Species Problem (MMRSP) is formulated here and examined using two different integer program formulations solved by LP plus branch and bound, as well as the metaheuristic known as Heuristic Concentration (HC). It is seen that for some instances of this problem, the exact method can be allowed to run for a long time (> 1 day) without termination, while HC can find optimal or near-optimal solutions to the same instances in a few seconds. In one such case, LP-IP was allowed to run for 1,000 times longer than HC and still found a worse solution. Furthermore, HC found the optimal solution in 72.3% of cases and had an objective value gap of less than 1% in 94% of cases. Even when HC takes longer than LP-IP, the longest run-time was under 20 minutes. Therefore, HC is a valuable tool for approaching the MMRSP.
Michelle M. Mizumori, Charles S. ReVelle, Justin C. Williams

Stochastic Modeling

Frontmatter
Fast and Efficient Model-Based Clustering with the Ascent-EM Algorithm
Abstract
In this paper we propose an efficient and fast EM algorithm for model-based clustering of large databases. Drawing ideas from its stochastic descendant, the Monte Carlo EM algorithm, the method uses only a sub-sample of the entire database per iteration. Starting with smaller samples in the earlier iterations for computational efficiency, the algorithm increases the sample size intelligently towards the end of the run to ensure maximum accuracy of the results. The intelligent sample-size updating rule is centered around EM's well-known likelihood-ascent property and only increases the sample when no further improvements are possible based on the current sample. In several simulation studies we show the superiority of Ascent-EM over regular EM implementations. We apply the method to an example of clustering online auctions.
Wolfgang Jank
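The sample-size schedule described in the abstract might look roughly like the sketch below: EM updates are computed on a random sub-sample, and the sub-sample is grown only when the sub-sample log-likelihood stops ascending. The mixture model (a 1-D two-component Gaussian mixture), the tolerance, and the growth factor are our assumptions for illustration, not the paper's specification.

```python
# Rough sketch of an "ascent-driven" sub-sampled EM schedule (illustrative only).
import numpy as np

def em_step(x, w, mu, sd):
    # E-step: responsibilities under a 1-D Gaussian mixture (constants dropped);
    # M-step: update mixture weights, means, and standard deviations.
    dens = np.array([wk * np.exp(-0.5 * ((x - mk) / sk) ** 2) / sk
                     for wk, mk, sk in zip(w, mu, sd)])
    resp = dens / dens.sum(axis=0)
    n_k = resp.sum(axis=1)
    w = n_k / len(x)
    mu = (resp @ x) / n_k
    sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
    return w, mu, sd, np.log(dens.sum(axis=0)).mean()

def ascent_em(data, n0=200, grow=2.0, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    n, prev = n0, -np.inf
    while True:
        sample = rng.choice(data, size=min(n, len(data)), replace=False)
        w, mu, sd, ll = em_step(sample, w, mu, sd)
        if ll <= prev + tol:                    # no ascent at the current sample size
            if n >= len(data):
                return w, mu, sd                # full sample and no further ascent
            n = int(min(len(data), grow * n))   # otherwise grow the sub-sample
        prev = ll

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 5000), rng.normal(3.0, 0.5, 5000)])
print(ascent_em(data))
```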
Statistical Learning Theory in Equity Return Forecasting
Abstract
We apply Mangasarian and Bennett’s multi-surface method to the problem of allocating financial capital to individual stocks. The strategy constructs market neutral portfolios wherein capital exposure to long positions equals exposure to short positions at the beginning of each weekly period. The optimization model generates excess returns above the S&P 500, even in the presence of reasonable transaction costs. The trading strategy generates statistical arbitrage for trading costs below 10 basis points per transaction.
John M. Mulvey, A. J. Thompson
Sample Path Derivatives for (s, S) Inventory Systems with Price Determination
Abstract
We consider the problem of simultaneous price determination and inventory management. Demand depends explicitly on the product price p, and the inventory control system operates under a periodic review (s, S) ordering policy. To minimize long-run average loss, we derive sample path derivatives that can be used in a gradient-based algorithm for determining the optimal values of the three parameters (s, S, p) in a simulation-based optimization procedure. Numerical results for several optimization examples are presented, and consistency proofs for the estimators are provided.
Huiju Zhang, Michael Fu
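The setting can be illustrated with a plain simulation of a periodic-review (s, S) policy under price-dependent Poisson demand, estimating the average per-period cost for a given (s, S, p) by Monte Carlo. This is only the simulation layer on which gradient-based search would operate; the paper's sample-path derivative estimators are not shown, and all cost parameters and the demand curve are hypothetical.

```python
# Monte Carlo evaluation of an (s, S) policy at price p (lost-sales toy model).
import math
import random

def poisson(rng, lam):
    """Poisson variate via Knuth's multiplication method (fine for moderate rates)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def average_cost(s, S, p, periods=20000, h=1.0, b=4.0, K=25.0, c=2.0, seed=0):
    rng = random.Random(seed)
    rate = max(0.0, 40.0 - 3.0 * p)                  # hypothetical price-demand curve
    inv, total = S, 0.0
    for _ in range(periods):
        if inv <= s:                                 # periodic review: order up to S
            total += K + c * (S - inv)
            inv = S
        d = poisson(rng, rate)
        total += -p * min(inv, d)                    # revenue enters as negative cost
        total += h * max(inv - d, 0) + b * max(d - inv, 0)   # holding / shortage
        inv = max(inv - d, 0)
    return total / periods

print(average_cost(s=5, S=40, p=8.0))                # estimate for one (s, S, p) triple
```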

Software and Modeling

Frontmatter
Network and Graph Markup Language (NaGML) - Data File Formats
Abstract
The Network and Graph Markup Language (NaGML) is a family of Extensible Markup Language (xml) languages for network and graph data files. The topology, node properties, and arc properties are validated against the user’s specification for the data values. NaGML is part of a component architecture that reads, validates, processes, displays, and writes network and graph data. Because it implements a family rather than a single xml language, NaGML offers (1) flexibility in choosing property names, data types, and restrictions, (2) strong validation, and (3) a variety of data file formats. This paper demonstrates these points with a sampling of the possible data file formats.
Gordon H. Bradley
Software Quality Assurance for Mathematical Modeling Systems
Abstract
With increasing importance placed on standard quality assurance methodologies by large companies and government organizations, many software companies have implemented rigorous quality assurance (QA) processes to ensure that these standards are met. The use of standard QA methodologies cuts maintenance costs, increases reliability, and reduces cycle time for new distributions. Modeling systems differ from most software systems in that a model may fail to solve to optimality without the modeling system being defective. This additional level of complexity requires specific QA activities. To make software quality assurance (SQA) more cost-effective, the focus is on reproducible and automated techniques. In this paper we describe some of the main SQA methodologies as applied to modeling systems. In particular, we focus on configuration management, quality control, and testing as they are handled in the GAMS build framework, emphasizing reproducibility, automation, and an open-source public-domain framework.
Michael R. Bussieck, Steven P. Dirkse, Alexander Meeraus, Armin Pruessner
Model Development and Optimization with Mathematica™
Abstract
Mathematica is an integrated scientific and technical computing system, with impressive numerical calculation, programming, symbolic manipulation, visualization and documentation capabilities. In recent years Mathematica’s optimization related features have been significantly expanded, both by in-house development and by application packages. Such developments make it an increasingly useful tool also in Operations Research studies. We review and illustrate these features, placing added emphasis on nonlinear (global and convex) optimization, and — within this context — discussing the application packages MathOptimizer and MathOptimizer Professional.
János D. Pintér, Frank J. Kampas
Verification of Business Process Designs Using Maps
Abstract
Business processes form the foundation of an enterprise’s operations and determine what the business does, and more importantly, how well the business does what it does. A systematic approach to the design of business processes, supported by a formal foundation for the specification and modeling of business processes, is necessary to (i) capture domain knowledge in a format that is transferable across enterprises and (ii) provide a basis for re-design based on needs of efficiency, changes in market requirements, and reproducibility of process templates for multiple products/services. The specification of a business process is often characterized by combinations of concurrency, choice, and asynchronous completion, the mix of which could lead to incorrect designs. This chapter highlights the verification issues that arise in the design of business processes, and outlines research questions of immediate relevance to the growing interest in business process modeling and enterprise automation solutions. We also discuss MAPS — a tool for the Modeling and Analysis of Process modelS, that has been used effectively as a classroom aid to highlight the importance of design verification as a necessary first step in the design of business processes.
Eswar Sivaraman, Manjunath Kamath
Alps: A Framework for Implementing Parallel Tree Search Algorithms
Abstract
ALPS is a framework for implementing and parallelizing tree search algorithms. It employs a number of features to improve scalability and is designed specifically to support the implementation of data intensive algorithms, in which large amounts of knowledge are generated and must be maintained and shared during the search. Implementing such algorithms in a scalable manner is challenging both because of storage requirements and because of communications overhead incurred in the sharing of data. In this abstract, we describe the design of ALPS and how the design addresses these challenges. We present two sample applications built with ALPS and preliminary computational results.
Yan Xu, Ted K. Ralphs, Laszlo Ladányi, Matthew J. Saltzman

Classification, Clustering, and Ranking

Frontmatter
Tabu Search Enhanced Markov Blanket Classifier for High Dimensional Data Sets
Abstract
Data sets with many discrete variables and relatively few cases arise in health care, e-commerce, information security, and many other domains. Learning effective and efficient prediction models from such data sets is a challenging task. In this paper, we propose a Tabu Search enhanced Markov Blanket (TS/MB) procedure to learn a graphical Markov Blanket classifier from data. The TS/MB procedure is based on the use of restricted neighborhoods in a general Bayesian Network constrained by the Markov condition, called Markov Equivalent Neighborhoods. Computational results from a real-world data set drawn from the health care domain indicate that the TS/MB procedure converges fast, is able to find a parsimonious model with substantially fewer predictor variables than in the full data set, has comparable or better prediction performance when compared against several machine learning methods, and provides insight into possible causal relations among the variables.
Xue Bai, Rema Padman
Dance Music Classification Using Inner Metric Analysis
A Computational Approach and Case Study Using 101 Latin American Dances and National Anthems
Abstract
This paper introduces a method for music genre classification using a computational model for Inner Metric Analysis. Prior classification methods focussing on temporal features utilize tempo (speed) and meter (periodicity) patterns and are unable to distinguish between pieces in the same tempo and meter. Inner Metric Analysis reveals not only the periodicity patterns in the music, but also the accent patterns peculiar to each musical genre. These accent patterns tend to correspond to perceptual groupings of the notes. We propose an algorithm that uses Inner Metric Analysis to map note onset information to an accent profile that can then be compared to template profiles generated from rhythm patterns typical of each genre. The music is classified as being from the genre whose accent profile is most highly correlated with the sample profile. The method has a computational complexity of O(n²), where n is the length of the query excerpt. We report and analyze the results of the algorithm when applied to Latin American dance music and national anthems that are in the same meter (4/4) and have similar tempo ranges. We evaluate the efficacy of the algorithm when using two variants on the model for Inner Metric Analysis: the metric weight model and the spectral weight model. We find that the correct genre is either the top rank choice or a close second rank choice in almost 80% of the test pieces.
Elaine Chew, Anja Volk, Chia-Ying Lee
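The classification step can be caricatured as follows. The sketch builds a crude accent profile from note onsets (crediting every fully instantiated periodic grid, a rough stand-in for the metric- and spectral-weight models of Inner Metric Analysis) and assigns the genre whose template profile correlates best with it. The templates and onset pattern are invented for illustration.

```python
# Crude accent-profile classification stand-in (not the paper's IMA model).
import numpy as np

def accent_profile(onsets, length, periods=(2, 3, 4, 6, 8)):
    profile = np.zeros(length)
    on = set(onsets)
    for p in periods:
        for phase in range(p):
            grid = range(phase, length, p)
            if all(t in on for t in grid):       # fully instantiated periodic grid
                for t in grid:
                    profile[t] += p              # longer periods weigh more
    return profile

def classify(onsets, length, templates):
    prof = accent_profile(onsets, length)
    corr = {genre: np.corrcoef(prof, tmpl)[0, 1] for genre, tmpl in templates.items()}
    return max(corr, key=corr.get), corr

# hypothetical 16-step templates for two rhythm patterns
templates = {"habanera": np.array([3, 0, 0, 1, 2, 0, 1, 0] * 2, float),
             "march":    np.array([3, 0, 1, 0, 2, 0, 1, 0] * 2, float)}
print(classify(onsets=[0, 3, 4, 6, 8, 11, 12, 14], length=16, templates=templates))
```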
Assessing Cluster Quality Using Multiple Measures - A Decision Tree Based Approach
Abstract
Clustering is a popular data mining technique, with applications in many areas. Although there are many clustering algorithms, none of them is superior on all datasets. Typically, while these clustering algorithms provide summary statistics on the generated set of clusters, they do not provide easily interpretable, detailed descriptions of the clusters that are generated. Further, for a given dataset, different algorithms may give different sets of clusters, so it is never clear which algorithm and which parameter settings are the most appropriate. In this paper we propose the use of a decision tree (DT) based approach that involves the use of multiple performance measures for indirectly assessing cluster quality in order to determine the most appropriate set of clusters.
Kweku-Muata Osei-Bryson
Dispersion of Group Judgments
The Geometric Expected Value Operator
Abstract
To achieve a decision with which the group is satisfied, the group members must accept the judgments, and ultimately the priorities. This requires that (a) the judgments be homogeneous, and (b) the priorities of the individual group members be compatible with the group priorities. There are three levels in which the homogeneity of group preference needs to be considered: (1) for a single paired comparison (monogeneity), (2) for an entire matrix of paired comparisons (multigeneity), and (3) for a hierarchy or network (omnigeneity). In this paper we study monogeneity and the impact it has on group priorities.
Thomas L. Saaty, Luis G. Vargas
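For context, the standard aggregation operator from the AHP group-decision literature, around which such dispersion questions are posed, is the geometric mean of the individual paired-comparison judgments. The display below is a hedged illustration of that operator; the paper's geometric expected value operator and its dispersion measure are developed formally in the chapter. If n group members supply judgments a_{ij}^{(k)} for the pair (i, j), the group judgment is

\[
  a_{ij}^{(G)} \;=\; \Bigl(\prod_{k=1}^{n} a_{ij}^{(k)}\Bigr)^{1/n},
  \qquad
  a_{ji}^{(G)} \;=\; \frac{1}{a_{ij}^{(G)}},
\]

which preserves the reciprocal property of the comparison matrix. Monogeneity then concerns how tightly the individual judgments a_{ij}^{(k)} are dispersed around a_{ij}^{(G)} for a single pair (i, j).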
Metadata
Title
The Next Wave in Computing, Optimization, and Decision Technologies
Edited by
Bruce Golden
S. Raghavan
Edward Wasil
Copyright Year
2005
Publisher
Springer US
Electronic ISBN
978-0-387-23529-5
Print ISBN
978-0-387-23528-8
DOI
https://doi.org/10.1007/b101696
