Discrete Optimization
Genetic-algorithm-based simulation optimization considering a single stochastic constraint
Introduction
Optimization via simulation (OvS) is the process of optimizing the expected performance of a discrete-event, stochastic system through computer simulation (e.g., Abo-Hamad & Arisha, 2013; Arreola-Risa et al., 2011; Chen, 2011; Hong & Nelson, 2009; Tsai, 2013; Tsai & Chu, 2012; Yu et al., 2010). Hong and Nelson (2009) classified OvS problems into three categories based on the structure of the feasible region: continuous OvS, discrete OvS (DOvS), and ranking and selection (R&S). In R&S problems, the number of alternatives in the feasible region is so small (often fewer than 500) that we may simulate all solutions and choose the best (or nearly the best) among them with a specified confidence level (see Kim & Nelson (2006) for a survey). DOvS problems, in contrast, have a very large number of feasible solutions (discrete design variables), and existing algorithms often emphasize asymptotic global convergence to the optimal solution. In practice, decision makers usually need to consider multiple performance measures rather than a single one due to physical or managerial requirements. For instance, in a typical flow-line problem, the decision maker seeks a buffer allocation that maximizes the expected throughput over a fixed planning horizon while keeping the expected overall work-in-process no greater than a certain level. Recently, more research interest in the OvS literature has been directed to problems with stochastic constraints or multiple performance measures. Morrice and Butler (2006) developed an R&S procedure based on multi-attribute utility theory to allow tradeoffs between conflicting targets. Kabirian and Ólafsson (2009) proposed a heuristic iterative algorithm for finding the best solution in the presence of multiple stochastic constraints.
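To make the flow-line illustration concrete, here is a minimal Python sketch of such a stochastically constrained problem; the simulator, its response surface, and all parameter values are invented stand-ins for a real discrete-event model. With only 16 candidate buffer allocations the space is small enough to simulate every solution, which is the R&S regime; DOvS problems arise precisely when this exhaustive strategy is no longer affordable.

```python
import random

def simulate(buffers, seed):
    # Hypothetical stand-in for one replication of a discrete-event
    # flow-line simulation: returns a noisy (throughput, work-in-process)
    # pair. The response surface below is invented for illustration.
    rng = random.Random(seed)
    throughput = sum(buffers) ** 0.5 + rng.gauss(0, 0.1)
    wip = 0.5 * sum(buffers) + rng.gauss(0, 0.1)
    return throughput, wip

def estimate(buffers, n_reps):
    # Estimate both expected performance measures by averaging replications;
    # reusing seeds across solutions acts like common random numbers (CRN).
    obs = [simulate(buffers, seed) for seed in range(n_reps)]
    return (sum(o[0] for o in obs) / n_reps,
            sum(o[1] for o in obs) / n_reps)

WIP_LIMIT = 3.6  # stochastic constraint: expected WIP must not exceed this

# With 16 candidate buffer allocations we can simulate them all (the R&S
# regime); a realistically large DOvS space rules out exhaustive search.
solutions = [(b1, b2) for b1 in range(1, 5) for b2 in range(1, 5)]
feasible = [x for x in solutions if estimate(x, 100)[1] <= WIP_LIMIT]
best = max(feasible, key=lambda x: estimate(x, 100)[0])
```

Here the best feasible allocation is the one with the largest total buffer space whose estimated expected WIP still satisfies the constraint.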
Kleijnen, van Beers, and van Nieuwenhuyse (2010) combined methodologies from metamodeling and mathematical programming to solve constrained optimization of random simulation models. Bhatnagar et al. (2011) and Szechtman and Yücesan (2008) proposed stochastic approximation algorithms for constrained optimization via simulation. Luo and Lim (2011) proposed a new approach that converts a constrained optimization problem into an unconstrained one by using the Lagrangian function. Similarly, Park and Kim (2011) presented a method called the penalty function with memory, which is added to the objective function and thereby transforms a DOvS problem with stochastic constraints into a series of new unconstrained problems. Vieira Junior, Kienitz, and Belderrain (2011) proposed a novel simulation allocation rule for use in a locally convergent random search algorithm, COMPASS (Hong & Nelson, 2006), to handle stochastically constrained problems. Hunter and Pasupathy (2013) and Pujowidianto et al. (2012) applied a large-deviations approach to provide an asymptotically optimal sample allocation that maximizes the rate at which the probability of false selection tends to zero. These methods require that every solution be simulated at least once, so they are better suited to settings where the solution space is finite and contains a small number of elements. Andradóttir and Kim (2010) presented a type of R&S procedure (called the Feasibility Determination Procedure, or FDP) that checks the feasibility of each solution in a finite set with respect to a stochastic constraint. Instead of guaranteeing correct selection of the best solution, Bayesian procedures maximize the posterior probability of correct selection within a given simulation budget; several works have developed efficient Bayesian procedures for the constrained optimization problem (e.g., Guan et al., 2006; Jia, 2009; Lee et al., 2012; Li et al., 2002).
It should be noted that the existing algorithms for handling stochastic constraints are most appropriate when the number of solution designs is not too large; they become inefficient (i.e., require excessive sampling cost) when applied to a very large solution space.
Most commercial OvS solvers use optimization metaheuristics, such as tabu search, neural nets, and genetic algorithms (GAs), that were generally designed for, and proven effective on, difficult deterministic optimization problems. While these algorithms often find promising solutions quickly, they may degenerate into pure random search when the stochastic variation of the output (i.e., simulation noise) is high or the number of samples obtained for each solution is set too low. That is, their implementations do not always adequately account for the presence of statistical errors. In addition, these algorithms provide no statistical guarantee regarding the quality of the final selected solution. To handle these issues, Boesel, Nelson, and Kim (2003) proposed an adaptive genetic-algorithm-based procedure that accounts for simulation noise in the stochastic optimization context; their procedure is also guaranteed to return the best among the solutions visited by the heuristic search. Subsequently, Xu, Nelson, and Hong (2010) used a niching GA together with COMPASS (Hong & Nelson, 2006) to establish local optimality with statistical confidence while simulating only a small portion of the feasible solutions. Nazzal, Mollaghasemi, Hedlund, and Bozorgi (2012) proposed a simulation optimization methodology that combines a GA with an R&S procedure under common random numbers (CRN). They developed a new R&S procedure that selects a nonempty subset containing the best solution with a pre-specified probability; fitness values and selection probabilities are then computed from the performance estimates obtained by that R&S procedure. Notice that these genetic-algorithm-based procedures are adapted to the DOvS problem but can only optimize the expected value of a single performance measure (see Ólafsson (2006) for a review of metaheuristics for OvS).
To handle the multi-objective simulation optimization problem, Lee, Chew, Teng, and Chen (2008) developed a solution framework that integrates an evolutionary algorithm with the multi-objective computing budget allocation (MOCBA) method, employing MOCBA to efficiently allocate simulation replications to the solutions in the current population. The approach is applied to a multi-objective aircraft spare parts allocation problem to find a set of non-dominated solutions. Horng, Lin, Lee, and Chen (2013) used a GA in combination with a surrogate model to find a set of good solutions in a global search stage, then employed a probabilistic local search to identify approximate local optima, and finally used optimal computing budget allocation (OCBA) to select the best among the promising solutions identified previously.
In this paper, we propose two efficient GA-based algorithms that are both theoretically robust and of practical value for solving the DOvS problem with a single stochastic constraint. In both algorithms we use GA to guide the search process, because it is a population-based method that simultaneously considers multiple candidate solutions and has been shown to be robust to stochastic noise (Xu et al., 2010). GA works with a population of potential solutions and moves this population toward the optimum iteratively; the terms iteration and generation are used interchangeably in this work to refer to the process of transforming one population of solutions into another. The proposed GA is adapted to handle two performance measures with stochastic noise (i.e., a stochastic objective and a stochastic constraint) in a simulation environment. Two types of R&S procedures are incorporated into our DOvS algorithms to enhance their statistical efficiency and validity. We use FDP (see Andradóttir & Kim (2010)) repeatedly within the proposed GA to ensure that the candidate solutions in a population are feasible with respect to the stochastic constraint (with some confidence); see Appendix A.1 for the statistical guarantee provided by FDP. At the end of the algorithms we also invoke the clean-up procedure of Boesel, Nelson, and Ishii (2003) to select the best solution, with respect to the stochastic objective, from a set of potential solutions; see Appendix A.2 for a detailed description of this clean-up procedure. The first proposed DOvS algorithm guarantees global convergence as the simulation effort goes to infinity (provided the picked solution is feasible), and is also guaranteed to choose the best among all evaluated possibly feasible solutions with a specified confidence level. Such convergence statements are reassuring in the sense that the algorithm will eventually reach the global optimum when given enough simulation effort.
However, the algorithm's finite-time efficiency may be sacrificed to maintain this theoretically appealing property. Moreover, the algorithm must visit every solution infinitely often to guarantee convergence, which is of little practical use when the sampling budget is limited. We therefore propose a second DOvS algorithm that is more heuristic-oriented and exploits desirable inherent properties of GAs (e.g., adaptive constraint-handling techniques and the elite-population mechanism; see Coello (2002)). The second algorithm is designed to identify the best solution within the final elite population, and may deliver competitive performance in reasonable computation time.
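A minimal Python sketch of how a GA can be combined with feasibility checking and a final clean-up, in the spirit of the two algorithms outlined above. The fixed-sample feasibility check and the final best-mean pick are deliberately simplified stand-ins for FDP and the clean-up procedure, and the test functions, thresholds, and GA parameters are all invented for illustration.

```python
import random

rng = random.Random(1)
Q = 5.0  # constraint threshold: x is feasible iff E[constraint obs] <= Q

def noisy_obj(x):
    # Invented noisy objective, maximized at x = 7.
    return -(x - 7) ** 2 + rng.gauss(0, 1.0)

def noisy_con(x):
    # Invented noisy constraint observation with mean 0.5 * x.
    return 0.5 * x + rng.gauss(0, 1.0)

def looks_feasible(x, n=50):
    # Fixed-sample stand-in for the fully sequential FDP.
    return sum(noisy_con(x) for _ in range(n)) / n <= Q

def fitness(x, n=30):
    # Sample-mean estimate of the objective at solution x.
    return sum(noisy_obj(x) for _ in range(n)) / n

pop = [rng.randint(0, 20) for _ in range(10)]
visited = set()
for generation in range(20):
    pop = [x for x in pop if looks_feasible(x)]  # keep possibly feasible
    if not pop:                                  # restart on extinction
        pop = [rng.randint(0, 20) for _ in range(10)]
        continue
    visited.update(pop)
    elite = sorted(pop, key=fitness, reverse=True)[: max(2, len(pop) // 2)]
    # Offspring: average two elite parents, then apply integer mutation.
    pop = elite + [max(0, (rng.choice(elite) + rng.choice(elite)) // 2
                       + rng.randint(-2, 2)) for _ in range(10 - len(elite))]
# "Clean-up" stand-in: re-estimate every visited possibly feasible solution
# with a larger sample and return the one with the best estimated objective.
best = max(visited, key=lambda x: fitness(x, n=200))
```

The elite solutions are carried over unchanged each generation, which is the elitism mechanism referred to above; the larger final sample reduces the chance that noise alone determines the reported best.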
The paper is organized as follows. In Section 2 we define the DOvS problem with a single stochastic constraint, and introduce the relevant notation and assumptions. Sections 3 and 4 present two DOvS algorithms that adopt different sampling rules and search mechanisms, and thus provide different statistical guarantees. We give a high-level review of the existing techniques we incorporate, and a detailed description of only the most critical enhancements. An empirical evaluation comparing different algorithms is provided in Section 5, and the paper ends with concluding remarks in Section 6. The convergence proof and some details of our algorithms are contained in Appendix A.
Section snippets
Framework
Our goal is to select the solution with the largest or smallest expected performance (in terms of the stochastic objective) among a large number of candidate solutions that satisfy a single stochastic constraint. Let G_ij denote the jth simulation observation taken from solution x_i (associated with the objective performance measure), and let H_ij be the jth simulation observation taken from solution x_i (associated with the stochastic constraint). The ith solution x_i is a vector of d integer
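Written out formally, and using hypothetical symbols since the paper's own notation did not survive extraction (here G_ij and H_ij denote the jth objective and constraint observations of solution x_i, Θ the finite set of candidate solutions, and q the constraint threshold), the problem takes the form:

```latex
\max_{\mathbf{x}_i \in \Theta} \; \mathrm{E}\!\left[ G_{ij} \right]
\qquad \text{subject to} \qquad \mathrm{E}\!\left[ H_{ij} \right] \le q .
```

A minimization objective, or a constraint of the form E[H_ij] ≥ q, is handled symmetrically.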
A globally convergent algorithm
In this section we present a statistically valid algorithm that offers global convergence as the number of samples goes to infinity and, at the same time, is guaranteed to select the best among all potential solutions that have been declared feasible by FDP, with a specified confidence level. In the Initialization stage we pick a number of possibly feasible solutions that serve as starting solutions for the next stage. We use FDP to ensure that the initial solutions are
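In spirit, FDP draws constraint observations one at a time and stops as soon as a cumulative-deviation statistic exits a triangular continuation region. The Python sketch below illustrates this idea only; the region parameters (delta and the intercept a) are illustrative assumptions, not the actual values prescribed by Andradóttir and Kim (2010).

```python
import random
import statistics

def fdp_check(draw, q, delta=0.5, n0=20):
    # Sketch of a fully sequential feasibility check in the spirit of FDP:
    # the running sum of deviations (q - Y_j) is monitored against a
    # triangular continuation region that shrinks as observations arrive.
    # delta and the scaling of a below are illustrative assumptions.
    ys = [draw() for _ in range(n0)]
    s2 = statistics.variance(ys)       # first-stage variance estimate
    a = 4.0 * s2 / delta               # hypothetical region intercept
    s = sum(q - y for y in ys)         # cumulative deviation statistic
    r = n0
    while True:
        half_width = max(0.0, a - delta * r / 2.0)
        if s > half_width:
            return True                # declare feasible: E[Y] <= q
        if s < -half_width:
            return False               # declare infeasible: E[Y] > q
        s += q - draw()                # undecided: take one more observation
        r += 1

rng = random.Random(7)
# A system with mean constraint value 3.0 <= q = 5.0 should be kept:
decision = fdp_check(lambda: rng.gauss(3.0, 1.0), q=5.0)
```

Because the region closes as r grows, the procedure always terminates; systems whose constraint mean is far from q tend to be resolved after very few observations, which is what makes a sequential check attractive inside a search loop.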
A heuristic algorithm
In this section we propose a heuristic algorithm that exploits the essential mechanics of GA, such as adaptive constraint-handling techniques (see Coello (2002)) and elitism. Maintaining an elite population means that superior solutions are always preserved across generations, a device commonly used in standard GAs to speed up convergence. However, a consequence of using an elite population is that it invalidates the global convergence statement of our GA-based DOvS
Empirical results
In this section, an extensive empirical evaluation is performed to compare the performance of the following algorithms for solving the DOvS problem with a single stochastic constraint:
- 1. The constrained DOvS algorithms proposed in Sections 3 and 4, namely the Globally Convergent Constrained DOvS Algorithm (GCDA) and the Heuristic Constrained DOvS Algorithm (HCDA).
- 2. We also perform a modified version of GCDA, where FDP is replaced by a
Conclusions
In this paper we present two efficient GA-based algorithms that are specifically designed for the DOvS problem with a single stochastic constraint. We use GA to guide the search process in a noisy environment because it is quite robust to the statistical errors introduced in performance estimates (see Boesel (1999)). The first proposed algorithm, GCDA, guarantees global convergence as the simulation effort goes to infinity, and at the same time is guaranteed to select the best among all
References (48)
- Simulation-based framework to improve patient experience in an emergency department. European Journal of Operational Research (2013).
- Optimizing stochastic production-inventory systems: A heuristic based on simulation and regression analysis. European Journal of Operational Research (2011).
- Selecting the best system.
- Constrained optimization in expensive simulation: Novel approach. European Journal of Operational Research (2010).
- Multi-objective simulation-based evolutionary algorithm for an aircraft spare parts allocation problem. European Journal of Operational Research (2008).
- Constraint ordinal optimization. Information Sciences (2002).
- Metaheuristics.
- Controlled multistage selection procedures for comparison with a standard. European Journal of Operational Research (2012).
- Applying simulation optimization to the asset allocation of a property-casualty insurer. European Journal of Operational Research (2010).
- Fully sequential procedures for comparing constrained systems via simulation. Naval Research Logistics (2010).
- Call center staffing with simulation and cutting plane methods. Annals of Operations Research.
- Finding feasible systems in the presence of constraints on multiple performance measures. ACM Transactions on Modeling and Computer Simulation.
- Nonlinear programming: Theory and algorithms.
- Stochastic approximation algorithms for constrained optimization via simulation. ACM Transactions on Modeling and Computer Simulation.
- A framework for simulation-optimization software. IIE Transactions.
- Using ranking and selection to "clean up" after simulation optimization. Operations Research.
- Selecting a selection procedure. Management Science.
- Staffing multiskill call centers via linear programming and simulation. Management Science.
- A revisit of two-stage selection procedures. European Journal of Operational Research.
- Efficient simulation budget allocation for selecting an optimal subset. INFORMS Journal on Computing.
- Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering.
- Constrained ordinal optimization – A feasibility model based approach. Discrete Event Dynamic Systems.