About this Book

This book constitutes the refereed proceedings of the 6th International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, CPAIOR 2009, held in Pittsburgh, PA, USA, in May 2009. The 20 revised full papers and 10 extended abstracts presented together with 2 invited talks were carefully reviewed and selected from 65 submissions. The papers describe current research in the fields of constraint programming, artificial intelligence, and operations research and present new techniques or new applications in combinatorial optimization, thus exploring ways of solving large-scale, practical optimization problems through integration and hybridization of the fields' different techniques.



Invited Talks

Machine Learning Framework for Classification in Medicine and Biology

Systems modeling and quantitative analysis of large amounts of complex clinical and biological data may help to identify discriminatory patterns that can uncover health risks, detect early disease formation, monitor treatment and prognosis, and predict treatment outcome. In this talk, we describe a machine-learning framework for classification in medicine and biology. It consists of a pattern recognition module, a feature selection module, and a classification modeler and solver. The pattern recognition module involves automatic image analysis, genomic pattern recognition, and spectrum pattern extractions. The feature selection module consists of a combinatorial selection algorithm where discriminatory patterns are extracted from among a large set of pattern attributes. These modules are wrapped around the classification modeler and solver into a machine learning framework. The classification modeler and solver consist of novel optimization-based predictive models that maximize the correct classification while constraining the inter-group misclassifications. The classification/predictive models 1) have the ability to classify any number of distinct groups; 2) allow incorporation of heterogeneous, and continuous/time-dependent types of attributes as input; 3) utilize a high-dimensional data transformation that minimizes noise and errors in biological and clinical data; 4) incorporate a reserved-judgement region that provides a safeguard against over-training; and 5) have successive multi-stage classification capability. Successful applications of our model to developing rules for gene silencing in cancer cells, predicting the immunity of vaccines, identifying the cognitive status of individuals, and predicting metabolite concentrations in humans will be discussed. We acknowledge our clinical/biological collaborators: Dr. Vertino (Winship Cancer Institute, Emory), Drs. Pulendran and Ahmed (Emory Vaccine Center), Dr. Levey (Neurodegenerative Disease and Alzheimer’s Disease), and Dr. Jones (Clinical Biomarkers, Emory).

Eva K. Lee

G12 - Towards the Separation of Problem Modelling and Problem Solving

This paper presents the G12 large scale optimisation software platform, and discusses aspects of its architecture.

Mark Wallace

Regular Papers

Six Ways of Integrating Symmetries within Non-overlapping Constraints

This paper introduces six ways for handling a chain of lexicographic ordering (lex-chain) constraint between the origins of identical orthotopes (e.g., rectangles, boxes, hyper-rectangles) subject to the fact that they should not pairwise overlap. While the first two ways deal with the integration of a lex-chain constraint within a generic geometric constraint kernel, the four latter ways deal with the conjunction of a lex-chain constraint and a non-overlapping or a cumulative constraint. Experiments on academic two- and three-dimensional placement problems as well as on industrial problems show the benefit of such a strong integration of symmetry breaking constraints and non-overlapping ones.

Magnus Ågren, Nicolas Beldiceanu, Mats Carlsson, Mohamed Sbihi, Charlotte Truchet, Stéphane Zampelli

Throughput Constraint for Synchronous Data Flow Graphs

Stream (data-flow) computing is considered an effective paradigm for parallel programming of high-end multi-core architectures for embedded applications (networking, multimedia, wireless communication). Our work addresses a key step in stream programming for embedded multicores, namely, the efficient mapping of a synchronous data-flow graph (SDFG) onto a multi-core platform subject to a minimum throughput requirement. This problem has been extensively studied in the past, and its complexity has led researchers to develop incomplete algorithms which cannot exclude false negatives. We developed a CP-based complete algorithm based on a new throughput-bounding constraint. The algorithm has been tested on a number of non-trivial SDFG mapping problems with promising results.

Alessio Bonfietti, Michele Lombardi, Michela Milano, Luca Benini

A Shortest Path-Based Approach to the Multileaf Collimator Sequencing Problem

The multileaf collimator sequencing problem is an important component in effective cancer treatment delivery. The problem can be formulated as finding a decomposition of an integer matrix into a weighted sequence of binary matrices whose rows satisfy a consecutive ones property. Minimising the cardinality of the decomposition is an important objective and has been shown to be strongly NP-Hard, even for a matrix restricted to a single row. We show that in this latter case it can be solved efficiently as a shortest path problem, giving a simple proof that the single-row problem is fixed-parameter tractable in the maximum intensity. This result was obtained recently by [9] with a complex construction. We develop new linear and constraint programming models exploiting this idea. Our approaches significantly improve the best known results for the problem, bringing real-world sized problem instances within reach of complete methods.

Hadrien Cambazard, Eoin O’Mahony, Barry O’Sullivan

Backdoors to Combinatorial Optimization: Feasibility and Optimality

There has been considerable interest in the identification of structural properties of combinatorial problems that lead, directly or indirectly, to the development of efficient algorithms for solving them. One such concept is that of a backdoor set—a set of variables such that once they are instantiated, the remaining problem simplifies to a tractable form. While backdoor sets were originally defined to capture structure in decision problems with discrete variables, here we introduce a notion of backdoors that captures structure in optimization problems, which often have both discrete and continuous variables. We show that finding a feasible solution and proving optimality are characterized by backdoors of different kinds and size. Surprisingly, in certain mixed integer programming problems, proving optimality involves a smaller backdoor set than finding the optimal solution. We also show extensive results on the number of backdoors of various sizes in optimization problems. Overall, this work demonstrates that backdoors, appropriately generalized, are also effective in capturing problem structure in optimization problems.

Bistra Dilkina, Carla P. Gomes, Yuri Malitsky, Ashish Sabharwal, Meinolf Sellmann

Solution Enumeration for Projected Boolean Search Problems

Many real-world problems require the enumeration of all solutions of combinatorial search problems, even though this is often infeasible in practice. However, not all parts of a solution are always needed. We are thus interested in projecting solutions to a restricted vocabulary. Yet, the adaptation of Boolean constraint solving algorithms turns out to be non-obvious provided one wants a repetition-free enumeration in polynomial space. We address this problem and propose a new algorithm computing projective solutions. Although we have implemented our approach in the context of Answer Set Programming, it is readily applicable to any solver based on modern Boolean constraint technology.

Martin Gebser, Benjamin Kaufmann, Torsten Schaub
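The projection task can be illustrated with a brute-force sketch (all names here are hypothetical, not from the paper). Note the seen-set below needs space linear in the number of projected solutions; the paper's contribution is precisely to enumerate repetition-free without that memory cost.

```python
from itertools import product

def enumerate_projected(variables, domains, constraint, projection):
    """Enumerate solutions of a finite-domain problem, projected onto a
    subset of the variables, without repeating any projection.
    (Illustrative only: the seen-set uses memory proportional to the
    number of projected solutions, unlike the paper's algorithm.)"""
    seen = set()
    proj_idx = [variables.index(v) for v in projection]
    for assignment in product(*(domains[v] for v in variables)):
        if constraint(dict(zip(variables, assignment))):
            proj = tuple(assignment[i] for i in proj_idx)
            if proj not in seen:
                seen.add(proj)
                yield dict(zip(projection, proj))

# Toy problem: x + y + z must be even; we only care about (x, y).
sols = list(enumerate_projected(
    ["x", "y", "z"],
    {"x": [0, 1], "y": [0, 1], "z": [0, 1]},
    lambda a: (a["x"] + a["y"] + a["z"]) % 2 == 0,
    ["x", "y"]))
```

Every (x, y) pair extends to a solution here, so four projected solutions are produced, each exactly once, even though eight full assignments are examined.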

k-Clustering Minimum Biclique Completion via a Hybrid CP and SDP Approach

This paper presents a hybrid Constraint Programming (CP) and Semidefinite Programming (SDP) approach to the k-clustering minimum biclique completion problem on bipartite graphs. The problem consists in partitioning a bipartite undirected graph into k clusters such that the sum of the edges that complete each cluster into a biclique, i.e., a complete bipartite subgraph, is minimum. The problem arises in telecommunications, in particular in bundling channels in multicast transmissions. In the literature, the problem has been tackled with an Integer Bilinear Programming approach. We introduce two quasi-biclique constraints and we propose an SDP relaxation of the problem that provides much stronger lower bounds than the Bilinear Programming relaxation. The quasi-biclique constraints and the SDP relaxation are integrated into a hybrid CP and SDP approach. Computational results on a set of random instances provide further evidence about the potential of CP and SDP hybridizations.

Stefano Gualandi

Optimal Interdiction of Unreactive Markovian Evaders

The interdiction problem arises in a variety of areas including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would maximally increase the cost to an evader of traveling on a path through the network.

Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power, e.g., when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape.

We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find an edge set of size B that maximizes the probability of capturing the evaders.

We prove that, similar to the standard least-cost formulation for deterministic motion, this interdiction problem is also NP-hard. But unlike that problem, our interdiction problem is submodular and the optimal solution can be approximated within 1 − 1/e using a greedy algorithm. Additionally, we exploit submodularity through a priority evaluation strategy that eliminates the linear complexity scaling in the number of network edges and speeds up the solution by orders of magnitude. Taken together, the results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.

Alexander Gutfraind, Aric Hagberg, Feng Pan
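The greedy scheme the abstract relies on can be sketched generically: for any monotone submodular objective, repeatedly adding the element with the largest marginal gain achieves a (1 − 1/e) approximation (the classical Nemhauser–Wolsey–Fisher result). The coverage objective below is a hypothetical stand-in for the capture probability, chosen because coverage functions are monotone submodular; it is not the paper's model.

```python
def greedy_submodular(candidates, objective, budget):
    """Greedy maximization of a monotone submodular set function:
    pick, `budget` times, the candidate with the largest marginal gain.
    This guarantees a (1 - 1/e)-approximation of the optimum."""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c in candidates - chosen:
            gain = objective(chosen | {c}) - objective(chosen)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no candidate improves the objective
            break
        chosen.add(best)
    return chosen

# Toy stand-in: each candidate edge "covers" some evader paths, and the
# objective counts distinct paths covered (monotone submodular).
paths_covered = {"e1": {1, 2}, "e2": {2, 3}, "e3": {4}, "e4": {1, 2, 3}}
obj = lambda S: len(set().union(*[paths_covered[e] for e in S]))
picked = greedy_submodular(set(paths_covered), obj, budget=2)
```

With budget 2 the sketch first takes e4 (gain 3), then e3 (gain 1), covering all four paths. The paper's priority evaluation strategy additionally exploits that marginal gains only shrink, so stale gains can be reused lazily instead of re-evaluating every edge.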

Using Model Counting to Find Optimal Distinguishing Tests

Testing is the process of stimulating a system with inputs in order to reveal hidden parts of the system state. In the case of non-deterministic systems, the difficulty arises that an input pattern can generate several possible outcomes. Some of these outcomes allow one to distinguish between different hypotheses about the system state, while others do not.

In this paper, we present a novel approach to find, for non-deterministic systems modeled as constraints over variables, tests that allow one to distinguish among the hypotheses as well as possible. The idea is to assess the quality of a test by determining the ratio of distinguishing (good) and non-distinguishing (bad) outcomes. This measure refines previous notions proposed in the literature on model-based testing and can be computed using model counting techniques. We propose and analyze a greedy-type algorithm to solve this test optimization problem, using existing model counters as a building block. We give preliminary experimental results of our method, and discuss possible improvements.

Stefan Heinz, Martin Sachenbacher

Reformulating Global Grammar Constraints

An attractive mechanism to specify global constraints in rostering and other domains is via formal languages. For instance, the regular and grammar constraints specify constraints in terms of the languages accepted by an automaton and a context-free grammar, respectively. Taking advantage of the fixed length of the constraint, we give an algorithm to transform a context-free grammar into an automaton. We then study the use of minimization techniques to reduce the size of such automata and speed up propagation. We show that minimizing such automata after they have been unfolded and domains initially reduced can give automata that are more compact than minimizing before unfolding and reducing. Experimental results show that such transformations can improve the size of rostering problems that we can “model and run”.

George Katsirelos, Nina Narodytska, Toby Walsh
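The unfolding step the abstract builds on can be sketched as follows: over a fixed sequence length, a finite automaton becomes a layered graph, and states not on any accepted path can be pruned with a forward and a backward sweep. This is a minimal sketch of the standard construction, not the authors' minimization algorithm.

```python
def unfold(delta, start, accepting, domains):
    """Unfold a DFA over a fixed-length sequence with per-position domains,
    keeping only states that lie on some accepted path.
    `delta` maps (state, symbol) -> state; returns the surviving states
    per layer (layer i = states reachable after i symbols)."""
    n = len(domains)
    # Forward sweep: states reachable from the start state.
    layers = [{start}]
    for pos in range(n):
        layers.append({delta[(q, a)] for q in layers[-1]
                       for a in domains[pos] if (q, a) in delta})
    # Backward sweep: keep only states from which acceptance is possible.
    layers[n] &= accepting
    for pos in range(n - 1, -1, -1):
        layers[pos] = {q for q in layers[pos]
                       if any((q, a) in delta and delta[(q, a)] in layers[pos + 1]
                              for a in domains[pos])}
    return layers

# DFA accepting strings over {a, b} with an even number of b's.
delta = {("even", "a"): "even", ("even", "b"): "odd",
         ("odd", "a"): "odd", ("odd", "b"): "even"}
layers = unfold(delta, "even", {"even"}, [["a", "b"], ["b"], ["a", "b"]])
```

On this layered form, initial domain reductions shrink the layers before minimization, which is why minimizing after unfolding and reducing can yield a smaller automaton than minimizing first.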

IBM ILOG CP Optimizer for Detailed Scheduling Illustrated on Three Problems

Since version 2.0, IBM ILOG CP Optimizer provides a new scheduling language supported by a robust and efficient automatic search. This paper illustrates both the expressivity of the modelling language and the robustness of the automatic search on three problems recently studied in the scheduling literature. We show that all three problems can easily be modelled with CP Optimizer in only a few dozen lines (the complete models are provided) and that on average the automatic search outperforms existing problem specific approaches.

Philippe Laborie

Open Constraints in a Boundable World

Open forms of global constraints allow the addition of new variables to an argument during the execution of a constraint program. Such forms are needed for difficult constraint programming problems where problem construction and problem solving are interleaved. We introduce a new model of open global constraint where the length of the sequence of variables can be constrained but there is no a priori restriction on the variables that might be added. In general, propagation that is sound for a global constraint can be unsound when the constraint is open. We identify properties of constraints that simplify the design of algorithms for propagation by identifying when no propagation can be done, and use them to design propagation algorithms for several open global constraints.

Michael J. Maher

Sequencing and Counting with the multicost-regular Constraint

This paper introduces a global constraint encapsulating a regular constraint together with several cumulative costs. It is motivated in the context of personnel scheduling problems, where a schedule meets patterns and occurrence requirements which are intricately bound. The optimization problem underlying the multicost-regular constraint is NP-hard, but it admits an efficient Lagrangian relaxation. Hence, we propose a filtering based on this relaxation. The expressiveness and the efficiency of this new constraint are evaluated on personnel scheduling benchmark instances with standard work regulations. The comparative empirical results show how multicost-regular can significantly outperform a decomposed model with separate regular and cost constraints.
Julien Menana, Sophie Demassey

Bandwidth-Limited Optimal Deployment of Eventually-Serializable Data Services

Providing consistent and fault-tolerant distributed object services is among the fundamental problems in distributed computing. To achieve fault-tolerance and to increase throughput, objects are replicated at different networked nodes. However, replication induces significant communication costs to maintain replica consistency. Eventually-Serializable Data Service (ESDS) has been proposed to reduce these costs and enable fast operations on data, while still providing guarantees that the replicated data will eventually be consistent. This paper revisits ESDS instances where bandwidth constraints are imposed on segments of the network interconnect. This class of problems was shown to be extremely challenging for both Mixed Integer Programming (MIP) and for Constraint Programming (CP), some instances requiring hours of computation time. The paper presents an improved constraint programming model, a constraint-based local search model that can obtain high-quality solutions quickly and a local search/constraint programming hybrid. The experimental results indicate that the resulting models significantly improve the state of the art.

Laurent Michel, Pascal Van Hentenryck, Elaine Sonderegger, Alexander Shvartsman, Martijn Moraal

Tightening the Linear Relaxation of a Mixed Integer Nonlinear Program Using Constraint Programming

This paper aims at solving a nonconvex mixed integer nonlinear programming (MINLP) model used to solve a refinery crude-oil operations scheduling problem. The model is mostly linear but contains bilinear products of continuous variables in the objective function. It is possible to define a linear relaxation of the model, but it leads to a weak bound on the objective value of the optimal solution. A typical method to circumvent this issue is to discretize the continuous space and to use linear relaxation constraints based on the variables’ lower and upper bounds (e.g. McCormick convex envelopes) on each subdivision of the continuous space. This work explores another approach involving constraint programming (CP). The idea is to use an additional CP model to tighten the bounds of the continuous variables involved in bilinear terms and then generate cuts based on McCormick convex envelopes. These cuts are then added to the mixed integer linear program (MILP) during the search, leading to a tighter linear relaxation of the MINLP. Results show large reductions of the optimality gap of a two-step MILP-NLP solution method due to the tighter linear relaxation obtained.

Sylvain Mouret, Ignacio E. Grossmann, Pierre Pestiaux
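The McCormick envelopes mentioned above are the four linear inequalities that bound a bilinear product w = x·y over a box of variable bounds; a small sketch (not tied to the paper's model) makes the cut-generation idea concrete:

```python
def mccormick_cuts(xl, xu, yl, yu):
    """Return the four McCormick inequalities for w = x*y on the box
    [xl, xu] x [yl, yu]. Each cut is a coefficient triple (a, b, c)
    read as w >= a*x + b*y + c (under-estimators) or
    w <= a*x + b*y + c (over-estimators). Tighter variable bounds
    (e.g. from CP propagation) directly give tighter cuts."""
    under = [(yl, xl, -xl * yl),   # w >= yl*x + xl*y - xl*yl
             (yu, xu, -xu * yu)]   # w >= yu*x + xu*y - xu*yu
    over = [(yl, xu, -xu * yl),    # w <= yl*x + xu*y - xu*yl
            (yu, xl, -xl * yu)]    # w <= yu*x + xl*y - xl*yu
    return under, over

# Example: x in [0, 2], y in [1, 3]; evaluate the envelope at x=1, y=2,
# where the true product is w = 2.
under, over = mccormick_cuts(0, 2, 1, 3)
lo = max(a * 1 + b * 2 + c for a, b, c in under)
hi = min(a * 1 + b * 2 + c for a, b, c in over)
```

Here the envelope encloses the true product (1 ≤ 2 ≤ 3), and shrinking the box, as the paper's CP model does for the bilinear variables, narrows that interval.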

The Polytope of Context-Free Grammar Constraints

Context-free grammar constraints enforce that a sequence of variables forms a word in a language defined by a context-free grammar. The constraint has received a lot of attention in the last few years as it represents an effective and highly expressive modeling entity. Its application has been studied in the field of Constraint Programming, Mixed Integer Programming, and SAT to solve complex decision problems such as shift scheduling. In this theoretical study we demonstrate how the constraint can be linearized efficiently. In particular, we propose a lifted polytope which has only integer extreme points. Based on this result, for shift scheduling problems we prove the equivalence of Dantzig’s original set covering model and a lately introduced grammar-based model.

Gilles Pesant, Claude-Guy Quimper, Louis-Martin Rousseau, Meinolf Sellmann

Determining the Number of Games Needed to Guarantee an NHL Playoff Spot

Many sports fans invest a great deal of time into watching and analyzing the performance of their favourite team. However, the tools at their disposal are primarily heuristic or based on folk wisdom. This paper provides a concrete mechanism for calculating the minimum number of points needed to guarantee a playoff spot in the National Hockey League (NHL). Along with determining how many games need to be won to guarantee a playoff spot comes the notion of “must win” games. Our method can identify those games where, if a team loses, they no longer control their own playoff destiny. As a side effect of this, we can also identify when teams get lucky and still make the playoffs even though another team could have eliminated them.

Tyrel Russell, Peter van Beek

Scalable Load Balancing in Nurse to Patient Assignment Problems

This paper considers the daily assignment of newborn infant patients to nurses in a hospital. The objective is to balance the workload of the nurses, while satisfying a variety of side constraints. Prior work proposed a MIP model for this problem, which unfortunately did not scale to large instances and only approximated the objective function, since minimizing the variance cannot be expressed in a linear model. This paper presents constraint programming (CP) models of increasing complexity to solve large instances with hundreds of patients and nurses in a few seconds using the Comet optimization system. The CP models use the recent spread global constraint to minimize the variance, as well as an exact decomposition technique.

Pierre Schaus, Pascal Van Hentenryck, Jean-Charles Régin
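The balance objective can be made concrete with a small sketch (hypothetical data, not from the paper): a greedy heuristic assigns each patient to the currently least-loaded nurse and then reports the variance that the spread constraint would instead minimize exactly within a CP search.

```python
def balanced_assignment(acuities, n_nurses):
    """Greedy workload balancing sketch: assign patients (by decreasing
    acuity) to the least-loaded nurse, then compute the variance of the
    nurse workloads. The spread global constraint bounds/minimizes this
    variance exactly; the greedy pass is only an illustration."""
    loads = [0.0] * n_nurses
    for w in sorted(acuities, reverse=True):
        i = min(range(n_nurses), key=lambda j: loads[j])
        loads[i] += w
    mean = sum(loads) / n_nurses
    variance = sum((x - mean) ** 2 for x in loads) / n_nurses
    return loads, variance

# Six patients with acuity scores, three nurses.
loads, variance = balanced_assignment([3, 3, 2, 2, 1, 1], 3)
```

On this toy instance the greedy pass happens to reach a perfectly balanced split (variance 0); in general it does not, which is why an exact variance-minimizing constraint is valuable.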

Learning How to Propagate Using Random Probing

In constraint programming there are often many choices regarding the propagation method to be used on the constraints of a problem. However, simple constraint solvers usually only apply a standard method, typically (generalized) arc consistency, on all constraints throughout search. Advanced solvers additionally allow the modeler to choose among an array of propagators for certain (global) constraints. Since complex interactions exist among constraints, deciding in the modelling phase which propagation method to use on given constraints can be a hard task that ideally we would like to free the user from. In this paper we propose a simple technique towards the automation of this task. Our approach exploits information gathered from a random probing preprocessing phase to automatically decide on the propagation method to be used on each constraint. As we demonstrate, data gathered through probing allows the solver to accurately differentiate between constraints that offer little pruning as opposed to ones that achieve many domain reductions, and also to detect constraints and variables that are amenable to certain propagation methods. Experimental results from an initial evaluation of the proposed method on binary CSPs demonstrate the benefits of our approach.

Efstathios Stamatatos, Kostas Stergiou

DFS* and the Traveling Tournament Problem

Our paper presents a new exact method to solve the traveling tournament problem. More precisely, we apply DFS* to this problem and improve its performance by keeping the expensive heuristic estimates in memory, which greatly cuts down the computational time needed. We further improve the performance by exploiting a symmetry property found in the traveling tournament problem. Our results show that our approach is one of the top performing approaches for this problem. It is able to find known optimal solutions in a much smaller amount of computational time than past approaches, to find a new optimal solution, and to improve the lower bounds of larger problem instances which do not have known optimal solutions. As a final contribution, we also introduce a new set of problem instances to diversify the available instance sets for the traveling tournament problem.

David C. Uthus, Patricia J. Riddle, Hans W. Guesgen

Max Energy Filtering Algorithm for Discrete Cumulative Resources

In scheduling using constraint programming we usually reason only about possible start times and end times of activities and remove those which are recognized as infeasible. However, in practice there are often more variables in play: variable durations of activities and variable resource capacity requirements. This paper presents a new algorithm for filtering maximum durations and maximum capacity requirements for discrete cumulative resources. It is also able to handle optional interval variables introduced in IBM ILOG CP Optimizer 2.0. The time complexity of the algorithm is $\mathcal{O}(n \log n)$. The algorithm is based on a previously unpublished algorithm by Wim Nuijten and on a slightly modified e-feasibility checking algorithm by Armin Wolf and Gunnar Schrader. The latter algorithm is also described in the paper.
Petr Vilím

Extended Abstracts

Hybrid Branching

State-of-the-art solvers for Constraint Satisfaction Problems (CSP), Mixed Integer Programs (MIP), and satisfiability problems (SAT) are usually based on a branch-and-bound algorithm. The question of how to split a problem into subproblems (branching) is at the core of any branch-and-bound algorithm. Branching on individual variables is very common in CSP, MIP, and SAT. The rules for which variable to choose for branching, however, differ significantly. In this paper, we present hybrid branching, which combines selection rules from all three fields.

Tobias Achterberg, Timo Berthold

Constraint Programming and Mixed Integer Linear Programming for Rescheduling Trains under Disrupted Operations

A Comparative Analysis of Models, Solution Methods, and Their Integration

The problem of rescheduling trains under disrupted operations has received increasing attention in recent years because of the growing demand for railway transportation. Railway networks are more and more saturated, and the probability that delayed trains affect others is large. Therefore, it is very important to react after incidents by recalculating new arrival/departure times and reassigning tracks/platforms with the aim of minimizing the effect of propagation of incidents.

Rodrigo Acuna-Agost, Philippe Michelon, Dominique Feillet, Serigne Gueye

Constraint Models for Sequential Planning

Planning and constraint satisfaction are important areas of Artificial Intelligence, but surprisingly constraint satisfaction techniques are not widely applied to planning problems, where dedicated solving algorithms dominate. Classical AI planning deals with finding a sequence of actions that transfer the initial state of the world into a desired state. One of the main difficulties of planning is that the size of the solution – the length of the plan – is unknown in advance. On the other hand, a constraint satisfaction problem needs to be fully specified before the constraint model goes to the solver. As Kautz and Selman showed, the problem of shortest-plan planning can be translated to a series of SAT problems, where each SAT instance encodes the problem of finding a plan of a given length. We first look for a plan of length 1; if none exists, we continue with length 2, and so on, until a plan is found. The same approach can be applied when constraint satisfaction is used instead of SAT. There exists a straightforward constraint model of this type, but more advanced constraint models exploit the structure of a so-called planning graph, namely GP-CSP and CSP-PLAN. There also exist hand-crafted constraint models for particular planning problems such as CPlan. All these models share the logical representation of the problem where Boolean variables describe whether a given proposition holds in a given state and whether a given action is applied to a given state. Constraints are in the form of logical formulae (frequently implications). In our opinion, these models do not fully exploit the potential of constraint satisfaction, namely domain filtering techniques and global constraints. Therefore, we suggested using a multi-valued representation of states based on the state variable formalism SAS+, which contributes to fewer variables with larger domains where domain filtering pays off. Also, we proposed to encapsulate the sets of logical constraints into combinatorial constraints with an extensionally defined set of admissible tuples. This extended abstract summarizes our recent results in the design of constraint models for planning problems presented in [1,2].

Roman Barták, Daniel Toropila
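The iterate-over-plan-lengths scheme can be sketched with a toy STRIPS-style domain (all names hypothetical). Here each fixed-length subproblem is checked by brute-force enumeration; in the SAT/CSP approach that subproblem would instead be encoded and handed to a solver.

```python
from itertools import product

def shortest_plan(init, goal, actions, max_len=10):
    """Find a shortest plan by solving a series of fixed-length problems:
    try length 0, 1, 2, ... until one succeeds, as in SAT/CSP-based
    planning. States are frozensets of propositions; an action is a
    (preconditions, add-effects, delete-effects) triple of frozensets."""
    def applies(state, plan):
        for pre, add, dele in plan:
            if not pre <= state:       # precondition violated
                return False
            state = (state - dele) | add
        return goal <= state

    for length in range(max_len + 1):  # shortest length first
        for plan in product(actions, repeat=length):
            if applies(init, plan):
                return list(plan)
    return None

# Toy logistics domain: move a package from A to B with one truck.
load = (frozenset({"pkg@A", "truck@A"}), frozenset({"pkg@truck"}), frozenset({"pkg@A"}))
drive = (frozenset({"truck@A"}), frozenset({"truck@B"}), frozenset({"truck@A"}))
unload = (frozenset({"pkg@truck", "truck@B"}), frozenset({"pkg@B"}), frozenset({"pkg@truck"}))
plan = shortest_plan(frozenset({"pkg@A", "truck@A"}),
                     frozenset({"pkg@B"}), [load, drive, unload])
```

No plan of length 0, 1, or 2 reaches the goal, so the loop first succeeds at length 3 with load, drive, unload; this is exactly the outer iteration that SAT and CSP encodings share.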

A Fast Algorithm to Solve the Frequency Assignment Problem

The problem considered in this paper consists in assigning frequencies to radio links between transmitters so as to minimize the number of frequencies used; it is a well-known CSP posed as the ROADEF’01 challenge. This problem is NP-hard and few results have been reported on techniques for solving it optimally. We apply to this version of the frequency assignment problem an original hybrid method which combines constraint propagation and Tabu search. Two Tabu lists are used to filter the search space and to avoid algorithm cycles. Computational results, obtained on a number of standard problem instances, show the efficiency of the proposed approach.

Mohammad Dib, Alexandre Caminada, Hakim Mabed

A Hybrid LS/CP Approach to Solve the Weekly Log-Truck Scheduling Problem

We present a hybrid approach to solve the log-truck scheduling problem (LTSP), which combines routing and scheduling of forest vehicles and includes aspects such as multiple products and inventory management. The LTSP is closely related to some routing problems encountered in other industries, in particular, so-called “pick-up and delivery problems” (PDP). In general, the LTSP is more complex than the classical PDP, since in the LTSP we must synchronise trucks and log-loaders to avoid waiting times as much as possible.

Nizar El Hachemi, Michel Gendreau, Louis-Martin Rousseau

Modelling Search Strategies in Rules2CP

In this abstract, we present a rule-based modelling language for constraint programming, called Rules2CP [1], and a library PKML for modelling packing problems. Unlike other modelling languages, Rules2CP adopts a single knowledge representation paradigm based on logical rules without recursion, and a restricted set of data structures based on records and enumerated lists given with iterators. We show that this is sufficient to model constraint satisfaction problems together with search strategies, where search trees are expressed by logical formulae, and heuristic choice criteria are defined by preference orderings on variables and formulae. Rules2CP statements are compiled to constraint programs over finite domains (currently SICStus-Prolog and soon Choco-Java) by term rewriting and partial evaluation.

François Fages, Julien Martin

CP-INSIDE: Embedding Constraint-Based Decision Engines in Business Applications

The CP-INSIDE project seeks to make constraint programming technology more accessible. In particular, it aims to facilitate the embedding of CP technology in business applications, and its integration with familiar software tools. A major objective of the CP-INSIDE project is to allow business developers to create constraint-based decision engines utilizing the power of CP technology in a vendor-neutral way. CP-INSIDE simplifies the development of specialized decision engines making them independent of the underlying generic CP solvers. There have already been several attempts to unify constraint programming languages (for instance, [1,2,3]) that directly or indirectly contributed to the same objective.

Jacob Feldman, Eugene Freuder, James Little

An Integrated Genetic Algorithm and Integer Programming Approach to the Network Design Problem with Relays

The network design problem with relays (NDPR) arises in many real-life telecommunication and supply chain networks. As in the multi-commodity network design problem, a set of commodities is given, and each commodity is to be routed through a single path from its source node to its sink node. However, an upper bound is imposed on the distance that a commodity can travel on this path without visiting special nodes, called relays. For example, in digital telecommunication networks, relays represent points where attenuated communication signals are regenerated. A fixed cost is incurred when a node is dedicated as a relay, and each edge has an installation cost and a length. The NDPR is defined as selecting a set of edges from a given set of candidate edges and determining relay nodes to minimize the network design cost while making sure that each commodity is routed through a single path on which the distances between the source node and the first relay, between any consecutive relays, and between the last relay and the sink node are less than the upper bound.
Abdullah Konak, Sadan Kulturel-Konak

A Benders’ Approach to a Transportation Network Design Problem

Benders’ decomposition is an iterative technique for mathematical programming in which a problem is decomposed into two parts: a master problem which solves for only a subset of the variables, and a subproblem which solves for the remaining variables while treating the master variables as fixed. The technique uses information from the solution of the subproblem to design a constraint that is unsatisfied by the current master solution but necessary for the optimal solution of the full problem. The master problem and subproblem are solved iteratively as new constraints are added to the master problem until no such constraint exists, at which point the last master solution optimizes the full problem. In traditional Benders’ decomposition [1], the subproblem is a linear program, and the Benders’ constraint is based on the dual solution to that subproblem. More recently, logic-based [3] and combinatorial Benders’ [2] approaches have been developed that generate Benders constraints based on infeasibility or suboptimality. By not requiring a linear programming subproblem, these Benders approaches provide an excellent way to integrate disparate optimization techniques.

Benjamin Peterson, Michael A. Trick
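The master/subproblem loop described above can be sketched generically (all names hypothetical). To stay self-contained, the toy master below solves by enumeration and the cuts are simple no-goods, as in combinatorial Benders; a real master would be a MIP and the cuts would carry dual or infeasibility information.

```python
from itertools import product

def benders(n_master_vars, master_cost, subproblem):
    """Skeleton of a logic-based Benders loop: solve the master over the
    master variables only, test the proposal in the subproblem, and add
    the returned cut until the subproblem accepts the master solution.
    `subproblem(y)` returns (True, None) if y extends to a full solution,
    else (False, cut) where cut(y') is False for assignments sharing
    the flaw of y."""
    cuts = []
    while True:
        # Toy master: enumerate all 0/1 assignments satisfying the cuts
        # and keep the cheapest (a real master would be a MIP solve).
        proposal = min(
            (y for y in product([0, 1], repeat=n_master_vars)
             if all(cut(y) for cut in cuts)),
            key=master_cost)
        ok, cut = subproblem(proposal)
        if ok:
            return proposal
        cuts.append(cut)

# Toy instance: minimize sum(y), but the subproblem needs at least two 1's.
def sub(y):
    if sum(y) >= 2:
        return True, None
    return False, lambda yp, bad=tuple(y): yp != bad  # no-good cut

result = benders(3, sum, sub)
```

The loop rejects the cheap but infeasible proposals one no-good at a time and terminates at a cheapest master solution the subproblem accepts (here, two 1's). Stronger cuts, excluding whole families of flawed proposals, are what make real Benders schemes converge quickly.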

Progress on the Progressive Party Problem

We report new results for solving the progressive party problem with finite domain constraints, which are competitive with the best local search results in the literature. When introduced in [1], the progressive party problem was a showcase for finite domain constraint programming, since a solution of the original problem for 6 time periods could be found in 26 minutes with Ilog Solver, while an integer programming model was not successful. Improved results using finite domains were reported in [2]. Since then, alternative solutions using MILP have been proposed, while local search methods have been significantly faster, as well as more stable for given problem variations. The best results (to my knowledge) are from [3].

Helmut Simonis

