
Table of Contents

Frontmatter

Invited Papers

Search and Inference in AI Planning

While Planning has been a key area in Artificial Intelligence since its beginnings, significant changes have occurred in the last decade as a result of new ideas and a more established empirical methodology. In this invited talk, I will focus on Optimal Planning, where these new ideas can be understood along two dimensions: branching and pruning. Both heuristic search planners and SAT and CSP planners can be understood in this way, with the latter branching on variables and pruning by constraint propagation, and the former branching on actions and pruning by lower bound estimations. The two formulations, however, have a lot in common, and some key planners such as Graphplan can be understood in either way: as computing a lower bound function and searching backwards from the goal, or as performing a precise, bounded form of variable elimination, followed by backtracking. The main limitation of older, so-called Partial-Order Causal Link (POCL) planners is that they provide smart branching schemes, in particular for temporal planning, but weak pruning rules. Indeed, the computation and even the formulation of good lower bounds for POCL plans is far from trivial. However, the pruning that cannot be obtained by the use of good monolithic lower bounds can often be achieved by simple propagation rules over a suitable constraint-based formulation. We show this to be the case for CPT, currently the best domain-independent temporal planner, and then explore briefly further branching and pruning variations in parallel and conformant planning.

Héctor Geffner

OWL: A Description Logic Based Ontology Language

(Extended Abstract)

Description Logics (DLs) are a family of class (concept) based knowledge representation formalisms. They are characterised by the use of various constructors to build complex concepts from simpler ones, an emphasis on the decidability of key reasoning tasks, and by the provision of sound, complete and (empirically) tractable reasoning services.

Although they have a range of applications (e.g., reasoning with database schemas and queries [1,2]), DLs are perhaps best known as the basis for ontology languages such as OIL, DAML+OIL and OWL [3]. The decision to base these languages on DLs was motivated by a requirement not only that key inference problems (such as class satisfiability and subsumption) be decidable, but that “practical” decision procedures and “efficient” implemented systems also be available.

Ian Horrocks

Preference Reasoning

Constraints and preferences are ubiquitous in real life. Moreover, preferences can be of many kinds: qualitative, quantitative, conditional, positive or negative, to name a few. Our ultimate goal is to define and study formalisms that can model problems with both constraints and many kinds of preferences, possibly defined by several agents, and to develop tools to solve such problems efficiently.

In this paper we briefly report on recent work towards this goal.

Francesca Rossi

The G12 Project: Mapping Solver Independent Models to Efficient Solutions

The G12 project, recently started by National ICT Australia (NICTA), is an ambitious project to develop a software platform for solving large-scale industrial combinatorial optimisation problems. The core design involves three languages: Zinc, Cadmium and Mercury (Group 12 of the periodic table). Zinc is a declarative modelling language for expressing problems, independent of any solving methodology. Cadmium is a mapping language for mapping Zinc models to underlying solvers and/or search strategies, including hybrid approaches. Finally, the existing Mercury language will be extended to support building extensible and hybridizable solvers. The same Zinc model, used with different Cadmium mappings, will allow us to experiment with different complete, local, or hybrid search approaches for the same problem. This talk will explain the G12 global design, the final G12 objectives, and our progress so far.

Peter J. Stuckey, Maria Garcia de la Banda, Michael Maher, Kim Marriott, John Slaney, Zoltan Somogyi, Mark Wallace, Toby Walsh

Best Papers

Symmetry Definitions for Constraint Satisfaction Problems

We review the many different definitions of symmetry for constraint satisfaction problems (CSPs) that have appeared in the literature, and show that a symmetry can be defined in two fundamentally different ways: as an operation preserving the solutions of a CSP instance, or else as an operation preserving the constraints. We refer to these as solution symmetries and constraint symmetries. We define a constraint symmetry more precisely as an automorphism of a hypergraph associated with a CSP instance, the microstructure complement. We show that the solution symmetries of a CSP instance can also be obtained as the automorphisms of a related hypergraph, the k-ary nogood hypergraph, and give examples to show that some instances have many more solution symmetries than constraint symmetries. Finally, we discuss the practical implications of these different notions of symmetry.

David Cohen, Peter Jeavons, Christopher Jefferson, Karen E. Petrie, Barbara M. Smith
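
To make the distinction concrete, here is a minimal, hypothetical sketch (ours, not the authors' formalism) of the solution-symmetry test on a toy CSP: a permutation of variable-value pairs is a solution symmetry iff it maps the solution set onto itself.

```python
from itertools import product

variables = [0, 1]
domain = [0, 1]

def satisfies(assignment):
    return assignment[0] != assignment[1]   # the single constraint x0 != x1

solutions = {a for a in product(domain, repeat=len(variables)) if satisfies(a)}

def is_solution_symmetry(perm):
    """perm maps each (variable, value) pair to a (variable, value) pair;
    it is a solution symmetry iff it maps the solution set onto itself
    (we assume the image of every assignment is again a full assignment)."""
    def image(assignment):
        out = {}
        for var, val in enumerate(assignment):
            new_var, new_val = perm[(var, val)]
            out[new_var] = new_val
        return tuple(out[v] for v in variables)
    return {image(s) for s in solutions} == solutions

# Swapping the two variables is a solution symmetry of x0 != x1.
swap = {(v, d): (1 - v, d) for v in variables for d in domain}
print(is_solution_symmetry(swap))  # True
```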

Dynamic Ordering for Asynchronous Backtracking on DisCSPs

An algorithm that performs asynchronous backtracking on distributed CSPs with dynamic ordering of agents, ABT_DO, is proposed. Agents propose reorderings of lower-priority agents and send these proposals whenever they send assignment messages. Changes of ordering trigger a different computation of Nogoods. The dynamically ordered asynchronous backtracking algorithm uses polynomial space, like standard ABT.

The ABT_DO algorithm with three different ordering heuristics is compared to standard ABT on randomly generated DisCSPs. A Nogood-triggered heuristic, inspired by dynamic backtracking, is found to outperform static-order ABT by a large factor in run-time and to improve the network load.

Roie Zivan, Amnon Meisels

Full Papers

Incremental Algorithms for Local Search from Existential Second-Order Logic

Local search is a powerful and well-established method for solving hard combinatorial problems. Yet, until recently, it has provided very little user support, leading to time-consuming and error-prone implementation tasks. We introduce a scheme that, from a high-level description of a constraint in existential second-order logic with counting, automatically synthesises incremental penalty calculation algorithms. The performance of the scheme is demonstrated by solving real-life instances of a financial portfolio design problem that seem unsolvable in reasonable time by complete search.

Magnus Ågren, Pierre Flener, Justin Pearson

Inter-distance Constraint: An Extension of the All-Different Constraint for Scheduling Equal Length Jobs

We study a global constraint, the “inter-distance constraint”, which ensures that the distance between any pair of variables is at least equal to a given value. When this value is 1, the inter-distance constraint reduces to the all-different constraint. We introduce an algorithm to propagate this constraint and we show that, when the domains of the variables are intervals, our algorithm achieves arc-B-consistency. It provides tighter bounds than generic scheduling constraint propagation algorithms (like edge-finding) that could be used to capture this constraint. The worst-case complexity of the algorithm is cubic, but it behaves well in practice and drastically reduces the search space. Experiments on special Job-Shop problems and on an industrial problem are reported.

Konstantin Artiouchine, Philippe Baptiste
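
As a concrete reading of the definition above, a minimal checker for the constraint's semantics (just the check on instantiated variables, not the paper's arc-B-consistency propagator) might look like this:

```python
from itertools import combinations

def inter_distance_ok(values, p):
    """Every pair of values must be at least p apart."""
    return all(abs(a - b) >= p for a, b in combinations(values, 2))

print(inter_distance_ok([0, 3, 6], 3))   # True
print(inter_distance_ok([0, 2, 6], 3))   # False: |0 - 2| < 3
print(inter_distance_ok([0, 1, 2], 1))   # True: p = 1 is all-different
```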

Mind the Gaps: A New Splitting Strategy for Consistency Techniques

Classical methods for solving numerical CSPs are based on a branch-and-prune algorithm, a dichotomic enumeration process interleaved with a consistency filtering algorithm. In many interval solvers, the pruning step is based on local consistencies or partial consistencies. The associated pruning algorithms compute numerous data required to identify gaps within some domains, i.e., inconsistent intervals strictly included in the domain. However, these gaps are only used to compute the smallest approximation of the box enclosing all the solutions. This paper introduces a search strategy, named MindTheGaps, that takes advantage of the gaps identified during the filtering process. Gaps are collected with a negligible overhead, and are used to select the splitting direction as well as to define relevant cutting points within the domain. Splitting the domain by removing such gaps definitely reduces the search space. It also helps to discard some redundant solutions and helps the search algorithm to isolate different solutions. First experimental results show that MindTheGaps significantly improves the performance of the search process.

Heikel Batnini, Claude Michel, Michel Rueher

Graph Invariants as Necessary Conditions for Global Constraints

This article presents a database of about 200 graph invariants for systematically deriving necessary conditions from the graph-properties-based representation of global constraints. This scheme is based on invariants on the graph characteristics used in the description of a global constraint. A SICStus Prolog implementation based on arithmetic and logical constraints, as well as on indexicals, is available.

Nicolas Beldiceanu, Mats Carlsson, Jean-Xavier Rampon, Charlotte Truchet

Allocation and Scheduling for MPSoCs via Decomposition and No-Good Generation

This paper describes an efficient, complete approach for solving a complex allocation and scheduling problem for Multi-Processor System-on-Chip (MPSoC). Given a throughput constraint for a target application, characterized as a task graph annotated with computation, communication and storage requirements, we compute an allocation and schedule which minimize communication cost first, and then the makespan given the minimal communication cost. Our approach is based on problem decomposition, where the allocation is solved by an Integer Programming solver and the scheduling by a Constraint Programming solver. The two solvers are interleaved and their interaction regulated by no-good generation. Experimental results show speedups of orders of magnitude w.r.t. pure IP and CP solution strategies.

Luca Benini, Davide Bertozzi, Alessio Guerri, Michela Milano

Sub-optimality Approximations

The sub-optimality approximation problem considers an optimization problem $\mathcal{O}$, its optimal solution $\sigma^*$, and a variable $x$ with domain $\{d_1, \ldots, d_m\}$, and returns approximations to $\mathcal{O}[x \leftarrow d_1], \ldots, \mathcal{O}[x \leftarrow d_m]$, where $\mathcal{O}[x \leftarrow d_i]$ denotes the problem $\mathcal{O}$ with $x$ assigned to $d_i$. The sub-optimality approximation problem is at the core of online stochastic optimization algorithms, and it can also be used for solution repair and approximate filtering of optimization constraints. This paper formalizes the problem and presents sub-optimality approximation algorithms for metric TSPs, packet scheduling, and metric $k$-medians that run faster than the optimal or approximation algorithms. It also presents results on the hardness/easiness of sub-optimality approximations.

Russell Bent, Irit Katriel, Pascal Van Hentenryck
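
For illustration, the exact (and expensive) baseline that sub-optimality approximation algorithms aim to beat simply re-solves the problem once per value of $x$. A brute-force sketch on a toy objective (our example, not the paper's benchmarks):

```python
from itertools import product

def solve(domains, objective, fixed=None):
    """Exhaustively minimize the objective over all assignments extending `fixed`."""
    fixed = fixed or {}
    free = [v for v in domains if v not in fixed]
    best = None
    for vals in product(*(domains[v] for v in free)):
        cost = objective({**fixed, **dict(zip(free, vals))})
        if best is None or cost < best:
            best = cost
    return best

def exact_suboptimality(domains, objective, x):
    # Re-solve O[x <- d] for every d in the domain of x: exact but slow.
    return {d: solve(domains, objective, {x: d}) for d in domains[x]}

# Toy problem: minimize (a - b)^2 + a with a, b in {0, 1, 2}.
domains = {"a": [0, 1, 2], "b": [0, 1, 2]}
obj = lambda s: (s["a"] - s["b"]) ** 2 + s["a"]
print(exact_suboptimality(domains, obj, "a"))  # {0: 0, 1: 1, 2: 2}
```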

A Linear-Logic Semantics for Constraint Handling Rules

One of the attractive features of the Constraint Handling Rules (CHR) programming language is its declarative semantics, where rules are read as formulae in first-order predicate logic. However, the more CHR is used as a general-purpose programming language, the more the limitations of that kind of declarative semantics in modelling change become apparent. We propose an alternative declarative semantics based on (intuitionistic) linear logic, and establish strong theorems on both soundness and completeness of the new declarative semantics w.r.t. the operational semantics.

Hariolf Betz, Thom Frühwirth

Distributed Stable Matching Problems

We consider the Stable Marriage Problem and the Stable Roommates Problem, two well-known members of the general class of Stable Matching Problems. They are combinatorial problems which can be solved by centralized algorithms in polynomial time, but this requires making public the preference lists that agents would like to keep private. With this aim, we define the distributed versions of these problems, and we provide a constraint-based approach that solves them while preserving privacy. We give empirical results on the proposed approach.

Ismel Brito, Pedro Meseguer

Beyond Hypertree Width: Decomposition Methods Without Decompositions

The general intractability of the constraint satisfaction problem has motivated the study of restrictions on this problem that permit polynomial-time solvability. One major line of work has focused on structural restrictions, which arise from restricting the interaction among constraint scopes. In this paper, we engage in a mathematical investigation of generalized hypertree width, a structural measure that until recently has eluded study. We obtain a number of computational results, including a simple proof of the tractability of CSP instances having bounded generalized hypertree width.

Hubie Chen, Víctor Dalmau

Ad-hoc Global Constraints for Life

Still-life is a challenging problem for CP techniques. We show how to use the global case constraint to construct ad-hoc constraints which can provide stronger propagation than existing CP models. We also demonstrate how to use BDDs to construct good representations for the case constraint, which is critical for efficiency. Our results seem comparable to hybrid CP/IP models even though we are only using propagation, albeit on ad-hoc global constraints. This is rather promising since it shows the potential of ad-hoc global constraints to do better than IP global constraints.

Kenil C. K. Cheng, Roland H. C. Yap

Tractable Clones of Polynomials over Semigroups

We contribute to the algebraic study of the complexity of constraint satisfaction problems. We give a new sufficient condition on a set of relations Γ over a domain $S$ for the tractability of CSP(Γ): if $S$ is a block-group (a particular class of semigroups) of exponent $\omega$ and Γ is a set of relations over $S$ preserved by the operation defined by the polynomial $f(x, y, z) = x y^{\omega - 1} z$ over $S$, then CSP(Γ) is tractable. This theorem strictly improves on results of Feder and Vardi and of Bulatov et al., and we demonstrate it by reproving an upper bound of Klíma et al.

We also investigate systematically the tractability of CSP(Γ) when Γ is a set of relations closed under operations that are all expressible as polynomials over a finite semigroup $S$. In particular, if $S$ is a nilpotent group, we show that CSP(Γ) is tractable iff one of these polynomials defines a Mal'tsev operation, and we conjecture that this holds for all groups.

Víctor Dalmau, Ricard Gavaldà, Pascal Tesson, Denis Thérien
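
The notion of a set of relations being "preserved by" an operation is the standard polymorphism condition. The following small sketch (a generic illustration with a Mal'tsev operation over Z_2, not the paper's block-group construction) checks it by brute force:

```python
from itertools import product

def preserves(op, relation, arity=3):
    """op: a ternary operation on the domain; relation: a set of tuples.
    op preserves the relation iff applying it coordinatewise to any
    triple of tuples of the relation yields a tuple of the relation."""
    width = len(next(iter(relation)))
    return all(
        tuple(op(*(t[i] for t in ts)) for i in range(width)) in relation
        for ts in product(relation, repeat=arity)
    )

maltsev = lambda x, y, z: (x - y + z) % 2       # Mal'tsev operation over Z_2
rel = {(0, 1), (1, 0)}                          # solutions of x + y = 1 over Z_2
print(preserves(maltsev, rel))  # True: linear equations are Mal'tsev-closed
```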

CP(Graph): Introducing a Graph Computation Domain in Constraint Programming

In an increasing number of domains, such as bioinformatics, combinatorial graph problems arise. We propose a novel way to solve these problems, mainly those that can be translated to constrained subgraph finding. Our approach extends constraint programming by introducing CP(Graph), a new computation domain focused on graphs, including a new type of variable (graph domain variables) as well as constraints over these variables and their propagators. These constraints are subdivided into kernel constraints and additional constraints formulated as networks of kernel constraints. For some of these constraints, a dedicated global constraint and its associated propagator are sketched. CP(Graph) is integrated with the finite domain and finite set computation domains, allowing constraints of these domains to be combined with graph constraints.

A prototype of CP(Graph) built over finite domains and finite sets in Oz is presented, and we show that a problem of biochemical network analysis can be very simply described and solved within CP(Graph).

Gregoire Dooms, Yves Deville, Pierre Dupont

Interval Analysis in Scheduling

This paper reconsiders the most basic scheduling problem, that of minimizing the makespan of a partially ordered set of activities, in the context of incomplete knowledge. While this problem is very easy in the deterministic case, its counterpart when durations are interval-valued is much trickier, as standard results and algorithms no longer apply. After positioning this paper in the scope of temporal networks under uncertainty, we provide a complete solution to the problem of finding the latest starting times and floats of activities, and of locating surely critical ones, as they are often isolated. The minimal float problem is NP-hard while the maximal float problem is polynomial. New complexity results and efficient algorithms are provided for the interval-valued makespan minimization problem.

Jérôme Fortin, Paweł Zieliński, Didier Dubois, Hélène Fargier

Assumption-Based Pruning in Conditional CSP

A conditional constraint satisfaction problem (CCSP) is a variant of the standard constraint satisfaction problem (CSP). CCSPs model problems where some of the variables and constraints may be conditionally inactive such that they do not participate in a solution. Recently, algorithms were introduced that use MAC at their core to solve CCSP. We extend MAC with a simple assumption-based reasoning. The resulting algorithm, Activity MAC (AMAC), is able to achieve significantly better pruning than existing methods. AMAC is shown to be more than two orders of magnitude more efficient than CondMAC on certain problem classes. Our algorithm is most naturally expressed using a variant of the CCSP representation that we refer to as Activity CSP (ACSP). ACSP introduces activity variables which explicitly control the presence of other variables in the solution. Common aspects of CCSP, such as activity clustering and disjunction, are easily captured by ACSP and contribute to improved pruning by AMAC.

Felix Geller, Michael Veksler

Conditional Symmetry Breaking

We introduce the study of conditional symmetry breaking in constraint programming. This arises in a sub-problem of a constraint satisfaction problem, where the sub-problem satisfies some condition under which additional symmetries hold. Conditional symmetry can cause redundancy in a systematic search for solutions. Breaking this symmetry is an important part of solving a constraint satisfaction problem effectively. We demonstrate experimentally that three methods, well known for breaking unconditional symmetries, can be applied to conditional symmetries. These are: adding conditional symmetry-breaking constraints, reformulating the problem to remove the symmetry, and augmenting the search process to break the conditional symmetry dynamically through the use of a variant of Symmetry Breaking by Dominance Detection (SBDD).

Ian P. Gent, Tom Kelsey, Steve A. Linton, Iain McDonald, Ian Miguel, Barbara M. Smith

Symmetry and Consistency

We introduce a novel and exciting research area: symmetrising levels of consistency to produce stronger forms of consistency and more efficient mechanisms for establishing them. We propose new levels of consistency for Constraint Satisfaction Problems (CSPs) incorporating the symmetry group of a CSP. We first define Sym(i,j)-consistency, show that even Sym(1,0)-consistency can prune usefully, and study some consequences of maintaining Sym(i,0)-consistency. We then present pseudocode for SymPath consistency, and a symmetrised version of singleton consistency, before presenting experimental evidence of these algorithms' practical effectiveness. With this contribution we establish the study of symmetry-based levels of consistency of CSPs.

Ian P. Gent, Tom Kelsey, Steve Linton, Colva Roney-Dougal

Solving the MOLR and Social Golfers Problems

We present a range of techniques for tackling the problem of finding sets of Mutually Orthogonal Latin Rectangles (MOLR). In particular, we use a construction that allows us to search for solutions of a particular form with much reduced effort, and a seeding heuristic for the MOLR problem that allows a local search approach to find much better solutions than would be possible otherwise. Finally, we use the MOLR solutions found to construct solutions to the social golfer problem that improve the best known number of rounds for 43 instances, by as many as 10 rounds.

Warwick Harvey, Thorsten Winterer

Advances in Polytime Isomorph Elimination for Configuration

An inherent and often underestimated difficulty in solving configuration problems is the existence of many structural isomorphisms. Despite its applicability to almost all configuration problems, this issue of considerable importance has attracted little research interest. We define two search procedures allowing the removal of large portions of the search space that provably contain only non-canonical solutions. The tests performed at each node take polynomial time. Experimental results are reported on a simple generic configuration example.

Laurent Hénocque, Mathias Kleiner, Nicolas Prcovic

Planning and Scheduling to Minimize Tardiness

We combine mixed integer linear programming (MILP) and constraint programming (CP) to minimize tardiness in planning and scheduling. Tasks are allocated to facilities using MILP and scheduled using CP, and the two are linked via logic-based Benders decomposition. We consider two objectives: minimizing the number of late tasks, and minimizing total tardiness. Our main theoretical contribution is a relaxation of the cumulative scheduling subproblem, which is critical to performance. We obtain substantial computational speedups relative to the state of the art in both MILP and CP. We also obtain much better solutions for problems that cannot be solved to optimality.

J. N. Hooker

Search Heuristics and Heavy-Tailed Behaviour

The heavy-tailed phenomenon that characterises the runtime distributions of backtrack search procedures has received considerable attention over the past few years. Some have conjectured that heavy-tailed behaviour is largely due to the characteristics of the algorithm used. Others have conjectured that problem structure is a significant contributor. In this paper we attempt to explore the former hypothesis, namely we study how variable and value ordering heuristics impact the heavy-tailedness of runtime distributions of backtrack search procedures. We demonstrate that heavy-tailed behaviour can be eliminated from particular classes of random problems by carefully selecting the search heuristics, even when using chronological backtrack search. We also show that combinations of good search heuristics can eliminate heavy tails from Quasigroups with Holes of order 10, and give some insights into why this is the case. These results motivate a more detailed analysis of the effects that variable and value ordering can have on heavy-tailedness. We show how combinations of variable and value ordering heuristics can result in a runtime distribution being inherently heavy-tailed. Specifically, we show that even if we were to use an Oracle to refute insoluble subtrees optimally, for some combinations of heuristics we would still observe heavy-tailed behaviour. Finally, we study the distributions of refutation sizes found using different combinations of heuristics and gain some further insights into what characteristics tend to give rise to heavy-tailed behaviour.

Tudor Hulubei, Barry O’Sullivan

2-Way vs. d-Way Branching for CSP

Most CSP algorithms are based on refinements and extensions of backtracking, and employ one of two simple “branching schemes”: 2-way branching or $d$-way branching, for domain size $d$. The schemes are not equivalent, but little is known about their relative power. Here we compare them in terms of how efficiently they can refute an unsatisfiable instance with optimal branching choices, by studying two variants of the resolution proof system, denoted C-RES and NG-RES, which model the reasoning of CSP algorithms. The tree-like restrictions, tree-C-RES and tree-NG-RES, exactly capture the power of backtracking with 2-way branching and $d$-way branching, respectively. We give a family of instances which require exponential-sized search trees for backtracking with $d$-way branching, but have size $O(d^2 n)$ search trees for backtracking with 2-way branching. We also give a natural branching strategy with which backtracking with 2-way branching finds refutations of these instances in time $O(d^2 n^2)$. The unrestricted variants of C-RES and NG-RES can simulate the reasoning of algorithms which incorporate learning and $k$-consistency enforcement. We show exponential separations between C-RES and NG-RES, as well as between the tree-like and unrestricted versions of each system. All separations given are nearly optimal.

Joey Hwang, David G. Mitchell

Maintaining Longest Paths in Cyclic Graphs

This paper reconsiders the problem of maintaining longest paths in directed graphs, which is at the core of many scheduling applications. It presents bounded incremental algorithms for arc insertion and deletion running in time $O(\|\delta\| + |\delta| \log |\delta|)$ on Cyclic<0 graphs (i.e., graphs whose cycles have strictly negative lengths), where $|\delta|$ and $\|\delta\|$ are measures of the change in the input and output. For Cyclic≤0 graphs, maintaining longest paths is unbounded under reasonable computational models; when only arc insertions are allowed, it is shown that the problem can be solved in $O(\|\delta\| + |\delta| \log |\delta|)$ time even in the presence of zero-length cycles. The algorithms directly apply to shortest paths (by negating the lengths), leading to simpler algorithms than previously known and reducing the worst-case complexity of an operation from $\tilde{O}(nm)$ to $O(n+m)$ for Cyclic>0 graphs with $n$ vertices and $m$ arcs.

Irit Katriel, Pascal Van Hentenryck
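
The negation trick mentioned in the last sentence is easy to see in a non-incremental form. This sketch (ours; the paper's contribution is the incremental maintenance, which is not attempted here) computes single-source longest paths in a Cyclic<0 graph by running Bellman-Ford on negated arc lengths:

```python
import math

def longest_paths(n, arcs, source):
    """arcs: list of (u, v, length). Longest-path distances from source,
    computed as shortest paths over negated lengths (Bellman-Ford); the
    Cyclic<0 assumption means the negated graph has no negative cycle."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):              # standard relaxation rounds
        for u, v, w in arcs:
            if dist[u] - w < dist[v]:
                dist[v] = dist[u] - w
    return [0.0 - d for d in dist]      # negate back (unreachable -> -inf)

# The cycle 0 -> 1 -> 0 has length 2 + (-5) = -3 < 0, so the graph is Cyclic<0.
arcs = [(0, 1, 2), (1, 0, -5), (1, 2, 4)]
print(longest_paths(3, arcs, 0))  # [0.0, 2.0, 6.0]
```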

Applying Constraint Programming to Rigid Body Protein Docking

In this paper we show how Constraint Programming (CP) techniques can improve the efficiency and applicability of grid-based algorithms for optimising surface contact between complex solids. We use BiGGER [1] (Bimolecular complex Generation with Global Evaluation and Ranking) to illustrate the method as applied to modelling protein interactions, an important effort in current bioinformatics. BiGGER prunes the search space by maintaining bounds consistency on interval constraints that model the requirement for the shapes to be in contact but not overlapping, and by using a branch and bound approach to search the models with the best fit. This CP approach gives BiGGER some efficiency advantages over popular protein docking methods that use Fourier transforms to match protein structures. We also present an efficient algorithm to actively impose a broad range of constraints or combinations of constraints on distances between points of the two structures to dock, which allows the use of experimental data to increase the effectiveness and speed of modelling protein interactions and which cannot be done as efficiently in Fourier transform methods. This shows that constraint programming provides a different approach to protein docking (and fitting of shapes in general) that increases the scope of application while improving efficiency.

Ludwig Krippahl, Pedro Barahona

Maximum Constraint Satisfaction on Diamonds

In this paper we study the complexity of the weighted maximum constraint satisfaction problem (Max CSP) over an arbitrary finite domain. In this problem, one is given a collection of weighted constraints on overlapping sets of variables, and the goal is to find an assignment of values to the variables so as to maximize the total weight of satisfied constraints. Max CSP is NP-hard in general; however, some restrictions on the form of constraints may ensure tractability. Recent results indicate that there is a connection between tractability of such restricted problems and supermodularity of the allowed constraint types with respect to some lattice ordering of the domain. We prove several results confirming this in a special case when the lattice ordering is as loose as possible, i.e., a diamond one.

Andrei Krokhin, Benoit Larose

Exploiting Unit Propagation to Compute Lower Bounds in Branch and Bound Max-SAT Solvers

One of the main differences between complete SAT solvers and exact Max-SAT solvers is that the former make intensive use of unit propagation at each node of the proof tree, while the latter, in order to ensure optimality, can only apply unit propagation to a restricted number of nodes. In this paper, we describe a branch and bound Max-SAT solver that applies unit propagation at each node of the proof tree to compute the lower bound, instead of applying unit propagation to simplify the formula. The new lower bound captures the lower bound based on inconsistency counts that most state-of-the-art Max-SAT solvers apply, as well as other improvements, like the star rule, that have been defined to obtain lower bounds of better quality. Moreover, our solver incorporates the Jeroslow-Wang variable selection heuristic, the pure literal and dominating unit clause rules, and novel preprocessing techniques. The experimental investigation we conducted to compare our solver with the most modern Max-SAT solvers provides evidence that our solver is very competitive.

Chu Min Li, Felip Manyà, Jordi Planes

Generalized Conflict Learning for Hybrid Discrete/Linear Optimization

Conflict-directed search algorithms have formed the core of practical, model-based reasoning systems for the last three decades. At the core of many of these applications is a series of discrete constraint optimization problems and a conflict-directed search algorithm, which uses conflicts in the forward search step to focus search away from known infeasibilities and towards the optimal feasible solution. In the arena of model-based autonomy, deep space probes have given way to more agile applications, such as coordinated vehicle control, which must robustly control continuous dynamics. Controlling these systems requires optimizing over continuous as well as discrete variables, using linear as well as logical constraints.

This paper explores the development of algorithms for solving hybrid discrete/linear optimization problems that use conflicts in the forward search direction, an idea carried over from the conflict-directed search algorithms of model-based reasoning. We introduce a novel algorithm called Generalized Conflict-Directed Branch and Bound (GCD-BB). GCD-BB extends traditional Branch and Bound (B&B), first by constructing conflicts from nodes of the search tree that are found to be infeasible or sub-optimal, and then by using these conflicts to guide the forward search away from known infeasible and sub-optimal states. Evaluated empirically on a range of test problems of coordinated air vehicle control, GCD-BB demonstrates a substantial improvement in performance compared to a traditional B&B algorithm applied either to disjunctive linear programs or to an equivalent binary integer programming encoding.

Hui Li, Brian Williams

Parallel Local Search in Comet

The availability of commodity multiprocessors offers significant opportunities for addressing the increasing computational requirements of optimization applications. To leverage these potential benefits, it is important, however, to make parallel processing easily accessible to a wide audience of optimization programmers. This paper addresses this challenge by proposing parallel programming abstractions that keep the distance between sequential and parallel local search algorithms as small as possible. The abstractions, which include parallel loops, interruptions, and thread pools, are compositional and cleanly separate the optimization program from the parallel instructions. They have been evaluated experimentally on a variety of applications, including facility location and coloring, for which they provide significant speedups.

Laurent Michel, Pascal Van Hentenryck

Generating Corrective Explanations for Interactive Constraint Satisfaction

Interactive tasks such as online configuration and e-commerce can be modelled as constraint satisfaction problems (CSPs). These can be solved interactively by a user assigning values to variables. The user may require advice and explanations from a system to help him/her find a satisfactory solution. Explanations of failure in constraint programming tend to focus on conflict. However, what is really desirable is an explanation that is corrective in the sense that it provides the basis for moving forward in the problem-solving process. More specifically, when faced with a dead-end, or when a desirable value has been removed from a domain, we need to compute alternative assignments for a subset of the assigned variables that enables the user to move forward. This paper defines this notion of corrective explanation, and proposes an algorithm to generate such explanations. The approach is shown to perform well on both real-world configuration benchmarks and randomly generated problems.

Barry O’Callaghan, Barry O’Sullivan, Eugene C. Freuder

SPREAD: A Balancing Constraint Based on Statistics

Many combinatorial problems require their solutions to achieve a certain balance of given features. In the constraint programming literature, little has been written to specifically address this issue, particularly at the modeling level. We propose a new constraint dedicated to balancing, based on well-known and well-understood concepts in statistics. We show how it can be used to model different situations in which balance is important. We also design efficient filtering algorithms to guide the search towards balanced solutions.

Gilles Pesant, Jean-Charles Régin
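
As one plausible reading of "balance based on statistics" (our illustrative assumption; the paper defines the actual SPREAD constraint and its filtering), a fully instantiated solution might be required to meet a given sum while keeping the standard deviation of the values below a threshold:

```python
import statistics

def spread_ok(values, total, max_std_dev):
    """Naive check on instantiated values: fixed sum, bounded dispersion."""
    return sum(values) == total and statistics.pstdev(values) <= max_std_dev

print(spread_ok([3, 4, 5], total=12, max_std_dev=1.0))   # True: well balanced
print(spread_ok([1, 1, 10], total=12, max_std_dev=1.0))  # False: too spread out
```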

Automatic Detection of Variable and Value Symmetries

Many symmetry breaking techniques assume that the symmetries of a CSP are given as input in addition to the CSP itself. We present a method that can be used to detect all the symmetries of a CSP. This method constructs a graph that has the same symmetries as the CSP. Then, generators for the symmetry group are computed using a graph automorphism algorithm. This method improves and extends previous work in order to cover global constraints, arithmetic expressions and value symmetries. We show that this method is able to find symmetries for examples that were thought to be too convoluted for automated detection. We also show that the overhead of symmetry detection is quite negligible, even on very large instances. We present a comprehensive set of examples where automated symmetry detection is coupled with symmetry breaking techniques.

Jean-François Puget

Breaking All Value Symmetries in Surjection Problems

We propose a surprisingly simple new way of breaking all value symmetries with constraints. Our method requires the addition of one variable per value of the problem plus a linear number of binary constraints. The set of constraints is automatically computed from the symmetries of the problem using computational group theory. Our method applies to problems where every value is taken by at least one variable. Such problems occur frequently in practice. Various experiments show that our method is extremely effective when compared to previously published methods.

Jean-François Puget

AC-*: A Configurable, Generic and Adaptive Arc Consistency Algorithm

In this paper, we present AC-*, a new configurable, generic and adaptive algorithm for establishing arc consistency for binary constraints. AC-* is configurable: by combining some parameters, AC-* corresponds to any existing AC algorithm, namely AC-3, AC-4, AC-6, AC-7, AC-2000, AC-2001, AC-8, AC-3d, AC-3.2 and AC-3.3. AC-* is generic, like AC-5, because it may take into account the structure of the constraints.

AC-* is adaptive because the underlying algorithm can be changed during the computation in order to use the most efficient one. This new algorithm leads to a new nomenclature of AC algorithms, based on the different features used by an algorithm: the values that are reconsidered when a domain is modified, whether bi-directionality is taken into account, and the way a new support is sought. This nomenclature shows that several new combinations are now possible; that is, we can easily combine some ideas of AC-3 with some ideas of AC-7, and some ideas of AC-2001 with some ideas of AC-6. Some experimental results highlight the advantages of our approach.

Jean-Charles Régin

Maintaining Arc Consistency Algorithms During the Search Without Additional Space Cost

In this paper, we detail the versions of the arc consistency algorithms for binary constraints based on lists of supports and last values when they are maintained during the search for solutions. In other words, we give the explicit code of the MAC-6 and MAC-7 algorithms. Moreover, we present an original way to restore the last values of the AC-6 and AC-7 algorithms in order to obtain a MAC version of these algorithms whose space complexity remains in $O(ed)$ while keeping the $O(ed^2)$ time complexity on any branch of the search tree, where $d$ is the size of the largest domain and $e$ is the number of constraints. This result outperforms all previous studies.

Jean-Charles Régin

Weak Composition for Qualitative Spatial and Temporal Reasoning

It has now been clear for some time that for many qualitative spatial or temporal calculi, for instance the well-known RCC8 calculus, the operation of composition of relations which is used is actually only weak composition, which is defined as the strongest relation in the calculus that contains the real composition. An immediate consequence for qualitative calculi where weak composition is not equivalent to composition is that the well-known concept of path-consistency is not applicable anymore. In these cases we can only use algebraic closure which corresponds to applying the path-consistency algorithm with weak composition instead of composition.

In this paper we analyse the effects of having weak compositions. Starting with atomic CSPs, we show under which conditions algebraic closure can be used to decide consistency in a qualitative calculus, how weak composition affects different important techniques for analysing qualitative calculi, and under which conditions these techniques can be applied. For our analysis we introduce a new concept for qualitative relations, the “closure under constraints”. It turns out that the most important property of a qualitative calculus is not whether weak composition is equivalent to composition, but whether the relations are closed under constraints. All our results are general and can be applied to all existing and future qualitative spatial and temporal calculi. We close our paper with a road map of how qualitative calculi should be analysed. As a side effect, it turns out that some results in the literature have to be reconsidered.

Jochen Renz, Gérard Ligozat

Boosting Distributed Constraint Satisfaction

Competition and cooperation can boost the performance of search. Both can be implemented with a portfolio of algorithms which run in parallel, give hints to each other, and compete to be the first to finish and deliver the solution. In this paper we present a new generic framework for the application of algorithms for distributed constraint satisfaction which makes use of both cooperation and competition. This framework improves the performance of two different standard algorithms by one order of magnitude and can reduce the risk of poor performance by up to three orders of magnitude. Moreover, it greatly reduces the idleness flaw classically observed in distributed hierarchy-based searches. We expect our new methods to be similarly beneficial for any tree-based distributed search and describe how to incorporate them.

Georg Ringwelski, Youssef Hamadi

Depth-First Mini-Bucket Elimination

Many important combinatorial optimization problems can be expressed as constraint satisfaction problems with soft constraints. When problems are too difficult to be solved exactly, approximation methods become the best option. Mini-bucket elimination (MBE) is a well-known approximation method for combinatorial optimization problems. It has a control parameter z that allows us to trade time and space for accuracy. In practice it is the space, and not the time, that limits the execution with high values of z. In this paper we introduce a set of improvements to the way MBE handles memory. The resulting algorithm, dfMBE, may be orders of magnitude more efficient. As a consequence, higher values of z can be used which, in turn, yield significantly better bounds. We demonstrate our approach on scheduling, probabilistic reasoning and resource allocation problems.

Emma Rollon, Javier Larrosa

Using SAT in QBF

QBF is the problem of deciding the satisfiability of quantified boolean formulae in which variables can be either universally or existentially quantified. QBF generalizes SAT (SAT is QBF under the restriction that all variables are existential) and is in practice much harder to solve than SAT. One of the sources of added complexity in QBF arises from the restrictions quantifier nesting places on the variable orderings that can be utilized during backtracking search. In this paper we present a technique for alleviating some of this complexity by utilizing an order-unconstrained SAT solver during QBF solving. The innovation of our paper lies in the integration of SAT and QBF. We have developed methods that allow information obtained from each solver to be used to improve the performance of the other. Unlike previous attempts to avoid the ordering constraints imposed by quantifier nesting, our algorithm retains the polynomial space requirements of standard backtracking search. Our empirical results demonstrate that our techniques allow improvements over the current state-of-the-art in QBF solvers.

Horst Samulowitz, Fahiem Bacchus

Tree Decomposition with Function Filtering

Besides search, complete inference methods can also be used to solve soft constraint problems. Their main drawback is their high spatial complexity. To improve their practical usage, we present an approach to decrease memory consumption in tree decomposition methods, a class of complete inference algorithms. This approach, called function filtering, makes it possible to detect and remove some tuples that appear to be consistent (with a cost below the upper bound) but that will become inconsistent (with a cost exceeding the upper bound) when extended to other variables. Using this idea, we have developed the new algorithms CTEf, MCTEf and IMCTEf, standing for cluster, mini-cluster and iterative mini-cluster tree elimination with function filtering. We demonstrate empirically the benefits of our approach.

Martí Sánchez, Javier Larrosa, Pedro Meseguer

On Solving Soft Temporal Constraints Using SAT Techniques

In this paper, we present an algorithm for finding utilitarian optimal solutions to Simple and Disjunctive Temporal Problems with Preferences (STPPs and DTPPs) based on Benders’ decomposition and adopting SAT techniques. In our approach, each temporal constraint is replaced by a Boolean indicator variable, and the decomposed problem is solved by a tightly integrated STP solver and SAT solver. Several hybridization techniques that take advantage of each solver’s strengths are introduced. Finally, empirical evidence is presented to demonstrate the effectiveness of our method compared to other algorithms.

Hossein M. Sheini, Bart Peintner, Karem A. Sakallah, Martha E. Pollack
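
A minimal sketch of the indicator-variable decomposition (hypothetical code, not the authors' solver): the SAT side assigns the Booleans, and an STP consistency check over the activated difference constraints either accepts the assignment or yields a clause to learn.

```python
import itertools, math

def stp_consistent(n, active):
    """active: list of (x, y, c) meaning x - y <= c. Consistent iff the
    distance graph (arc y -> x with weight c) has no negative cycle;
    checked here with Floyd-Warshall."""
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for x, y, c in active:
        d[y][x] = min(d[y][x], c)
    for k, i, j in itertools.product(range(n), repeat=3):  # k outermost
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return all(d[i][i] >= 0 for i in range(n))

# Two indicator-guarded constraints over time points t0, t1:
# b0 -> t0 - t1 <= -1   and   b1 -> t1 - t0 <= -1
guarded = [(0, 1, -1), (1, 0, -1)]
for b0, b1 in itertools.product([False, True], repeat=2):
    active = [g for g, b in zip(guarded, (b0, b1)) if b]
    print((b0, b1), stp_consistent(2, active))
# Only (True, True) is STP-inconsistent: the two constraints form a
# negative cycle, so the SAT solver would learn the clause (!b0 or !b1).
```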

Eplex: Harnessing Mathematical Programming Solvers for Constraint Logic Programming

The eplex library of the ECLiPSe Constraint Logic Programming platform allows the integration of Mathematical Programming techniques with its native Constraint Logic Programming techniques within the same unified framework. It provides an interface to state-of-the-art Mathematical Programming solvers, and a set of programming primitives that allow ‘hybrid’ techniques to be easily expressed. This paper presents these facilities, and discusses some associated implementation issues.

Kish Shen, Joachim Schimpf

Caching Search States in Permutation Problems

When the search for a solution to a constraint satisfaction problem backtracks, it is not usually worthwhile to remember the assignment that failed, because the same assignment will not occur again. However, we show that for some problems recording assignments is useful, because other assignments can lead to the same state of the search. We demonstrate this in two classes of permutation problem, a satisfaction problem and an optimization problem. Caching states visited has proved effective in reducing both search effort and run-time for difficult instances of each class, and the space requirements are manageable.

Barbara M. Smith

Repair-Based Methods for Quantified CSPs

The Quantified CSP (QCSP) is a generalization of the CSP which allows for universally quantified variables. For each possible sequence of assignments to such variables, we have to find a way to set the values of the remaining, existentially quantified, variables so that all the constraints are satisfied. Such problems arise in areas such as planning under uncertainty, model checking, and adversary game playing. QCSPs are starting to attract interest following the development of numerous efficient solvers for the closely related area of QBF. Two approaches have been studied so far: the encoding of QCSPs into QBF, and the generalization of well-known search procedures for CSPs, like FC and MAC, to the quantified case. In this paper we introduce a new approach which utilizes repair-based techniques. We describe a framework for a QCSP solver in which complete and incomplete repair-based methods can be incorporated. We also evaluate such a solver that applies backtracking and local search methods based on the min-conflicts heuristic. Experimental results demonstrate that even simple repair-based techniques can outperform the state-of-the-art solver QCSP-Solve.

Kostas Stergiou

Handling Implication and Universal Quantification Constraints in FLUX

FLUX is a CLP-approach for programming agents that reason about actions under incomplete state knowledge. FLUX is based on the solution to the fundamental frame problem in the fluent calculus. The core is a set of Constraint Handling Rules for the constraints that are used to encode state knowledge. In order to allow for efficient constraint solving, the original expressiveness of state representations in FLUX has been carefully restricted. In this paper, we enhance the expressiveness by adding both implication and universal quantification constraints. We do so without losing the computational merits of FLUX. We present a set of Constraint Handling Rules for these new constraints and prove their correctness against the fluent calculus.

Michael Thielscher

Solving Simple Planning Problems with More Inference and No Search

Many benchmark domains in AI planning, including Blocks, Logistics, Gripper, Satellite, and others, lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although, as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We then show for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding, in our view, which suggests that the classes of problems solvable without search may actually be much broader than the classes identified so far by work in Tractable Planning.

Vincent Vidal, Héctor Geffner

Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning

In this paper, we present a constraint-partitioning approach for finding locally optimal solutions of large-scale mixed-integer nonlinear programming problems (MINLPs). Based on our observation that MINLPs in many engineering applications have highly structured constraints, we propose to partition these MINLPs by their constraints into subproblems, solve each subproblem by an existing solver, and resolve the violated global constraints across the subproblems using our theory of extended saddle points. Constraint partitioning allows many MINLPs that cannot be handled by existing solvers to be solved, because it leads to easier subproblems that are significant relaxations of the original problem. The success of our approach relies on our ability to resolve violated global constraints efficiently, without requiring exhaustive enumerations of variable values in these constraints. We have developed an algorithm for automatically partitioning a large MINLP in order to minimize the number of global constraints, an iterative method for determining the optimal number of partitions in order to minimize the search time, and an efficient strategy for resolving violated global constraints. Our experimental results demonstrate significant improvements over the best existing solvers in terms of solution time and quality in solving a collection of mixed-integer and continuous nonlinear constrained optimization benchmarks.

Benjamin W. Wah, Yixin Chen

Factor Analytic Studies of CSP Heuristics

Factor analysis is a statistical technique for reducing the number of factors responsible for a matrix of correlations to a smaller number of factors that may reflect underlying variables. In this study, factor analysis was used to determine whether variation in search efficiency due to different variable ordering heuristics can be analyzed to reveal basic sources of variation. It was found that the variation could be ascribed to two major factors, which appear to be related to contention (immediate failure) and to forward propagation (future failure). This was most clear-cut with homogeneous random problems, but similar factor patterns were demonstrated for problems with small-world characteristics. Heuristics can be classified in terms of whether they tend to support one or the other strategy, or whether they balance the two; these differences are reflected in the pattern of loadings on the two major factors. Moreover, improvements in efficiency can be obtained by heuristic combinations (“heuristic synergy”) only if the combination includes heuristics that are highly correlated with each factor; therefore, two such heuristics are sufficient. This work represents a step toward understanding the action of heuristics as well as suggesting limits to heuristic performance.

Richard J. Wallace

Short Papers

Lookahead Saturation with Restriction for SAT

We present a new and more efficient heuristic by restricting lookahead saturation (LAS) with NVO (neighbourhood variable ordering) and DEW (dynamic equality weighting). We report on the integration of this heuristic in Satz, a high-performance SAT solver, showing empirically that it significantly improves the performance on an extensive range of benchmark problems that exhibit hard structure.

Anbulagan, John Slaney

Evolving Variable-Ordering Heuristics for Constrained Optimisation

In this paper we present and evaluate an evolutionary approach for learning new constraint satisfaction algorithms, specifically for MAX-SAT optimisation problems. Our approach offers two significant advantages over existing methods: it allows the evolution of more complex combinations of heuristics, and it can identify fruitful synergies among heuristics. Using four different classes of MAX-SAT problems, we experimentally demonstrate that algorithms evolved with this method exhibit superior performance in comparison to general purpose methods.

Stuart Bain, John Thornton, Abdul Sattar

Multi-point Constructive Search

Multi-Point Constructive Search maintains a small set of “elite solutions” that are used to heuristically guide constructive search through periodically restarting search from an elite solution. Empirical results indicate that for job shop scheduling optimization problems and quasi-group completion problems, multi-point constructive search performs significantly better than chronological backtracking and bounded backtracking with random restart.

J. Christopher Beck

Bounds of Graph Characteristics

This article presents a basic scheme for systematically deriving a filtering algorithm from the graph-properties-based representation of global constraints. This scheme is based on the bounds of the graph characteristics used in the description of a global constraint. The article provides bounds for the most commonly used graph characteristics.

Nicolas Beldiceanu, Thierry Petit, Guillaume Rochart

Acquiring Parameters of Implied Global Constraints

This paper presents a technique for learning parameterized implied constraints. They can be added to a model to improve the solving process. Experiments on implied Gcc constraints show the interest of our approach.

Christian Bessiere, Rémi Coletta, Thierry Petit

Integrating Benders Decomposition Within Constraint Programming

Benders decomposition [1] is a solving strategy based on the separation of the variables of the problem. It is often introduced as a basis for models and techniques using the complementary strengths of constraint programming and optimization techniques. Hybridization schemes have appeared recently and provided interesting computational results [4,5,7,8]. They have been extended [2,3,6] to take into account other kinds of sub-problems, not only the classical linear programming ones. However, to our knowledge, decomposition has never been proposed in a generic constraint programming approach. This paper discusses the way a decomposition framework could be embedded in a constraint solver, taking advantage of problem structure on behalf of a non-expert user. We explore the possibility of deriving logic Benders cuts using an explanation-based framework for CP, and describe Benders decomposition as a nogood recording strategy. We propose a tool, implemented on top of an explanation-based constraint solver, that could offer such a systematic decomposition framework.

Hadrien Cambazard, Narendra Jussien

Using Boolean Constraint Propagation for Sub-clauses Deduction

Boolean Constraint Propagation (BCP) is a well-known, helpful technique implemented in most state-of-the-art efficient satisfiability solvers. We propose in this paper a new use of BCP to deduce sub-clauses from the associated implication graph. Our aim is to reduce the length of clauses thanks to the subsumption rule. We show how such an extension can be grafted onto modern SAT solvers, and we provide some experimental results on sub-clause deduction as a preprocessing step.

S. Darras, G. Dequen, L. Devendeville, B. Mazure, R. Ostrowski, L. Saïs

Extending Systematic Local Search for Job Shop Scheduling Problems

Hybrid search methods synthesize desirable aspects of both constructive and local search methods. Constructive methods are systematic and complete, but exhibit poor performance on large problems because bad decisions made early in the search persist for exponentially long times. In contrast, stochastic local search methods are immune to the tyranny of early mistakes: they replace systematicity with stochastic techniques for diversifying the search. However, the lack of systematicity makes remembering the history of past states problematic. Typically, hybrid methods introduce a stochastic element into a basically constructive search framework. Lynce [6] uses randomized backtracking in a complete boolean satisfiability solver which incorporates clause (nogood) learning to ensure completeness. Jussien & Lhomme [4] perform a constructive search while keeping conflict sets (nogoods) in a Tabu list, and backtrack via a stochastic local search in the space of conflict sets.

Our method, called Systematic Local Search (SysLS) [3], follows the opposite approach. We incorporate systematicity within an inherently stochastic search method (like [2]). SysLS searches through a space of complete variable assignments and relaxes the requirement for maintaining feasibility. It preserves full freedom to move heuristically in the search space with maximum heuristic information available. While many local search methods easily get trapped in local optima, SysLS records local optima as nogoods in a search memory. Nogoods force the search away from these maximally consistent but unacceptable solutions. Our method is analogous to other diversification mechanisms in local search (e.g., Tabu search) but is systematic and inherits the sound resolution rule for nogood learning. In this paper, we extend SysLS for optimization and, in particular, for job shop scheduling problems.

Bistra Dilkina, Lei Duan, William S. Havens

Interactive Reconfiguration in Power Supply Restoration

Given a configuration of parameters that satisfies a set of constraints, and given external changes that change and fix the value of some parameters making the configuration invalid, the problem of interactive reconfiguration is to assist a user to interactively reassign a subset of the parameters to reach a consistent configuration again. In this paper, we present two BDD-based algorithms for solving the problem, one based on a monolithic BDD representation of the solution space and another using a set of BDDs. We carry out experiments on a set of power supply restoration benchmarks and show that the set-of-BDDs algorithm scales much better.

Tarik Hadzic, Henrik Reif Andersen

Neighbourhood Clause Weight Redistribution in Local Search for SAT

In recent years, dynamic local search (DLS) clause weighting algorithms have emerged as the local search state-of-the-art for solving propositional satisfiability problems. This paper introduces a new approach to clause weighting, known as Divide and Distribute Fixed Weights (DDFW), that transfers weights from neighbouring satisfied clauses to unsatisfied clauses in order to break out from local minima. Unlike earlier approaches, DDFW continuously redistributes a fixed quantity of weight between clauses, and so does not require a weight smoothing heuristic to control weight growth. It also exploits inherent problem structure by redistributing weights between neighbouring clauses.

To evaluate our ideas, we compared DDFW with two of the best reactive local search algorithms, AdaptNovelty+ and RSAPS. In both these algorithms, a problem sensitive parameter is automatically adjusted during the search, whereas DDFW uses a fixed default parameter. Our empirical results show that DDFW has consistently better performance over a range of SAT benchmark problems. This gives a strong indication that neighbourhood weight redistribution strategies could be the key to a next generation of structure exploiting, parameter-free local search SAT solvers.
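The weight-transfer step can be sketched as follows (an illustration of the mechanism described above; the parameter names and the fallback rule are our assumptions, not the paper's exact procedure):

import random

def redistribute(weights, unsat_clauses, neighbours, satisfied, quantum=1.0):
    # weights: dict clause -> weight; neighbours[c]: clauses sharing a literal
    # with c; satisfied: predicate telling whether a clause is currently true.
    for c in unsat_clauses:
        donors = [d for d in neighbours[c]
                  if satisfied(d) and weights[d] > quantum]
        if donors:
            donor = random.choice(donors)
        else:
            # fallback assumption: borrow from the heaviest satisfied clause
            donor = max((d for d in weights if satisfied(d)),
                        key=lambda d: weights[d])
        weights[donor] -= quantum   # the total weight stays fixed;
        weights[c] += quantum       # it only moves between clauses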

Abdelraouf Ishtaiwi, John Thornton, Abdul Sattar, Duc Nghia Pham

Computing and Exploiting Tree-Decompositions for Solving Constraint Networks

Methods exploiting tree-decompositions seem to provide the best approach for solving constraint networks w.r.t. the theoretical time complexity. However, they have not yet shown real practical benefit. In this paper, we study several methods for computing a roughly optimal tree-decomposition and assess their relevance for solving CSPs.

Philippe Jégou, Samba Ndojh Ndiaye, Cyril Terrioux

Encoding Requests to Web Service Compositions as Constraints

Interacting with a web-service-enabled marketplace in order to achieve a complex task involves sequencing a set of individual service operations, gathering information from the services, and making choices. We propose to encode the problem of issuing requests to a composition of web services as a constraint-based problem.

Alexander Lazovik, Marco Aiello, Rosella Gennari

Test Instance Generation for MAX 2SAT

(Extended Abstract)

Since MAX 2SAT is one of the famous NP-hard optimization problems, many heuristics and (polynomial-time) approximation algorithms have been proposed in the literature [1,4,5,6]. To evaluate the performance of such algorithms, there are two possibilities: theoretical analysis and empirical study.

In theoretical analysis, the approximation ratio of an algorithm is often used as a measure. The approximation ratio is an upper bound on the ratio of the approximated cost to the optimal cost, and hence it is a worst-case measure. It is often difficult to analyze the performance of heuristics or hybrid algorithms theoretically.
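For reference, the standard definition used in such analyses (textbook material, stated here in one common convention for maximization problems such as MAX 2SAT): an algorithm $A$ achieves approximation ratio $\alpha \le 1$ if, for every instance $I$,

$$ A(I) \;\ge\; \alpha \cdot OPT(I), $$

so the approximated cost is guaranteed to be within a factor $\alpha$ of the optimal cost even in the worst case.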

Mitsuo Motoki

Consistency for Quantified Constraint Satisfaction Problems

The generalization of the constraint satisfaction problem with universal quantifiers is a challenging PSPACE-complete problem, which is interesting theoretically and also relevant to solving other PSPACE problems arising in AI, such as reasoning with uncertainty and multiplayer games. I define two new levels of consistency for QCSP, and give an algorithm to enforce consistency for one of these definitions. The algorithm is embedded in backtracking search, and tested empirically. The aims of this work are to increase the facilities available for modelling and to increase the power of constraint propagation for QCSPs. The work is motivated by examples from adversarial games.

Peter Nightingale

Alternate Modeling in Sport Scheduling

Sports scheduling is a classical problem in the field of combinatorial optimization. One of the first successful methods to solve a complex instance was implemented using constraint programming. In this article, we explore an alternate and lighter way of modeling the round-robin part of the problem. We show this model can be enriched by additional propagations that complement the all-different constraint.

Laurent Perron

Approximations in Distributed Optimization

We present a parameterized approximation scheme for distributed combinatorial optimization problems based on dynamic programming. The algorithm is a utility propagation method and requires a linear number of messages. For exact computation, the size of the largest message is exponential in the width of the constraint graph. We present a distributed approximation scheme where the size of the largest message can be adapted to the desired approximation ratio, α. The process is similar to a distributed version of the minibucket elimination scheme, performed on a DFS traversal of the problem.

The second part of this paper presents an anytime version of the algorithm that is suitable for very large, distributed problems, where the propagations may take too long to complete.

Simulation results show that these algorithms are a viable approach to real world, loose optimization problems, possibly of unbounded size.

Adrian Petcu, Boi Faltings

Extremal CSPs

We present a new class of binary CSPs called extremal CSPs. The CSPs of this class are inconsistent but would become consistent if any pair of variable assignments among the forbidden ones was allowed. Being inconsistent, they cannot be solved by any local repair method. As they allow a great number of partial (almost complete) solutions, they can be very hard to solve with tree search methods integrating domain filtering. Our experiments show that balanced extremal CSPs are much harder to solve than random CSPs of the same size at the complexity peak.

Nicolas Prcovic

Beyond Finite Domains: The All Different and Global Cardinality Constraints

We describe how the propagator for the All-Different constraint can be generalized to prune variables whose domains are not just simple finite integer domains. We show, for example, how it can be used to propagate set, multiset and tuple variables.

Claude-Guy Quimper, Toby Walsh

Views and Iterators for Generic Constraint Implementations

This paper introduces an architecture for generic constraint implementations based on variable views and range iterators. Views allow us, for example, to scale, translate, and negate variables. The paper shows how to make constraint implementations generic and how to reuse a single generic implementation with different views for different constraints. Applications of views exemplify their usefulness and their potential for simplifying constraint implementations. We introduce domain operations compatible with views based on range iterators to access and modify entire variable domains.
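As a small executable illustration of the idea (a sketch of the concept only; the paper's architecture is a compiled design with range iterators, and the names below are ours):

class IntVar:
    # a variable with integer bounds
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

class MinusView:
    # presents variable x as -x, so a propagator written once, e.g. for
    # max(a,b)=c, also yields min via min(a,b) = -max(-a,-b)
    def __init__(self, x):
        self.x = x
    @property
    def lo(self):
        return -self.x.hi
    @property
    def hi(self):
        return -self.x.lo
    def restrict_hi(self, new_hi):
        # tightening the view's upper bound tightens x's lower bound
        self.x.lo = max(self.x.lo, -new_hi)

x = IntVar(2, 7)
v = MinusView(x)
assert (v.lo, v.hi) == (-7, -2)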

Christian Schulte, Guido Tack

Approximated Consistency for the Automatic Recording Problem

In constraint optimization, global constraints play a decisive role. To develop an efficient optimization tool, we need to be able to assess whether we are still able to improve the objective function further. This observation has led to the development of a special kind of global constraints, so-called optimization constraints [2,5]. Roughly speaking, an optimization constraint expresses our wish to search for improving solutions only while enforcing feasibility for at least one of the constraints of the problem.

Meinolf Sellmann

Towards an Optimal CNF Encoding of Boolean Cardinality Constraints

We consider the problem of encoding Boolean cardinality constraints in conjunctive normal form (CNF). Boolean cardinality constraints are formulae expressing that at most (resp. at least) k out of n propositional variables are true. We give two novel encodings that improve upon existing results, one which requires only 7n clauses and 2n auxiliary variables, and another one demanding $\mathcal{O}(n \cdot k)$ clauses, but with the advantage that inconsistencies can be detected in linear time by unit propagation alone. Moreover, we prove a linear lower bound on the number of required clauses for any such encoding.
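To make the flavour of such encodings concrete, here is a sketch of a sequential-counter-style "at most k of x_1..x_n" encoding, in the spirit of the $\mathcal{O}(n \cdot k)$ construction discussed here (an illustration that may differ from the paper's exact clauses; it assumes n >= 2 and 1 <= k < n):

def at_most_k(xs, k, next_var):
    # xs: list of DIMACS-style variable numbers; returns (clauses, next_var).
    # s[i][j] is a fresh variable meaning "at least j of x_1..x_i are true".
    n = len(xs)
    clauses = []
    s = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(1, n):
        for j in range(1, k + 1):
            s[i][j] = next_var
            next_var += 1
    clauses.append([-xs[0], s[1][1]])            # x1 implies s(1,1)
    for j in range(2, k + 1):
        clauses.append([-s[1][j]])               # at most one x seen so far
    for i in range(2, n):
        clauses.append([-xs[i - 1], s[i][1]])
        clauses.append([-s[i - 1][1], s[i][1]])
        for j in range(2, k + 1):
            clauses.append([-xs[i - 1], -s[i - 1][j - 1], s[i][j]])
            clauses.append([-s[i - 1][j], s[i][j]])
        clauses.append([-xs[i - 1], -s[i - 1][k]])  # forbid a (k+1)-th true x
    clauses.append([-xs[n - 1], -s[n - 1][k]])
    return clauses, next_var

Unit propagation on clauses of this shape detects a violation as soon as k+1 of the xs are set to true, which is the linear-time detection property mentioned above.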

Carsten Sinz

Approximate Constrained Subgraph Matching

Our goal is to build a declarative framework for approximate graph matching where various constraints can be stated upon the pattern graph, enabling approximate constrained subgraph matching and extending the models and constraints proposed by Rudolf [1] and Valiente et al. [2]. In the present work, we propose a CSP approach for approximate subgraph matching where the potential approximation is declaratively stated in the pattern graph as mandatory/optional nodes/edges. Forbidden edges, that is, edges that may not be included in the matching, can be declared on the pattern graph. We also want to declare properties between pairs of nodes in the pattern graph, such as distance properties, that can be either stated by the user or automatically inferred by the system. In the former case, such properties can define new approximate patterns. In the latter case, these redundant constraints enhance the pruning.

Stéphane Zampelli, Yves Deville, Pierre Dupont

Doctoral Papers

Distributed Constraints for Large-Scale Scheduling Problems

In this work, we present a distributed model for solving large-scale CSPs. Our technique carries out a partition of the constraint network using graph partitioning software, such that each subproblem is as independent as possible and can be solved in a reasonable time.

Montserrat Abril, Miguel A. Salido, Federico Barber

Solving Over-Constrained Problems with SAT

We have defined a new formalism, based on Max-SAT, for encoding and solving over-constrained problems. Our formalism is an extension of Boolean CNF formulas in which we deal with blocks of Boolean clauses instead of dealing with individual clauses. Every block, formed by a set of clauses, is declared either as a hard block (i.e., it must be satisfied by any solution) or as a soft block (i.e., it can be violated by some solution). The idea behind the notion of block is that it encodes a problem constraint (for example, adjacent vertices have different colors); in general, a single clause is not enough to encode a problem constraint. We call this new kind of formula soft CNF formulas.

Josep Argelich, Felip Manyà

A Constraint Based Agent for TAC–SCM

The annual international Trading Agent Competition-Supply Chain Management (TAC-SCM) game is based around the manufacture and supply of PCs. There are multiple agents in the game, scheduling production, competing for orders from customers and components from suppliers. A key decision to be made each day in the game is what offers should be made to customers. Each day, the agents receive a set of requests for quotes (RFQs) from customers, agents respond with offers, and then the customers select the lowest bid.

David A. Burke, Kenneth N. Brown

Solving the Car-Sequencing Problem as a Non-binary CSP

The car-sequencing problem arises from the manufacture of cars on an assembly line. A number of cars are to be made on a production line; they are not identical because different options are available as variants on the basic model. The different stations which install the various options have been designed to handle at most a certain percentage of the cars passing along the assembly line. Consequently, the cars must be arranged in a sequence so that these capacities are not exceeded. In this paper, the car-sequencing problem is formulated as a non-binary constraint satisfaction problem (CSP) with constraints of fixed arity 5. A search algorithm based on non-binary forward checking (nFC) is used to solve the problem. For the car-sequencing problem the variables should be assigned consecutively. Since the choice of value ordering heuristic has a dramatic effect on solution time for this problem, different value ordering heuristics were implemented. Because any possible solution is a permutation of a fixed set of values, a succeed-first strategy for value ordering only postpones the assignment of the difficult classes, and a value ordering based on fail-first could be a better choice. These methods are compared on the instances reported in CSPLib. The results obtained show the superiority of a fail-first strategy over a succeed-first strategy. In particular, the MaxUtil and MaxPQ heuristics allowed a better exploration of the solution space and solved all the problem instances with 200 variables. It should be underlined that these problems were solved in little time (6 seconds on average, the longest time being 13 seconds for instance 90_09), whereas ILOG Solver needs more than 1 minute even in the best case. This result can be explained by our encoding: we encoded the maximum of constraints (the capacity of each option, the request for each class) inside an explicit 5-ary constraint with very high tightness (close to 0.95). MaxUtil remains the best heuristic because it is surprisingly backtrack-free. In future work, the filtering method will be improved. Hybridization with optimization methods is another interesting research direction, and the use of parallelism also seems an interesting direction for solving this type of problem.
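The capacity restrictions have the classical "at most p cars with an option in any q consecutive positions" form; a simple checker illustrates the rule (our illustrative helper, not the paper's 5-ary encoding):

def capacity_ok(sequence, requires_option, p, q):
    # sequence: list of class ids in assembly order;
    # requires_option[c] is True if class c needs the option.
    for start in range(len(sequence) - q + 1):
        window = sequence[start:start + q]
        if sum(1 for c in window if requires_option[c]) > p:
            return False
    return True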

Mihaela Butaru, Zineb Habbas

Dimensioning an Inbound Call Center Using Constraint Programming

One of the critical problems in the call center industry is the staffing problem, since call centers must face variable demands and because staff costs represent a major part of the costs of these industries. From a modeling point of view, a call center is generally modeled as an M/M/N system, also called the Erlang-C model. In [Koole and Mandelbaum, 2001], the authors present a survey of the state of the art on possible models of a call center.
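For reference, the classical Erlang-C delay probability for an M/M/N queue (standard background material, not taken from the paper), with arrival rate $\lambda$, service rate $\mu$, offered load $a = \lambda/\mu$ and occupancy $\rho = a/N < 1$:

$$ P_{wait} \;=\; \frac{\dfrac{a^{N}}{N!}\cdot\dfrac{1}{1-\rho}}{\displaystyle\sum_{k=0}^{N-1}\frac{a^{k}}{k!} \;+\; \frac{a^{N}}{N!}\cdot\frac{1}{1-\rho}} $$

Staffing then amounts to finding the smallest $N$ for which the resulting waiting-time targets are met.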

Cyril Canon, Jean-Charles Billaut, Jean-Louis Bouquard

Methods to Learn Abstract Scheduling Models

For practical reasons, most scheduling problems are an abstraction of the real problem being solved. For example, when you plan your day, you schedule the activities which are critical; that is, you schedule the activities which are essential to the success of your day. So you may plan what time to leave the house to get to work, when to have meetings, how you share your vehicle with your spouse, and so on. On the other hand, you probably do not consider the activities that are easy to arrange, like brushing your teeth, going to the shops, making photocopies and other such tasks that can usually be accomplished whenever you have the time available. Scheduling all of these activities at once is often too complicated. Instead, a simpler schedule is produced by considering only the critical activities. However, if a schedule goes wrong, it is often because an activity turned out to be critical but was not scheduled. We typically learn which activities are critical by experience and create an abstract scheduling problem which includes all known critical activities. Instead of scheduling the non-critical activities we estimate their effects in the abstract scheduling problem.

Tom Carchrae, J. Christopher Beck, Eugene C. Freuder

Automated Search for Heuristic Functions

A computer-constructed local search heuristic function for a SAT problem can run and solve SAT faster than human-designed heuristics. The idea behind this was to start with some predefined primitives, chosen in accordance with human observation, and to combine them using genetic programming. This leads to the question of whether it is possible to construct an effective heuristic function solely by computer, a function built from very elementary program building blocks and not only from predefined higher-level primitives.

Pavel Cejnar, Roman Barták

Constraint-Based Inference: A Bridge Between Constraint Processing and Probability Inference

Constraint-Based Inference (CBI) [1] is an umbrella term for various superficially different problems including probabilistic inference, decision-making under uncertainty, constraint satisfaction, propositional satisfiability, decoding problems, and possibility inference. In this project we explicitly use the semiring concept to generalize various CBI problems into a single formal representation framework with a broader coverage of the problem space, based on the synthesis of existing generalized frameworks from both the constraint processing and probability inference communities. Based on our generalized CBI framework, extensive comparative studies of exact and approximate inference approaches have been undertaken. First, we extend generalized arc consistency to probability inference based on a weaker condition [2]. All the existing arc consistency enforcing algorithms can be generalized and migrated to handle other concrete CBI problems that satisfy this condition. Second, based on our CBI framework we apply junction tree algorithms from probability inference to solve soft CSPs [1]. We show that the message-passing schemes of junction tree algorithms can be modified to achieve better computational efficiency if the semiring of a CBI problem has additional properties. Third, we study loopy message propagation from probability inference for general CBI problems. We claim in [1] that for CBI problems with an idempotent combination operator, loopy message propagation is an exact inference approach. Our experimental results also show that loopy message propagation yields high-quality inference approximation for general CBI problems like Max-CSPs. Finally, we discuss the possibilities of integrating stochastic approaches into our semiring-based CBI framework. We also discuss context-specific inference with backtracking as a promising inference approach for general CBI problems. In general, we are aiming at studying the most important common characteristics of various CBI problems, borrowing design ideas from other fields based on the analysis and comparison of different inference approaches, and significantly reducing the amount of implementation work previously targeted at the individual problems.

Le Chang, Alan K. Mackworth

Scheduling Social Tournaments

Tournament scheduling problems arise in many practical applications, and their highly symmetric and combinatorial nature makes them particularly challenging for search algorithms. This research generalizes the modeling and local search approach proposed in [1] in order to schedule some challenging, real-life social tournaments, as well as other challenging social tournaments. Results can be found in the web version of this abstract.

Iván Dotú, Álvaro del Val, Pascal Van Hentenryck

Domain Reduction for the Circuit Constraint

We present an incomplete filtering algorithm for the circuit constraint. The filter removes redundant values by eliminating non-Hamiltonian edges from the associated graph. We prove a necessary condition for an edge to be Hamiltonian, which provides the basis for eliminating edges of a smaller graph defined on a separator of the original graph.

Latife Genc Kaya, John Hooker

Using Constraint Programming for Solving Distance CSP with Uncertainty

Many problems in chemistry, robotics or molecular biology can be expressed as a Distance CSP (Constraint Satisfaction Problem). Sometimes, the parameters of this kind of problem are determined experimentally, and therefore they carry a degree of uncertainty. A classical approach for solving this class of problems is to solve the CSP without considering the uncertainties, obtaining a set of solutions without knowing the real solution sub-spaces. A better approach is to apply a branch-and-prune algorithm to generate a set of disjoint boxes that include all the solution sub-spaces, but without information about independent solution sub-spaces or the different types of boxes.

Carlos Grandon, Bertrand Neveu

Improved Algorithm for Finding (a,b)-Super Solutions

Super solutions are a mechanism to provide robustness to constraint programs. We introduce a new algorithm that exploits the similarity between a super solution and its repairs in order to do inference during search. It improves on previous methods since it is more space efficient and also faster in practice.

Emmanuel Hebrard, Toby Walsh

Local Consistency in Weighted CSPs and Inference in Max-SAT

Weighted constraint satisfaction problems (WCSP) and Max-SAT are optimization versions of the CSP and SAT frameworks, respectively. They have many practical applications. Most current state-of-the-art complete solvers for WCSP and Max-SAT can be described as a basic depth-first branch and bound search that computes a lower bound during the search, which can be used together with the cost of the best solution found so far to prune entire search subtrees. Recently, a collection of local consistency properties such as NC*, AC*, DAC*, FDAC* and EDAC* have been proposed for WCSP in order to simplify the problem. In Max-SAT, we have recently proposed inference rules to detect infeasible assignments.

Federico Heras, Javier Larrosa

Modeling Constraint Programs with Software Technology Standards

In [1] Puget argued for a “model-and-run” paradigm for constraint programming. He proposed to develop a standard file format to express CP models. There is no such unified modeling standard available to the CP community, so constraint programs cannot be developed independently of the CP library used, and they are hard to maintain.

This research targets platform-independent object-oriented modeling of constraint programs. It will be shown how CP models can be expressed using software technology standards, and further how these standards will enable automated transformation and execution of such models. The work reconsiders results published in [2]. There, we formally showed how the Unified Modeling Language (UML) and the Object Constraint Language (OCL) can be used to create well-formed models of constraint problems called Constraint Network Schemata (CNS). Building on this, the author now proposes to see Model Driven Architecture (MDA) as a chance for further research advances.

Matthias Hoche, Stefan Jähnichen

Solution Equivalent Subquadrangle Reformulations of Constraint Satisfaction Problems

Constraint Satisfaction Problem instances (CSPs) are a natural way to model real life problems such as image processing, scheduling and natural language understanding. In this paper we are concerned with the modeling of problems as CSPs and how this can affect the performance of different solution algorithms. In particular we are interested in modeling in the language of subquadrangles.

A Quadrangle is essentially an ‘anything-goes’ constraint for some Cartesian product of domains. A Subquadrangle [1] is a constraint all of whose projections to proper subsets of the scope are quadrangles.

Subquadrangles are a very ‘natural’ way in which to represent constraints. This is because they do not place any restrictions on proper subsets of their scope, thus reducing the number of required constraint checks. This leads us to believe that a subquadrangle aware solver could be particularly efficient.

Chris Houghton, David Cohen

Mechanism Design for Preference Aggregation over Coalitions

Mechanisms are decision functions that map the individual preference orderings of separate parties into a single ordering over the group outcome. Unfortunately, classical impossibility results, readily extended to preferences, show that no mechanism can be “fair” for all scenarios [1]. Further, any positive results typically assume that agents do not form coalitions or other such partnerships.

While coalitions can complicate both theoretical analysis and underlying paradigms of rationality, in a particular setting they can serve to constrain a problem to the point of circumventing traditional impossibility results. Automated mechanism design (AMD) [2] attempts to overcome such results by designing specific mechanisms for specific situations on the spot. No perfect mechanism exists that works in every context, but whenever there is information about the players, a fair mechanism can exist for that specific setting.

Eric Hsu, Sheila McIlraith

LP as a Global Search Heuristic Across Different Constrainedness Regions

Recent years have witnessed the emergence of a new area involving hybrid solvers integrating CP- and OR-based methods. The LP relaxation provides bounds on overall solution quality and can be used for pruning in a branch-and-bound approach, especially in domains where we have a combination of linear constraints, well-suited for linear programming (LP) formulations, and discrete constraints, suited for constraint satisfaction problem (CSP) formulations. However, in a purely combinatorial setting, it has so far been surprisingly difficult to integrate LP-based and CSP-based techniques.

We study the behavior of heuristics based on the LP relaxation with respect to the underlying constrainedness of the problem. Our study focuses on the Latin square (or quasigroup) completion problem as a prototype for highly combinatorial problems. This problem is NP-hard and it exhibits an easy-hard-easy pattern in complexity, measured in the runtime (backtracks) to find a completion [1]. In our previous work [2] we report an interesting phase transition phenomenon in the solution integrality of the LP relaxation for this problem.

We find that simple techniques based on the LP relaxation of the problem provide satisfactory guidance for under- and over-constrained instances. In the critically constrained region, the performance of such simple techniques degrades, due to the inherent hardness of the problem. In this setting, we examine a technique that recomputes the LP relaxation every time a variable is set. This leads to a significant increase in performance, suggesting that carefully designed “one step at a time” LP-based heuristics could provide suitable guidance even for the hardest instances. We examine the quality of the guidance provided by the LP relaxation as a function of the structure of the problem, i.e., we characterize the performance of LP heuristics across different constrainedness regions in the search space.

Lucian Leahu, Carla Gomes

Consistency for Partially Defined Constraints

Partially defined or Open Constraints [2] can be used to model the incomplete knowledge of a concept or a relation. In an Open Constraint, some tuples are known to be true, some others are known to be false, and some are just unknown. We propose to complete its definition by using Machine Learning techniques. The idea of the technique we use for learning comes directly from the classical model of solvers computing a chaotic iteration of reduction operators [1]. We begin by learning the constraint. But instead of learning it by a classifier which takes as input all its variables and answers ”yes” if the tuple belongs to the constraint and ”no” otherwise, we choose to learn the support function $n_{\langle X=a \rangle}$ of the constraint for each value of its variables' domains. A tuple is part of the constraint if accepted by all support functions for each of its values, and rejected as soon as it gets rejected by one. We propose to use as representation for learning an Artificial Neural Network (ANN) with an intermediate hidden layer trained by the classical backpropagation algorithm [4].

When put in a CSP, a constraint should contribute to the domain reduction. We propose to use the learned classifiers also for solving. In order to do this, we take the natural extension to intervals [3] of the learned classifiers. Let $N_{\langle X=a \rangle}$ be the natural interval extension of $n_{\langle X=a \rangle}$. Then, by using as input the current domains of the variables, we can obtain a range for its output. Since we put a 0.5 threshold after the output neuron, we can reject the value $a$ for $X$ if the maximum of the output range is less than 0.5, which means that all tuples are rejected in the current domain intervals. Otherwise, the value remains in the domain. Our experiments show that the learned consistency is weaker than more classical consistencies but still notably reduces the search space.

We show that our technique not only has good learning performance but also yields a very efficient solver for the learned constraint.
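A minimal sketch of the interval-based pruning test (illustrative only: it assumes a single hidden layer of sigmoid units and uses simple interval arithmetic; all names are ours, not the paper's):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def affine_range(weights, bias, boxes):
    # interval extension of w.x + b when each input ranges over an interval
    lo = hi = bias
    for w, (a, b) in zip(weights, boxes):
        lo += w * a if w >= 0 else w * b
        hi += w * b if w >= 0 else w * a
    return lo, hi

def output_range(hidden, out_w, out_b, domains):
    # hidden: list of (weights, bias) pairs; sigmoid is monotone, so the
    # image of an interval is the interval of the endpoint images
    h = []
    for w, b in hidden:
        lo, hi = affine_range(w, b, domains)
        h.append((sigmoid(lo), sigmoid(hi)))
    lo, hi = affine_range(out_w, out_b, h)
    return sigmoid(lo), sigmoid(hi)

def keep_value(hidden, out_w, out_b, domains):
    # prune the value when even the upper bound of the output is below 0.5
    return output_range(hidden, out_w, out_b, domains)[1] >= 0.5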

Andreï Legtchenko, Arnaud Lallouet

Subnet Generation Problem: A New Network Routing Problem

We introduce a new type of network routing problem, the subnet generation problem (SGP), which is a special case of the traffic placement problem (TPP). In the TPP, given (1) a network which consists of routers and links, and (2) a set of point-to-point traffic demands to be placed, including finding feasible routes, the objective is to minimize the sum of costs of the unplaced demands subject to the Quality-of-Service routing constraints.

The SGP is a TPP with an extra set of constraints that restricts the combinations of demands to be placed. In our SGP, each router has a fixed amount of information-gain that is to be transmitted to every other router in the subnet. A subnet is defined as any subset of the routers in the network. This means that every router in the subnet will have exactly the same total information-gain. The objective is to find a subnet that maximizes the total information-gain: there must be a demand with a valid path between every pair of routers in the subnet. The reason for creating demands among the routers arises from the fact that every node in a selected group of routers is required to be both a client and a server. The figure below shows an example of a subnet and a path solution.

Cheuk Fun Bede Leung, Barry Richards, Olli Kamarainen

Partial Redundant Modeling

Redundant modeling combines different models of the same problem using channeling constraints [1]. Channeling constraints allow different formulations of a problem to interact, propagating the constraints between different formulations. This can result in a significant improvement in performance. Originally, work on redundant modeling assumed that redundant models must fully characterize the problem [1]. Later, Smith argued that only the primal model need fully characterize the problem, while the dual model need only have all the dual variables and channeling constraints between the two models (a minimal combined model) [2]. This paper proposes partial redundant modeling, an extension of the minimal combined model that encourages more than two models, omits some dual variables and omits all the dual constraints. Partial redundant models originate in problems with a categorical structure, where the variables may be subdivided into categories. Often these categories can be identified as groups of variables that fall under n-ary constraints that partition the variables into disjoint sets. Real world problems, such as scheduling and rostering, may also have categorical structure.

Logic puzzles are a class of problems with a simplified version of categorical structure. A logic puzzle consists of a set of objects, a set of categories (same-size disjoint subsets of those objects) that must take on different values, and a set of semantic relations, which specify the relationships that hold between the categories. In a logic puzzle, each category can be viewed as a subset of CSP variables under an all-diff constraint. Different CSP models can be obtained by selecting the objects in a different category as the domain values, taking all other objects in the other categories to be the variables. We propose to maintain multiple partial redundant models of problems with categorical structure, adding channeling constraints between the n-ary constraints in the redundant partial models. These channeling constraints make certain that value assignments under an n-ary constraint in one partial model are reflected under that n-ary constraint in some other partial model. We call these categorical channeling constraints.

Tiziana Ligorio, Susan L. Epstein

AND/OR Branch-and-Bound for Solving Mixed Integer Linear Programming Problems

Graphical models are a powerful representation framework for automated reasoning tasks. These models use graphs to capture conditional independencies between variables, allowing for a concise representation of the knowledge. Optimization tasks defined within this framework are typically tackled with either search or inference. Search methods are time exponential in the number of variables and can operate in linear space. Inference algorithms are time and space exponential in the tree-width of the problem. This potentially higher space complexity makes these methods impractical.

The AND/OR search space for graphical models is a newly introduced framework for search that is sensitive to the independencies in the model, often resulting in exponentially reduced complexities. The AND/OR search is based on a pseudo-tree which expresses independencies between variables, resulting in a search tree exponential in the depth of the pseudo-tree, rather than the number of variables.

The AND/OR Branch-and-Bound algorithm (AOBB) is a new search method that explores the AND/OR search tree for solving optimization tasks in graphical models. In this paper we extend the algorithm to solving combinatorial optimization problems from the class of Mixed Integer Linear Programs (MILP). A MILP instance is a linear program where some of the decision variables are constrained to take only integer values at the optimal solution (we consider only binary integer variables). AOBB can be readily adapted to this class of optimization problems by arranging the integer variables into a start pseudo-tree and then traversing the corresponding AND/OR search tree. This rather straightforward extension can be further improved. We introduce a dynamic version of AOBB which uses a recursive decomposition of the problem, based on hypergraph separators. The hypergraph of a MILP instance has a vertex for each constraint, and a hyperedge, which corresponds to a variable, connects all the constraints that contain that variable. A separator translates into a subset of variables that, when instantiated, decompose the problem into independent components. The algorithm traverses an AND/OR search tree based on a pseudo-tree which is recomputed dynamically at each search tree node using the hypergraph separator of the respective subproblem. The search process is guided in both cases by lower-bounding heuristic estimates computed at each node by solving the linear relaxation (i.e., ignoring the integrality restrictions) of the subproblem rooted at that node. A preliminary evaluation of the structural properties of several hard problem instances from the MIPLIB2003 library showed promise that the new AND/OR search schemes can improve significantly over the traditional OR tree search approach. Finally, we mention that more advanced strategies developed in recent years for integer programming, such as the branch-and-cut scheme, can be readily adapted to exploit the AND/OR structural paradigm.

Radu Marinescu, Rina Dechter

Weak Symmetries in Problem Formulations

Unlike symmetries, weak symmetries act only on a subset of the variables and/or respect only a subset of the constraints of the problem. Therefore, weak symmetries preserve the state of feasibility only with respect to the subset of variables they act on and only for the constraints they respect. This means that if two solutions are symmetric under the weak symmetry, they yield different full solutions with potentially different feasibility states.

But weak symmetries cannot simply be broken, since this would result in a loss of solutions that cannot be derived afterwards. Therefore we propose a modelling technique that uses additional variables (SymVars) and constraints that enable us to express symmetric states of a solution. The idea is to decompose a problem $P$ in such a way that the variables and constraints respected by the weak symmetry are present in one sub-problem $P_1$ and the rest in the sub-problem $P_2$. This way the weak symmetry acts as a common symmetry on $P_1$. The additional variables and constraints form a new sub-problem $P_{sym}$ that is incorporated, and the solving order is to find a solution to $P_1$, consider a symmetric equivalent via $P_{sym}$, and pass the solution to $P_2$, which finds a solution for the whole problem. By doing so, the symmetry on $P_1$ can be broken.

Although the additional variables extend the search space, symmetry breaking enables us to reduce the search effort. Whether symmetry breaking compensates for the extension of the search space by the additional variables depends on the problem and the search heuristic. But as soon as a solution for $P_1$ is found, the whole equivalence class of solutions can be considered via $P_{sym}$.

Weak symmetries occur in various problems. They can be found in real-life problems (especially optimisation problems where the weak symmetry does not respect the objective value) as well as in classic problem formulations like the magic square problem [1], or in extensions of problems like the diagonal latin square [2] or the weighted magic square problem [3].

Roland Martin, Karsten Weihe

Towards the Systematic Generation of Channelling Constraints

The automatic modelling tool Conjure [1] generates CSP models from a problem specified in the specification language Essence. Variables in Essence may have domains currently unsupported by solvers. Also, the elements of the domains may be arbitrarily compound, for example, sets of sets, or sets of sets of sets. Conjure uses a set of refinement rules to compositionally transform the variables (and constraints) into representations that can be implemented in current solvers. Conjure can produce multiple alternative (redundant) representations of the same variable that may appear simultaneously in the same model. Currently, Conjure does not generate the channelling constraints [2] that are needed to maintain the consistency between these simultaneous alternatives.

There are several unsolved issues related to channelling constraints and automatic modelling, such as how to identify the cases where a channelled model performs better than a non-channelled one, and how to implement the channelling constraints efficiently. In this paper, however, we focus on the automated generation of channelling constraints under the Conjure framework.

Our work has identified, and proved correct, an algorithm to systematically generate the channelling constraints needed in a Conjure-generated model. We briefly describe this algorithm as follows. Let $P$ be a specification refined by Conjure into the CSP model $P'$. Let $X$ be a variable in $P$ that has two representations $X_1$ and $X_2$ in $P'$. Let $Y$ be a new variable with exactly the same domain as $X$. We can then re-refine $X = Y$ (a channelling constraint between $X$ and $Y$), forcing the $X$ to refine into $X_1$ and the $Y$ into $X_2$. Hence, we produce a correct channelling constraint between $X_1$ and $X_2$, provided Conjure generates correct refinements.
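As a toy illustration of what a channelling constraint between two refinements looks like (our example, not Conjure output): a set-of-integers variable S over {0..n-1}, refined both as a 0/1 occurrence vector and as an explicit membership list, must satisfy occurrence[i] = 1 iff i is in the list:

def channel_ok(occurrence, explicit):
    # the channelling constraint linking the two representations of S
    members = set(explicit)
    return all((occurrence[i] == 1) == (i in members)
               for i in range(len(occurrence)))

# e.g. S = {1, 3} over {0..3}
assert channel_ok([0, 1, 0, 1], [1, 3])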

B. Martínez-Hernández, A. M. Frisch

AND/OR Search Spaces and the Semantic Width of Constraint Networks

The primary contribution of this paper consists in using the AND/OR search paradigm [1,2] to define the new concept of semantic width of a constraint network. The well-known parameter tree-width is graph-based, and cannot capture context-sensitive information. This often results in a very loose upper bound on the actual complexity of the problem. A typical example is the compact result of a compilation scheme such as ordered binary decision diagrams (OBDDs), in spite of a large tree-width (and path-width). The semantic width is based on the notion of equivalent constraint networks. The idea is to capture the intrinsic hardness of a problem by the smallest-width equivalent network.

Robert Mateescu, Rina Dechter

Statistical Modelling of CSP Solving Algorithms Performance

Our goal is to characterize, and to be able to predict, the search cost of some of the most important CSP algorithms and heuristics when solving CSP problems, by obtaining a statistical model of the algorithm runtime based on inexpensively computed parameters obtained from the CSP problem specification and the associated constraint and nogood graphs.

Such a model will give us three important items concerning the studied CSP problems. First, the model provides a tool to predict the search cost of a given instance, allowing a portfolio of solvers to decide on the best algorithm before proceeding. Second, the model will give insight into the main features that characterize the complexity of a RBCSP. Finally, another potential benefit of the model is pointing out which features the algorithms are most sensitive to, thus helping to identify potential areas of improvement.

Ramon Béjar, Cèsar Fernández, Carles Mateu

Probabilistic Arc Consistency

The two most popular algorithms for solving Constraint Satisfaction Problems are Forward Checking (fc) [1] and Maintaining Arc Consistency (mac) [2]. Mac maintains full arc consistency while fc maintains a limited form of arc consistency during search. There is no single champion algorithm: mac is more efficient on sparse problems which are tightly constrained, but fc has an increasing advantage as problems become dense and constraints loose. Ideally, a good search algorithm should find the right balance, for any problem, between visiting fewer nodes in the search tree and reducing the work that is required to establish local consistency. In order to do so, we maintain probabilistic arc consistency during search. The idea is to assume that a support exists and skip the process of seeking a support if the probability of having some support for a value is at least equal to some carefully chosen, stipulated threshold.

Arc consistency involves revisions of domains, which require support checks to remove unsupported values. In many revisions, some or all values find some support. If we can predict the existence of a support then a considerable amount of work can be saved. In order to do so, we propose the notions of a Probabilistic Support Condition (psc) and a Probabilistic Revision Condition (prc). If psc holds then the probability of having some support for a value is at least equal to the threshold, and the process of seeking a support is skipped. If prc holds then for each value the probability of having some support is at least equal to the threshold, and the corresponding revision is skipped.
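A sketch of the resulting revision step (illustrative: it collapses psc and prc into one estimate and assumes each pair of values is compatible independently with probability 1 - t, t being the constraint tightness; this is not the paper's exact condition):

def prob_some_support(tightness, domain_size):
    # P(at least one of domain_size candidate values is a support)
    return 1.0 - tightness ** domain_size

def revise(dom_x, dom_y, allowed, tightness, threshold=0.9):
    # skip the whole revision when supports are sufficiently likely
    if prob_some_support(tightness, len(dom_y)) >= threshold:
        return False
    removed = False
    for a in list(dom_x):
        if not any((a, b) in allowed for b in dom_y):
            dom_x.remove(a)
            removed = True
    return removed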

Deepak Mehta, M. R. C. van Dongen

GOOSE – A Generic Object-Oriented Search Environment

The constraint programming community keeps creating numerous search algorithms, which differ to a greater or lesser extent. It is as desirable as it is difficult to implement a variety of search algorithms in a unifying framework.

This design proposal presents the object-oriented environment GOOSE, which supports the development of generic search algorithms. It is inspired by Prosser’s categorisation of backtracking search algorithms [2]. GOOSE is abstract enough to house dissimilar search approaches and separates abstract generic logic from domain details. The research focuses on implementation needs and explicitly aims for an efficient object-oriented design, which enforces code reuse and flexibility by adequate use of class inheritance and object composition [1]. GOOSE can be implemented in any modern object-oriented language, and as a proof of concept it is realised within our object-oriented solver firstcs. Up to now the concept handles variants of backtracking search and deals with topics like constraint-based scheduling, static and dynamic variable ordering, justifications and backjumping, optimisation, randomisation and restarting techniques. Multidimensional search structures and control flow organisation are of particular interest. Creating new complete search algorithms is easy: a generic frame algorithm is completed by implementing some domain-specific methods. Plug-and-play-like assembly of compatible components easily realises algorithmic variations. The variations are chiefly achieved by using generic decorator or strategy objects [1], which can be exchanged at runtime. First experimental results (job-shop scheduling, 3-SAT, n-queens) indicate a performance loss between 0% (strong pruning) and 10% (weak pruning) compared to monolithic equivalents. Future work will cover the introduction of non-systematic generic search algorithms and dynamic adaptive search configuration, e.g. switching dynamically from chronological tree movement to backjumping or from global to local search.

Implementing search algorithms according to GOOSE will make them easier to understand and compare, and the code will be flexible and reusable.
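The strategy-object idea can be pictured with a minimal sketch (hypothetical, not the GOOSE API): a backtracking frame whose variable-ordering behaviour is an exchangeable object:

class SmallestDomainFirst:
    def select(self, domains):
        open_vars = {v: d for v, d in domains.items() if len(d) > 1}
        if not open_vars:
            return None
        return min(open_vars, key=lambda v: len(open_vars[v]))

def backtrack(domains, consistent, strategy):
    # domains: dict variable -> list of candidate values
    var = strategy.select(domains)
    if var is None:
        return domains                 # all domains singleton: a solution
    for value in list(domains[var]):
        trial = dict(domains)
        trial[var] = [value]
        if consistent(trial):
            result = backtrack(trial, consistent, strategy)
            if result is not None:
                return result
    return None

Swapping in a different strategy object changes the search behaviour without touching the generic frame.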

Henry Müller, Stefan Jähnichen

Randomization for Multi-agent Constraint Optimization

In a constraint optimization problem for multiple agents, the agents have conflicting preferences over the final solution, and the goal is to find an optimal assignment that maximizes the total utility of all agents. Two major challenges when solving constraint optimization problems for multiple agents are the complexity of finding an optimal solution and the incentive compatibility of participating agents. First, computing the optimal solution for large optimization problems is computationally infeasible, and such problems can only be solved approximately by local search algorithms. Second, ensuring honest elicitation among self-interested agents is computationally expensive. It has been shown that the only known mechanism that guarantees truthfulness among agents requires computing optimal solutions, and sub-optimal solutions for such a mechanism will break incentive compatibility ([2]).

The long-term goal of our research is to address these two challenges by using randomization in local search algorithms to find near-optimal solutions while ensuring incentive compatibility for bounded-rational agents. Our work is based on the observation that in real-world settings, the potential for manipulation is limited by uncertainty and risk. This uncertainty makes it difficult for a manipulator to predict the consequences of his manipulation and thus makes attempts at manipulation uninteresting.

Quang Huy Nguyen, Boi V. Faltings

Uncertainty in Soft Constraint Problems

Preferences and uncertainty occur in many real-life problems. We are concerned with the coexistence of preference and uncertainty in the same problem. In particular, we consider uncertainty as defined by the theory of possibility [2], a non-probabilistic way of dealing with the uncertainty that comes from lack of data or imprecise knowledge.

We propose a method to integrate fuzzy preferences and uncertainty [3], which follows the approach in [2] and allows one to handle uncertainty within a fuzzy optimization engine, while at the same time considering separately the preference and the robustness of complete instantiations. To order the complete instantiations, we define three semantics, which correspond to different attitudes to risk.

We generalize this method in [4] to deal also with other classes of preferences, such as probabilistic or weighted constraints [1], and also with probabilistic uncertainty. In particular, probabilistic constraints are very useful for modelling real-life scenarios where fuzzy constraints are not the ideal setting, because combining preferences by multiplying them is better than taking their minimum value. Weighted constraints are useful to model problems where preferences are penalties (or costs) to be added, and where the best preferences are the smallest ones.
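For concreteness, the three combination schemes mentioned above can be written as follows (standard semiring-style definitions stated for illustration, not quoted from the papers), where a complete instantiation $s$ is rated by constraints $c_1, \dots, c_m$:

$$ \text{fuzzy:}\; pref(s) = \min_{i} c_i(s), \qquad \text{probabilistic:}\; pref(s) = \prod_{i} c_i(s), \qquad \text{weighted:}\; cost(s) = \sum_{i} c_i(s), $$

with higher values preferred in the first two cases and lower costs preferred in the weighted case.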

Maria Silvia Pini, Francesca Rossi

Speeding Up Constrained Path Solvers with a Reachability Propagator

We present a propagator, which we call Reachability, that implements a generalized reachability constraint on a directed graph g. Given a source node source in g, we can identify three parts in the Reachability constraint: (1) the relation between each node of g and the set of nodes that it reaches, (2) the association of each pair of nodes 〈source, i〉 with its set of cut nodes, and (3) the association of each pair of nodes 〈source, i〉 with its set of bridges.

Luis Quesada, Peter Van Roy, Yves Deville

From Linear Relaxations to Global Constraint Propagation

Recently, many algorithms have been designed to propagate global constraints. Unfortunately, some global constraints, such as the At-Most-1 constraint and the Extended-GCC, are NP-hard to propagate. Often, these constraints can easily be written as integer linear programs. Using linear relaxations and other techniques developed by the operations research community, we want to propagate such constraints efficiently.

We model constraints as integer programs that we relax into linear programs. For each value $v$ in a variable domain $dom(x)$, we create a binary variable $x_v$. The assignment $x_v = 1$ indicates that $x = v$, while $x_v = 0$ indicates that $x \neq v$.
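As a worked illustration of this modelling step (our sketch of a typical formulation, not necessarily the paper's exact model), a global cardinality constraint requiring each value $v$ to be used between $l_v$ and $u_v$ times over variables $x^1, \dots, x^m$ becomes

$$ \sum_{v \in dom(x^j)} x^j_v = 1 \quad (1 \le j \le m), \qquad l_v \le \sum_{j=1}^{m} x^j_v \le u_v \quad (\text{for all } v), \qquad x^j_v \in \{0,1\}. $$

The LP relaxation replaces $x^j_v \in \{0,1\}$ by $0 \le x^j_v \le 1$; since every integer solution is also feasible for the relaxation, a value $v$ can be pruned from $dom(x^j)$ whenever maximizing $x^j_v$ over the relaxation gives an optimum strictly below 1.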

Claude-Guy Quimper, Alejandro López-Ortiz

Encoding HTN Planning as a Dynamic CSP

Constraint satisfaction methodology has proven to be a successful technique for solving a variety of combinatorial and optimization problems. Despite this fact, it has been exploited very little in the planning domain. In particular, hierarchical task network (HTN) planning [2] seems to be well suited to constraint programming. The formulation of an HTN planning problem involves a lot of structural information which can be used to prune the search space. Encoding this structural information by means of constraint programming would provide an effective way of pruning during the search for a solution.

Pavel Surynek, Roman Barták

Specialised Constraints for Stable Matching Problems

The stable marriage problem (SM) and the Hospitals/Residents problem (HR) are both stable matching problems. They consist of two sets of objects that need to be matched to each other; in SM men to women, and in HR residents to hospitals. Each set of objects expresses a ranked preference for the objects in the other set, in the form of a preference list. The problem is then to find a matching of one set to the other such that the matching is stable. A matching is stable iff it contains no blocking pairs. A blocking pair in a matching M consists of two objects x and y, one from each set (x = man and y = woman for SM, or x = hospital and y = resident in HR), such that x and y are not matched to each other in M and both x and y would rather be matched to each other than to their assignments in M.
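The stability test itself is straightforward to state in executable form (an illustrative checker for the SM case, with names of our choosing):

def blocking_pairs(men, women, rank_m, rank_w, wife, husband):
    # rank_m[m][w]: m's rank of w (lower is preferred); wife/husband map
    # each person to their current partner in the matching M
    pairs = []
    for m in men:
        for w in women:
            if wife[m] != w:
                m_prefers = rank_m[m][w] < rank_m[m][wife[m]]
                w_prefers = rank_w[w][m] < rank_w[w][husband[w]]
                if m_prefers and w_prefers:
                    pairs.append((m, w))
    return pairs   # the matching is stable iff this list is empty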

Chris Unsworth, Patrick Prosser

Bounds-Consistent Local Search

With this work we present a hybrid approach to solving large-scale constraint satisfaction and optimization problems. The method proposed can be termed Bounds-Consistent Local Search. It is inspired by the success of a randomized algorithm for the Boolean Satisfiability (SAT) problem called UnitWalk. We have adapted the UnitWalk algorithm to integer programs. The search space is pruned through propagation; in particular we use Bounds-Consistency (BC). In this way we combine local search, which performs well on large combinatorial optimization problems, with the consistency propagation methods used to solve constraint problems. UnitWalk is a simple algorithm that performs well on different instances of hard SAT problems, and it has given the best results on industrial problems in the SAT competition. It has also been proved to be Probabilistically Approximately Complete (PAC), meaning that it succeeds with probability one without restarts from new initial assignments. We opted to use bounds-consistency propagation for linear constraints for two reasons. Firstly, because bounds-consistency implies hyper-arc consistency on integer linear inequalities. Hyper-arc consistency is a strong form of consistency, and linear inequalities are an expressive form of constraint that has already been used to model many problems, including the Multiple Sequence Alignment (MSA) problem from bioinformatics. Secondly, large domains do not need to be explicitly represented, which saves a lot of memory and reduces runtime overheads. With BC we need only maintain two numbers per integer variable: an upper and a lower bound. BC can also be applied to non-linear constraints such as all-different, which we plan to deal with in future work. The algorithm starts with a random assignment of the variables, and then explores the search space by randomly choosing the variable to be instantiated. It performs bounds propagation before and during search. If a domain wipe-out occurs, it restarts the search, using previous successful assignments to guide the selection of domain values. We are improving the algorithm with new heuristics. We have developed a dynamic prioritization heuristic that uses information gained during the search to set the variable selection ordering. This prioritization is updated at each new search, and is inspired by another algorithm called Squeaky Wheel Optimization.
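For reference, the standard bounds-consistency filtering rule for a linear inequality (textbook material, not quoted from the paper): for $\sum_i a_i x_i \le b$ with $a_j > 0$ and current bounds $\underline{x_i} \le x_i \le \overline{x_i}$,

$$ x_j \;\le\; \left\lfloor \frac{b - \sum_{i \ne j} \min(a_i \underline{x_i},\, a_i \overline{x_i})}{a_j} \right\rfloor, $$

and the symmetric rule tightens lower bounds when $a_j < 0$; only the two bounds per variable are ever stored or updated.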

Stefania Verachi, Steven Prestwich

Robust Constraint Solving Using Multiple Heuristics

Representing and solving problems in terms of constraints can be difficult to do effectively. A single problem can be modeled in many different ways, either in terms of representation or in terms of the solving process. Different approaches can outperform each other over different problem classes or even for different instances within the same class. It is possible that even the best combination of model and search on average is still too slow across a range of problems, taking orders of magnitude more time on some problems than combinations that are usually poorer. This fact complicates the use of constraints, and makes it very difficult for novice users to produce effective solutions. The modeling and solving process would be easier if we could develop robust algorithms, which perform acceptably across a range of problems.

Alfio Vidotto, Kenneth N. Brown, J. Christopher Beck

Scheduling with Uncertain Start Dates

In manufacturing scheduling, jobs may have uncertain earliest start times, caused by supplier lead-time uncertainty. How should we build initial schedules to be robust to these uncertain release dates? We are attempting to adapt the online decision-making methods from [1], in particular the expectation and consensus methods, which combine sampling of future scenarios with deterministic optimization.

First, we assume that we have a probability distribution over the release dates, and we use this to select samples, giving them a weight proportional to their probability, which we use when determining the best decision. For expectation, we consider all possible partial schedules of released jobs up to the earliest possible uncertain release date (

t

0

), for each sample extend each partial schedule, and return the one with the lowest expected makespan. For consensus, we find the optimal decision for each sample, and then choose the one with the highest sum of weights. We consider three possible types of initial decision: (i) partial schedules up to

t

0

; (ii) independent resource assignments for a single time step; and (iii) a tuple of resource assignments for a single time step. The latter two require re-optimization at each time-step up to

t

0

.
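A schematic rendering of the two methods (the solver and cost interfaces, and the scenario weighting, are our assumptions about shape, not the authors' code):

    from collections import defaultdict

    def consensus_decision(scenarios, weight, solve):
        # Consensus (sketch): solve each sampled scenario to optimality
        # with a deterministic solver, then return the initial decision
        # endorsed by the largest total weight.  solve(s) -> a hashable
        # initial decision; weight(s) -> probability-proportional weight.
        votes = defaultdict(float)
        for s in scenarios:
            votes[solve(s)] += weight(s)
        return max(votes, key=votes.get)

    def expectation_decision(decisions, scenarios, weight, cost):
        # Expectation (sketch): evaluate every candidate initial decision
        # (e.g. every partial schedule up to t0) against every sampled
        # scenario; return the lowest weighted expected cost (makespan).
        return min(decisions,
                   key=lambda d: sum(weight(s) * cost(d, s)
                                     for s in scenarios))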

We have implemented the above in Ilog Scheduler 6.0 and tested them on modified JSP benchmarks [2]. The experiments show that our four adapted methods all provide good results (i.e., a weighted relative error of no more than 2%). However, it is also possible to get close to optimal without reasoning about the uncertain events in advance (e.g., using purely reactive methods). The problems appear to be too easy, and not sensitive to the uncertainties. Future work will investigate why this is the case, to determine the features that do make problems sensitive to uncertainty in the release dates.

Christine Wei Wu, Kenneth N. Brown, J. Christopher Beck

The Role of Redundant Clauses in Solving Satisfiability Problems

In our work, we investigate the role of redundant clauses in characterizing and solving hard SAT problems. Informally, a redundant clause is one that may be removed from the CNF representation of a SAT instance without altering the satisfying assignments of that instance. Correspondingly, a set of prime clauses is a set of clauses that preserves all the satisfying assignments but contains no redundant clauses. We identify several interesting features of redundant clauses that provide compelling evidence of the correlation between the percentage of redundant clauses and the hardness of instances. We propose a definition of weighted clause-to-variable ratio (WCV), which substantially improves on the classic clause-to-variable (m/n) ratio in predicting search cost and explaining the phase transition. WCV is based on a linear combination of the number of prime clauses (NPC) and the number of redundant clauses (NRC). We compare WCV to a number of existing parameters, including backbone size and backbone fragility, the constrainedness measure, and the m/n ratio, and we posit a variety of advantages of WCV over other measures. We believe that full utilization of redundant knowledge in solving random and real-world SAT problems can significantly improve the performance of SAT solvers, in terms of both the scale of the problems that can be handled and the speed with which they are solved.
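The abstract fixes only that WCV is a linear combination of NPC and NRC. Schematically, with weighting coefficients $\alpha$ and $\beta$ to be fitted empirically, and a normalization by the number of variables $n$ by analogy with the m/n ratio (both the coefficients and the normalization are our illustrative guesses, not the paper's definition):

$$\mathit{WCV} = \frac{\alpha \cdot \mathit{NPC} + \beta \cdot \mathit{NRC}}{n}$$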

Honglei Zeng, Sheila McIlraith

Applying Decomposition Methods to Crossword Puzzle Problems

Structural decomposition methods have been proposed for identifying tractable Constraint Satisfaction Problems (CSPs) [1-5]. The basic principle is to decompose a CSP into tree-structured sub-problems. The sub-problems are solved independently, then the original CSP is solved in a backtrack-free manner after the tree structure is made arc-consistent, as described in [1]. In [5], we proposed four decomposition methods, HINGE+, CUT, TRAVERSE, and CaT, and tested these methods on randomly generated CSPs. Here we compare these techniques on instances of the fully interlocked Crossword Puzzle Problems (CPPs) [6] taken from a public library [7], and identify special cases of the constraint hypergraphs where some decomposition techniques yield better results than others, although in general the opposite holds. Our future work includes: (1) identifying more such configurations, and building hybrid decomposition techniques that exploit this information; (2) tailoring existing decomposition methods for fully interlocked CPPs so that every sub-problem, after backtrack search, has few solutions; and (3) designing a heuristic for applying local search to fully interlocked CPPs. This work is supported by CAREER Award #0133568 from the National Science Foundation.
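The backtrack-free principle behind these methods can be sketched for the simplest case of a tree-structured binary CSP (our illustration of the general principle, not the HINGE+, CUT, TRAVERSE, or CaT algorithms themselves):

    def solve_tree_csp(tree, domains, consistent):
        # tree: node -> list of children; the first key is assumed to be
        # the root.  consistent(p, a, c, b) tests the constraint between
        # parent value a and child value b (illustrative interface).
        root = next(iter(tree))
        order = [root]
        for node in order:                 # parents listed before children
            order.extend(tree.get(node, []))
        # Directional arc-consistency, leaves toward root: prune parent
        # values that have no support in some child's domain.
        for node in reversed(order):
            for child in tree.get(node, []):
                domains[node] = [a for a in domains[node]
                                 if any(consistent(node, a, child, b)
                                        for b in domains[child])]
            if not domains[node]:
                return None                # inconsistent
        # Backtrack-free assignment from the root down.
        solution = {root: domains[root][0]}
        for node in order:
            for child in tree.get(node, []):
                solution[child] = next(
                    b for b in domains[child]
                    if consistent(node, solution[node], child, b))
        return solution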

Yaling Zheng, Berthe Y. Choueiry

Asymmetric Distributed Constraints Satisfaction Problems

Distributed constraint satisfaction problems (DisCSPs) with asymmetric constraints reflect the fact that agents may wish to keep their constraints private. Brito and Meseguer proposed a model for asymmetric constraints which they term Partially Known Constraints (PKC). In the PKC model each binary constraint is divided between the two constraining agents. In order to solve the resulting DisCSP with asymmetric constraints, a two-phase asynchronous backtracking algorithm was proposed [BM03]. In the first phase an asynchronous backtracking algorithm is performed, in which only the constraints held by the lower priority agents are examined. When a solution is reached, a second phase is performed in which the consistency of the solution is checked again, according to the constraints held by the higher priority agents in each binary constraint.

The present paper proposes a one-phase asynchronous backtracking algorithm which solves DisCSPs with asymmetric constraints. In the proposed asynchronous backtracking for asymmetric constraints algorithm (ABT_ASC), agents send their proposed assignments to all their neighbors in the constraints graph. Agents assign their local variables according to the priority order, as in ABT, but check their constraints also against the assignments of lower priority agents. When an agent detects a conflict between its own assignment and the assignment of an agent with a lower priority than itself, it sends a Nogood to the lower priority agent but keeps its assignment. Agents which receive a Nogood from higher priority agents perform the same operations as if they had produced this Nogood themselves. As in ABT [Yok00], they remove their current assignment from their current domain, store the eliminating Nogood, and reassign their variable.
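A schematic rendering of this conflict rule (the agent object, its helper methods, and the messaging layer are all illustrative assumptions, not the authors' code):

    def check_agent_view(agent):
        # ABT_ASC sketch: check our half of each asymmetric constraint
        # against every neighbor's assignment, not only higher-priority
        # agents' assignments.
        for other, their_value in agent.agent_view.items():
            if agent.consistent(agent.value, other, their_value):
                continue
            if agent.has_priority_over(other):
                # Conflict with a lower-priority agent: send it a Nogood,
                # but keep our own assignment (the key ABT_ASC rule).
                agent.send(other, "Nogood", agent.make_nogood(other))
            else:
                # Conflict with a higher-priority agent: react as in ABT.
                handle_nogood(agent, agent.make_nogood(other))

    def handle_nogood(agent, nogood):
        # Received and self-produced Nogoods are treated identically,
        # as in ABT: store it, eliminate the current value, reassign.
        agent.store(nogood)
        agent.current_domain.discard(agent.value)
        agent.reassign()   # pick a new value and inform all neighbors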

The ABT_ASC algorithm is evaluated experimentally on randomly generated DisCSPs and is shown to outperform the two-phase ABT with respect to two different distributed performance measures. ABT_ASC performs fewer non-concurrent constraint checks, by a factor of 6, on the harder problem instances. The load on the network is very similar for both algorithms, as measured by the total number of messages sent.

Roie Zivan, Amnon Meisels

Full Arc Consistency in WCSP and in Constraint Hierarchies with Finite Domains

Consistency techniques have proved to be an efficient method for reducing the search space of CSPs, and hence they have been adapted for soft constraint frameworks too. Several algorithms for local consistency in Weighted CSP, where costs are assigned to tuples, were introduced in [2,3]. The strongest form among them is Full Star Directional Arc Consistency (FDAC*). The algorithm for enforcing FDAC* is based on sending costs from one constraint to another in one direction with respect to a fixed order of the variables.
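The elementary cost-sending step that such algorithms build on can be sketched as follows (a generic soft-arc-consistency projection, not the full FDAC* procedure; the data layout is our assumption):

    def project(binary_cost, unary_x, domain_x, domain_y):
        # Move cost from a binary WCSP constraint C_xy onto the unary
        # costs of x: for each value a, the minimum binary cost over b is
        # subtracted from every tuple (a, b) and added to C_x(a), leaving
        # the total cost of every complete assignment unchanged.
        for a in domain_x:
            alpha = min(binary_cost[a, b] for b in domain_y)
            for b in domain_y:
                binary_cost[a, b] -= alpha
            unary_x[a] += alpha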

Josef Zlomek, Roman Barták

System Demonstrations

CoJava: A Unified Language for Simulation and Optimization

We have proposed and implemented the language CoJava, which offers both the advantages of simulation-like process modeling in Java, and the capabilities of true decision optimization. We will demonstrate the modeling methodology and implementation techniques, following an optimization application example.

By design, the syntax of CoJava is identical to the programming language Java, extended with special constructs to (1) make a non-deterministic choice of a numeric value, (2) assert a constraint, and (3) designate a program variable as the objective to be optimized.
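To make the three constructs concrete, here is a hypothetical analogue in Python; the choose_int, require, and minimize names are our inventions to mirror the constructs, not CoJava's actual syntax:

    def production_plan(model):
        # Hypothetical model API mirroring CoJava's three constructs.
        x = model.choose_int(0, 100)           # (1) non-deterministic choice
        y = model.choose_int(0, 100)
        model.require(2 * x + 3 * y <= 120)    # (2) assert a constraint
        model.minimize(-(5 * x + 4 * y))       # (3) objective to optimize
        return x, y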

Alexander Brodsky, Hadon Nash

Programming with $\mathcal{TOY}(\mathcal{FD})$

In [1] we presented the language $\mathcal{TOY}(\mathcal{FD})$, which integrates the best features of existing functional and logic languages, as well as finite domain ($\mathcal{FD}$) constraint solving. We believe that $\mathcal{TOY}(\mathcal{FD})$ is more flexible and expressive than existing approaches to constraint logic programming over finite domains (CLP($\mathcal{FD}$)), as it integrates $\mathcal{FD}$ constraint solving, lazy evaluation, higher-order applications of functions and constraints, polymorphism, type checking, composition of functions (and, in particular, constraints), combination of relational and functional notation, and a number of other characteristics. These features allow one to write more concise programs, thereby increasing the level of expressivity.

Antonio J. Fernández, Teresa Hortalá-González, Fernando Sáenz-Pérez

Computing Super-Schedules

RobustScheduler

Emmanuel Hebrard, Paul Tyler, Toby Walsh

Proterv-II: An Integrated Production Planning and Scheduling System

Medium-term production planning and short-term scheduling match future production load and capacities over various horizons and on different levels of detail. Although these two levels of the decision hierarchy are strongly interdependent, traditional systems handle them separately. In the Proterv-II prototype system that was developed for manufacturing industries, the two levels are linked by an automated aggregation procedure that constructs the planning representation from detailed job-shop level data [3].

András Kovács, Péter Egri, Tamás Kis, József Váncza

The Comet Programming Language and System

Comet

Laurent Michel, Pascal Van Hentenryck

Random Stimuli Generation for Functional Hardware Verification as a CP Application

Functional verification of modern hardware designs consumes roughly 70% of the effort invested in the design cycle. Simulation of randomly generated stimuli is the main means of achieving functional verification. A typical verification effort is centered around a stimuli generator which produces a massive number of test cases that are simulated on the verified design. Bugs are then identified when the design behaves incorrectly. (In some cases this process is complemented by some amount of formal verification.)

Yehuda Naveh, Roy Emek

A BDD-Based Interactive Configurator for Modular Systems

Interactive configuration can be viewed as interactive CSP solving. We see interactive modular configuration as interactive modular CSP solving, where we define a modular CSP as a labeled rooted directed multigraph in which each vertex is labeled with a CSP, and each edge is labeled with (1) a set of total assignments to the variables of the CSP at the source vertex and (2) a set of equality constraints between the variables in the source and destination CSPs.

This allows the modeling of hierarchical systems through the use of edges (to model both has-a and is-a relationships), and the modeling of unbounded systems through the creation of cycles in the problem graph.
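One possible encoding of this problem-graph structure (the class and field names are our assumptions; the paper's BDD-based representation is not shown):

    from dataclasses import dataclass

    @dataclass
    class ModuleCSP:
        # A vertex of the problem multigraph: an ordinary CSP.
        variables: dict    # variable name -> domain (set of values)
        constraints: list  # e.g. (scope, allowed-tuples) pairs

    @dataclass
    class ModuleEdge:
        # An edge label: (1) the total source assignments under which the
        # destination module is present, and (2) equalities tying source
        # variables to destination variables.
        source: ModuleCSP
        destination: ModuleCSP
        activating_assignments: list   # total assignments at the source
        equalities: list               # (source_var, destination_var) pairs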

During configuration, the user builds a labeled rooted solution tree which is homomorphic with the problem graph — each solution tree node (object) is associated with a problem graph vertex (class) — and which is isomorphic with the configured system’s structure. The nodes are essentially labeled with the sets of full assignments that can still be part of globally consistent solutions.

Erik R. van der Meer

Backmatter
