
2012 | Book

Principles and Practice of Constraint Programming

18th International Conference, CP 2012, Québec City, QC, Canada, October 8-12, 2012. Proceedings

About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 18th International Conference on Principles and Practice of Constraint Programming (CP 2012), held in Québec City, Canada, in October 2012. The 68 revised full papers were carefully selected from 186 submissions. Besides the technical program, the conference featured two special tracks. The first was the traditional application track, which focused on industrial and academic uses of constraint technology and its comparison and integration with other optimization techniques (MIP, local search, SAT, etc.). The second track, featured for the first time in 2012, concentrated on multidisciplinary papers: cross-cutting methodology and challenging applications, collecting papers that link CP technology with other techniques like machine learning, data mining, game theory, simulation, knowledge compilation, visualization, control theory, and robotics. In addition, the track focused on challenging application fields with a high social impact such as CP for life sciences, sustainability, energy efficiency, web, social sciences, finance, and verification.

Table of Contents

Frontmatter

Invited Papers

Constraint Programming and a Usability Quest

In 2004, Jean-Francois Puget presented [2] an analysis of the “Simplicity of Use” of Constraint Programming, from which he articulated a series of challenges to make Constraint Programming systems accessible and easier to use. The core of the argument was a contrast between mathematical programming and constraint programming tools. Mathematical programming adopts a model and run paradigm, relies on a simple vocabulary to model problems (i.e., linear constraints), supports standard formats for sharing models, and benefits from extensive documentation on how to model [5]. Constraint programming features a model and search paradigm, rich modeling languages with combinatorial objects, and a distinctive flavor of programming. While this can be construed as CP’s Achilles’ heel, it is also its most potent strength and is supported by modeling aids [3,4]. The very existence of sophisticated parameter tuning solutions for SAT solvers and mathematical programming solvers to determine ideal parameters (e.g., ParamILS [1]) certainly casts a major shadow on the potency of the model and run mantra, which is evolving into model and search for the right parameters.

Laurent D. Michel
Optimization Challenges in Smart Grid Operations

A smart grid is the combination of a traditional power distribution system with two-way communication between suppliers and consumers. While this communication is expected to deliver energy savings, cost reductions and increased reliability and security, smart grids bring up challenges in the management of the power system, such as integrating renewable energy sources and incorporating demand-management approaches. We discuss how optimization-based techniques are being used to overcome some of these challenges and speculate on ways in which constraint programming could play a greater role in this area.

Miguel F. Anjos
Where Are the Interesting Problems?

Constraint programming has become an important technology for solving hard combinatorial problems in a diverse range of application domains. It has its roots in artificial intelligence, mathematical programming, operations research, and programming languages. In this talk we will discuss a number of challenging application domains for constraint programming, and the technical challenges that these present to the research community.

Barry O’Sullivan

Best Paper

A Generic Method for Identifying and Exploiting Dominance Relations

Many constraint problems exhibit dominance relations which can be exploited for dramatic reductions in search space. Dominance relations are a generalization of symmetry and conditional symmetry. However, unlike symmetry breaking, which is relatively well studied, dominance breaking techniques are not very well understood and are not commonly applied. In this paper, we present formal definitions of dominance breaking, and a generic method for identifying and exploiting dominance relations via dominance breaking constraints. We also give a generic proof of the correctness and compatibility of symmetry breaking constraints, conditional symmetry breaking constraints and dominance breaking constraints.

Geoffrey Chu, Peter J. Stuckey
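
As a rough illustration of the flavour of a dominance breaking constraint (a hand-derived example, not the paper's generic method), the sketch below posts x[j] <= x[i] in a tiny 0/1 knapsack whenever item i dominates item j, and checks by brute force that the optimum is preserved. All data are hypothetical.

```python
# Illustrative sketch (not the paper's generic method): a hand-derived
# dominance breaking constraint for a 0/1 knapsack modelled as a CSP.
# If item i has value >= that of item j and weight <= that of item j, any solution
# taking j but not i is dominated by swapping j for i, so we may post x[j] <= x[i].
from itertools import product

values = [6, 5, 4]      # hypothetical data
weights = [3, 4, 4]
capacity = 7

def best(extra_constraints):
    best_val = -1
    for x in product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) > capacity:
            continue
        if any(not c(x) for c in extra_constraints):
            continue
        best_val = max(best_val, sum(v * xi for v, xi in zip(values, x)))
    return best_val

# Item 0 dominates item 1 (higher value, lower weight): post x[1] <= x[0].
dominance = [lambda x: x[1] <= x[0]]
assert best([]) == best(dominance)   # the optimum is preserved
print(best([]), best(dominance))     # 11 11
```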

Best Application Paper

Scheduling Scientific Experiments on the Rosetta/Philae Mission

The Rosetta/Philae mission was launched in 2004 by the European Space Agency (ESA). It is scheduled to reach the comet 67P/Churyumov-Gerasimenko in 2014 after traveling more than six billion kilometers. The Philae module will then be separated from the orbiter (Rosetta) to attempt the first ever landing on the surface of a comet. If it succeeds, it will engage a sequence of scientific exploratory experiments on the comet.

In this paper we describe a constraint programming model for scheduling the different experiments of the mission. A feasible plan must satisfy a number of constraints induced by energetic resources, precedence relations on activities, or incompatibility between instruments. Moreover, a very important aspect is related to the transfer (to the orbiter then to the Earth) of all the data produced by the instruments. The capacity of inboard memories and the limitation of transfers within visibility windows between lander and orbiter make the transfer policy implemented on the lander’s CPU prone to data loss. We introduce a global constraint to handle data transfers. The goal of this constraint is to ensure that data-producing activities are scheduled in such a way that no data is lost.

Thanks to this constraint and to the filtering rules we propose, mission control is now able to compute feasible plans in a few seconds for scenarios where minutes were previously often required. Moreover, in many cases, data transfers are now much more accurately simulated, thus increasing the reliability of the plans.

Gilles Simonin, Christian Artigues, Emmanuel Hebrard, Pierre Lopez

Honorable Mentions

Max-Sur-CSP on Two Elements

Max-Sur-CSP is the following optimisation problem: given a set of constraints, find a surjective mapping of the variables to domain values that satisfies as many of the constraints as possible. Many natural problems, e.g. Minimum k-Cut (which has many different applications in a variety of fields) and Minimum Distance (which is an important problem in coding theory), can be expressed as Max-Sur-CSPs. We study Max-Sur-CSP on the two-element domain and determine the computational complexity for all constraint languages (families of allowed constraints). Our results show that the problem is solvable in polynomial time if the constraint language belongs to one of three classes, and NP-hard otherwise. An important part of our proof is a polynomial-time algorithm for enumerating all near-optimal solutions to a generalised minimum cut problem. This algorithm may be of independent interest.

Hannes Uppman
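
For readers unfamiliar with the problem, the following minimal sketch evaluates Max-Sur-CSP on the domain {0, 1} by brute force; it merely illustrates the definition (surjectivity forces both values to be used, which is what makes cut-style problems expressible) and has nothing to do with the paper's complexity classification. The instance data are made up.

```python
from itertools import product

# Brute-force evaluation of Max-Sur-CSP over the two-element domain {0, 1}
# (illustrative only; the paper's contribution is the complexity classification).
def max_sur_csp(n_vars, constraints):
    """constraints: list of (scope, relation); relation is a set of allowed tuples."""
    best = -1
    for assignment in product([0, 1], repeat=n_vars):
        if len(set(assignment)) < 2:          # surjectivity: both values must be used
            continue
        satisfied = sum(1 for scope, rel in constraints
                        if tuple(assignment[v] for v in scope) in rel)
        best = max(best, satisfied)
    return best

# Minimum (2-)Cut flavour: maximise satisfied equality constraints on the edges of a
# triangle; surjectivity forbids putting all vertices on one side (hypothetical data).
eq = {(0, 0), (1, 1)}
edges = [((0, 1), eq), ((1, 2), eq), ((0, 2), eq)]
print(max_sur_csp(3, edges))   # 1: at best one edge stays uncut, so the minimum cut is 2
```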
An Optimal Arc Consistency Algorithm for a Chain of Atmost Constraints with Cardinality

The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of n variables and of n − q + 1 constraints AtMost u on each subsequence of size q.

This constraint is useful in car-sequencing and crew-rostering problems. In [18], two algorithms designed for the AmongSeq constraint were adapted to this constraint with a O(2^q·n) and O(n^3) worst case time complexity, respectively. In [10], another algorithm similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of O(n^2) was proposed.

In this paper, we introduce an algorithm for achieving Arc Consistency on the AtMostSeqCard constraint with an O(n) (hence optimal) worst case time complexity. We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.

Mohamed Siala, Emmanuel Hebrard, Marie-José Huguet
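
A minimal checker for ground assignments of the AtMostSeqCard constraint, as defined above, might look as follows; it is purely illustrative and unrelated to the O(n) propagator of the paper. Here d denotes the required total cardinality.

```python
# Minimal checker for AtMostSeqCard as described above (illustrative only):
# the conjunction of a cardinality constraint sum(X) = d and AtMost(u) on
# every window of q consecutive variables.
def atmost_seq_card(X, q, u, d):
    if sum(X) != d:
        return False
    return all(sum(X[i:i + q]) <= u for i in range(len(X) - q + 1))

# Car-sequencing flavour: at most 1 car with a given option in any 3 consecutive
# slots, and 2 such cars in total (hypothetical numbers).
print(atmost_seq_card([1, 0, 0, 1, 0], q=3, u=1, d=2))   # True
print(atmost_seq_card([1, 1, 0, 0, 0], q=3, u=1, d=2))   # False: window [1,1,0] exceeds u
```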

CP Main Track

Conflict Directed Lazy Decomposition

Two competing approaches to handling complex constraints in satisfaction and optimization problems using SAT and LCG/SMT technology are: decompose the complex constraint into a set of clauses; or (theory) propagate the complex constraint using a standalone algorithm and explain the propagation. Each approach has its benefits. The decomposition approach is prone to an explosion in size to represent the problem, while the propagation approach may require exponentially more search since it does not have access to intermediate literals for explanation. In this paper we show how we can obtain the best of both worlds by lazily decomposing a complex constraint propagator using conflicts to direct it. If intermediate literals are not helpful for conflicts then it will act like the propagation approach, but if they are helpful it will act like the decomposition approach. Experimental results show that it is never much worse than the better of the decomposition and propagation approaches, and sometimes better than both.

Ignasi Abío, Peter J. Stuckey
Improving SAT-Based Weighted MaxSAT Solvers

In the last few years, there has been a significant effort in designing and developing efficient Weighted MaxSAT solvers. We study in detail the WPM1 algorithm identifying some weaknesses and proposing solutions to mitigate them. Basically, WPM1 is based on iteratively calling a SAT solver and adding blocking variables and cardinality constraints to relax the unsatisfiable cores returned by the SAT solver. We firstly identify and study how to break the symmetries introduced by the blocking variables and cardinality constraints. Secondly, we study how to prioritize the discovery of higher quality cores. We present an extensive experimental investigation comparing the new algorithm with state-of-the-art solvers showing that our approach makes WPM1 much more competitive.

Carlos Ansótegui, Maria Luisa Bonet, Joel Gabàs, Jordi Levy
Distributed Tree Decomposition with Privacy

Tree Decomposition of Graphical Models is a well known method for mapping a graph into a tree that is commonly used to speed up the solving of many problems. However, in a distributed setting, one may have to respect privacy rules (a subset of variables may have to be kept secret in a peer) and the initial network architecture (no link can be dynamically added). In this context, we propose a new distributed method, based on token passing and local elections, that shows performance (in terms of jointree quality) close to the state-of-the-art Bucket Elimination in the centralized case (i.e. when used without these two restrictions). Until now, the state of the art in a distributed context was a Depth-First traversal with a clever heuristic. It is outperformed by our method on two families of problems sharing the small-world property.

Vincent Armant, Laurent Simon, Philippe Dague
Refining Restarts Strategies for SAT and UNSAT

So-called Modern SAT solvers are built upon a few – but essential – ingredients: branching, learning, restarting and clause database cleaning. Most of them have been greatly improved since their first introduction more than ten years ago. In many cases, the initial reasons that led to their introduction no longer explain their current usage (for instance: very rapid restarts, aggressive clause database cleaning). Modern SAT solvers themselves share fewer and fewer properties with their ancestor, the classical backtrack search DPLL procedure.

In this paper, we explore restart strategies in the light of a new vision of SAT solvers. Following the successful results of Glucose, we consider CDCL solvers as resolution-based producers of clauses. We show that this vision is particularly salient for targeting UNSAT formulae. In a second part, we show how detecting sudden increases in the number of variable assignments can help the solver to target SAT instances too. By varying our restart strategy, we show an important improvement over Glucose 2.0, the winner of the 2011 SAT Competition, category Application SAT+UNSAT formulae. Finally, we would like to point out that this new version of Glucose was the winner of the SAT Challenge 2012.

Gilles Audemard, Laurent Simon
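
The following is a simplified sketch of the kind of LBD-driven restart policy discussed above, with restarts triggered by a degradation of recent clause quality and blocked by a surge in assignments; the window sizes and factors are illustrative placeholders, not Glucose's actual parameters.

```python
from collections import deque

# Simplified sketch of an LBD-driven restart policy in the spirit described above
# (thresholds and window sizes are illustrative, not Glucose's actual values).
class RestartPolicy:
    def __init__(self, window=50, restart_factor=1.25, block_factor=1.4):
        self.recent_lbd = deque(maxlen=window)     # LBDs of the last learnt clauses
        self.lbd_sum = 0.0
        self.lbd_count = 0
        self.recent_trail = deque(maxlen=window)   # recent assignment-stack sizes
        self.restart_factor = restart_factor
        self.block_factor = block_factor

    def on_conflict(self, lbd, trail_size):
        """Call after each conflict; returns True if the solver should restart."""
        self.recent_lbd.append(lbd)
        self.lbd_sum += lbd
        self.lbd_count += 1
        # Block restarts when the number of assignments surges (likely close to a model).
        if self.recent_trail and trail_size > self.block_factor * (
                sum(self.recent_trail) / len(self.recent_trail)):
            self.recent_trail.append(trail_size)
            return False
        self.recent_trail.append(trail_size)
        if len(self.recent_lbd) < self.recent_lbd.maxlen:
            return False
        recent_avg = sum(self.recent_lbd) / len(self.recent_lbd)
        global_avg = self.lbd_sum / self.lbd_count
        # Restart when recently learnt clauses are much "worse" (higher LBD) than average.
        if recent_avg > self.restart_factor * global_avg:
            self.recent_lbd.clear()
            return True
        return False
```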
Boosting Local Consistency Algorithms over Floating-Point Numbers

Solving constraints over floating-point numbers is a critical issue in numerous applications, notably in program verification. Capabilities of filtering algorithms over the floating-point numbers ($\mathcal{F}$) have so far been limited to 2b-consistency and its derivatives. Though safe, such filtering techniques suffer from the well known pathological problems of local consistencies, e.g., the inability to efficiently handle multiple occurrences of the variables. These limitations also have their origins in the strongly restricted floating-point arithmetic. To circumvent the poor properties of floating-point arithmetic, we propose in this paper a new filtering algorithm, called FPLP, which relies on various relaxations over the real numbers of the problem over $\mathcal{F}$. Safe bounds of the domains are computed with a mixed integer linear programming (MILP) solver on safe linearizations of these relaxations. Preliminary experiments on a relevant set of benchmarks are promising and show that this approach can be effective for boosting local consistency algorithms over $\mathcal{F}$.

Mohammed Said Belaid, Claude Michel, Michel Rueher
A Model Seeker: Extracting Global Constraint Models from Positive Examples

We describe a system which generates finite domain constraint models from positive example solutions, for highly structured problems. The system is based on the global constraint catalog, providing the library of constraints that can be used in modeling, and the Constraint Seeker tool, which finds a ranked list of matching constraints given one or more sample call patterns.

We have tested the modeler with 230 examples, ranging from 4 to 6,500 variables, using between 1 and 7,000 samples. These examples come from a variety of domains, including puzzles, sports-scheduling, packing & placement, and design theory. When comparing against manually specified “canonical” models for the examples, we achieve a hit rate of 50%, processing the complete benchmark set in less than one hour on a laptop. Surprisingly, in many cases the system finds usable candidate lists even when working with a single, positive example.

Nicolas Beldiceanu, Helmut Simonis
On Computing Minimal Equivalent Subformulas

A propositional formula in Conjunctive Normal Form (CNF) may contain redundant clauses — clauses whose removal from the formula does not affect the set of its models. Identification of redundant clauses is important because redundancy often leads to unnecessary computation, wasted storage, and may obscure the structure of the problem. A formula obtained by the removal of all redundant clauses from a given CNF formula ${\mathcal{F}}$ is called a Minimal Equivalent Subformula (MES) of ${\mathcal{F}}$. This paper proposes a number of efficient algorithms and optimization techniques for the computation of MESes. Previous work on MES computation proposes a simple algorithm based on iterative application of the definition of a redundant clause, similar to the well-known deletion-based approach for the computation of Minimal Unsatisfiable Subformulas (MUSes). This paper observes that, in fact, most of the existing algorithms for the computation of MUSes can be adapted to the computation of MESes. However, some of the optimization techniques that are crucial for the performance of the state-of-the-art MUS extractors cannot be applied in the context of MES computation, and thus the resulting algorithms are often not efficient in practice. To address the problem of efficient computation of MESes, the paper develops a new class of algorithms that are based on the iterative analysis of subsets of clauses. The experimental results, obtained on representative problem instances, confirm the effectiveness of the proposed algorithms. The experimental results also reveal that many CNF instances obtained from the practical applications of SAT exhibit a large degree of redundancy.

Anton Belov, Mikoláš Janota, Inês Lynce, Joao Marques-Silva
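
To make the deletion-based baseline concrete, the sketch below checks clause redundancy by brute-force satisfiability on tiny formulas (a real MES extractor would call a SAT solver instead); clauses are lists of signed integers in the usual DIMACS convention. This is only the naive algorithm mentioned above, not the paper's new algorithms.

```python
from itertools import product

# Brute-force sketch of the deletion-based MES computation mentioned above:
# a clause c is redundant iff (F \ {c}) entails c, i.e. (F \ {c}) plus the negation
# of c is unsatisfiable. Real MES extractors call a SAT solver here instead.
def satisfiable(clauses, n_vars):
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def minimal_equivalent_subformula(clauses, n_vars):
    kept = list(clauses)
    for c in list(clauses):
        rest = [cl for cl in kept if cl != c]
        negated_c = [[-l] for l in c]           # unit clauses forcing c to be false
        if not satisfiable(rest + negated_c, n_vars):
            kept = rest                          # c is entailed by the rest: drop it
    return kept

# (x1 v x2), (x1), (x2): the first clause is redundant.
print(minimal_equivalent_subformula([[1, 2], [1], [2]], n_vars=2))   # [[1], [2]]
```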
Including Soft Global Constraints in DCOPs

In the centralized context, global constraints have been essential for the advancement of constraint reasoning. In this paper we propose to include soft global constraints in distributed constraint optimization problems (DCOPs). Looking for efficiency, we study possible decompositions of global constraints, including the use of extra variables. We extend the distributed search algorithm BnB-ADOPT+ to support these representations of global constraints. In addition, we explore the relation of global constraints with soft local consistency in DCOPs, in particular for the generalized soft arc consistency (GAC) level. We include specific propagators for some well-known soft global constraints. Finally, we provide empirical results on several benchmarks.

Christian Bessiere, Patricia Gutierrez, Pedro Meseguer
The Weighted Average Constraint

Weighted average expressions frequently appear in the context of allocation problems with balancing based constraints. In combinatorial optimization they are typically avoided by exploiting problem specificities or by operating on the search process. This approach fails to apply when the weights are decision variables and when the average value is part of a more complex expression. In this paper, we introduce a novel average constraint to provide a convenient model and efficient propagation for weighted average expressions appearing in a combinatorial model. This result is especially useful for Empirical Models extracted via Machine Learning (see [2]), which frequently count average expressions among their inputs. We provide basic and incremental filtering algorithms. The approach is tested on classical benchmarks from the OR literature and on a workload dispatching problem featuring an Empirical Model. In our experimentation the novel constraint, in particular with incremental filtering, proved to be even more efficient than traditional techniques at tackling weighted average expressions.

Alessio Bonfietti, Michele Lombardi
Weibull-Based Benchmarks for Bin Packing

Bin packing is a ubiquitous problem that arises in many practical applications. The motivation for the work presented here comes from the domain of data centre optimisation. In this paper we present a parameterisable benchmark generator for bin packing instances based on the well-known Weibull distribution. Using the shape and scale parameters of this distribution we can generate benchmarks that contain a variety of item size distributions. We show that real-world bin packing benchmarks can be modelled extremely well using our approach. We also study both systematic and heuristic bin packing methods under a variety of Weibull settings. We observe that for all bin capacities, the number of bins required in an optimal solution increases as the Weibull shape parameter increases. However, for each bin capacity, there is a range of Weibull shape settings, corresponding to different item size distributions, for which bin packing is hard for a CP-based method.

Ignacio Castiñeiras, Milan De Cauwer, Barry O’Sullivan
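
A minimal sketch of a Weibull-based instance generator in the spirit of the paper, paired with a first-fit decreasing heuristic; the shape, scale, and capacity values below are hypothetical and not taken from the paper's experiments.

```python
import math
import random

# Sketch of a Weibull-based bin packing benchmark generator in the spirit of the
# paper (parameter values below are hypothetical, not the paper's settings).
def generate_instance(n_items, shape, scale, capacity, seed=0):
    rng = random.Random(seed)
    items = []
    while len(items) < n_items:
        # random.weibullvariate(alpha, beta): alpha = scale, beta = shape.
        size = math.ceil(rng.weibullvariate(scale, shape))
        if 1 <= size <= capacity:          # discard sizes that cannot fit in any bin
            items.append(size)
    return items

def first_fit_decreasing(items, capacity):
    bins = []                              # current load of each open bin
    for item in sorted(items, reverse=True):
        for i, load in enumerate(bins):
            if load + item <= capacity:
                bins[i] += item
                break
        else:
            bins.append(item)              # open a new bin
    return len(bins)

items = generate_instance(100, shape=2.0, scale=30.0, capacity=100)
print("FFD bins:", first_fit_decreasing(items, capacity=100))
```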
Space-Time Tradeoffs for the Regular Constraint

Many global constraints can be described by a regular expression or a DFA. Originally, the regular constraint uses a DFA to describe the constraint; however, it can also be used to express a table constraint. Thus, there are many representations for an equivalent constraint. In this paper, we investigate the effect of different representations on natural extensions of the regular constraint, focusing on the space-time tradeoffs. We propose a variant of the regular constraint, nfac(X), where X can be an NFA, DFA or MDD. Our nfac algorithm enforces GAC directly on any of the input representations, thus generalizing the mddc algorithm. We also give an algorithm to directly convert an NFA representation to an MDD for nfac(MDD) or mddc. Our experiments show that the NFA representation not only saves space but also time. When the ratio of the NFA over the MDD or DFA size is small enough, nfac(NFA) is faster. When the size ratio is larger, nfac(NFA) still saves space and is a bit slower. In some problems, the initialization cost of MDD or DFA can also be a significant overhead, unlike the NFA, which has a low initialization cost. We also show that the effect of the early-cutoff optimization is significant in all the representations.

Kenil C. K. Cheng, Wei Xia, Roland H. C. Yap
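
For background, the classic DFA-based filtering for a regular constraint can be sketched with a forward/backward pass over a layered unfolding, as below; this is the standard construction, not the paper's nfac algorithm, and the DFA used in the example is made up.

```python
# Classic layered DFA filtering for a regular constraint (standard construction,
# not the paper's nfac algorithm). domains: list of sets of values;
# transitions: dict mapping (state, value) -> state.
def regular_filter(domains, transitions, start, accepting):
    n = len(domains)
    forward = [set() for _ in range(n + 1)]        # states reachable from the start
    forward[0] = {start}
    for i, dom in enumerate(domains):
        for q in forward[i]:
            for v in dom:
                nxt = transitions.get((q, v))
                if nxt is not None:
                    forward[i + 1].add(nxt)
    backward = [set() for _ in range(n + 1)]       # states that can still reach acceptance
    backward[n] = forward[n] & set(accepting)
    for i in range(n - 1, -1, -1):
        for q in forward[i]:
            if any(transitions.get((q, v)) in backward[i + 1] for v in domains[i]):
                backward[i].add(q)
    # A value survives iff it lies on some path from the start to an accepting state.
    return [{v for v in dom
             if any(transitions.get((q, v)) in backward[i + 1] for q in backward[i])}
            for i, dom in enumerate(domains)]

# DFA tracking the parity of 1s over {0, 1}: states 'e' (even) and 'o' (odd).
t = {('e', 0): 'e', ('e', 1): 'o', ('o', 0): 'o', ('o', 1): 'e'}
# With X1 = 1 and X2 = 0 fixed, X0 is forced to 1 to keep the number of 1s even.
print(regular_filter([{0, 1}, {1}, {0}], t, start='e', accepting={'e'}))  # [{1}, {1}, {0}]
```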
Inter-instance Nogood Learning in Constraint Programming

Lazy Clause Generation is a powerful approach to reducing search in Constraint Programming. This is achieved by recording sets of domain restrictions that previously led to failure as new clausal propagators called nogoods. This dramatically reduces the search and provides orders of magnitude speedups on a wide range of problems. Current implementations of Lazy Clause Generation only allow solvers to learn and utilize nogoods within an individual problem. This means that everything the solver learns will be forgotten as soon as the current problem is finished. In this paper, we show how Lazy Clause Generation can be extended so that nogoods learned from one problem can be retained and used to significantly speed up the solution of other, similar problems.

Geoffrey Chu, Peter J. Stuckey
Solving Temporal Problems Using SMT: Strong Controllability

Many applications, such as scheduling and temporal planning, require the solution of Temporal Problems (TP’s) representing constraints over the timing of activities. A TP with uncertainty (TPU) is characterized by activities with uncontrollable duration. Depending on the Boolean structure of the constraints, we have simple (STPU), constraint satisfaction (TCSPU), and disjunctive (DTPU) temporal problems with uncertainty.

In this work we tackle the problem of strong controllability, i.e. finding an assignment to all the controllable time points, such that the constraints are fulfilled under any possible assignment of uncontrollable time points. We work in the framework of Satisfiability Modulo Theory (SMT), where uncertainty is expressed by means of universal quantifiers. We obtain the first practical and comprehensive solution for strong controllability: the use of quantifier elimination techniques leads to quantifier-free encodings, which are in turn solved with efficient SMT solvers.

We provide a detailed experimental evaluation of our approach over a large set of benchmarks. The results clearly demonstrate that the proposed approach is feasible, and outperforms the best state-of-the-art competitors, when available.

Alessandro Cimatti, Andrea Micheli, Marco Roveri
A Characterisation of the Complexity of Forbidding Subproblems in Binary Max-CSP

Tractable classes of binary CSP and binary Max-CSP have recently been discovered by studying classes of instances defined by excluding subproblems. In this paper we characterise the complexity of all classes of binary Max-CSP instances defined by forbidding a single subproblem. In the resulting dichotomy, the only non-trivial tractable class defined by a forbidden subproblem corresponds to the set of instances satisfying the so-called joint-winner property.

Martin C. Cooper, Guillaume Escamocher, Stanislav Živný
Optimisation Modelling for Software Developers

Software developers are an ideal channel for the distribution of Constraint Programming (CP) technology. Unfortunately, including even basic optimisation functionality in an application currently requires the use of an entirely separate paradigm with which most software developers are not familiar.

We suggest an alternative interface to CP designed to overcome this barrier, and describe a prototype implementation for Java. The interface allows an optimisation problem to be defined in terms of procedures rather than decision variables and constraints. Optimisation is seamlessly integrated into a wider application through automatic conversion between this definition and a conventional model solved by an external solver.

This work is inspired by the language CoJava, in which a simulation is automatically translated into an optimal simulation. We extend this idea to support a general interface where optimisation is triggered on-demand. Our implementation also supports much more advanced code, such as object variables, variable-sized collections, and complex decisions.

Kathryn Francis, Sebastian Brand, Peter J. Stuckey
Adaptive Bisection of Numerical CSPs

Bisection is a search algorithm for numerical CSPs. The main principle is to select one variable at every node of the search tree and to bisect its interval domain. In this paper, we introduce a new adaptive variable selection strategy following an intensification diversification approach. Intensification is implemented by the maximum smear heuristic. Diversification is obtained by a round-robin ordering on the variables. The balance is automatically adapted during the search according to the solving state. Experimental results from a set of standard benchmarks show that this new strategy is more robust. Moreover, it is particularly efficient for solving the well-known Transistor problem, illustrating the benefits of an adaptive search.

Laurent Granvilliers
Resource Constrained Shortest Paths with a Super Additive Objective Function

We present an exact solution approach to the constrained shortest path problem with a super additive objective function. This problem generalizes the resource constrained shortest path problem by considering a cost function c(·) such that, given two consecutive paths P1 and P2, c(P1 ∪ P2) ≥ c(P1) + c(P2). Since super additivity invalidates the Bellman optimality conditions, known resource constrained shortest path algorithms must be revisited. Our exact solution algorithm is based on a two stage approach: first, the size of the input graph is reduced as much as possible using resource, cost, and Lagrangian reduced-cost filtering algorithms that account for the super additive cost function. Then, since the Lagrangian relaxation provides a tight lower bound, the optimal solution is computed using a near-shortest path enumerative algorithm that exploits the lower bound. The behavior of the different filtering procedures is compared, in terms of computation time, reduction of the input graph, and solution quality, considering two classes of graphs deriving from real applications.

Stefano Gualandi, Federico Malucelli
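
A tiny numeric illustration of the super-additivity property assumed above, using the hypothetical cost function c(P) = (sum of edge costs)^2, which is super-additive for non-negative edge costs:

```python
# Tiny numeric illustration of super-additivity (hypothetical cost function):
# c(P) = (sum of edge costs)^2 satisfies c(P1 ∪ P2) >= c(P1) + c(P2)
# for non-negative edge costs, since (a + b)^2 >= a^2 + b^2.
def cost(path_edge_costs):
    return sum(path_edge_costs) ** 2

p1, p2 = [2, 3], [1, 4]                    # two consecutive paths, edge costs only
assert cost(p1 + p2) >= cost(p1) + cost(p2)
print(cost(p1), cost(p2), cost(p1 + p2))   # 25 25 100
```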
Relating Proof Complexity Measures and Practical Hardness of SAT

Boolean satisfiability (SAT) solvers have improved enormously in performance over the last 10–15 years and are today an indispensable tool for solving a wide range of computational problems. However, our understanding of what makes SAT instances hard or easy in practice is still quite limited. A recent line of research in proof complexity has studied theoretical complexity measures such as length, width, and space in resolution, which is a proof system closely related to state-of-the-art conflict-driven clause learning (CDCL) SAT solvers. Although it seems like a natural question whether these complexity measures could be relevant for understanding the practical hardness of SAT instances, to date there has been very limited research on such possible connections. This paper sets out on a systematic study of the interconnections between theoretical complexity and practical SAT solver performance. Our main focus is on space complexity in resolution, and we report results from extensive experiments aimed at understanding to what extent this measure is correlated with hardness in practice. Our conclusion from the empirical data is that the resolution space complexity of a formula would seem to be a more fine-grained indicator of whether the formula is hard or easy than the length or width needed in a resolution proof. On the theory side, we prove a separation of general and tree-like resolution space, where the latter has been proposed before as a measure of practical hardness, and also show connections between resolution space and backdoor sets.

Matti Järvisalo, Arie Matsliah, Jakob Nordström, Stanislav Živný
The SeqBin Constraint Revisited

We revisit the SeqBin constraint [1]. This meta-constraint subsumes a number of important global constraints like Change [2], Smooth [3] and IncreasingNValue [4]. We show that the previously proposed filtering algorithm for SeqBin has two drawbacks even under strong restrictions: it does not detect bounds disentailment and it is not idempotent. We identify the cause for these problems, and propose a new propagator that overcomes both issues. Our algorithm is based on a connection to the problem of finding a path of a given cost in a restricted n-partite graph. Our propagator enforces domain consistency in O(n·d^2) time and, for special cases of SeqBin that include Change, Smooth and IncreasingNValue, in O(n·d) time.

George Katsirelos, Nina Narodytska, Toby Walsh
Eigenvector Centrality in Industrial SAT Instances

Despite the success of modern SAT solvers on industrial instances, most of the progress relies on intensive experimental testing of improvements or new ideas. In most cases, the behavior of CDCL solvers cannot be predicted and even small changes may have a dramatic positive or negative effect. In this paper, we do not try to improve the performance of SAT solvers, but rather try to improve our understanding of their behavior. More precisely, we identify an essential structural property of industrial instances, based on the Eigenvector centrality of a graphical representation of the formula. We show how this static value, computed only once over the initial formula, casts new light on the behavior of CDCL solvers.

We also advocate for a better partitioning of industrial problems. Our experiments clearly suggest deep discrepancies among the families of benchmarks used in the last SAT competitions.

George Katsirelos, Laurent Simon
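
As a concrete illustration of the kind of measurement involved, the sketch below computes eigenvector centrality by power iteration on a variable incidence graph built from a CNF formula (two variables are adjacent when they share a clause); this particular graph construction and the tiny formula are assumptions for illustration, not necessarily the paper's exact setup.

```python
# Sketch of eigenvector centrality via power iteration on the variable incidence
# graph of a CNF formula (two variables are adjacent when they share a clause).
# The graph construction is one plausible choice, not necessarily the paper's.
def eigenvector_centrality(clauses, n_vars, iters=100):
    adj = [[0.0] * n_vars for _ in range(n_vars)]
    for clause in clauses:
        vs = {abs(l) - 1 for l in clause}
        for a in vs:
            for b in vs:
                if a != b:
                    adj[a][b] = 1.0
    x = [1.0] * n_vars
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n_vars)) for i in range(n_vars)]
        norm = max(max(y), 1e-12)            # infinity-norm normalisation
        x = [v / norm for v in y]
    return x

cnf = [[1, 2], [1, 3], [1, -4], [2, 3]]       # hypothetical clauses
print(eigenvector_centrality(cnf, n_vars=4))  # variable 1 is the most central
```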
Classifying and Propagating Parity Constraints

Parity constraints, common in application domains such as circuit verification, bounded model checking, and logical cryptanalysis, are not necessarily most efficiently solved if translated into conjunctive normal form. Thus, specialized parity reasoning techniques have been developed in the past for propagating parity constraints. This paper studies the questions of deciding whether unit propagation or equivalence reasoning is enough to achieve full propagation in a given parity constraint set. Efficient approximating tests for answering these questions are developed. It is also shown that equivalence reasoning can be simulated by unit propagation by adding a polynomial amount of redundant parity constraints to the problem. It is proven that without using additional variables, an exponential number of new parity constraints would be needed in the worst case. The presented classification and propagation methods are evaluated experimentally.

Tero Laitinen, Tommi Junttila, Ilkka Niemelä
Consistencies for Ultra-Weak Solutions in Minimax Weighted CSPs Using the Duality Principle

Minimax Weighted Constraint Satisfaction Problems (formerly called Quantified Weighted CSPs) are a framework for modeling soft constrained problems with adversarial conditions. In this paper, we describe novel definitions and implementations of node, arc and full directional arc consistency notions to help reduce the search space on top of the basic tree search with alpha-beta pruning for computing ultra-weak solutions. In particular, these consistencies approximate the lower and upper bounds of the cost of a problem by exploiting the semantics of the quantifiers and reusing techniques from both Weighted and Quantified CSPs. Lower bound computation employs standard estimation of costs in the sub-problems used in alpha-beta search. In estimating upper bounds, we propose two approaches based on the Duality Principle: duality of quantifiers and duality of constraints. The first duality amounts to changing quantifiers from min to max, while the second duality re-uses the lower bound approximation functions on dual constraints to generate upper bounds. Experiments on three benchmarks comparing basic alpha-beta pruning and the six consistencies from the two dualities are performed to confirm the feasibility and efficiency of our proposal.

Arnaud Lallouet, Jimmy H. M. Lee, Terrence W. K. Mak
Propagating Soft Table Constraints

WCSP is a framework that has attracted a lot of attention during the last decade. In particular, many filtering approaches have been developed on the concept of equivalence-preserving transformations (cost transfer operations), using the definition of soft local consistencies such as, for example, node consistency, arc consistency, full directional arc consistency, and existential directional arc consistency. Almost all algorithms related to these properties have been introduced for binary weighted constraint networks, and most of the conducted experiments typically include networks with binary and ternary constraints only. In this paper, we focus on extensional soft constraints (of large arity), so-called soft table constraints. We propose an algorithm to enforce a soft version of generalized arc consistency (GAC) on such constraints, by combining both the techniques of cost transfer and simple tabular reduction, the latter dynamically maintaining the list of allowed tuples in constraint tables. On various series of problem instances containing soft table constraints of large arity, we show the practical interest of our approach.

Christophe Lecoutre, Nicolas Paris, Olivier Roussel, Sébastien Tabary
WCSP Integration of Soft Neighborhood Substitutability

WCSP is an optimization problem for which many forms of soft local (arc) consistency have been proposed such as, for example, existential directional arc consistency (EDAC) and virtual arc consistency (VAC). In this paper, we adopt a different perspective by revisiting the well-known property of (soft) substitutability. First, we provide a clear picture of the relationships existing between soft neighborhood substitutability (SNS) and a tractable property called pcost which allows us to compare the cost of two values (through the use of so-called cost pairs). We prove that under certain assumptions, pcost is equivalent to SNS but weaker than SNS in the general case, since we show that SNS is coNP-hard. We also show that SNS preserves the property VAC but not the property EDAC. Finally, we introduce an algorithm to enforce pcost that benefits from several optimizations (early breaks, residues, timestamping). The practical interest of maintaining pcost together with AC*, FDAC or EDAC during search is shown on various series of WCSP instances.

Christophe Lecoutre, Olivier Roussel, Djamel E. Dehani
Increasing Symmetry Breaking by Preserving Target Symmetries

Breaking the exponential number of all symmetries of a constraint satisfaction problem is often too costly. In practice, we often aim at breaking a subset of the symmetries efficiently, which we call target symmetries. In static symmetry breaking, the goal is to post a set of constraints to break these target symmetries in order to reduce the solution set and thus also the search space. Symmetries of a problem are all intertwined. A symmetry breaking constraint intended for a particular symmetry almost always breaks more than just the intended symmetry as a side-effect. Different constraints for breaking the same target symmetry can have different side-effects. Conventional wisdom suggests that we should select a symmetry breaking constraint that has more side-effects by breaking more symmetries. While this wisdom is valid in many ways, we should be careful where the side-effects take place. A symmetry σ of a CSP $\mathcal{P} = (\mathcal{V},\mathcal{D},\mathcal{C})$ is preserved by a set of symmetry breaking constraints $C^{sb}$ iff σ is a symmetry of $\mathcal{P}' = (\mathcal{V},\mathcal{D},\mathcal{C} \cup C^{sb})$. We give theorems and examples to demonstrate that it is beneficial to post symmetry breaking constraints that preserve the target symmetries and restrict the side-effects to only non-target symmetries as much as possible. The benefits are in terms of the number of symmetries broken and the extent to which a symmetry is broken (or eliminated), resulting in a smaller solution set and search space. Extensive experiments are also conducted to confirm the feasibility and efficiency of our proposal empirically.

Jimmy H. M. Lee, Jingying Li
A Scalable Sweep Algorithm for the cumulative Constraint

This paper presents a sweep based algorithm for the cumulative constraint, which can operate in filtering mode as well as in greedy assignment mode. Given n tasks, this algorithm has a worst-case time complexity of O(n^2). In practice, we use a variant with better average-case complexity but a worst-case complexity of O(n^2·log n), which goes down to O(n·log n) when all tasks have unit duration, i.e. in the bin-packing case. Despite its worst-case time complexity, this algorithm scales well in practice, even when a significant number of tasks can be scheduled in parallel. It handles up to 1 million tasks in one single cumulative constraint in both Choco and SICStus.

Arnaud Letort, Nicolas Beldiceanu, Mats Carlsson
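
For reference, the semantics of cumulative on fixed start times can be checked with a naive time-table profile as below; this is only a checker and bears no relation to the sweep propagator's performance.

```python
# A naive time-table feasibility check for the cumulative constraint on fixed
# start times (illustrative only; the paper's contribution is a sweep propagator).
def cumulative_ok(starts, durations, demands, capacity):
    events = set()
    for s, d in zip(starts, durations):
        events.update((s, s + d))
    for t in sorted(events):
        usage = sum(dem for s, d, dem in zip(starts, durations, demands)
                    if s <= t < s + d)
        if usage > capacity:
            return False
    return True

# Three unit-demand tasks on a resource of capacity 2 (hypothetical data).
print(cumulative_ok([0, 0, 2], [3, 2, 2], [1, 1, 1], capacity=2))  # True
print(cumulative_ok([0, 0, 1], [3, 2, 2], [1, 1, 1], capacity=2))  # False at t = 1
```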
A New Encoding from MinSAT into MaxSAT

MinSAT is the problem of finding a truth assignment that minimizes the number of satisfied clauses in a CNF formula. When we distinguish between hard and soft clauses, and soft clauses have an associated weight, then the problem, called Weighted Partial MinSAT, consists in finding a truth assignment that satisfies all the hard clauses and minimizes the sum of weights of satisfied soft clauses. In this paper we define a novel encoding from Weighted Partial MinSAT into Weighted Partial MaxSAT, which is also valid for encoding Weighted Partial MaxSAT into Weighted Partial MinSAT. Moreover, we report on an empirical investigation that shows that our encoding significantly outperforms existing encodings on weighted and unweighted Min2SAT and Min3SAT instances.

Zhu Zhu, Chu-Min Li, Felip Manyà, Josep Argelich
Solving Minimal Constraint Networks in Qualitative Spatial and Temporal Reasoning

The minimal label problem (MLP) (also known as the deductive closure problem) is a fundamental problem in qualitative spatial and temporal reasoning (QSTR). Given a qualitative constraint network Γ, the minimal network of Γ relates each pair of variables (x, y) by the minimal label of (x, y), which is the minimal relation between x and y that is entailed by the network Γ. It is well-known that MLP is equivalent to the corresponding consistency problem with respect to polynomial Turing-reductions. This paper further shows, for several qualitative calculi including Interval Algebra and RCC-8 algebra, that deciding the minimality of qualitative constraint networks and computing a solution of a minimal constraint network are both NP-hard problems.

Weiming Liu, Sanjiang Li
Containment, Equivalence and Coreness from CSP to QCSP and Beyond

The constraint satisfaction problem (CSP) and its quantified extensions, whether without (QCSP) or with disjunction (QCSP$_\lor$), correspond naturally to the model checking problem for three increasingly stronger fragments of positive first-order logic. Their complexity is often studied when parameterised by a fixed model, the so-called template. It is a natural question to ask when two templates are equivalent, or more generally when one “contains” another, in the sense that a satisfied instance of the first will necessarily be satisfied in the second. One can also ask for a smallest possible equivalent template: this is known as the core for CSP. We recall and extend previous results on containment, equivalence and “coreness” for QCSP$_\lor$ before initiating a preliminary study of cores for QCSP, which we characterise for certain structures and which turns out to be more elusive.

Florent Madelaine, Barnaby Martin
An Optimal Filtering Algorithm for Table Constraints

Filtering algorithms for table constraints are constraint-based, which means that the propagation queue only contains information on the constraints that must be reconsidered. This paper proposes four efficient value-based algorithms for table constraints, meaning that the propagation queue also contains information on the removed values. One of these algorithms (AC5TC-Tr) is proved to have an optimal time complexity of O(r·t + r·d) per table constraint. Experimental results show that, on structured instances, all our algorithms are two or three times faster than the state of the art STR2+ and MDDc algorithms.

Jean-Baptiste Mairy, Pascal Van Hentenryck, Yves Deville
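
As a baseline for comparison, generalized arc consistency on a single positive table constraint can be computed in one pass over the table, as sketched below; the value-based AC5TC algorithms of the paper are considerably more incremental than this. The domains and tuples are made up.

```python
# Baseline GAC for a single (positive) table constraint: keep a value iff it occurs
# in a tuple whose values are all still in the current domains. Illustrative only;
# the value-based AC5TC algorithms above are far more incremental than this.
def gac_table(domains, table):
    supported = [set() for _ in domains]
    for tup in table:
        if all(v in dom for v, dom in zip(tup, domains)):
            for i, v in enumerate(tup):
                supported[i].add(v)
    return [dom & sup for dom, sup in zip(domains, supported)]

domains = [{0, 1, 2}, {0, 1}, {1, 2}]
table = [(0, 0, 1), (0, 1, 2), (1, 0, 0), (2, 1, 1)]   # hypothetical allowed tuples
# Value 1 of the first variable loses its only support (the tuple (1, 0, 0)).
print(gac_table(domains, table))   # [{0, 2}, {0, 1}, {1, 2}]
```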
Parallel SAT Solver Selection and Scheduling

Combining differing solution approaches by means of solver portfolios has proven to be a highly effective technique for boosting solver performance. We consider the problem of generating parallel SAT solver portfolios. Our approach is based on a recently introduced sequential SAT solver portfolio that excelled at the last SAT competition. We show how the approach can be generalized for the parallel case, and how obstacles like parallel SAT solvers and symmetries induced by identical processors can be overcome. We compare different ways of computing parallel solver portfolios with the best performing parallel SAT approaches to date. Extensive experimental results show that the developed methodology very significantly improves our current parallel SAT solving capabilities.

Yuri Malitsky, Ashish Sabharwal, Horst Samulowitz, Meinolf Sellmann
Constraint Satisfaction over Bit-Vectors

Reasoning over bit-vectors arises in a variety of applications in verification and cryptography. This paper presents a bit-vector domain for constraint programming and its associated filtering algorithms. The domain supports all the traditional bit operations and correctly models modulo-arithmetic and overflows. The domain implementation uses bit operations of the underlying architecture, avoiding the drawback of a bit-blasting approach that associates a variable with each bit. The filtering algorithms implement either domain consistency on the bit-vector domain or bit consistency, a new consistency notion introduced in this paper. Filtering algorithms for logical and structural constraints typically run in constant time, while arithmetic constraints such as addition run in time linear in the size of the bit-vectors. The paper also discusses how to channel bit-vector variables with an integer variable.

Laurent D. Michel, Pascal Van Hentenryck
Towards Solver-Independent Propagators

We present an extension to indexicals to describe propagators for global constraints. The resulting language is compiled into actual propagators for different solvers, and is solver-independent. In addition, we show how this high-level description eases the proof of propagator properties, such as correctness and monotonicity. Experimental results show that propagators compiled from their indexical descriptions are sometimes not significantly slower than built-in propagators of Gecode. Therefore, our language can be used for the rapid prototyping of new global constraints.

Jean-Noël Monette, Pierre Flener, Justin Pearson
Interactive Algorithm for Multi-Objective Constraint Optimization

Many real world problems involve multiple criteria that should be considered separately and optimized simultaneously. A Multi-Objective Constraint Optimization Problem (MO-COP) is the extension of a mono-objective Constraint Optimization Problem (COP). In a MO-COP, it is required to provide the most preferred solution for a user among many optimal solutions. In this paper, we develop a novel Interactive Algorithm for MO-COP (MO-IA). The characteristics of this algorithm are as follows: (i) it is guaranteed to find a Pareto solution, (ii) it gradually narrows a region in which the Pareto front may exist, (iii) it is based on a pseudo-tree, which is a widely used graph structure in COP algorithms, and (iv) the complexity of this algorithm is determined by the induced width of problem instances. In the evaluations, we use an existing model for representing a utility function, and show empirically the effectiveness of our algorithm. Furthermore, we propose an extension of MO-IA, which can provide more detailed information about the Pareto front.

Tenda Okimoto, Yongjoon Joe, Atsushi Iwasaki, Toshihiro Matsui, Katsutoshi Hirayama, Makoto Yokoo
Focus : A Constraint for Concentrating High Costs

Many Constraint Programming models use integer cost variables aggregated in an objective criterion. In this context, some constraints involving exclusively cost variables are often imposed. Such constraints are complementary to the objective function. They characterize the solutions which are acceptable in practice. This paper deals with the case where the set of costs is a sequence, in which high values should be concentrated in a small number of areas. Representing such a property through a search heuristic may be complex and, overall, not precise enough. To solve this issue, we introduce a new constraint, Focus(X, yc, len, k), where X is a sequence of n integer variables, yc is an integer variable, and len and k are two integers. To satisfy Focus, the minimum number of distinct sub-sequences of consecutive variables in X, of length at most len and involving exclusively values strictly greater than k, should be less than or equal to yc. We present two examples of problems involving Focus. We propose a complete filtering algorithm with O(n) time complexity.

Thierry Petit
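
A ground checker for Focus, assuming (as the definition above suggests) that every value strictly greater than k must be covered by one of the counted sub-sequences, might look as follows; the example data are hypothetical.

```python
import math

# A checker for Focus(X, yc, len_, k) as described above (illustrative; the paper
# gives an O(n) filtering algorithm, this only tests ground assignments).
def focus_ok(X, yc, len_, k):
    needed = 0
    run = 0
    for v in X + [k]:                             # sentinel closes the last run
        if v > k:
            run += 1
        else:
            if run:
                needed += math.ceil(run / len_)   # split the run into pieces of length <= len_
            run = 0
    return needed <= yc

# High costs (> 0) must fit into at most 2 windows of length at most 3.
print(focus_ok([4, 5, 0, 0, 7, 2, 1], yc=2, len_=3, k=0))   # True: two runs, one window each
print(focus_ok([4, 0, 5, 0, 7], yc=2, len_=3, k=0))         # False: three separated high values
```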
Refining Abstract Interpretation Based Value Analysis with Constraint Programming Techniques

Abstract interpretation based value analysis is a classical approach for verifying programs with floating-point computations. However, state-of-the-art tools compute an over-approximation of the variable values that can be very coarse. In this paper, we show that constraint solvers can significantly refine the approximations computed with abstract interpretation tools. We introduce a hybrid approach that combines abstract interpretation and constraint programming techniques in a single static and automatic analysis. rAiCp, the system we developed, is substantially more precise than Fluctuat, a state-of-the-art static analyser. Moreover, it could eliminate 13 false alarms generated by Fluctuat on a standard set of benchmarks.

Olivier Ponsini, Claude Michel, Michel Rueher
Time-Dependent Simple Temporal Networks

Simple Temporal Networks (STN) allow conjunctions of minimum and maximum distance constraints between pairs of temporal positions to be represented. This paper introduces an extension of STN called Time-dependent STN (TSTN), which covers temporal constraints for which the minimum and maximum distances required between two temporal positions x and y are not necessarily constant but may depend on the assignments of x and y. Such constraints are useful to model problems in which the transition time required between two activities may depend on the time at which the transition is triggered. Properties of the new framework are analyzed, and standard STN solving techniques are extended to TSTN. The contributions are applied to the management of temporal constraints for so-called “agile” satellites.

Cédric Pralet, Gérard Verfaillie
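
For background on the STN part, consistency of a Simple Temporal Network reduces to the absence of negative cycles in its distance graph, which the sketch below checks with Floyd-Warshall; the constants are illustrative and the TSTN extension of the paper is not modelled here.

```python
# Consistency check for a Simple Temporal Network via Floyd-Warshall on the
# distance graph (background for the TSTN extension above; data are illustrative).
# A constraint  lb <= y - x <= ub  becomes edges  x ->(ub) y  and  y ->(-lb) x.
def stn_consistent(n, constraints):
    INF = float('inf')
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for x, y, lb, ub in constraints:
        d[x][y] = min(d[x][y], ub)
        d[y][x] = min(d[y][x], -lb)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))   # a negative cycle means inconsistency

# Activity 0 -> 1 takes between 10 and 20, 1 -> 2 between 5 and 10, 0 -> 2 at most 25.
print(stn_consistent(3, [(0, 1, 10, 20), (1, 2, 5, 10), (0, 2, 0, 25)]))   # True
print(stn_consistent(3, [(0, 1, 10, 20), (1, 2, 5, 10), (0, 2, 0, 12)]))   # False
```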
Improved Bounded Max-Sum for Distributed Constraint Optimization

Bounded Max-Sum is a message-passing algorithm for solving Distributed Constraint Optimization Problems able to compute solutions with a guaranteed approximation ratio. Although its approximate solutions were empirically proved to be within a small percentage of the optimal solution on low and moderately dense problems, in this paper we show that its theoretical approximation ratio is overestimated, thus overshadowing its good performance. We propose a new algorithm, called Improved Bounded Max-Sum, whose approximate solutions are at least as good as the ones found by Bounded Max-Sum and with a tighter approximation ratio. Our empirical evaluation shows that the new approximation ratio is significantly tighter.

Emma Rollon, Javier Larrosa
A Hybrid MIP/CP Approach for Multi-activity Shift Scheduling

We propose a hybrid MIP/CP approach for solving multi-activity shift scheduling problems, based on regular languages that partially describe the set of feasible shifts. We use an aggregated MIP relaxation to capture the optimization part of the problem and to get rid of symmetry. Whenever the MIP solver generates an integer solution, we use a CP solver to check whether it can be turned into a feasible solution of the original problem. A MIP-based heuristic is also developed. Computational results are reported, showing that the proposed method is a promising alternative compared to the state of the art.

Domenico Salvagnin, Toby Walsh
Contributions to the Theory of Practical Quantified Boolean Formula Solving

Recent solvers for quantified boolean formulas (QBFs) use a clause learning method based on a procedure proposed by Giunchiglia et al. (JAIR 2006), which avoids creating tautological clauses. The underlying proof system is Q-resolution. This paper shows an exponential worst case for the clause-learning procedure. This finding confirms empirical observations that some formulas take mysteriously long times to solve, compared to other apparently similar formulas.

Q-resolution is known to be refutation complete for QBF, but not all logically implied clauses can be derived with it. A stronger proof system called QU-resolution is introduced, and shown to be complete in this stronger sense. A new procedure called QPUP for clause learning without tautologies is also described.

A generalization of pure literals is introduced, called effectively depth-monotonic literals. In general, the variable-elimination resolution operation, as used by Quantor, sQueezeBF, and Bloqqer, is unsound if the existential variable being eliminated is not at innermost scope. It is shown that variable-elimination resolution is sound for effectively depth-monotonic literals even when they are not at innermost scope.

Allen Van Gelder
Breaking Variable Symmetry in Almost Injective Problems

Lexicographic constraints are commonly used to break variable symmetries. In the general case, the number of constraints to be posted is potentially exponential in the number of variables. For injective problems (AllDiff), Puget’s method [12] breaks all variable symmetries with a linear number of constraints.

In this paper we assess the number of constraints for “almost” injective problems. We propose to characterize them by a parameter μ based on the Global Cardinality Constraint, as a generalization of the AllDiff constraint. We show that for almost injective problems, variable symmetries can be broken with no more than $\binom{n}{\mu}$ constraints, which is XP in the framework of parameterized complexity. When only ν variables can take duplicated values, the number of constraints is FPT in μ and ν.

Philippe Vismara, Remi Coletta
Understanding, Improving and Parallelizing MUS Finding Using Model Rotation

Recently a new technique for improving algorithms for extracting Minimal Unsatisfiable Subsets (MUSes) from unsatisfiable CNF formulas, called “model rotation”, was introduced [Marques-Silva et al., SAT 2011]. The technique aims to reduce the number of times a MUS finding algorithm needs to call a SAT solver. Although no guarantees for this reduction are provided, the technique has been shown to be very effective in many cases. In fact, such model rotation algorithms are now arguably the state of the art in MUS finding.

This work analyses the model rotation technique in detail and provides theoretical insights that help to understand its performance. These new insights on the operation of model rotation lead to several modifications and extensions that are empirically evaluated. Moreover, it is demonstrated how such MUS extracting algorithms can be effectively parallelized using existing techniques for parallel incremental SAT solving.

Siert Wieringa
Revisiting Neighborhood Inverse Consistency on Binary CSPs

Our goal is to investigate the definition and application of strong consistency properties on the dual graphs of binary Constraint Satisfaction Problems (CSPs). As a first step in that direction, we study the structure of the dual graph of binary CSPs, and show how it can be arranged in a triangle-shaped grid. We then study, in this context, Relational Neighborhood Inverse Consistency (RNIC), which is a consistency property that we had introduced for non-binary CSPs [17]. We discuss how the structure of the dual graph of binary CSPs affects the consistency level enforced by RNIC. Then, we compare, both theoretically and empirically, RNIC to Neighborhood Inverse Consistency (NIC) and strong Conservative Dual Consistency (sCDC), which are higher-level consistency properties useful for solving difficult problem instances. We show that all three properties are pairwise incomparable.

Robert J. Woodward, Shant Karakashian, Berthe Y. Choueiry, Christian Bessiere
Syntactically Characterizing Local-to-Global Consistency in ORD-Horn

Establishing local consistency is one of the most frequently used algorithmic techniques in constraint satisfaction in general and in spatial and temporal reasoning in particular. A collection of constraints is globally consistent if it is completely explicit, that is, every partial solution may be extended to a full solution by greedily assigning values to variables one at a time. We will say that a structure B has local-to-global consistency if establishing local consistency yields a globally consistent instance of $\textbf{CSP}(\mathbf{B})$.

This paper studies local-to-global consistency for ORD-Horn languages, that is, structures definable over the ordered rationals (ℚ; <) within the formalism of ORD-Horn clauses. This formalism has attracted a lot of attention and is of crucial importance to spatial and temporal reasoning. We provide a syntactic characterization (in terms of first-order definability) of all ORD-Horn languages enjoying local-to-global consistency.

Michał Wrona
A Hybrid Paradigm for Adaptive Parallel Search

Parallelization offers the opportunity to accelerate search on constraint satisfaction problems. To parallelize a sequential solver under a popular message passing protocol, the new paradigm described here combines portfolio-based methods and search space splitting. To split effectively and to balance processor workload, this paradigm adaptively exploits knowledge acquired during search and allocates additional resources to the most difficult parts of a problem. Extensive experiments in a parallel environment show that this paradigm significantly improves the performance of an underlying sequential solver, outperforms more naive approaches to parallelization, and solves many difficult problems left open after recent solver competitions.

Xi Yun, Susan L. Epstein

Application Track

A Constraint Programming Approach for the Traveling Purchaser Problem

We present a novel approach to the Traveling Purchaser Problem (TPP), based on constraint programming and Lagrangean relaxation. The TPP is a generalization of the Traveling Salesman Problem involved in many real-world applications. Given a set of markets providing products at different prices and a list of products to be purchased, the problem is to determine the route minimizing the sum of the traveling and purchasing costs. We propose in this paper an efficient approach when the number of markets visited in an optimal solution is low. We believe that the real-world applications of this problem often assume a bounded number of visits when they involve physical routing. It is an actual requirement from our industrial partner, which is developing a web application to help plan their customers’ shopping. The approach is tested on academic benchmarks. It proves to be competitive with a state-of-the-art branch-and-cut algorithm and can provide in some cases new optimal solutions for instances with up to 250 markets and 200 products.

Hadrien Cambazard, Bernard Penz
Constraint-Based Register Allocation and Instruction Scheduling

This paper introduces a constraint model and solving techniques for code generation in a compiler back-end. It contributes a new model for global register allocation that combines several advanced aspects: multiple register banks (subsuming spilling to memory), coalescing, and packing. The model is extended to include instruction scheduling and bundling. The paper introduces a decomposition scheme exploiting the underlying program structure and exhibiting robust behavior for functions with thousands of instructions. Evaluation shows that code quality is on par with LLVM, a state-of-the-art compiler infrastructure.

The paper makes important contributions to the applicability of constraint programming as well as to compiler construction: essential concepts are unified in a high-level model that can be solved by readily available modern solvers. This is a significant step towards basing code generation entirely on a high-level model, and it thereby facilitates the construction of correct, simple, flexible, robust, and high-quality code generators.

Roberto Castañeda Lozano, Mats Carlsson, Frej Drejhammar, Christian Schulte
Maximising the Net Present Value of Large Resource-Constrained Projects

The Resource-constrained Project Scheduling Problem (Rcpsp), in which a schedule must obey resource constraints and precedence constraints between pairs of activities, is one of the most studied scheduling problems. An important variation of this problem (RcpspDc) is to find a schedule which maximises the net present value (discounted cash flow). Large industrial applications can require thousands of activities to be scheduled over a long time span. The largest case we have (from a mining company) includes about 11,000 activities spanning over 20 years. We apply a Lagrangian relaxation method for large RcpspDc to obtain both upper and lower bounds. To overcome the scalability problem of this approach, we first decompose the problem by relaxing as few precedence constraints as possible, while obtaining activity clusters small enough to be solved efficiently. We propose a hierarchical scheme to accelerate the convergence of the subgradient algorithm of the Lagrangian relaxation. We also parallelise the implementation to make better use of modern computing infrastructure. Together these three improvements allow us to provide the best known solutions for very large examples from underground mining problems.

Hanyu Gu, Peter J. Stuckey, Mark G. Wallace
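
For readers unfamiliar with the objective, one common way of writing the discounted cash flow maximized in RcpspDc is the following (the notation is generic and assumed here, not taken from the paper): each activity $i$ has a cash flow $c_i$ (positive or negative) realized at its completion time and discounted at a continuous rate $\alpha$, so one maximizes $\sum_{i} c_i\, e^{-\alpha (s_i + d_i)}$ subject to $s_j \ge s_i + d_i$ for every precedence $(i,j)$ and the usual renewable-resource capacity constraints, where $s_i$ and $d_i$ denote the start time and duration of activity $i$.
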
Comparing Solution Methods for the Machine Reassignment Problem

The machine reassignment problem is defined by a set of machines and a set of processes. Each machine is associated with a set of resources, e.g., CPU and RAM, and each process is associated with a set of required resource values and a currently assigned machine. The task is to reassign the processes to machines while respecting a set of hard constraints, in order to improve the usage of the machines as defined by a complex cost function. We present a natural Constraint Programming (CP) formulation of the problem that uses a set of well-known global constraints; however, this formulation also requires new global constraints. Additionally, the instances are so large that systematic search is not practical, especially when time is limited. We therefore developed a CP formulation of the problem that is especially suited to a large neighborhood search (LNS) approach. We also experimented with a mixed-integer programming (MIP) model for LNS. We first compare our formulations on a set of ROADEF'12 problem instances where the numbers of processes and machines are restricted to 1000 and 100, respectively. Both the MIP- and CP-based LNS approaches find solutions that are significantly better than the initial ones and compare well with other submitted entries. We further investigate their performance on larger instances where the numbers of processes and machines can be up to 50,000 and 5,000, respectively. The results suggest that our CP-based LNS can scale to large instances and is superior in memory use and in the quality of solutions that can be found in limited time.

Deepak Mehta, Barry O’Sullivan, Helmut Simonis
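
The LNS scheme the abstract refers to can be pictured with the generic skeleton below (a sketch under assumed helper callbacks cost and reoptimize_fragment, not the authors' code): repeatedly freeze most of the incumbent assignment, free a small set of processes, and let a CP or MIP solver re-optimize only the freed part within a short time limit.

    import random, time

    def lns(initial_assignment, cost, reoptimize_fragment,
            time_budget=60.0, fragment_size=50):
        """initial_assignment: dict process -> machine."""
        best = dict(initial_assignment)
        best_cost = cost(best)
        deadline = time.time() + time_budget
        while time.time() < deadline:
            fixed = dict(best)
            freed = random.sample(list(fixed), k=min(fragment_size, len(fixed)))
            for p in freed:
                del fixed[p]                    # these processes may move
            candidate = reoptimize_fragment(fixed, freed)   # solver call (assumed)
            if candidate is not None and cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
        return best, best_cost
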
A Boolean Model for Enumerating Minimal Siphons and Traps in Petri Nets

Petri nets are a simple formalism for modeling concurrent computation. Recently, they have emerged as a promising tool for modeling and analyzing biochemical interaction networks, bridging the gap between purely qualitative and quantitative models. Biological networks can indeed be large and complex, which makes their study difficult and computationally challenging. In this paper, we focus on two structural properties of Petri nets, siphons and traps, that convey information about the persistence of some molecular species. We present two methods for enumerating all minimal siphons and traps of a Petri net by iterating the resolution of Boolean satisfiability problems, executed with either a SAT solver or a CLP(B) program. We compare the performance of these methods with that of a state-of-the-art algorithm from the Petri net community. On a benchmark of 80 Petri nets from the Petriweb database and 403 Petri nets from curated biological models of the Biomodels database, we show that the miniSAT and CLP(B) solvers are both, overall, faster than the dedicated algorithm by two orders of magnitude. Furthermore, we analyse why these programs perform so well on even very large biological models, and show the existence of hard instances in Petri nets with unbounded degrees.

Faten Nabli, François Fages, Thierry Martinez, Sylvain Soliman
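
To make the two structural properties concrete (illustrative code, not the paper's Boolean model): a set of places S is a siphon when every transition that puts a token into S also consumes one from S, and a trap when the reverse inclusion holds.

    def is_siphon(S, pre, post):
        """pre[t]: input places of transition t; post[t]: its output places.
        Both dicts are keyed by every transition of the net."""
        return all(pre[t] & S for t in post if post[t] & S)

    def is_trap(S, pre, post):
        return all(post[t] & S for t in pre if pre[t] & S)

    # Toy net: t1 consumes from p1 and produces into p2, t2 does the opposite.
    pre  = {"t1": {"p1"}, "t2": {"p2"}}
    post = {"t1": {"p2"}, "t2": {"p1"}}
    print(is_siphon({"p1", "p2"}, pre, post))   # True (it is also a trap)
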
Cardinality Reasoning for Bin-Packing Constraint: Application to a Tank Allocation Problem

Flow reasoning has been successfully used in CP for more than a decade. It was originally introduced by Régin for the well-known Alldifferent and Global Cardinality Constraint (GCC), available in most CP solvers. The BinPacking constraint was introduced by Shaw and mainly uses an independent knapsack reasoning in each bin to filter the possible bins for each item. This paper considers the use of cardinality/flow reasoning to improve the filtering of a bin-packing constraint. The idea is to use a GCC, redundant to the BinPacking constraint, that counts the number of items placed in each bin. The cardinality variables of the GCC are then dynamically updated during propagation. The cardinality reasoning of the redundant GCC makes deductions that the bin-packing constraint cannot see, since the placement of all items into every bin is considered at once rather than for each bin individually. This is particularly well suited to problems where a minimum load in each bin is specified in advance. We apply this idea to a Tank Allocation Problem (TAP). We detail our CP model and give experimental results on a real-life instance, demonstrating the added value of cardinality reasoning for the bin-packing constraint.

Pierre Schaus, Jean-Charles Régin, Rowan Van Schaeren, Wout Dullaert, Birger Raa
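
The flavour of deduction the redundant GCC enables can be sketched as follows (illustrative only, not the paper's propagator): the required minimum load and the capacity of a bin, together with the item sizes, bound how many items the bin must and can receive, and the GCC then reasons about these counts globally over all bins at once.

    import math

    def cardinality_bounds(min_load, capacity, item_sizes):
        """Bounds on the number of items a single bin can receive."""
        lower = math.ceil(min_load / max(item_sizes)) if min_load > 0 else 0
        upper = capacity // min(item_sizes)
        return lower, upper

    # A tank that must receive at least 70 units, holds at most 100, and takes
    # cargoes of sizes 10..40 units, gets between 2 and 10 cargoes.
    print(cardinality_bounds(70, 100, [10, 20, 25, 40]))   # (2, 10)
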
The Offshore Resources Scheduling Problem: Detailing a Constraint Programming Approach

The development of maritime oil wells depends on the availability of a specialized fleet capable of performing the required activities. In addition, the exploitation of each well can only start once it is connected through pipes to a producing unit. The Offshore Resources Scheduling Problem (ORSP) combines such restrictions with the aim of prioritizing development projects with a higher return in oil production. In this work, an extended description of the ORSP is tackled with Constraint Programming (CP); the extension covers the customization of pipes and the scheduling of fleet maintenance periods. The use of CP is motivated by the noted ability of the technique to model and solve large and complex scheduling problems. For the type of scenarios faced by Petrobras, the reported approach was able to quickly deliver good-quality solutions.

Thiago Serra, Gilberto Nishioka, Fernando J. M. Marcellino

Multi-disciplinary Track

Computational Protein Design as a Cost Function Network Optimization Problem

Proteins are chains of simple molecules called amino acids. The three-dimensional shape of a protein and its amino acid composition define its biological function. Over millions of years, living organisms have evolved and produced a large catalog of proteins. By exploring the space of possible amino-acid sequences, protein engineering aims at similarly designing tailored proteins with specific desirable properties. In Computational Protein Design (CPD), the challenge of identifying a protein that performs a given task is defined as the combinatorial problem of optimizing a complex energy function over amino acid sequences.

In this paper, we introduce the CPD problem and some of the main approaches that have been used to solve it. We then show how this problem directly reduces to Cost Function Network (CFN) and 0/1LP optimization problems. We construct different real CPD instances to evaluate CFN and 0/1LP algorithms as implemented in the toulbar2 and cplex solvers. We observe that CFN algorithms bring important speedups compared to the CPD platform osprey, but also to cplex.

David Allouche, Seydou Traoré, Isabelle André, Simon de Givry, George Katsirelos, Sophie Barbe, Thomas Schiex
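
The reduction to a Cost Function Network rests on the pairwise-decomposable energy customarily used in CPD; in generic notation (the symbols here are illustrative, not the paper's), one minimizes over the choices $r = (r_1, \dots, r_n)$ of amino acid and side-chain conformation at each designable position the energy $E(r) = E_\varnothing + \sum_i E_i(r_i) + \sum_{i<j} E_{ij}(r_i, r_j)$, which is precisely a CFN with a constant term, one unary cost function per position, and one binary cost function per interacting pair.
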
A Filtering Technique for Fragment Assembly-Based Proteins Loop Modeling with Constraints

Methods to predict the structure of a protein often rely on the knowledge of macro sub-structures and their exact or approximated relative positions in space. The parts connecting these sub-structures are called loops and, in general, they are characterized by a high degree of freedom. The modeling of loops is a critical problem in predicting protein conformations that are biologically realistic. This paper introduces a class of constraints that models a general multi-body system; we present a proof of NP-completeness and provide filtering techniques, inspired by inverse kinematics, that can drastically reduce the search space of potential conformations. The paper shows the application of the constraint to solving the protein loop modeling problem based on fragment assembly.

Federico Campeotto, Alessandro Dal Palù, Agostino Dovier, Ferdinando Fioretto, Enrico Pontelli
A Branch and Prune Algorithm for the Computation of Generalized Aspects of Parallel Robots

Parallel robots enjoy enhanced mechanical characteristics that have to be contrasted with a more complicated design. In particular, they often have parallel singularities at some poses, where the robot may become uncontrollable and could even be damaged. The computation of singularity-free sets of reachable poses, called generalized aspects, is therefore a key issue in their design. A new methodology based on numerical constraint programming is proposed to compute a certified enclosure of such generalized aspects; it applies fully automatically to arbitrary robot kinematic models.

Stéphane Caro, Damien Chablat, Alexandre Goldsztejn, Daisuke Ishii, Christophe Jermann
The Semigroups of Order 10

The number of finite semigroups increases rapidly with the number of elements. Since existing counting formulae do not give the complete number of semigroups of given order up to equivalence, the remainder can only be found by careful search. We describe the use of mathematical results combined with distributed Constraint Satisfaction to show that the number of non-equivalent semigroups of order 10 is 12,418,001,077,381,302,684. This solves a previously open problem in Mathematics, and has directly led to improvements in Constraint Satisfaction technology.

Andreas Distler, Chris Jefferson, Tom Kelsey, Lars Kotthoff
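
Counting semigroups means counting associative multiplication tables up to equivalence; the brute-force associativity test below (purely illustrative, and hopeless at order 10 without the paper's mathematical and constraint-based machinery) states the property that must hold.

    from itertools import product

    def is_associative(table):
        """table[a][b] gives a*b for elements 0..n-1."""
        n = len(table)
        return all(table[table[a][b]][c] == table[a][table[b][c]]
                   for a, b, c in product(range(n), repeat=3))

    # The two-element semilattice ({0, 1}, min) is a semigroup:
    print(is_associative([[0, 0], [0, 1]]))   # True
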
Exploring Chemistry Using SMT

How to synthesize molecules is a fundamental and well-studied problem in chemistry. However, computer-aided methods are still under-utilized in chemical synthesis planning. Given a specific chemistry (a set of chemical reactions) and a specified overall chemical mechanism, a number of exploratory questions are of interest to a chemist, for example: what products are obtainable, how to find a minimal number of reactions to synthesize a certain chemical compound, and how to map a specific chemistry to a mechanism. We present a Constraint Programming based approach to these problems and employ the expressive power of Satisfiability Modulo Theories (SMT) solvers. We show results for an analysis of the Pentose Phosphate Pathway and the Biosynthesis of 3-Hydroxypropanoate. The main novelty of the paper lies in the usage of SMT for expressing search problems in chemistry, and in the generality of the resulting computer-aided method for synthesis planning.

Rolf Fagerberg, Christoph Flamm, Daniel Merkle, Philipp Peters
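
The style of question asked of an SMT solver can be illustrated on a toy network (z3 syntax; the three reactions below are invented for illustration and the network is assumed acyclic): find the fewest reactions whose firing turns the start compounds into the target.

    from z3 import And, Bool, BoolVal, If, Optimize, Or, Sum, is_true, sat

    reactions = {                    # name -> (input compounds, output compounds)
        "r1": ({"A"}, {"B"}),
        "r2": ({"B"}, {"C"}),
        "r3": ({"A"}, {"C"}),
    }
    start, target = {"A"}, "C"
    compounds = {c for ins, outs in reactions.values() for c in ins | outs}

    used  = {r: Bool("used_" + r) for r in reactions}
    avail = {c: Bool("avail_" + c) for c in compounds}

    opt = Optimize()
    for c in compounds:
        producers = [And(used[r], *[avail[i] for i in ins])
                     for r, (ins, outs) in reactions.items() if c in outs]
        opt.add(avail[c] == Or([BoolVal(c in start)] + producers))
    opt.add(avail[target])
    opt.minimize(Sum([If(used[r], 1, 0) for r in reactions]))
    if opt.check() == sat:
        model = opt.model()
        print([r for r in reactions if is_true(model[used[r]])])   # ['r3']
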
A Pseudo-Boolean Set Covering Machine

The Set Covering Machine (SCM) is a machine learning algorithm that constructs a conjunction of Boolean functions. The algorithm is motivated by the minimization of a theoretical bound; however, finding the optimal conjunction according to this bound is a combinatorial problem, and the SCM approximates the solution using a greedy approach. Even though the SCM seems very efficient in practice, it is unknown how it compares to the optimal solution. To answer this question, we present a novel pseudo-Boolean optimization model that encodes the minimization problem. It is the first time a Constraint Programming approach addresses the combinatorial problem related to this machine learning algorithm. Using that model and recent pseudo-Boolean solvers, we empirically show that the greedy approach is surprisingly close to the optimum.

Pascal Germain, Sébastien Giguère, Jean-Francis Roy, Brice Zirakiza, François Laviolette, Claude-Guy Quimper
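
The greedy heuristic the abstract compares against can be sketched as follows (a generic rendering of the well-known SCM greedy rule, with an assumed penalty parameter, not the authors' code): repeatedly add the Boolean feature that excludes the most remaining negative examples at the smallest cost in misclassified positives.

    def greedy_scm(features, positives, negatives, penalty=1.0, max_terms=10):
        """features: callables x -> bool; the learned conjunction predicts x
        as positive iff every chosen feature returns True on x."""
        chosen = []
        rem_neg, rem_pos = set(negatives), set(positives)
        while rem_neg and len(chosen) < max_terms:
            def utility(h):
                excluded = sum(1 for x in rem_neg if not h(x))   # negatives ruled out
                errors   = sum(1 for x in rem_pos if not h(x))   # positives lost
                return excluded - penalty * errors
            best = max(features, key=utility)
            if utility(best) <= 0:
                break
            chosen.append(best)
            rem_neg = {x for x in rem_neg if best(x)}
            rem_pos = {x for x in rem_pos if best(x)}
        return chosen
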
Finding a Nash Equilibrium by Asynchronous Backtracking

Graphical Games are a succinct representation of multi-agent interactions in which each participant interacts with a limited number of other agents. The model resembles Distributed Constraint Optimization Problems (DCOPs), including agents, variables, and values (strategies). However, unlike distributed constraints, the local interactions of Graphical Games take the form of small strategic games, and the agents are expected to seek a Nash equilibrium rather than a cooperative minimal-cost joint assignment.

The present paper models graphical games as a Distributed Constraint Satisfaction Problem with unique k-ary constraints in which each agent is only aware of its part in the constraint. A proof that a satisfying solution to the resulting problem is an ε-Nash equilibrium is provided and an Asynchronous Backtracking algorithm is proposed for solving this distributed problem. The algorithm's completeness is proved and its performance is evaluated.

Alon Grubshtein, Amnon Meisels
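
What the satisfying solution certifies can be written down directly (illustrative code, not the distributed algorithm itself): a joint strategy is an ε-Nash equilibrium when no single agent can improve its own payoff by more than ε through a unilateral deviation.

    def is_epsilon_nash(joint, utilities, strategies, eps):
        """joint: dict agent -> chosen strategy
        utilities[a](joint): payoff of agent a under a joint strategy
        strategies[a]: iterable of strategies available to agent a"""
        for a in joint:
            current = utilities[a](joint)
            for s in strategies[a]:
                if utilities[a]({**joint, a: s}) > current + eps:
                    return False
        return True
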
Reasoning over Biological Networks Using Maximum Satisfiability

Systems biology is without a doubt one of the most compelling fields for a computer scientist. Modelling such systems is in itself a major challenge, but the ultimate goal is to reason over those systems. We focus on modelling and reasoning over biological networks using Maximum Satisfiability (MaxSAT). Biological networks are represented by an influence graph whose vertices represent the genes of the network and whose edges represent interactions between genes. Given an influence graph and an experimental profile, the first step is to check them for mutual consistency; in case of inconsistency, a repair is suggested. In addition, what is common to all solutions/optimal repairs is also provided. This information, named prediction, is of particular interest from a user's point of view. Answer Set Programming (ASP) has been successfully applied to biological networks in the recent past. In this work, we give empirical evidence that MaxSAT is far more adequate for solving this problem. Moreover, we show how concepts well studied in the fields of MaxSAT and CP, such as backbones and unsatisfiable subformulas, can be naturally related to this practical problem.

João Guerra, Inês Lynce
Properties of Energy-Price Forecasts for Scheduling

Wholesale electricity markets are becoming ubiquitous, offering consumers access to competitively priced energy. The cost of energy is often correlated with its environmental impact; for example, environmentally sustainable forms of energy might benefit from subsidies, while the use of high-carbon sources might be discouraged through taxes or levies. Reacting to real-time electricity price fluctuations can lead to high cost savings, in particular for large energy consumers such as data centres or manufacturing plants. In this paper we focus on the challenge of day-ahead energy price prediction, using the Irish Single Electricity Market Operator (SEMO) as a case study. We present techniques that significantly outperform SEMO's own prediction. We evaluate the energy savings that are possible in a production scheduling context, but show that better prediction does not necessarily yield energy-cost savings. We explore this issue further and characterize and evaluate important properties that an energy price predictor must have in order to give rise to significant scheduling-cost savings in practice.

Georgiana Ifrim, Barry O’Sullivan, Helmut Simonis
Aggregating Conditionally Lexicographic Preferences on Multi-issue Domains

One approach to voting on several interrelated issues consists in using a language for compact preference representation, from which the voters' preferences are elicited and aggregated. A language usually comes with a domain restriction. We consider a well-known restriction, namely conditionally lexicographic preferences, where both the relative importance between issues and the preference between values of an issue may depend on the values taken by more important issues. The naturally associated language describes conditional importance and conditional preference by trees together with conditional preference tables. In this paper, we study the aggregation of conditionally lexicographic preferences for several voting rules and several restrictions of the framework. We characterize the computational complexity of some popular cases and show that in many of them, computing the winner reduces in a very natural way to a MaxSat problem.

Jérôme Lang, Jérôme Mengin, Lirong Xia
Constraint Programming for Path Planning with Uncertainty
Solving the Optimal Search Path Problem

The optimal search path (OSP) problem is a single-sided detection search problem where the location and the detectability of a moving object are uncertain. A solution to this $\mathcal{NP}$-hard problem is a path on a graph that maximizes the probability of finding an object that moves according to a known motion model. We developed constraint programming models to solve this probabilistic path planning problem for a single indivisible searcher. These models include a simple but powerful branching heuristic as well as strong filtering constraints. We present our experimentation and compare our results with existing results in the literature. The OSP problem is particularly interesting in that it generalizes to various probabilistic search problems such as intruder detection, malicious code identification, search and rescue, and surveillance.

Michael Morin, Anika-Pascale Papillon, Irène Abi-Zeid, François Laviolette, Claude-Guy Quimper
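
The objective such a path is scored against is typically the cumulative probability of detection, evaluated by the recursion sketched below (a generic rendering under one common timing convention, not the authors' model): propagate the object's location belief with the Markov motion model and, at each step, collect the mass detected at the searcher's current vertex.

    def detection_probability(path, initial_belief, motion, pod):
        """path: vertices visited by the searcher, one per time step
        initial_belief: dict vertex -> probability the object starts there
        motion[u][v]: probability the object moves from u to v in one step
        pod[v]: probability of detection when searcher and object share v"""
        belief, total = dict(initial_belief), 0.0
        for v in path:
            caught = belief.get(v, 0.0) * pod.get(v, 0.0)
            total += caught
            belief[v] = belief.get(v, 0.0) - caught        # undetected mass remains
            moved = {}
            for u, mass in belief.items():                 # object moves one step
                for w, p in motion.get(u, {}).items():
                    moved[w] = moved.get(w, 0.0) + mass * p
            belief = moved
        return total
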
Feature Term Subsumption Using Constraint Programming with Basic Variable Symmetry

Feature Terms are a generalization of first-order terms which have recently received increased attention for their usefulness in structured machine learning applications. One of the main obstacles to their wide usage is that their basic operation, subsumption, has a very high computational cost. Constraint Programming is a very suitable technique to implement that operation, in some cases providing orders-of-magnitude speed-ups with respect to the standard subsumption approach. In addition, exploiting a basic variable symmetry, which often appears in Feature Term databases, yields substantial additional savings. We provide experimental results on the benefits of this approach.

Santiago Ontañón, Pedro Meseguer
Backmatter
Metadata
Title
Principles and Practice of Constraint Programming
Edited by
Michela Milano
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-33558-7
Print ISBN
978-3-642-33557-0
DOI
https://doi.org/10.1007/978-3-642-33558-7
