2005 | OriginalPaper | Chapter
Consistency for Partially Defined Constraints
Authors: Andreï Legtchenko, Arnaud Lallouet
Published in: Principles and Practice of Constraint Programming - CP 2005
Publisher: Springer Berlin Heidelberg
Partially defined or Open Constraints [2] can be used to model incomplete knowledge of a concept or a relation. In an Open Constraint, some tuples are known to be true, some others are known to be false, and some are simply unknown. We propose to complete its definition using Machine Learning techniques. The idea behind the learning technique comes directly from the classical model of solvers computing a chaotic iteration of reduction operators [1]. We begin by learning the constraint. But instead of learning it with a single classifier that takes all of its variables as input and answers "yes" if the tuple belongs to the constraint and "no" otherwise, we choose to learn the support function n⟨X=a⟩ of the constraint for each value of its variables' domains. A tuple is part of the constraint if it is accepted by all support functions for each of its values, and rejected as soon as it is rejected by one. As representation for learning, we propose an Artificial Neural Network (ANN) with an intermediate hidden layer, trained by the classical backpropagation algorithm [4].
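To make the acceptance rule concrete, here is a minimal sketch (not the authors' implementation): each support function n⟨X=a⟩ is stubbed by a plain callable returning a score in [0, 1], where the paper uses a trained ANN. The `supports` dictionary, the 0.5 threshold placement, and the toy "X ≠ Y" constraint are illustrative assumptions.

```python
# Hedged sketch: support functions n_<X=a> stubbed as plain callables.
# In the paper they are small feed-forward ANNs trained by backpropagation;
# here each returns a score in [0, 1], thresholded at 0.5.

def accepts(support, tuple_values):
    """A support function accepts a tuple if its score reaches 0.5."""
    return support(tuple_values) >= 0.5

def in_constraint(tuple_values, supports):
    """A tuple belongs to the learned constraint iff every support
    function n_<X=a> for each of its assigned values accepts it;
    it is rejected as soon as one support rejects it."""
    for var, val in enumerate(tuple_values):
        if not accepts(supports[(var, val)], tuple_values):
            return False
    return True

# Toy example: constraint X != Y over domains {0, 1}, with hand-made
# "learned" supports that score 1.0 when the other variable differs.
supports = {
    (0, 0): lambda t: 1.0 if t[1] != 0 else 0.0,
    (0, 1): lambda t: 1.0 if t[1] != 1 else 0.0,
    (1, 0): lambda t: 1.0 if t[0] != 0 else 0.0,
    (1, 1): lambda t: 1.0 if t[0] != 1 else 0.0,
}

print(in_constraint((0, 1), supports))  # True: 0 != 1
print(in_constraint((1, 1), supports))  # False: rejected by one support
```

Note the early exit: rejection by a single support function suffices, matching the "rejected as soon as it is rejected by one" rule.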
When put in a CSP, a constraint should contribute to domain reduction, so we propose to use the learned classifiers for solving as well. To do this, we take the natural extension to intervals [3] of the learned classifiers. Let N⟨X=a⟩ be the natural interval extension of n⟨X=a⟩. Then, using the current domains of the variables as input, we obtain a range for its output. Since we place a 0.5 threshold after the output neuron, we can reject the value a for X if the maximum of the output range is less than 0.5, which means that all tuples are rejected within the current domain intervals. Otherwise, the value remains in the domain. Our experiments show that the learned consistency is weaker than more classical consistencies but still reduces the search space notably.
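The pruning test can be sketched as follows. For simplicity this assumed example uses a single sigmoid output neuron instead of the paper's hidden-layer ANN; the weights, bias, and domain boxes are invented for illustration. The key points carry over: the natural interval extension of the linear part picks endpoints by weight sign, and since the sigmoid is monotone, the output maximum is the sigmoid of the linear upper bound.

```python
import math

def sigmoid(z):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-z))

def linear_interval(weights, bias, boxes):
    """Natural interval extension of w . x + b over interval inputs.
    Each box is (lo, hi); the sign of each weight decides which
    endpoint contributes to the lower / upper bound."""
    lo = hi = bias
    for w, (xl, xh) in zip(weights, boxes):
        if w >= 0:
            lo += w * xl
            hi += w * xh
        else:
            lo += w * xh
            hi += w * xl
    return lo, hi

def value_supported(weights, bias, boxes):
    """Keep value a for X iff the maximum of the output range of the
    interval extension N_<X=a> reaches the 0.5 threshold; otherwise
    every tuple in the current domain box is rejected and a is pruned."""
    _, hi = linear_interval(weights, bias, boxes)
    return sigmoid(hi) >= 0.5

# Invented one-neuron "support": output = sigmoid(2*x0 - 3*x1 - 1.5).
w, b = [2.0, -3.0], -1.5
print(value_supported(w, b, [(0.0, 1.0), (0.0, 0.2)]))  # True: keep value
print(value_supported(w, b, [(0.0, 0.2), (0.5, 1.0)]))  # False: prune value
```

The test is conservative in the same way as the learned consistency: a value survives whenever the interval extension cannot certify that every supporting tuple is rejected, so no solution is lost even though less pruning may occur than with classical consistencies.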
We show that our technique not only achieves good learning performance but also yields a very efficient solver for the learned constraint.