The clp(FD) solver described in this chapter is an instance of the general Constraint Logic Programming scheme introduced in [Jaffar & Michaylov 87]. This constraint domain is particularly useful for modeling discrete optimization and verification problems such as scheduling, planning, packing, timetabling etc. The treatise [Van Hentenryck 89] is an excellent exposition of the theoretical and practical framework behind constraint solving in finite domains, and summarizes the work up to 1989.
This solver has the following highlights:
The rest of this chapter is organized as follows: How to load the solver and how to write simple programs is explained in section Solver Interface. A description of all constraints that the solver provides is contained in section Available Constraints. The predicates for searching for solutions are documented in section Enumeration Predicates. The predicates for getting execution statistics are documented in section Statistics Predicates. A few example programs are given in section Example Programs. Finally, section Syntax Summary contains syntax rules for all expressions.
The following sections discuss advanced features and are probably only relevant to experienced users: How to control the amount of information presented in answers to queries is explained in section Answer Constraints. The solver's execution mechanism and primitives are described in section The Constraint System. How to add new global constraints via a programming interface is described in section Defining Global Constraints. How to define new primitive constraints with indexicals is described in section Defining Primitive Constraints.
When referring to this implementation of clp(FD) in publications, please use the following reference:
Carlsson M., Ottosson G., Carlson B. "An Open-Ended Finite Domain Constraint Solver" Proc. Programming Languages: Implementations, Logics, and Programs, 1997.
The first version of this solver was written as part of Key Hyckenberg's MSc thesis in 1995. The code was later rewritten by Mats Carlsson, with contributions from Greger Ottosson at the Computing Science Department, Uppsala University. Peter Szeredi contributed material for this manual chapter.
The development of this software was supported by the Swedish National Board for Technical and Industrial Development (NUTEK) under the auspices of Advanced Software Technology (ASTEC) Center of Competence at Uppsala University.
We include a collection of examples, some of which have been distributed with the INRIA implementation of clp(FD) [Diaz & Codognet 93].
The solver is available as a library module and can be loaded with a query
:- use_module(library(clpfd)).
The solver contains predicates for checking the consistency and entailment of finite domain constraints, as well as solving for solution values for your problem variables.
In the context of this constraint solver, a finite domain is a subset of the integers, and a finite domain constraint denotes a relation over one or more finite domains. All domain variables, i.e. variables that occur as arguments to finite domain constraints, get associated with a finite domain, either explicitly declared by the program or implicitly imposed by the constraint solver. Temporarily, the domain of a variable may actually be infinite, if it does not have a finite lower or upper bound.
The domain of all variables gets narrower and narrower as more constraints are added. If a domain becomes empty, the accumulated constraints are unsatisfiable, and the current computation branch fails. At the end of a successful computation, all domains have usually become singletons, i.e. the domain variables have become assigned.
The domains do not become singletons automatically. Usually, it takes some amount of search to find an assignment that satisfies all constraints. It is the programmer's responsibility to do so. If some domain variables are left unassigned in a computation, the garbage collector will preserve all constraint data that is attached to them. Furthermore, the behavior of the predicates assert/1, findall/3, copy_term/2 and friends is undefined on non-ground terms containing domain variables.
The heart of the constraint solver is a scheduler for indexicals [Van Hentenryck et al. 92] and global constraints. Both entities act as coroutines performing incremental constraint solving or entailment checking; they are woken up by changes in the domains of their arguments. All constraints provided by this package are implemented as indexicals or global constraints. New constraints can be defined by the user.
Indexicals are reactive functional rules that take part in the solver's basic constraint solving algorithm, whereas each global constraint is associated with its particular constraint solving algorithm. The solver maintains two scheduling queues, giving priority to the queue of indexicals.
The feasibility of integrating the indexical approach with a Prolog based on the WAM was clearly demonstrated by Diaz's clp(FD) implementation [Diaz & Codognet 93], one of the fastest finite domain solvers around.
A constraint is called as any other Prolog predicate. When called, the constraint is posted to the store. For example:
| ?- X in 1..5, Y in 2..8, X+Y #= T.

X in 1..5,
Y in 2..8,
T in 3..13 ?

yes
| ?- X in 1..5, T in 3..13, X+Y #= T.

X in 1..5,
T in 3..13,
Y in -2..12 ?

yes
Note that the answer constraint shows the domains of nonground query variables, but not any constraints that may be attached to them.
Constraint satisfaction problems (CSPs) are a major class of problems for which this solver is ideally suited. In a CSP, the goal is to pick values from pre-defined domains for certain variables so that the given constraints on the variables are all satisfied.
As a simple CSP example, let us consider the Send More Money puzzle. In this problem, the variables are the letters S, E, N, D, M, O, R, and Y. Each letter represents a digit between 0 and 9. The problem is to assign a distinct digit to each letter, such that SEND + MORE equals MONEY.
A program which solves the puzzle is given below. The program contains the typical three steps of a clp(FD) program: (1) declare the domains of the variables, (2) post the problem constraints, and (3) search for an assignment of the variables that satisfies the constraints (labeling).
Sometimes, an extra step precedes the search for a solution: the posting of surrogate constraints to break symmetries or to otherwise help prune the search space. No surrogate constraints are used in this example.
The domains of this puzzle are stated via the domain/3 goal and by requiring that S and M be greater than zero. The two problem constraints of this puzzle are the equation (sum/8) and the constraint that all letters take distinct values (all_different/1). Finally, the backtrack search is performed by labeling/2. Different search strategies can be encoded in the Type parameter. In the example query, the default search strategy is used (select the leftmost variable, try values in ascending order).
:- use_module(library(clpfd)).

mm([S,E,N,D,M,O,R,Y], Type) :-
        domain([S,E,N,D,M,O,R,Y], 0, 9),        % step 1
        S#>0, M#>0,
        all_different([S,E,N,D,M,O,R,Y]),       % step 2
        sum(S,E,N,D,M,O,R,Y),
        labeling(Type, [S,E,N,D,M,O,R,Y]).      % step 3

sum(S, E, N, D, M, O, R, Y) :-
        1000*S + 100*E + 10*N + D
        + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y.

| ?- mm([S,E,N,D,M,O,R,Y], []).

D = 7,
E = 5,
M = 1,
N = 6,
O = 0,
R = 8,
S = 9,
Y = 2 ?
Instead of merely posting a constraint, it is often useful to reflect its truth value into a 0/1-variable B, so that the constraint is posted if B is bound to 1, its negation is posted if B is bound to 0, and B is bound to 1 (0) if the constraint is detected as entailed (disentailed).
This mechanism is known as reification. Several frequently used operations can be defined in terms of reified constraints, such as blocking implication [Saraswat 90] and the cardinality operator [Van Hentenryck & Deville 91], to name a few. A reified constraint is written:
| ?- Constraint #<=> B.
where Constraint is reifiable. As an example of a constraint that uses reification, consider exactly(X,L,N), which is true if X occurs exactly N times in the list L. It can be defined thus:
exactly(_, [], 0).
exactly(X, [Y|L], N) :-
        X #= Y #<=> B,
        N #= M+B,
        exactly(X, L, M).
This section describes the classes of constraints that can be used with this solver.
?Expr RelOp ?Expr
| ?- X in 1..2, Y in 3..5, X#=<Y #<=> B.

B = 1,
X in 1..2,
Y in 3..5 ?
Linear arithmetic constraints maintain (at least) interval-consistency and their reified versions detect (at least) interval-entailment and -disentailment; see section The Constraint System.
The following constraints are among the library constraints that general arithmetic constraints compile to. They express a relation between a sum or a scalar product and a value, using a dedicated algorithm that avoids creating any temporary variables holding intermediate values. If you are computing a sum or a scalar product, it can be much more efficient to compute lists of coefficients and variables and post a single sum or scalar product constraint than to post a sequence of elementary constraints.
sum(+Xs, +RelOp, ?Value)
scalar_product(+Coeffs, +Xs, +RelOp, ?Value)
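For instance, the following is a minimal sketch of this idea; the predicate name total_cost/3 is made up for this illustration and is not part of the library:

% total_cost/3 posts one scalar_product/4 constraint Coeffs*Amounts #= Total
% instead of a chain of elementary arithmetic constraints that would need
% temporary variables for intermediate sums.
total_cost(Coeffs, Amounts, Total) :-
        scalar_product(Coeffs, Amounts, #=, Total).

% | ?- domain([A,B,C], 0, 10), total_cost([3,5,7], [A,B,C], T).
% A plain sum is posted analogously:
% | ?- domain([A,B,C], 0, 10), sum([A,B,C], #=, T).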
domain(+Variables, +Min, +Max)
where Min is an integer or the atom inf (minus infinity), and Max is an integer or the atom sup (plus infinity). True if the variables are all elements of the range Min..Max.
Cannot be reified.
?X in +Range
?X in_set +FDSet
The in/2 and in_set/2 constraints can be reified. They maintain domain-consistency and their reified versions detect domain-entailment and -disentailment; see section The Constraint System.
Propositional combinators can be used to combine reifiable constraints into propositional formulae over such constraints. Such formulae are goal expanded by the system into sequences of reified constraints and arithmetic constraints. For example,
X #= 4 #\/ Y #= 6
expresses the disjunction of two equality constraints.
The leaves of propositional formulae can be reifiable constraints, the constants 0 and 1, or 0/1-variables. New primitive, reifiable constraints can be defined with indexicals as described in section Defining Primitive Constraints. The following propositional combinators are available:
#\ :Q
:P #/\ :Q
:P #\ :Q
:P #\/ :Q
:P #=> :Q
:Q #<= :P
:P #<=> :Q
Note that the reification scheme introduced in section Reified Constraints is a special case of a propositional constraint.
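As a small sketch (the predicate name cap/2 and the particular constraints are made up for this illustration), the following clause posts an implication between two reifiable constraints; the formula is goal expanded into reified constraints when the clause is compiled:

% If X is at least 5 then Y must be 0 (and, conversely, if Y is known
% to be nonzero then X is constrained to be less than 5).
cap(X, Y) :-
        X #>= 5 #=> Y #= 0.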
The constraints listed here are sometimes called symbolic constraints. They are currently not reifiable.
count(+Val,+List,+RelOp,?Count)
count/4 is a generalization of exactly/3 (not an exported constraint) that was used in an example earlier. count/4 maintains domain-consistency; see section The Constraint System.
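For example, the exactly/3 relation used earlier could be expressed with a single count/4 constraint; this is a sketch assuming the counted value is a known integer:

% exactly(I, Xs, N): the integer I occurs exactly N times in the list Xs.
exactly(I, Xs, N) :-
        count(I, Xs, #=, N).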
element(?X,+List,?Y)
True if the X-th element of List is Y. element/3 maintains domain-consistency; see section The Constraint System.
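A small sketch of element/3 used as a table lookup follows; the predicate name price/2 and the price list are made up for this illustration:

% Y is the X-th element of the fixed list [10,12,17,11].
price(X, Y) :-
        element(X, [10,12,17,11], Y).

% | ?- price(X, Y), Y #> 11.
% should prune X to 2..3 and Y to {12}\/{17}, since element/3 propagates
% in both directions.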
relation(?X,+MapList,?Y)
where MapList is a list of integer-ConstantRange pairs, where the integer keys occur uniquely (see section Syntax of Indexicals). True if MapList contains a pair X-R and Y is in the range denoted by R. Operationally, the domains of X and Y are constrained so that for every element in the domain of X, there is a compatible element in the domain of Y, and vice versa. If MapList is not ground, the constraint must be wrapped in call/1 to postpone goal expansion until runtime. An arbitrary binary constraint can be defined with relation/3. relation/3 maintains domain-consistency; see section The Constraint System.
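As a sketch, the following made-up predicate defines a "successor modulo 3" relation between X and Y via a ground MapList:

% X and Y range over 0..2, and Y is (X+1) mod 3.
succ_mod3(X, Y) :-
        relation(X, [0-{1}, 1-{2}, 2-{0}], Y).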
all_different(+Variables)
Each variable in the list Variables is constrained to take a value that is unique among the variables. This is an incomplete but cheaper version of the all_distinct/1 constraint (see below).
The following constraint is a complete version of the all_different/1 constraint. The algorithm is due to Regin [Regin 94].
all_distinct(+Variables)
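The difference in pruning strength can be seen on a small infeasible problem (a sketch; the exact behavior of the incomplete all_different/1 may vary between versions):

| ?- X in 1..2, Y in 1..2, Z in 1..2, all_different([X,Y,Z]).

| ?- X in 1..2, Y in 1..2, Z in 1..2, all_distinct([X,Y,Z]).

The first goal should merely suspend, since no single domain can be pruned yet, whereas the second should fail at once, as three pairwise distinct values cannot be drawn from 1..2.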
The following is a constraint over two lists of length n of variables. Each variable is constrained to take a value in 1,...,n that is unique for its list. Furthermore, the lists are dual in a sense described below.
assignment(+Xs, +Ys)
The following constraint can be thought of as constraining n nodes in a graph to form a Hamiltonian circuit. The nodes are numbered from 1 to n. The circuit starts in node 1, visits each node, and returns to the origin.
circuit(+Succ)
circuit(+Succ, +Pred)
The following four constraints can be thought of as constraining n tasks, each with a start time Sj and a duration Dj, so that no tasks ever overlap. The tasks can be seen as competing for some exclusive resource.
serialized(+Starts,+Durations)
Si+Di =< Sj  OR  Sj+Dj =< Si, for all 1 =< i < j =< n

The serialized/2 constraint is merely a special case of cumulative/4 (see below).
serialized_resource(+Starts,+Durations,-Resource)
The Resource argument is a term that can be passed to order_resource/2 (see section Enumeration Predicates) in order to find a consistent ordering of the tasks.
serialized_precedence(+Starts,+Durations,+Precedences)
serialized_precedence_resource(+Starts,+Durations,+Precedences,-Resource)
where Precedences is a list of terms of the form d(i,j,d). Here, i and j should be task numbers, and d should be a positive integer or sup. Each term adds the constraint:

Si+d =< Sj  OR  Sj =< Si, if d is an integer
Sj =< Si, if d is sup
serialized_precedence/3 can model a set of tasks to be serialized with sequence-dependent setup times. For example, the following constraint models three tasks, all with duration 5, where task 1 must precede task 2 and task 3 must either complete before task 2 or start at least 10 time units after task 2 started:
?- domain([S1,S2,S3], 0, 20),
   serialized_precedence([S1,S2,S3], [5,5,5], [d(2,1,sup),d(2,3,10)]).

S1 in 0..15,
S2 in 5..20,
S3 in 0..20 ?

The bounds of S1 and S2 changed because of the precedence constraint. Setting S2 to 5 will propagate S1=0 and S3 in 15..20.
The following constraint can be thought of as constraining n tasks, each with a start time Sj, a duration Dj, and a resource amount Rj, so that the total resource consumption does not exceed Limit at any time:
cumulative(+Starts,+Durations,+Resources,?Limit)
a = min(S1,...,Sn),  b = max(S1+D1,...,Sn+Dn)
Rij = Rj, if Sj =< i < Sj+Dj
Rij = 0, otherwise

The constraint holds if:

Ri1+...+Rin =< Limit, for all a =< i < b

The cumulative/4 constraint is due to Aggoun and Beldiceanu [Aggoun & Beldiceanu 92]. The current implementation is incomplete and takes O(n^2) space.
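As a tiny sketch, two tasks of durations 4 and 3, each using one unit of a resource of capacity 1, are thereby forced not to overlap, which is exactly what serialized/2 would express:

| ?- domain([S1,S2], 0, 10),
     cumulative([S1,S2], [4,3], [1,1], 1).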
New primitive constraints can be defined by the user on two different levels. On a higher level, constraints can be defined using the global constraint programming interface; see section Defining Global Constraints. Such constraints can embody specialized algorithms and use the full power of Prolog. They cannot be reified.
On a lower level, new primitive constraints can be defined with indexicals. In this case, they take part in the basic constraint solving algorithm and express custom designed rules for special cases of the overall local propagation scheme. Such constraints are called FD predicates; see section Defining Primitive Constraints. They can optionally be reified.
As is usually the case with finite domain constraint solvers, this solver is not complete. That is, it does not ensure that the set of posted constraints is satisfiable. One must resort to search (enumeration) to check satisfiability and get particular solutions.
The following predicates provide several variants of search:
indomain(?X)
minimize(:Goal,?X)
maximize(:Goal,?X)
where Goal is a Prolog goal, typically a labeling/2 goal. The algorithm calls Goal repeatedly with a progressively tighter upper (lower) bound on X until a proof of optimality is obtained, at which time Goal and X are unified with values corresponding to the optimal solution.
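A typical usage sketch follows; problem/2 is a made-up, problem-specific predicate that is assumed to post the constraints on Vars and Cost and to enumerate the variables (e.g. with labeling/2):

% Find the assignment with the smallest Cost by branch-and-bound.
best_solution(Vars, Cost) :-
        minimize(problem(Vars, Cost), Cost).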
labeling(+Options, +Variables)
The search options control the order in which variables are selected for assignment (variable choice heuristic), the way in which choices are made for the selected variable (value choice heuristic), and whether all solutions or a single, optimal solution should be found. The search options are divided into four groups. One option may be selected per group. Finally, the number of assumptions (choices) made during the search can be collected.
labeling/2 may succeed with such variables still unassigned.
leftmost
min
max
ff
ffc
step
Makes a binary choice between X #= B and X #\= B, where B is the lower or upper bound of X. This is the default.
enum
bisect
Makes a binary choice between X #=< M and X #> M, where M is the midpoint of the domain of X. This strategy is also known as domain splitting.
up
down
all
minimize(X)
maximize(X)
statistics(K)
For example, to enumerate solutions using a static variable ordering, use:
| ?- constraints(Variables), labeling([], Variables). %same as [leftmost,step,up,all]
To minimize a cost function using branch-and-bound search, a dynamic variable ordering using the first-fail principle, and domain splitting exploring the upper part of domains first, use:
| ?- constraints(Variables, Cost), labeling([ff,bisect,down,minimize(Cost)], Variables).
As opposed to the predicates above which search for consistent assignments to domain variables, the following predicate searches for a consistent ordering among tasks competing for an exclusive resource, without necessarily fixing their start times:
order_resource(+Options, +Resource)
where Resource is a term, provided by serialized_resource/3 (see section Combinatorial Constraints), denoting a resource on which tasks must be serialized. True if a total ordering can be imposed on the tasks, enumerating all such orderings via backtracking.
The search options control the construction of the total ordering. The option list may contain at most one of the following atoms, selecting a strategy:
first
last
If first is chosen (the default), the task with the smallest value is selected; otherwise, the task with the greatest value is selected.
est
lst
ect
lct
[first,est] (the default) and [last,lct] can be good heuristics.
The following predicates can be used to get execution statistics.
fd_statistics(?Key, ?Value)
This predicate is similar to statistics/2. For each of the possible keys Key, Value is unified with the current value of a counter, which is simultaneously zeroed. The following counters are maintained; see section The Constraint System for details of what they all mean:
resumptions
entailments
prunings
backtracks
constraints
fd_statistics
Displays on the user_error stream a summary of the above statistics. All counters are zeroed.
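For example, a sketch of measuring the search effort of the queens/3 program from section Example Programs (the actual counter values depend on the program and solver version):

| ?- fd_statistics(backtracks, _),      % read and zero the counter
     queens(8, L, [ff]),
     fd_statistics(backtracks, B).      % B = backtracks used by the search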
By default, the answer constraint only shows the projection of the store onto the variables that occur in the query, but not any constraints that may be attached to these variables, nor any domains or constraints attached to other variables. This is a conscious decision, as no efficient algorithm for projecting answer constraints onto the query variables is known for this constraint system.
It is possible, however, to get a complete answer constraint including all variables that took part in the computation and their domains and attached constraints. This is done by asserting a clause for the following predicate:
clpfd:full_answer
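A minimal usage sketch, assuming the flag is simply asserted before querying; subsequent answers should then also display the domains and constraints of auxiliary variables:

| ?- assert(clpfd:full_answer).

| ?- X in 1..5, Y in 2..8, X+Y #= T, T #< Z.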
The constraint system is based on domain constraints and indexicals. A domain constraint is an expression X::I, where X is a domain variable and I is a nonempty set of integers. A set S of domain constraints is called a store. D(X,S), the domain of X in S, is defined as the intersection of all I such that X::I belongs to S. The store is contradictory if the domain of some variable is empty; otherwise, it is consistent. A consistent store S' is an extension of a store S iff, for all variables X, D(X,S') is contained in D(X,S).
The following definitions, adapted from [Van Hentenryck et al. 95], define important notions of consistency and entailment of constraints wrt. stores.
A ground constraint is true if it holds and false otherwise.
A constraint C is domain-consistent wrt. S iff, for each variable Xi and value Vi in D(Xi,S), there exist values Vj in D(Xj,S), 1 =< j =< n, i\=j, such that C(V1,...,Vn) is true.
A constraint C is domain-entailed by S iff, for all values Vj in D(Xj,S), 1 =< j =< n, C(V1,...,Vn) is true.
Let D'(X,S) denote the interval min(D(X,S))..max(D(X,S)).
A constraint C is interval-consistent wrt. S iff, for each variable Xi there exist values Vj in D'(Xj,S), 1 =< j =< n, i\=j, such that C(V1,...,min(D(Xi,S)),...,Vn) and C(V1,...,max(D(Xi,S)),...,Vn) are both true.
A constraint C is interval-entailed by S iff, for all values Vj in D'(Xj,S), 1 =< j =< n, C(V1,...,Vn) is true.
Finally, a constraint is domain-disentailed (interval-disentailed) by S iff its negation is domain-entailed (interval-entailed) by S.
In most circumstances, arithmetic constraints only maintain interval-consistency and only detect interval-entailment and -disentailment. Note that there are cases where an interval-consistency maintaining constraint may detect a contradiction when the constraint is not yet interval-disentailed, as the following example illustrates.

Note that X #\= Y maintains domain-consistency if both arguments are constants or variables:
| ?- X+Y #= Z, X=1, Z=6, Y in 1..10, Y #\= 5.

no
| ?- X+Y #= Z #<=> B, X=1, Z=6, Y in 1..10, Y #\= 5.

X = 1,
Z = 6,
Y in(1..4)\/(6..10),
B in 0..1
Since 1+5#=6 holds, X+Y #= Z is not interval-disentailed, although any attempt to make it interval-consistent wrt. the store results in a contradictory store.
This section describes a programming interface by means of which new constraints can be written. The interface consists of a set of predicates provided by this library module. Constraints defined in this way can take arbitrary arguments and may use any constraint solving algorithm, provided it makes sense. Reification cannot be expressed in this interface; instead, reification may be achieved by explicitly passing a 0/1-variable to the constraint in question.
Global constraints have state which may be updated each time the constraint is resumed. The state information may be used e.g. in incremental constraint solving.
The following two predicates are the principal entrypoints for defining and posting new global constraints:
clpfd:dispatch_global(+Constraint, +State0, -State, -Actions)
exit
fail
X = V
X in R
X in_set S
call(Goal)
clpfd module.
fd_global(+Constraint, +State, +Susp)
dom(X)
min(X)
max(X)
minmax(X)
val(X)
For an example of usage, see section A Global Constraint Example.
The constraint solving method needs access to information about the current domains of variables. This is provided by the following predicates, which are all constant time operations.
fd_min(?X, ?Min)
Min is the lower bound of the domain of X: an integer, or the atom inf, denoting minus infinity.
fd_max(?X, ?Max)
Max is the upper bound of the domain of X: an integer, or the atom sup, denoting plus infinity.
fd_size(?X, ?Size)
Size is the size (number of elements) of the domain of X if the domain is finite, and the atom sup otherwise.
fd_set(?X, ?Set)
fd_dom(?X, ?Range)
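The following query sketches how the reflection predicates can be combined to inspect a domain (the exact form of the reported range term may vary):

| ?- X in 1..9, X #\= 5,
     fd_min(X, Min),        % lower bound, here 1
     fd_max(X, Max),        % upper bound, here 9
     fd_size(X, Size),      % number of elements, here 8
     fd_dom(X, Dom).        % range term, e.g. (1..4)\/(6..9)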
The following predicate can be used for computing the set of variables that are transitively connected via constraints to some given variables.
fd_closure(+Vars, -Closure)
The domains of variables are internally represented compactly as FD set terms. The details of this representation are subject to change and should not be relied on. Therefore, a number of operations on FD sets are provided, as such terms play an important role in the interface. The following operations are the primitive ones:
is_fdset(+Set)
empty_fdset(?Set)
fdset_parts(?Set, ?Min, ?Max, ?Rest)
Min and Max may be integers or the atoms inf and sup, denoting minus and plus infinity, respectively. Either Set or all the other arguments must be ground.
The following operations can all be defined in terms of the primitive ones, but in most cases, a more efficient implementation is used:
empty_interval(+Min, +Max)
fdset_interval(?Set, ?Min, ?Max)
fdset_singleton(?Set, ?Elt)
fdset_min(+Set, -Min)
fdset_max(+Set, -Max)
fdset_size(+Set, -Size)
Size is the number of elements of Set, or the atom sup if Set is infinite. This operation is linear in the number of intervals of Set.
list_to_fdset(+List, -Set)
fdset_to_list(+Set, -List)
range_to_fdset(+Range, -Set)
fdset_to_range(+Set, -Range)
fdset_add_element(+Set1, +Elt, -Set2)
fdset_del_element(+Set1, +Elt, -Set2)
fdset_disjoint(+Set1, +Set2)
fdset_intersect(+Set1, +Set2)
fdset_intersection(+Set1, +Set2, -Intersection)
fdset_intersection(+Sets, -Intersection)
fdset_member(?Elt, +Set)
fdset_eq(+Set1, +Set2)
fdset_subset(+Set1, +Set2)
fdset_subtract(+Set1, +Set2, -Difference)
fdset_union(+Set1, +Set2, -Union)
fdset_union(+Sets, -Union)
fdset_complement(+Set, -Complement)
The complement is taken with respect to the full set of integers, inf..sup.
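The operations compose naturally. The following sketch constrains X to lie in 0..10 but outside a given list of values; restrict_outside/2 is a made-up name used only for this illustration:

% Constrain X to 0..10 minus the listed values.
restrict_outside(X, Forbidden) :-
        list_to_fdset(Forbidden, S0),
        fdset_complement(S0, S1),       % all integers not in Forbidden
        fdset_interval(S2, 0, 10),      % the FD set 0..10
        fdset_intersection(S1, S2, S),
        X in_set S.

% | ?- restrict_outside(X, [1,2,3,5,7]).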
The following example defines a new global constraint exactly(X,L,N), which is true if X occurs exactly N times in the list L of integers and domain variables. A version defined in terms of reified equalities was presented earlier; see section Reified Constraints.
This example illustrates the use of state information. The state has two components: the list of variables that could still be X, and the number of variables still required to be X.
The constraint is defined to wake up on any domain change.
/*
  An implementation of exactly(I, X[1]...X[m], N):
  Necessary condition: 0 =< N =< m.
  Rewrite rules:
  [1] |= X[i]=I  ==> exactly(I, X[1]...X[i-1],X[i+1]...X[m], N-1)
  [2] |= X[i]\=I ==> exactly(I, X[1]...X[i-1],X[i+1]...X[m], N)
  [3] |= N=0     ==> X[1]\=I ... X[m]\=I
  [4] |= N=m     ==> X[1]=I ... X[m]=I
*/

:- use_module(library(clpfd)).

% the entrypoint
exactly(I, Xs, N) :-
        dom_suspensions(Xs, Susp),
        fd_global(exactly(I,Xs,N), state(Xs,N), Susp).

dom_suspensions([], []).
dom_suspensions([X|Xs], [dom(X)|Susp]) :-
        dom_suspensions(Xs, Susp).

% the solver method
:- multifile clpfd:dispatch_global/4.
clpfd:dispatch_global(exactly(I,_,_), state(Xs0,N0), state(Xs,N), Actions) :-
        exactly_solver(I, Xs0, Xs, N0, N, Actions).

exactly_solver(I, Xs0, Xs, N0, N, Actions) :-
        ex_filter(Xs0, Xs, N0, N, I),
        length(Xs, M),
        (   N=:=0 -> Actions = [exit|Ps], ex_neq(Xs, I, Ps)
        ;   N=:=M -> Actions = [exit|Ps], ex_eq(Xs, I, Ps)
        ;   N>0, N<M -> Actions = []
        ;   Actions = [fail]
        ).

% rules [1,2]: filter the X's, decrementing N
ex_filter([], [], N, N, _).
ex_filter([X|Xs], Ys, L, N, I) :-
        X==I, !,
        M is L-1,
        ex_filter(Xs, Ys, M, N, I).
ex_filter([X|Xs], Ys0, L, N, I) :-
        fd_set(X, Set),
        fdset_member(I, Set), !,
        Ys0 = [X|Ys],
        ex_filter(Xs, Ys, L, N, I).
ex_filter([_|Xs], Ys, L, N, I) :-
        ex_filter(Xs, Ys, L, N, I).

% rule [3]: all must be neq I
ex_neq(Xs, I, Ps) :-
        fdset_singleton(Set0, I),
        fdset_complement(Set0, Set),
        eq_all(Xs, Set, Ps).

% rule [4]: all must be eq I
ex_eq(Xs, I, Ps) :-
        fdset_singleton(Set, I),
        eq_all(Xs, Set, Ps).

eq_all([], _, []).
eq_all([X|Xs], Set, [X in_set Set|Ps]) :-
        eq_all(Xs, Set, Ps).

end_of_file.

% sample queries:

| ?- exactly(5,[A,B,C],1), A=5.

A = 5,
B in(inf..4)\/(6..sup),
C in(inf..4)\/(6..sup)

| ?- exactly(5,[A,B,C],1), A in 1..2, B in 3..4.

C = 5,
A in 1..2,
B in 3..4
Indexicals are the principal means of defining constraints, but it is usually not necessary to resort to this level of programming--most commonly used constraints are available in a library and/or via macro-expansion. The key feature about indexicals is that they give the programmer precise control over aspects of the operational semantics of the constraints. Trade-offs can be made between the computational cost of the constraints and their pruning power. The indexical language provides many degrees of freedom for the user to select the level of consistency to be maintained depending on application-specific needs.
An indexical is a reactive functional rule of the form X in R, where R is a set valued range expression (see below). See section Syntax of Indexicals, for a grammar defining indexicals and range expressions.
Indexicals can play one of two roles: propagating indexicals are used for constraint solving, and checking indexicals are used for entailment checking. When a propagating indexical fires, R is evaluated in the current store S, which is extended by adding the new domain constraint X::S(R) to the store, where S(R) denotes the value of R in S. When a checking indexical fires, it checks if D(X,S) is contained in S(R), and if so, the constraint corresponding to the indexical is detected as entailed.
A range expression has one of the following forms, where Ri denote range expressions, Ti denote integer valued term expressions, S(Ti) denotes the integer value of Ti in S, X denotes a variable, I denotes an integer, and S denotes the current store.
dom(X)
{T1,...,Tn}
Evaluates to the set {S(T1),...,S(Tn)}. Any term expression containing a subexpression which is a variable that is not "quantified" by unionof/3 will only be evaluated when this variable has been assigned.
T1..T2
R1/\R2
R1\/R2
\R2
R1+R2
R1+T2
-R2
R1-R2
R1-T2
T1-R2
R1 mod R2
R1 mod T2
R1 ? R2
This construct is typically used in the context (R1 ? (inf..sup) \/ R3), which evaluates to S(R3) if S(R1) is an empty set; otherwise, it evaluates to inf..sup. As an optimization, R3 is not evaluated while the value of R1 is a non-empty set.
unionof(X,R1,R2)
switch(T1,MapList)
If MapList contains a pair Key-Expr such that Key equals S(T1), evaluates to the range denoted by Expr. Otherwise, evaluates to the empty set.
When used in the body of an FD predicate (see section Goal Expanded Constraints), a relation/3 expression expands to two indexicals, each consisting of a switch/2 expression nested inside a unionof/3 expression. Thus, the following constraints are equivalent:
p(X, Y) +:
        relation(X, [1-{1},2-{1,2},3-{1,2,3}], Y).

q(X, Y) +:
        X in unionof(B,dom(Y),switch(B,[1-{1,2,3},2-{2,3},3-{3}])),
        Y in unionof(B,dom(X),switch(B,[1-{1},2-{1,2},3-{1,2,3}])).
A term expression has one of the following forms, where T1 and T2 denote term expressions, X denotes a variable, I denotes an integer, and S denotes the current store.
min(X)
max(X)
card(X)
X
A variable that is not "quantified" by unionof/3 will cause the evaluation to suspend until the variable is assigned.
I
inf
sup
-T1
T1+T2
T1-T2
T1*T2
T1/>T2
T1/<T2
T1 mod T2
A range R is monotone in S iff the value of R in S' is contained in the value of R in S, for every extension S' of S. A range R is anti-monotone in S iff the value of R in S is contained in the value of R in S', for every extension S' of S. By abuse of notation, we will say that X in R is (anti-)monotone iff R is (anti-)monotone.
The consistency or entailment of a constraint C expressed as indexicals X in R in a store S is checked by considering the relationship between D(X,S) and S(R), together with the (anti-)monotonicity of R in S. The details are given in section Execution of Propagating Indexicals and section Execution of Checking Indexicals.
The solver checks (anti-)monotonicity by requiring that certain variables occurring in the indexical be ground. This sufficient condition can sometimes be false for an (anti-)monotone indexical, but such situations are rare in practice.
The following example defines the constraint X+Y=T as an FD predicate in terms of three indexicals. Each indexical is a rule responsible for removing values detected as incompatible from one particular constraint argument. Indexicals are not Prolog goals; thus, the example does not express a conjunction. However, an indexical may make the store contradictory, in which case backtracking is triggered:
plus(X,Y,T) +:
        X in min(T) - max(Y) .. max(T) - min(Y),
        Y in min(T) - max(X) .. max(T) - min(X),
        T in min(X) + min(Y) .. max(X) + max(Y).
The above definition contains a single clause used for constraint solving. The first indexical wakes up whenever the bounds of S(T) or S(Y) are updated, and removes from D(X,S) any values that are not compatible with the new bounds of T and Y. Note that in the event of "holes" in the domains of T or Y, D(X,S) may contain some values that are incompatible with X+Y=T but go undetected. Like most built-in arithmetic constraints, the above definition maintains interval-consistency, which is significantly cheaper to maintain than domain-consistency and suffices in most cases. The constraint could for example be used as follows:
| ?- X in 1..5, Y in 2..8, plus(X,Y,T).

X in 1..5,
Y in 2..8,
T in 3..13 ?

yes
Thus, when an FD predicate is called, the `+:' clause is activated.
The definition of a user constraint has to specify what domain constraints should be added to the constraint store when the constraint is posted. Therefore the FD predicate contains a set of indexicals, each representing a domain constraint to be added to the constraint store. The actual domain constraint depends on the constraint store itself. For example, the third indexical in the above FD predicate prescribes the domain constraint `T :: 3..13' if the store contains `X :: 1..5, Y :: 2..8'. As the domain of some variables gets narrower, the indexical may enforce a new, stricter constraint on some other variables. Therefore such an indexical (called a propagating indexical) can be viewed as an agent reacting to the changes in the store by enforcing further changes in the store.
In general there are three stages in the lifetime of a propagating indexical. When it is posted it may not be evaluated immediately (e.g. has to wait until some variables are ground before being able to modify the store). Until the preconditions for the evaluation are satisfied, the agent does not enforce any constraints. When the indexical becomes evaluable the resulting domain constraint is added to the store. The agent then waits and reacts to changes in the domains of variables occurring in the indexical by re-evaluating it and adding the new, stricter constraint to the store. Eventually the computation reaches a phase when no further refinement of the store can result in a more precise constraint (the indexical is entailed by the store), and then the agent can cease to exist.
A necessary condition for the FD predicate to be correctly defined is the following: for any store mapping each variable to a singleton domain the execution of the indexicals should succeed without contradiction exactly when the predicate is intended to be true.
There can be several alternative definitions for the same user constraint with different strengths in propagation. For example, the definition of plusd below encodes the same X+Y=T constraint as the plus predicate above, but maintaining domain-consistency:
plusd(X,Y,T) +:
        X in dom(T) - dom(Y),
        Y in dom(T) - dom(X),
        T in dom(X) + dom(Y).

| ?- X in {1}\/{3}, Y in {10}\/{20}, plusd(X, Y, T).

X in{1}\/{3},
Y in{10}\/{20},
T in{11}\/{13}\/{21}\/{23} ?

yes
This costs more in terms of execution time, but gives more precise results. For singleton domains, plus and plusd behave in the same way.
In our design, general indexicals can only appear in the context of FD predicate definitions. The rationale for this restriction is the need for general indexicals to be able to suspend and resume, and this ability is only provided by the FD predicate mechanism.
If the program merely posts a constraint, it suffices for the definition to contain a single clause for solving the constraint. If a constraint is reified or occurs in a propositional formula, the definition must contain four clauses for solving and checking entailment of the constraint and its negation. The role of each clause is reflected in the "neck" operator. The following table summarizes the different forms of indexical clauses corresponding to a constraint C. In all cases, Head should be a compound term with all arguments being distinct variables:
Head +: Indexicals.
Head -: Indexicals.
Head +? Indexical.
Head -? Indexical.
When a constraint is reified, the solver spawns two reactive agents corresponding to detecting entailment and disentailment. Eventually, one of them will succeed in this and consequently will bind B to 0 or 1. A third agent is spawned, waiting for B to become assigned, at which time the constraint (or its negation) is posted. In the mean time, the constraint may have been detected as (dis)entailed, in which case the third agent is dismissed. The waiting is implemented by means of the coroutining facilities of SICStus Prolog.
As an example of a constraint with all methods defined, consider the following library constraint defining a disequation between two domain variables:
'x\\=y'(X,Y) +:
        X in \{Y},
        Y in \{X}.
'x\\=y'(X,Y) -:
        X in dom(Y),
        Y in dom(X).
'x\\=y'(X,Y) +?
        X in \dom(Y).
'x\\=y'(X,Y) -?
        X in {Y}.
The following sections provide more precise coding rules and operational details for indexicals. X in R denotes an indexical corresponding to a constraint C. S denotes the current store.
Consider the definition of a constraint C containing a propagating indexical X in R. Let TV(X,C,S) denote the set of values for X that can make C true in some ground extension of the store S. Then the indexical should obey the following coding rules:
If the coding rules are observed, S(R) can be proven to contain TV(X,C,S) for all stores in which R is monotone. Hence it is natural for the implementation to wait until R becomes monotone before admitting the propagating indexical for execution.
The execution of X in R thus involves the following:
X::S(R) is added to the store (X is pruned), and the indexical suspends, unless R is ground in S, in which case C is detected as entailed.
A propagating indexical is scheduled for execution as follows:
dom(Y)
or card(Y)
in R has been updated
min(Y)
in R has been updated
max(Y)
in R has been updated
Consider the definition of a constraint C containing a checking indexical X in R. Let FV(X,C,S) denote the set of values for X that can make C false in some ground extension of the store S. Then the indexical should obey the following coding rules:
If the coding rules are observed, S(R) can be proven to exclude FV(X,C,S) for all stores in which R is anti-monotone. Hence it is natural for the implementation to wait until R becomes anti-monotone before admitting the checking indexical for execution.
The execution of X in R thus involves the following:
A checking indexical is scheduled for execution as follows:
dom(Y)
or card(Y)
in R has been pruned
min(Y)
in R has been increased
max(Y)
in R has been decreased
The arithmetic, membership, and propositional constraints described earlier are transformed at compile time to conjunctions of goals of library constraints.
Sometimes it is necessary to postpone the expansion of a constraint until runtime, e.g. if the arguments are not instantiated enough. This can be achieved by wrapping call/1 around the constraint.
Although space economic (linear in the size of the source code), the expansion of a constraint to library goals can have an overhead compared to expressing the constraint in terms of indexicals. Temporary variables holding intermediate values may have to be introduced, and the grain size of the constraint solver invocations can be rather small. The translation of constraints to library goals has been greatly improved in the current version, so these problems have virtually disappeared. However, for backward compatibility, an implementation by compilation to indexicals of the same constraints is also provided. An FD predicate may be defined by a single clause:
Head +: Constraint.
where Constraint is an arithmetic constraint or an element/3 or relation/3 constraint. This translation is only available for `+:' clauses; thus, Head cannot be reified.
In the case of arithmetic constraints, the constraint must be over linear terms (see section Syntax of Indexicals). The memory consumption of the FD predicate will be quadratic in the size of the source code. The alternative version of sum/8 in section Send More Money illustrates this technique.
In the case of element(X,L,Y) or relation(X,L,Y), the memory consumption of the FD predicate will be linear in the size of the source code. The execution time of the initial evaluation of the FD predicate will be linear in the size of the initial domains for X and Y; if these domains are infinite, no propagation will take place.
This section contains a few example programs. The first two programs are included in a benchmark suite that comes with the distribution. The benchmark suite is run by typing:
| ?- compile(library('clpfd/examples/bench')).
| ?- bench.
Let us return briefly to the Send More Money problem (see section A Constraint Satisfaction Problem). Its sum/8 predicate will expand to a space-efficient conjunction of library constraints. A faster but more memory consuming version is defined simply by changing the neck symbol of sum/8 from `:-' to `+:', thus turning it into an FD predicate:
sum(S, E, N, D, M, O, R, Y) +:
        1000*S + 100*E + 10*N + D
        + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y.
The problem is to place N queens on an NxN chess board so that no queen is threatened by another queen.
The variables of this problem are the N queens. Each queen has a designated row. The problem is to select a column for it.
The main constraint of this problem is that no queen threatens another. This is encoded by the no_threat/3 constraint and holds between all pairs (X,Y) of queens. It could be defined as
no_threat(X, Y, I) :-
        X #\= Y,
        X+I #\= Y,
        X-I #\= Y.
However, this formulation introduces new temporary domain variables and creates twelve fine-grained indexicals. Worse, the arithmetic constraints are only guaranteed to maintain interval-consistency and so may miss some opportunities for pruning elements in the middle of domains.
A better idea is to formulate no_threat/3 as an FD predicate with two indexicals, as shown in the program below. This constraint will not fire until one of the queens has been assigned (the corresponding indexical does not become monotone until then). Hence, the constraint is still not as strong as it could be.
For example, if the domain of one queen is (2..3), then it will threaten any queen placed in column 2 or 3 on an adjacent row, no matter which of the two open positions is chosen for the first queen. The commented-out formulation of the constraint captures this reasoning, and illustrates the use of the unionof/3 operator. This stronger version of the constraint indeed gives less backtracking, but is computationally more expensive and does not pay off in terms of execution time, except possibly for very large chess boards.
It is clear that no_threat/3 cannot detect any incompatible values for a queen with a domain of size greater than three. This observation is exploited in the third version of the constraint.
The first-fail principle is appropriate in the enumeration part of this problem.
:- use_module(library(clpfd)).

queens(N, L, LabelingType) :-
        length(L, N),
        domain(L, 1, N),
        constrain_all(L),
        labeling(LabelingType, L).

constrain_all([]).
constrain_all([X|Xs]) :-
        constrain_between(X, Xs, 1),
        constrain_all(Xs).

constrain_between(_X, [], _N).
constrain_between(X, [Y|Ys], N) :-
        no_threat(X, Y, N),
        N1 is N+1,
        constrain_between(X, Ys, N1).

% version 1: weak but efficient
no_threat(X, Y, I) +:
        X in \({Y} \/ {Y+I} \/ {Y-I}),
        Y in \({X} \/ {X+I} \/ {X-I}).

/*
% version 2: strong but very inefficient version
no_threat(X, Y, I) +:
        X in unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
        Y in unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).

% version 3: strong but somewhat inefficient version
no_threat(X, Y, I) +:
        X in (4..card(Y)) ? (inf..sup) \/
             unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
        Y in (4..card(X)) ? (inf..sup) \/
             unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).
*/

| ?- queens(8, L, [ff]).

L = [1,5,8,6,3,7,2,4] ?
This example is a very small scheduling problem. We consider seven tasks where each task has a fixed duration and a fixed amount of used resource:
TASK  DURATION  RESOURCE
====  ========  ========
 t1      16         2
 t2       6         9
 t3      13         3
 t4       7         7
 t5       5        10
 t6      18         1
 t7       4        11
The goal is to find a schedule that minimizes the completion time for the schedule while not exceeding the capacity 13 of the resource. The resource constraint is succinctly captured by a cumulative/4 constraint. Branch-and-bound search is used to find the minimal completion time.
This example was borrowed from [Beldiceanu & Contejean 94].
:- use_module(library(clpfd)).

schedule(Ss, End) :-
        length(Ss, 7),
        Ds = [16, 6,13, 7, 5,18, 4],
        Rs = [ 2, 9, 3, 7,10, 1,11],
        domain(Ss, 1, 30),
        domain([End], 1, 50),
        after(Ss, Ds, End),
        cumulative(Ss, Ds, Rs, 13),
        labeling([minimize(End)], [End|Ss]).

after([], [], _).
after([S|Ss], [D|Ds], E) :-
        E #>= S+D,
        after(Ss, Ds, E).

%% End of file

| ?- schedule(Ss, End).

Ss = [1,17,10,10,5,5,1],
End = 23 ?
X        --> variable                  { domain variable }

Constant --> integer
           | inf                       { minus infinity }
           | sup                       { plus infinity }

Term     --> Constant
           | X                         { suspend until assigned }
           | min(X)                    { min. of domain of X }
           | max(X)                    { max. of domain of X }
           | card(X)                   { size of domain of X }
           | - Term
           | Term + Term
           | Term - Term
           | Term * Term
           | Term /> Term              { division rounded up }
           | Term /< Term              { division rounded down }
           | Term mod Term

TermSet  --> {Term,...,Term}

Range    --> TermSet
           | dom(X)                    { domain of X }
           | Term..Term                { interval }
           | Range/\Range              { intersection }
           | Range\/Range              { union }
           | \Range                    { complement }
           | - Range                   { pointwise negation }
           | Range + Range             { pointwise addition }
           | Range - Range             { pointwise subtraction }
           | Range mod Range           { pointwise modulo }
           | Range + Term              { pointwise addition }
           | Range - Term              { pointwise subtraction }
           | Term - Range              { pointwise subtraction }
           | Range mod Term            { pointwise modulo }
           | Range ? Range
           | unionof(X,Range,Range)
           | switch(Term,MapList)

ConstantSet   --> {integer,...,integer}

ConstantRange --> ConstantSet
                | Constant..Constant
                | ConstantRange/\ConstantRange
                | ConstantRange\/ConstantRange
                | \ConstantRange

MapList  --> []
           | [integer-ConstantRange|MapList]

CList    --> []
           | [integer|CList]

Indexical  --> X in Range

Indexicals --> Indexical
             | Indexical, Indexicals

ConstraintBody --> Indexicals
                 | LinExpr RelOp LinExpr
                 | element(X,CList,X)
                 | relation(X,MapList,X)

Head     --> term                      { a compound term with unique variable args }

TellPos  --> Head +: ConstraintBody.
TellNeg  --> Head -: ConstraintBody.
AskPos   --> Head +? Indexical.
AskNeg   --> Head -? Indexical.

ConstraintDef --> TellPos. [TellNeg.] [AskPos.] [AskNeg.]
X       --> variable                   { domain variable }
N       --> integer

LinExpr --> N                          { linear expression }
          | X
          | N * X
          | N * N
          | LinExpr + LinExpr
          | LinExpr - LinExpr

Expr    --> LinExpr
          | Expr + Expr
          | Expr - Expr
          | Expr * Expr
          | Expr / Expr                { integer division }
          | Expr mod Expr
          | min(Expr,Expr)
          | max(Expr,Expr)
          | abs(Expr)

RelOp   --> #= | #\= | #< | #=< | #> | #>=
:- op(1200, xfx, [+:,-:,+?,-?]).
:- op(760, yfx, #<=>).
:- op(750, xfy, #=>).
:- op(750, yfx, #<=).
:- op(740, yfx, #\/).
:- op(730, yfx, #\).
:- op(720, yfx, #/\).
:- op(710, fy, #\).
:- op(700, xfx, [in,in_set]).
:- op(700, xfx, [#=,#\=,#<,#=<,#>,#>=]).
:- op(550, xfx, ..).
:- op(500, fy, \).
:- op(490, yfx, ?).
:- op(400, yfx, [/>,/<]).