# Constraint Logic Programming over Finite Domains

## Introduction

The clp(FD) solver described in this chapter is an instance of the general Constraint Logic Programming scheme introduced in [Jaffar & Michaylov 87]. This constraint domain is particularly useful for modeling discrete optimization and verification problems such as scheduling, planning, packing, timetabling etc. The treatise [Van Hentenryck 89] is an excellent exposition of the theoretical and practical framework behind constraint solving in finite domains, and summarizes the work up to 1989.

This solver has the following highlights:

• Two classes of constraints are handled internally: primitive constraints and global constraints.
• The constraints described in this chapter are automatically translated to conjunctions of primitive and global library constraints.
• The truth value of a primitive constraint can be reflected into a 0/1-variable (reification).
• New primitive constraints can be added by writing so-called indexicals.
• New global constraints can be written in Prolog, by means of a programming interface.

The rest of this chapter is organized as follows: How to load the solver and how to write simple programs is explained in section Solver Interface. A description of all constraints that the solver provides is contained in section Available Constraints. The predicates for searching for solutions are documented in section Enumeration Predicates. The predicates for getting execution statistics are documented in section Statistics Predicates. A few example programs are given in section Example Programs. Finally, section Syntax Summary contains syntax rules for all expressions.

The following sections discuss advanced features and are probably only relevant to experienced users: How to control the amount of information presented in answers to queries is explained in section Answer Constraints. The solver's execution mechanism and primitives are described in section The Constraint System. How to add new global constraints via a programming interface is described in section Defining Global Constraints. How to define new primitive constraints with indexicals is described in section Defining Primitive Constraints.

### Referencing this Software

When referring to this implementation of clp(FD) in publications, please use the following reference:

Carlsson M., Ottosson G., Carlson B. "An Open-Ended Finite Domain Constraint Solver" Proc. Programming Languages: Implementations, Logics, and Programs, 1997.

### Acknowledgments

The first version of this solver was written as part of Key Hyckenberg's MSc thesis in 1995. The code was later rewritten by Mats Carlsson, with contributions from Greger Ottosson at the Computing Science Department, Uppsala University. Peter Szeredi contributed material for this manual chapter.

The development of this software was supported by the Swedish National Board for Technical and Industrial Development (NUTEK) under the auspices of Advanced Software Technology (ASTEC) Center of Competence at Uppsala University.

We include a collection of examples, some of which have been distributed with the INRIA implementation of clp(FD) [Diaz & Codognet 93].

## Solver Interface

The solver is available as a library module and can be loaded with a query

```
:- use_module(library(clpfd)).
```

The solver contains predicates for checking the consistency and entailment of finite domain constraints, as well as solving for solution values for your problem variables.

In the context of this constraint solver, a finite domain is a subset of the integers, and a finite domain constraint denotes a relation over one or more finite domains. All domain variables, i.e. variables that occur as arguments to finite domain constraints, get associated with a finite domain, either explicitly declared by the program or implicitly imposed by the constraint solver. Temporarily, the domain of a variable may actually be infinite, if it lacks a finite lower or upper bound.

The domains of all variables get narrower and narrower as more constraints are added. If a domain becomes empty, the accumulated constraints are unsatisfiable, and the current computation branch fails. At the end of a successful computation, all domains have usually become singletons, i.e. the domain variables have become assigned.

The domains do not become singletons automatically. Usually, it takes some amount of search to find an assignment that satisfies all constraints, and it is the programmer's responsibility to perform this search. If some domain variables are left unassigned in a computation, the garbage collector will preserve all constraint data attached to them. Furthermore, the behavior of the predicates `assert/1`, `findall/3`, `copy_term/2` and friends is undefined on non-ground terms containing domain variables.

The heart of the constraint solver is a scheduler for indexicals [Van Hentenryck et al. 92] and global constraints. Both entities act as coroutines performing incremental constraint solving or entailment checking. They are woken up by changes in the domains of their arguments. All constraints provided by this package are implemented as indexicals or global constraints. New constraints can be defined by the user.

Indexicals are reactive functional rules that take part in the solver's basic constraint solving algorithm, whereas each global constraint is associated with its particular constraint solving algorithm. The solver maintains two scheduling queues, giving priority to the queue of indexicals.

The feasibility of integrating the indexical approach with a Prolog based on the WAM was clearly demonstrated by Diaz's clp(FD) implementation [Diaz & Codognet 93], one of the fastest finite domains solvers around.

### Posting Constraints

A constraint is called as any other Prolog predicate. When called, the constraint is posted to the store. For example:

```
| ?- X in 1..5, Y in 2..8, X+Y #= T.

X in 1..5,
Y in 2..8,
T in 3..13 ?

yes
| ?- X in 1..5, T in 3..13, X+Y #= T.

X in 1..5,
T in 3..13,
Y in -2..12 ?

yes
```

Note that the answer constraint shows the domains of nonground query variables, but not any constraints that may be attached to them.

### A Constraint Satisfaction Problem

Constraint satisfaction problems (CSPs) are a major class of problems for which this solver is ideally suited. In a CSP, the goal is to pick values from pre-defined domains for certain variables so that the given constraints on the variables are all satisfied.

As a simple CSP example, let us consider the Send More Money puzzle. In this problem, the variables are the letters S, E, N, D, M, O, R, and Y. Each letter represents a digit between 0 and 9. The problem is to assign a value to each digit, such that SEND + MORE equals MONEY.

A program which solves the puzzle is given below. The program contains the typical three steps of a clp(FD) program:

1. declare the domains of the variables
2. post the problem constraints
3. look for a feasible solution via backtrack search, or look for an optimal solution via branch-and-bound search

Sometimes, an extra step precedes the search for a solution: the posting of surrogate constraints to break symmetries or to otherwise help prune the search space. No surrogate constraints are used in this example.

The domains of this puzzle are stated via the `domain/3` goal and by requiring that S and M be greater than zero. The two problem constraints of this puzzle are the equation (`sum/8`) and the constraint that all letters take distinct values (`all_different/1`). Finally, the backtrack search is performed by `labeling/2`. Different search strategies can be encoded in the `Type` parameter. In the example query, the default search strategy is used (select the leftmost variable, try values in ascending order).

```
:- use_module(library(clpfd)).

mm([S,E,N,D,M,O,R,Y], Type) :-
    domain([S,E,N,D,M,O,R,Y], 0, 9),     % step 1
    S#>0, M#>0,
    all_different([S,E,N,D,M,O,R,Y]),    % step 2
    sum(S,E,N,D,M,O,R,Y),
    labeling(Type, [S,E,N,D,M,O,R,Y]).   % step 3

sum(S, E, N, D, M, O, R, Y) :-
       1000*S + 100*E + 10*N + D
    +  1000*M + 100*O + 10*R + E
    #= 10000*M + 1000*O + 100*N + 10*E + Y.

| ?- mm([S,E,N,D,M,O,R,Y], []).

D = 7,
E = 5,
M = 1,
N = 6,
O = 0,
R = 8,
S = 9,
Y = 2 ?
```

### Reified Constraints

Instead of merely posting a constraint, it is often useful to reflect its truth value into a 0/1-variable B, so that:

• the constraint is posted if B is set to 1
• the negation of the constraint is posted if B is set to 0
• B is set to 1 if the constraint becomes entailed
• B is set to 0 if the constraint becomes disentailed

This mechanism is known as reification. Several frequently used operations can be defined in terms of reified constraints, such as blocking implication [Saraswat 90] and the cardinality operator [Van Hentenryck & Deville 91], to name a few. A reified constraint is written:

```
| ?- Constraint #<=> B.
```

where Constraint is reifiable. As an example of a constraint that uses reification, consider `exactly(X,L,N)` which is true if `X` occurs exactly `N` times in the list `L`. It can be defined thus:

```
exactly(_, [], 0).
exactly(X, [Y|L], N) :-
    X #= Y #<=> B,
    N #= M+B,
    exactly(X, L, M).
```
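For instance, posting `exactly/3` as defined above with a partially known list prunes the remaining variable; here the reified equality forces X to differ from 3, since the two ground occurrences already account for the required count:

```
| ?- X in 1..5, exactly(3, [3,X,3], 2).

X in(1..2)\/(4..5) ?

yes
```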

## Available Constraints

This section describes the classes of constraints that can be used with this solver.

### Arithmetic Constraints

`?Expr RelOp ?Expr`
defines an arithmetic constraint. The syntax for Expr and RelOp is defined by a grammar (see section Syntax of Arithmetic Expressions). Note that the expressions are not restricted to being linear. Constraints over non-linear expressions, however, will usually yield less constraint propagation than constraints over linear expressions. Arithmetic constraints can be reified as e.g.
```
| ?- X in 1..2, Y in 3..5, X#=<Y #<=> B.

B = 1,
X in 1..2,
Y in 3..5 ?
```

Linear arithmetic constraints maintain (at least) interval-consistency and their reified versions detect (at least) interval-entailment and -disentailment; see section The Constraint System.

The following constraints are among the library constraints that general arithmetic constraints compile to. They express a relation between a sum or a scalar product and a value, using a dedicated algorithm that avoids creating any temporary variables holding intermediate values. If you are computing a sum or a scalar product, it can be much more efficient to compute lists of coefficients and variables and post a single sum or scalar product constraint than to post a sequence of elementary constraints.

`sum(+Xs, +RelOp, ?Value)`
where Xs is a list of integers or domain variables, RelOp is a relational symbol as above, and Value is an integer or a domain variable. True if sum(Xs) RelOp Value. Cannot be reified.
`scalar_product(+Coeffs, +Xs, +RelOp, ?Value)`
where Coeffs is a list of length n of integers, Xs is a list of length n of integers or domain variables, RelOp is a relational symbol as above, and Value is an integer or a domain variable. True if the scalar product of Coeffs and Xs RelOp Value. Cannot be reified.
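For example, instead of decomposing a linear expression into a chain of elementary constraints with temporary variables, the coefficients and variables can be collected into lists and posted as a single constraint (variable names here are illustrative):

```
| ?- domain([X,Y,Z], 0, 10),
     scalar_product([2,3,1], [X,Y,Z], #=, Cost).

X in 0..10,
Y in 0..10,
Z in 0..10,
Cost in 0..60 ?

yes
```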

### Membership Constraints

`domain(+Variables, +Min, +Max)`
where Variables is a list of domain variables or integers, Min is an integer or the atom `inf` (minus infinity), and Max is an integer or the atom `sup` (plus infinity). True if the variables all are elements of the range `Min..Max`. Cannot be reified.
`?X in +Range`
defines a membership constraint. X is an integer or a domain variable and Range is a ConstantRange (see section Syntax of Indexicals). True if X is an element of the range.
`?X in_set +FDSet`
defines a membership constraint. X is an integer or a domain variable and FDSet is an FD set term (see section FD Set Operations). True if X is an element of the FD set.

`in/2` and `in_set/2` constraints can be reified. They maintain domain-consistency and their reified versions detect domain-entailment and -disentailment; see section The Constraint System.
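For example, two membership constraints on the same variable simply intersect its domain:

```
| ?- X in 1..9, X in 3..sup.

X in 3..9 ?

yes
```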

### Propositional Constraints

Propositional combinators can be used to combine reifiable constraints into propositional formulae over such constraints. Such formulae are goal expanded by the system into sequences of reified constraints and arithmetic constraints. For example,

```
X #= 4 #\/ Y #= 6
```

expresses the disjunction of two equality constraints.

The leaves of propositional formulae can be reifiable constraints, the constants 0 and 1, or 0/1-variables. New primitive, reifiable constraints can be defined with indexicals as described in section Defining Primitive Constraints. The following propositional combinators are available:

`#\ :Q`
True if the constraint Q is false.
`:P #/\ :Q`
True if the constraints P and Q are both true.
`:P #\ :Q`
True if exactly one of the constraints P and Q is true.
`:P #\/ :Q`
True if at least one of the constraints P and Q is true.
`:P #=> :Q`
`:Q #<= :P`
True if the constraint Q is true or the constraint P is false.
`:P #<=> :Q`
True if the constraints P and Q are both true or both false.

Note that the reification scheme introduced in section Reified Constraints is a special case of a propositional constraint.
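For example, a disjunction of arithmetic constraints propagates as soon as one of its disjuncts becomes disentailed:

```
| ?- X in 0..9, X #< 3 #\/ X #> 6, X #> 2.

X in 7..9 ?

yes
```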

### Combinatorial Constraints

The constraints listed here are sometimes called symbolic constraints. They are currently not reifiable.

`count(+Val,+List,+RelOp,?Count)`
where Val is an integer, List is a list of integers or domain variables, Count is an integer or a domain variable, and RelOp is a relational symbol as in section Arithmetic Constraints. True if N is the number of elements of List that are equal to Val and N RelOp Count. Thus, `count/4` is a generalization of `exactly/3` (not an exported constraint) that was used in an example earlier. `count/4` maintains domain-consistency; see section The Constraint System.
`element(?X,+List,?Y)`
where X and Y are integers or domain variables and List is a list of integers or domain variables. True if the X:th element of List is Y. Operationally, the domains of X and Y are constrained so that for every element in the domain of X, there is a compatible element in the domain of Y, and vice versa. This constraint uses an optimized algorithm for the special case where List is ground. `element/3` maintains domain-consistency; see section The Constraint System.
`relation(?X,+MapList,?Y)`
where X and Y are integers or domain variables and MapList is a list of `integer-ConstantRange` pairs, where the integer keys occur uniquely (see section Syntax of Indexicals). True if MapList contains a pair `X-R` and Y is in the range denoted by R. Operationally, the domains of X and Y are constrained so that for every element in the domain of X, there is a compatible element in the domain of Y, and vice versa. If MapList is not ground, the constraint must be wrapped in `call/1` to postpone goal expansion until runtime. An arbitrary binary constraint can be defined with `relation/3`. `relation/3` maintains domain-consistency; see section The Constraint System.
`all_different(+Variables)`
where Variables is a list of domain variables or integers. Each variable is constrained to take a value that is unique among the variables. Operationally, this is equivalent to posting an inequality constraint for each pair of variables. This constraint wakes up each time a variable becomes ground, pruning the domains of the other variables. The implementation is incomplete, i.e. domain elements for Variables for which the constraint has no solutions are not always eliminated. A complete version is provided by the `all_distinct/1` constraint (see below).
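As a small illustration of `element/3` with a ground list, fixing Y determines X:

```
| ?- element(X, [10,20,30,40], 30).

X = 3 ?

yes
```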

The following constraint is a complete version of the `all_different/1` constraint. The algorithm is due to Regin [Regin 94].

`all_distinct(+Variables)`
where Variables is a list of domain variables or integers. Each variable is constrained to take a value that is unique among the variables. The implementation is complete, i.e. it maintains domain-consistency; see section The Constraint System.

The following is a constraint over two lists of length n of variables. Each variable is constrained to take a value in 1,...,n that is unique for its list. Furthermore, the lists are dual in a sense described below.

`assignment(+Xs, +Ys)`
where Xs and Ys are lists of domain variables or integers, both of length n. True if all Xi, Yi in 1,...,n and Xi=j iff Yj=i.
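For example, a ground Xs determines the dual list Ys:

```
| ?- length(Ys, 3), assignment([2,1,3], Ys).

Ys = [2,1,3] ?

yes
```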

The following constraint can be thought of as constraining n nodes in a graph to form a Hamiltonian circuit. The nodes are numbered from 1 to n. The circuit starts in node 1, visits each node, and returns to the origin.

`circuit(+Succ)`
`circuit(+Succ, +Pred)`
where Succ is a list of length n of domain variables or integers. The i:th element of Succ (Pred) is the successor (predecessor) of i in the graph. True if the values form a Hamiltonian circuit.
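For example, the two Hamiltonian circuits on three nodes can be enumerated as follows:

```
| ?- length(Succ, 3), domain(Succ, 1, 3),
     circuit(Succ), labeling([], Succ).

Succ = [2,3,1] ? ;

Succ = [3,1,2] ? ;

no
```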

The following four constraints can be thought of as constraining n tasks, each with a start time Sj and a duration Dj, so that no tasks ever overlap. The tasks can be seen as competing for some exclusive resource.

`serialized(+Starts,+Durations)`
where Starts = [S1,...,Sn] and Durations = [D1,...,Dn] are lists of domain variables with finite bounds or integers. True if:
```
Si+Di =< Sj OR Sj+Dj =< Si, for all 1 =< i < j =< n
```
The `serialized/2` constraint is merely a special case of `cumulative/4` (see below).
`serialized_resource(+Starts,+Durations,-Resource)`
expresses the same constraint, returning in Resource a term which can be passed to `order_resource/2` (see section Enumeration Predicates) in order to find a consistent ordering of the tasks.
`serialized_precedence(+Starts,+Durations,+Precedences)`
`serialized_precedence_resource(+Starts,+Durations,+Precedences,-Resource)`
express the same constraint, combined with some extra precedence constraint as encoded in Precedences which should be a list of terms of the form `d(i,j,d)`. Here, i and j should be task numbers, and d should be a positive integer or `sup`. Each term adds the constraint:
```
Si+d =< Sj OR Sj =< Si, if d is an integer
Sj =< Si, if d is sup
```
`serialized_precedence/3` can model a set of tasks to be serialized with sequence-dependent setup times. For example, the following constraint models three tasks, all with duration 5, where task 1 must precede task 2 and task 3 must either complete before task 2 or start at least 10 time units after task 2 started:
```
| ?- domain([S1,S2,S3], 0, 20),
     serialized_precedence([S1,S2,S3], [5,5,5], [d(2,1,sup),d(2,3,10)]).

S1 in 0..15,
S2 in 5..20,
S3 in 0..20 ?
```
The bounds of `S1` and `S2` changed because of the precedence constraint. Setting `S2` to 5 will propagate `S1 = 0` and `S3 in 15..20`.

The following constraint can be thought of as constraining n tasks, each with a start time Sj, a duration Dj, and a resource amount Rj, so that the total resource consumption does not exceed Limit at any time:

`cumulative(+Starts,+Durations,+Resources,?Limit)`
where Starts = [S1,...,Sn], Durations = [D1,...,Dn], Resource = [R1,...,Rn] are lists of domain variables with finite bounds or integers, and Limit is a domain variable with finite bounds or an integer. Let
```
a = min(S1,...,Sn),
b = max(S1+D1,...,Sn+Dn),
Rij = Rj, if Sj =< i < Sj+Dj
Rij = 0, otherwise
```
The constraint holds if:
```
Ri1+...+Rin =< Limit, for all a =< i < b
```
The `cumulative/4` constraint is due to Aggoun and Beldiceanu [Aggoun & Beldiceanu 92]. The current implementation is incomplete and takes O(n^2) space.
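As a small sketch with invented task data, consider two tasks of duration 5 with resource demands 2 and 3 under a limit of 4:

```
| ?- S2 in 0..10,
     cumulative([0,S2], [5,5], [2,3], 4).
```

Since the combined demand 2+3 exceeds the limit 4, the tasks cannot overlap, so the second task should be pushed past the end of the first, i.e. `S2` should be pruned to `5..10`.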

### User-Defined Constraints

New constraints can be defined by the user on two different levels. On a higher level, constraints can be defined using the global constraint programming interface; see section Defining Global Constraints. Such constraints can embody specialized algorithms and use the full power of Prolog. They cannot be reified.

On a lower level, new primitive constraints can be defined with indexicals. In this case, they take part in the basic constraint solving algorithm and express custom designed rules for special cases of the overall local propagation scheme. Such constraints are called FD predicates; see section Defining Primitive Constraints. They can optionally be reified.

## Enumeration Predicates

As is usually the case with finite domain constraint solvers, this solver is not complete. That is, it does not ensure that the set of posted constraints is satisfiable. One must resort to search (enumeration) to check satisfiability and get particular solutions.

The following predicates provide several variants of search:

`indomain(?X)`
where X is a domain variable with finite bounds or an integer. Assigns, in increasing order via backtracking, a feasible value to X.
`minimize(:Goal,?X)`
`maximize(:Goal,?X)`
Uses a branch-and-bound algorithm with restart to find an assignment that minimizes (maximizes) the domain variable X. Goal should be a Prolog goal that constrains X to become assigned, and could be a `labeling/2` goal. The algorithm calls Goal repeatedly with a progressively tighter upper (lower) bound on X until a proof of optimality is obtained, at which time Goal and X are unified with values corresponding to the optimal solution.
`labeling(+Options, +Variables)`
where Variables is a list of domain variables or integers and Options is a list of search options. True if an assignment of the variables can be found which satisfies the posted constraints.

The search options control the order in which variables are selected for assignment (variable choice heuristic), the way in which choices are made for the selected variable (value choice heuristic), and whether all solutions or a single, optimal solution should be found. The search options are divided into four groups. One option may be selected per group. Finally, the number of assumptions (choices) made during the search can be collected.

The following options control the order in which the next variable is selected for assignment. None of these methods will select a variable with an unbounded domain. Thus `labeling/2` may succeed with such variables still unassigned.
`leftmost`
The leftmost variable is selected. This is the default.
`min`
The leftmost variable with the smallest lower bound is selected.
`max`
The leftmost variable with the greatest upper bound is selected.
`ff`
The first-fail principle is used: the leftmost variable with the smallest domain is selected.
`ffc`
The most constrained heuristic is used: a variable with the smallest domain is selected, breaking ties by (a) selecting the variable that has the most constraints suspended on it and (b) selecting the leftmost one.
The following options control the way in which choices are made for the selected variable X:
`step`
Makes a binary choice between `X #= B` and `X #\= B`, where B is the lower or upper bound of X. This is the default.
`enum`
Makes a multiple choice for X corresponding to the values in its domain.
`bisect`
Makes a binary choice between `X #=< M` and `X #> M`, where M is the midpoint of the domain of X. This strategy is also known as domain splitting.
The following options control the order in which the choices are made for the selected variable X:
`up`
The domain is explored in ascending order. This is the default.
`down`
The domain is explored in descending order.
The following options control whether all solutions should be enumerated by backtracking or whether a single solution that minimizes (maximizes) X is returned, if one exists.
`all`
All solutions are enumerated. This is the default.
`minimize(X)`
`maximize(X)`
Uses a branch-and-bound algorithm to find an assignment that minimizes (maximizes) the domain variable X. The labeling should constrain X to become assigned for all assignments of Variables.
Finally, the following option counts the number of assumptions (choices) made during the search:
`statistics(K)`
When a solution is found, K is unified with the number of choices made.

For example, to enumerate solutions using a static variable ordering, use:

```
| ?- constraints(Variables),
     labeling([], Variables).    % same as [leftmost,step,up,all]
```

To minimize a cost function using branch-and-bound search, a dynamic variable ordering using the first-fail principle, and domain splitting exploring the upper part of domains first, use:

```
| ?- constraints(Variables, Cost),
     labeling([ff,bisect,down,minimize(Cost)], Variables).
```

As opposed to the predicates above which search for consistent assignments to domain variables, the following predicate searches for a consistent ordering among tasks competing for an exclusive resource, without necessarily fixing their start times:

`order_resource(+Options, +Resource)`
where Options is a list of search options and Resource represents a resource as returned by `serialized_resource/3` (see section Combinatorial Constraints) on which tasks must be serialized. True if a total ordering can be imposed on the tasks, enumerating all such orderings via backtracking. The search options control the construction of the total ordering. It may contain at most one of the following atoms, selecting a strategy:
`first`
The ordering is built by repetitively selecting some task to be placed before all others.
`last`
The ordering is built by repetitively selecting some task to be placed after all others.
and at most one of the following atoms, controlling which task to select at each step. If `first` is chosen (the default), the task with the smallest value is selected, otherwise the task with the greatest value is selected.
`est`
The tasks are ordered by earliest start time.
`lst`
The tasks are ordered by latest start time.
`ect`
The tasks are ordered by earliest completion time.
`lct`
The tasks are ordered by latest completion time.
`[first,est]` (the default) and `[last,lct]` can be good heuristics.

## Statistics Predicates

The following predicates can be used to get execution statistics.

`fd_statistics(?Key, ?Value)`
This allows a program to access execution statistics specific to this solver. General statistics about CPU time, memory consumption etc. are available from the built-in predicate `statistics/2`. For each of the possible keys Key, Value is unified with the current value of a counter, which is simultaneously zeroed. The following counters are maintained. See section The Constraint System, for details of what they all mean:
`resumptions`
The number of times a constraint was resumed.
`entailments`
The number of times a (dis)entailment was detected by a constraint.
`prunings`
The number of times a domain was pruned.
`backtracks`
The number of times a contradiction was found by a domain being wiped out, or by a global constraint signalling failure. Other causes of backtracking, such as failed Prolog tests, are not covered by this counter.
`constraints`
The number of constraints created.
`fd_statistics`
Displays on the `user_error` stream a summary of the above statistics. All counters are zeroed.
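For example, to measure the number of backtracks performed by a labeling, the counter can be zeroed before the search and read afterwards (here using the `mm/2` program from section A Constraint Satisfaction Problem):

```
| ?- fd_statistics(backtracks, _),     % zero the counter
     mm([S,E,N,D,M,O,R,Y], [ff]),
     fd_statistics(backtracks, B).
```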

## Answer Constraints

By default, the answer constraint only shows the projection of the store onto the variables that occur in the query, but not any constraints that may be attached to these variables, nor any domains or constraints attached to other variables. This is a conscious decision, as no efficient algorithm for projecting answer constraints onto the query variables is known for this constraint system.

It is possible, however, to get a complete answer constraint including all variables that took part in the computation and their domains and attached constraints. This is done by asserting a clause for the following predicate:

`clpfd:full_answer`
A dynamic hook predicate. If false (the default), the answer constraint only contains the domains of the query variables. If true, the answer constraint contains the domains and any attached constraints of all variables.

## The Constraint System

### Definitions

The constraint system is based on domain constraints and indexicals. A domain constraint is an expression `X::I`, where X is a domain variable and I is a nonempty set of integers.

A set S of domain constraints is called a store. D(X,S), the domain of X in S, is defined as the intersection of all I such that `X::I` belongs to S. The store is contradictory if the domain of some variable is empty; otherwise, it is consistent. A consistent store S' is an extension of a store S iff, for all variables X, D(X,S') is contained in D(X,S).

The following definitions, adapted from [Van Hentenryck et al. 95], define important notions of consistency and entailment of constraints wrt. stores.

A ground constraint is true if it holds and false otherwise.

A constraint C is domain-consistent wrt. S iff, for each variable Xi and value Vi in D(Xi,S), there exist values Vj in D(Xj,S), 1 =< j =< n, i\=j, such that C(V1,...,Vn) is true.

A constraint C is domain-entailed by S iff, for all values Vj in D(Xj,S), 1 =< j =< n, C(V1,...,Vn) is true.

Let D'(X,S) denote the interval min(D(X,S))..max(D(X,S)).

A constraint C is interval-consistent wrt. S iff, for each variable Xi there exist values Vj in D'(Xj,S), 1 =< j =< n, i\=j, such that C(V1,...,min(D(Xi,S)),...,Vn) and C(V1,...,max(D(Xi,S)),...,Vn) are both true.

A constraint C is interval-entailed by S iff, for all values Vj in D'(Xj,S), 1 =< j =< n, C(V1,...,Vn) is true.

Finally, a constraint is domain-disentailed (interval-disentailed) by S iff its negation is domain-entailed (interval-entailed) by S.

### Pitfalls of Interval Reasoning

In most circumstances, arithmetic constraints only maintain interval-consistency and only detect interval-entailment and -disentailment. There are cases where a constraint maintaining interval-consistency detects a contradiction even though the constraint is not yet interval-disentailed, as the following example illustrates. Note that `X #\= Y` maintains domain-consistency when both arguments are variables or constants:

```
| ?- X+Y #= Z, X=1, Z=6, Y in 1..10, Y #\= 5.

no
| ?- X+Y #= Z #<=> B, X=1, Z=6, Y in 1..10, Y #\= 5.

X = 1,
Z = 6,
Y in(1..4)\/(6..10),
B in 0..1 ?
```

Since `1+5#=6` holds, `X+Y #= Z` is not interval-disentailed, although any attempt to make it interval-consistent wrt. the store results in a contradictory store.

## Defining Global Constraints

### The Global Constraint Programming Interface

This section describes a programming interface by means of which new constraints can be written. The interface consists of a set of predicates provided by this library module. Constraints defined in this way can take arbitrary arguments and may use any constraint solving algorithm, provided it makes sense. Reification cannot be expressed in this interface; instead, reification may be achieved by explicitly passing a 0/1-variable to the constraint in question.

Global constraints have state which may be updated each time the constraint is resumed. The state information may be used e.g. in incremental constraint solving.

The following two predicates are the principal entrypoints for defining and posting new global constraints:

`clpfd:dispatch_global(+Constraint, +State0, -State, -Actions)`
A multifile hook predicate, telling the solver how to solve constraints of the form Constraint. When defining a new constraint, a clause of this predicate must be added. Its body defines a constraint solving method and should always succeed deterministically. When a global constraint is called or resumed, the solver will call this predicate to deal with the constraint. State0 and State are the old and new state respectively. The constraint solving method must not invoke the constraint solver recursively e.g. by binding variables or posting new constraints; instead, Actions should be unified with a list of requests to the solver. Each request should be of the following form:
`exit`
The constraint has become entailed, and ceases to exist.
`fail`
The constraint has become disentailed, causing the solver to backtrack.
`X = V`
The solver binds X to V.
`X in R`
The solver constrains X to be a member of the ConstantRange R (see section Syntax of Indexicals).
`X in_set S`
The solver constrains X to be a member of the FD set S (see section FD Set Operations).
`call(Goal)`
The solver calls the goal or constraint Goal, which should be module prefixed unless it is a built-in predicate or an exported predicate of the `clpfd` module.
`fd_global(+Constraint, +State, +Susp)`
where Constraint is a constraint goal, State is its initial state, and Susp is a term encoding how the constraint should wake up in response to domain changes. This posts the constraint. Susp is a list of F(Var) terms where Var is a variable to suspend on and F is a functor encoding when to wake up:
`dom(X)`
wake up when the domain of X has changed
`min(X)`
wake up when the lower bound of X has changed
`max(X)`
wake up when the upper bound of X has changed
`minmax(X)`
wake up when the lower or upper bound of X has changed
`val(X)`
wake up when X has become ground

For an example of usage, see section A Global Constraint Example.
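
As a smaller sketch of the protocol, the following defines a constraint meaning X =< Y on top of `fd_global/3` and `clpfd:dispatch_global/4`. The names `le/2` and `le_bound/2` are hypothetical, and the built-in `#=</2` should of course be used instead in real code; the example only illustrates the shape of a solver method and its Actions list:

```
:- use_module(library(clpfd)).

% le(X,Y): X =< Y, posted as a stateless global constraint
% that wakes up whenever a bound of X or Y changes
le(X, Y) :-
        fd_global(le(X,Y), no_state, [minmax(X),minmax(Y)]).

:- multifile clpfd:dispatch_global/4.
clpfd:dispatch_global(le(X,Y), State, State, Actions) :-
        fd_min(X, MinX), fd_max(X, MaxX),
        fd_min(Y, MinY), fd_max(Y, MaxY),
        (   le_bound(MaxX, MinY) ->        % entailed: max(X) =< min(Y)
            Actions = [exit]
        ;   \+ le_bound(MinX, MaxY) ->     % disentailed: min(X) > max(Y)
            Actions = [fail]
        ;   % prune: X =< max(Y), Y >= min(X)
            Actions = [X in inf..MaxY, Y in MinX..sup]
        ).

% inf/sup-aware comparison of domain bounds (hypothetical helper)
le_bound(inf, _) :- !.
le_bound(_, sup) :- !.
le_bound(A, B) :- integer(A), integer(B), A =< B.
```

Note that the solver method itself never binds variables; all pruning is delegated to the solver via the Actions list, as required by the interface.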

### Reflection Predicates

The constraint solving method needs access to information about the current domains of variables. This is provided by the following predicates, which are all constant time operations.

`fd_min(?X, ?Min)`
where X is a domain variable (or an integer). Min is unified with the smallest value in the current domain of X, i.e. an integer or the atom `inf` denoting minus infinity.
`fd_max(?X, ?Max)`
where X is a domain variable (or an integer). Max is unified with the upper bound of the current domain of X, i.e. an integer or the atom `sup` denoting infinity.
`fd_size(?X, ?Size)`
where X is a domain variable (or an integer). Size is unified with the size of the current domain of X, if the domain is finite, or the atom `sup` otherwise.
`fd_set(?X, ?Set)`
where X is a domain variable (or an integer). Set is unified with an FD set term denoting the internal representation of the current domain of X; see below.
`fd_dom(?X, ?Range)`
where X is a domain variable (or an integer). Range is unified with a ConstantRange (see section Syntax of Indexicals) denoting the current domain of X.

The following predicate can be used for computing the set of variables that are transitively connected via constraints to some given variables.

`fd_closure(+Vars, -Closure)`
Given a list Vars of domain variables, Closure is the set of variables (including Vars) that can be transitively reached via constraints posted so far.
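
For instance, on a variable with a pierced domain the reflection predicates behave along the following lines (the exact way the top level prints domains may differ):

```
| ?- X in 1..9, X #\= 5,
     fd_min(X, Min), fd_max(X, Max),
     fd_size(X, Size), fd_dom(X, Dom).

Dom = (1..4)\/(6..9),
Min = 1,
Max = 9,
Size = 8,
X in (1..4)\/(6..9)
```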

### FD Set Operations

The domains of variables are internally represented compactly as FD set terms. The details of this representation are subject to change and should not be relied on. Therefore, a number of operations on FD sets are provided, as such terms play an important role in the interface. The following operations are the primitive ones:

`is_fdset(+Set)`
Set is a valid FD set.
`empty_fdset(?Set)`
Set is the empty FD set.
`fdset_parts(?Set, ?Min, ?Max, ?Rest)`
Set is an FD set which is a union of the non-empty interval Min..Max and the FD set Rest, and all elements of Rest are greater than Max+1. Min and Max are both integers or the atoms `inf` and `sup`, denoting minus and plus infinity, respectively. Either Set or all the other arguments must be ground.

The following operations can all be defined in terms of the primitive ones, but in most cases, a more efficient implementation is used:

`empty_interval(+Min, +Max)`
Min..Max is an empty interval.
`fdset_interval(?Set, ?Min, ?Max)`
Set is an fdset which is the non-empty interval Min..Max.
`fdset_singleton(?Set, ?Elt)`
Set is an FD set containing Elt only. At least one of the arguments must be ground.
`fdset_min(+Set, -Min)`
Min is the lower bound of Set.
`fdset_max(+Set, -Max)`
Max is the upper bound of Set. This operation is linear in the number of intervals of Set.
`fdset_size(+Set, -Size)`
Size is the cardinality of Set, represented as `sup` if Set is infinite. This operation is linear in the number of intervals of Set.
`list_to_fdset(+List, -Set)`
Set is the FD set containing the elements of List. Slightly more efficient if List is ordered.
`fdset_to_list(+Set, -List)`
List is an ordered list of the elements of Set, which must be finite.
`range_to_fdset(+Range, -Set)`
Set is the FD set containing the elements of the ConstantRange (see section Syntax of Indexicals) Range.
`fdset_to_range(+Set, -Range)`
Range is a constant interval, a singleton constant set, or a union of such, denoting the same set as Set.
`fdset_add_element(+Set1, +Elt, -Set2)`
Set2 is Set1 with Elt inserted in it.
`fdset_del_element(+Set1, +Elt, -Set2)`
Set2 is like Set1 but with Elt removed.
`fdset_disjoint(+Set1, +Set2)`
The two FD sets have no elements in common.
`fdset_intersect(+Set1, +Set2)`
The two FD sets have at least one element in common.
`fdset_intersection(+Set1, +Set2, -Intersection)`
Intersection is the intersection between Set1 and Set2.
`fdset_intersection(+Sets, -Intersection)`
Intersection is the intersection of all the sets in Sets.
`fdset_member(?Elt, +Set)`
is true when Elt is a member of Set. If Elt is unbound, Set must be finite.
`fdset_eq(+Set1, +Set2)`
True when the two arguments represent the same set, i.e. they are identical.
`fdset_subset(+Set1, +Set2)`
Every element of Set1 appears in Set2.
`fdset_subtract(+Set1, +Set2, -Difference)`
Difference contains all and only the elements of Set1 which are not also in Set2.
`fdset_union(+Set1, +Set2, -Union)`
Union is the union of Set1 and Set2.
`fdset_union(+Sets, -Union)`
Union is the union of all the sets in Sets.
`fdset_complement(+Set, -Complement)`
Complement is the complement of Set wrt. `inf..sup`.
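
As a small sketch of how these operations compose, the following query builds the set with elements {1,2,3,5,6,7} from two intervals and inspects it. The FD set terms themselves are internal and should only be manipulated through these predicates:

```
| ?- fdset_interval(S1, 1, 3),      % S1 denotes {1,2,3}
     fdset_interval(S2, 5, 7),      % S2 denotes {5,6,7}
     fdset_union(S1, S2, S),
     fdset_size(S, Size),           % Size = 6
     fdset_to_range(S, Range).      % Range denotes (1..3)\/(5..7)
```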

### A Global Constraint Example

The following example defines a new global constraint `exactly(X,L,N)` which is true if X occurs exactly N times in the list L of integers and domain variables. A version defined in terms of reified equalities was presented earlier; see section Reified Constraints.

This example illustrates the use of state information. The state has two components: the list of variables that could still be X, and the number of variables still required to be X.

The constraint is defined to wake up on any domain change.

```
/*
An implementation of exactly(I, X[1]...X[m], N):

Necessary condition: 0 =< N =< m.
Rewrite rules:

 |= X[i]=I  ==> exactly(I, X[1]...X[i-1],X[i+1]...X[m], N-1)
 |= X[i]\=I ==> exactly(I, X[1]...X[i-1],X[i+1]...X[m], N)
 |= N=0     ==> X[1]\=I ... X[m]\=I
 |= N=m     ==> X[1]=I  ... X[m]=I
*/
:- use_module(library(clpfd)).

% the entrypoint
exactly(I, Xs, N) :-
        dom_suspensions(Xs, Susp),
        fd_global(exactly(I,Xs,N), state(Xs,N), Susp).

dom_suspensions([], []).
dom_suspensions([X|Xs], [dom(X)|Susp]) :-
        dom_suspensions(Xs, Susp).

% the solver method
:- multifile clpfd:dispatch_global/4.
clpfd:dispatch_global(exactly(I,_,_), state(Xs0,N0), state(Xs,N), Actions) :-
        exactly_solver(I, Xs0, Xs, N0, N, Actions).

exactly_solver(I, Xs0, Xs, N0, N, Actions) :-
        ex_filter(Xs0, Xs, N0, N, I),
        length(Xs, M),
        (   N=:=0 -> Actions = [exit|Ps], ex_neq(Xs, I, Ps)
        ;   N=:=M -> Actions = [exit|Ps], ex_eq(Xs, I, Ps)
        ;   N>0, N<M -> Actions = []
        ;   Actions = [fail]
        ).

% rules [1,2]: filter the X's, decrementing N
ex_filter([], [], N, N, _).
ex_filter([X|Xs], Ys, L, N, I) :- X==I, !,
        M is L-1,
        ex_filter(Xs, Ys, M, N, I).
ex_filter([X|Xs], Ys0, L, N, I) :-
        fd_set(X, Set),
        fdset_member(I, Set), !,
        Ys0 = [X|Ys],
        ex_filter(Xs, Ys, L, N, I).
ex_filter([_|Xs], Ys, L, N, I) :-
        ex_filter(Xs, Ys, L, N, I).

% rule [3]: all must be neq I
ex_neq(Xs, I, Ps) :-
        fdset_singleton(Set0, I),
        fdset_complement(Set0, Set),
        eq_all(Xs, Set, Ps).

% rule [4]: all must be eq I
ex_eq(Xs, I, Ps) :-
        fdset_singleton(Set, I),
        eq_all(Xs, Set, Ps).

eq_all([], _, []).
eq_all([X|Xs], Set, [X in_set Set|Ps]) :-
        eq_all(Xs, Set, Ps).

end_of_file.

% sample queries:
| ?- exactly(5,[A,B,C],1), A=5.

A = 5,
B in (inf..4)\/(6..sup),
C in (inf..4)\/(6..sup)

| ?- exactly(5,[A,B,C],1), A in 1..2, B in 3..4.

C = 5,
A in 1..2,
B in 3..4
```

## Defining Primitive Constraints

Indexicals are the principal means of defining constraints, but it is usually not necessary to resort to this level of programming--most commonly used constraints are available in a library and/or via macro-expansion. The key feature of indexicals is that they give the programmer precise control over aspects of the operational semantics of the constraints. Trade-offs can be made between the computational cost of the constraints and their pruning power. The indexical language provides many degrees of freedom for the user to select the level of consistency to be maintained depending on application-specific needs.

### Indexicals

An indexical is a reactive functional rule of the form `X in R`, where R is a set valued range expression (see below). See section Syntax of Indexicals, for a grammar defining indexicals and range expressions.

Indexicals can play one of two roles: propagating indexicals are used for constraint solving, and checking indexicals are used for entailment checking. When a propagating indexical fires, R is evaluated in the current store S, which is extended by adding the new domain constraint `X::S(R)` to the store, where S(R) denotes the value of R in S. When a checking indexical fires, it checks if D(X,S) is contained in S(R), and if so, the constraint corresponding to the indexical is detected as entailed.

### Range Expressions

A range expression has one of the following forms, where Ri denote range expressions, Ti denote integer valued term expressions, S(Ti) denotes the integer value of Ti in S, X denotes a variable, I denotes an integer, and S denotes the current store.

`dom(X)`
evaluates to D(X,S)
`{T1,...,Tn}`
evaluates to `{S(T1),...,S(Tn)}`. Any term expression containing a subexpression which is a variable that is not "quantified" by `unionof/3` will only be evaluated when this variable has been assigned.
`T1..T2`
evaluates to the interval between S(T1) and S(T2).
`R1/\R2`
evaluates to the intersection of S(R1) and S(R2)
`R1\/R2`
evaluates to the union of S(R1) and S(R2)
`\R2`
evaluates to the complement of S(R2)
`R1+R2`
`R1+T2`
evaluates to S(R2) or S(T2) added pointwise to S(R1)
`-R2`
evaluates to S(R2) negated pointwise
`R1-R2`
`R1-T2`
`T1-R2`
evaluates to S(R2) or S(T2) subtracted pointwise from S(R1) or S(T1)
`R1 mod R2`
`R1 mod T2`
evaluates to S(R1) pointwise modulo S(R2) or S(T2)
`R1 ? R2`
evaluates to S(R2) if S(R1) is a non-empty set; otherwise, evaluates to the empty set. This expression is commonly used in the context `(R1 ? (inf..sup) \/ R3)`, which evaluates to S(R3) if S(R1) is an empty set; otherwise, evaluates to `inf..sup`. As an optimization, R3 is not evaluated while the value of R1 is a non-empty set.
`unionof(X,R1,R2)`
evaluates to the union of S(Expr_1)...S(Expr_N), where each Expr_I has been formed by substituting K for X in R2, where K is the I:th element of S(R1). See section N Queens, for an example of usage. N.B. If S(R1) is infinite, the evaluation of the indexical will be abandoned, and the indexical will simply suspend.
`switch(T1,MapList)`
evaluates to S(Expr) if S(T1) equals Key and MapList contains a pair `Key-Expr`. Otherwise, evaluates to the empty set.

When used in the body of an FD predicate (see section Goal Expanded Constraints), a `relation/3` expression expands to two indexicals, each consisting of a `switch/2` expression nested inside a `unionof/3` expression. Thus, the following constraints are equivalent:

```
p(X, Y) +: relation(X, [1-{1},2-{1,2},3-{1,2,3}], Y).

q(X, Y) +:
        X in unionof(B,dom(Y),switch(B,[1-{1,2,3},2-{2,3},3-{3}])),
        Y in unionof(B,dom(X),switch(B,[1-{1},2-{1,2},3-{1,2,3}])).
```

### Term Expressions

A term expression has one of the following forms, where T1 and T2 denote term expressions, X denotes a variable, I denotes an integer, and S denotes the current store.

`min(X)`
evaluates to the minimum of D(X,S)
`max(X)`
evaluates to the maximum of D(X,S)
`card(X)`
evaluates to the size of D(X,S)
`X`
evaluates to the integer value of X. A subexpression of this form, not "quantified" by `unionof/3`, will cause the evaluation to suspend until the variable is assigned.
`I`
an integer
`inf`
minus infinity
`sup`
plus infinity
`-T1`
evaluates to S(T1) negated
`T1+T2`
evaluates to the sum of S(T1) and S(T2)
`T1-T2`
evaluates to the difference of S(T1) and S(T2)
`T1*T2`
evaluates to the product of S(T1) and S(T2), where S(T2) must not be negative
`T1/>T2`
evaluates to the quotient of S(T1) and S(T2), rounded up, where S(T2) must be positive
`T1/<T2`
evaluates to the quotient of S(T1) and S(T2), rounded down, where S(T2) must be positive
`T1 mod T2`
evaluates to the modulo of S(T1) and S(T2)
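
The term expressions `min/1` and `max/1` already suffice for a useful inequality constraint. The following sketch (the library's actual definition may differ in detail) propagates X =< Y by bounds reasoning:

```
'x=<y'(X, Y) +:
        X in inf..max(Y),      % X can be at most the largest possible Y
        Y in min(X)..sup.      % Y must be at least the smallest possible X
```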

### Monotonicity of Indexicals

A range R is monotone in S iff the value of R in S' is contained in the value of R in S, for every extension S' of S. A range R is anti-monotone in S iff the value of R in S is contained in the value of R in S', for every extension S' of S. By abuse of notation, we will say that `X in R` is (anti-)monotone iff R is (anti-)monotone.

The consistency or entailment of a constraint C expressed as indexicals `X in R` in a store S is checked by considering the relationship between D(X,S) and S(R), together with the (anti-)monotonicity of R in S. The details are given in section Execution of Propagating Indexicals and section Execution of Checking Indexicals.

The solver checks (anti-)monotonicity by requiring that certain variables occurring in the indexical be ground. This sufficient condition can sometimes be false for an (anti-)monotone indexical, but such situations are rare in practice.

### FD predicates

The following example defines the constraint X+Y=T as an FD predicate in terms of three indexicals. Each indexical is a rule responsible for removing values detected as incompatible from one particular constraint argument. Indexicals are not Prolog goals; thus, the example does not express a conjunction. However, an indexical may make the store contradictory, in which case backtracking is triggered:

```
plus(X,Y,T) +:
        X in min(T) - max(Y) .. max(T) - min(Y),
        Y in min(T) - max(X) .. max(T) - min(X),
        T in min(X) + min(Y) .. max(X) + max(Y).
```

The above definition contains a single clause used for constraint solving. The first indexical wakes up whenever the bounds of S(T) or S(Y) are updated, and removes from D(X,S) any values that are not compatible with the new bounds of T and Y. Note that in the event of "holes" in the domains of T or Y, D(X,S) may contain some values that are incompatible with X+Y=T but go undetected. Like most built-in arithmetic constraints, the above definition maintains interval-consistency, which is significantly cheaper to maintain than domain-consistency and suffices in most cases. The constraint could for example be used as follows:

```
| ?- X in 1..5, Y in 2..8, plus(X,Y,T).

X in 1..5,
Y in 2..8,
T in 3..13 ?

yes
```

Thus, when an FD predicate is called, the `+:` clause is activated.

The definition of a user constraint has to specify what domain constraints should be added to the constraint store when the constraint is posted. Therefore the FD predicate contains a set of indexicals, each representing a domain constraint to be added to the constraint store. The actual domain constraint depends on the constraint store itself. For example, the third indexical in the above FD predicate prescribes the domain constraint `T :: 3..13` if the store contains `X :: 1..5, Y :: 2..8`. As the domain of some variables gets narrower, the indexical may enforce a new, stricter constraint on some other variables. Therefore such an indexical (called a propagating indexical) can be viewed as an agent reacting to the changes in the store by enforcing further changes in the store.

In general there are three stages in the lifetime of a propagating indexical. When it is posted it may not be evaluated immediately (e.g. has to wait until some variables are ground before being able to modify the store). Until the preconditions for the evaluation are satisfied, the agent does not enforce any constraints. When the indexical becomes evaluable the resulting domain constraint is added to the store. The agent then waits and reacts to changes in the domains of variables occurring in the indexical by re-evaluating it and adding the new, stricter constraint to the store. Eventually the computation reaches a phase when no further refinement of the store can result in a more precise constraint (the indexical is entailed by the store), and then the agent can cease to exist.

A necessary condition for the FD predicate to be correctly defined is the following: for any store mapping each variable to a singleton domain the execution of the indexicals should succeed without contradiction exactly when the predicate is intended to be true.

There can be several alternative definitions for the same user constraint with different strengths in propagation. For example, the definition of `plusd` below encodes the same `X+Y=T` constraint as the `plus` predicate above, but maintaining domain consistency:

```
plusd(X,Y,T) +:
        X in dom(T) - dom(Y),
        Y in dom(T) - dom(X),
        T in dom(X) + dom(Y).

| ?- X in {1}\/{3}, Y in {10}\/{20}, plusd(X, Y, T).

X in {1}\/{3},
Y in {10}\/{20},
T in {11}\/{13}\/{21}\/{23} ?

yes
```

This costs more in terms of execution time, but gives more precise results. For singleton domains `plus` and `plusd` behave in the same way.

In our design, general indexicals can only appear in the context of FD predicate definitions. The rationale for this restriction is the need for general indexicals to be able to suspend and resume, and this ability is only provided by the FD predicate mechanism.

If the program merely posts a constraint, it suffices for the definition to contain a single clause for solving the constraint. If a constraint is reified or occurs in a propositional formula, the definition must contain four clauses for solving and checking entailment of the constraint and its negation. The role of each clause is reflected in the "neck" operator. The following table summarizes the different forms of indexical clauses corresponding to a constraint C. In all cases, Head should be a compound term with all arguments being distinct variables:

`Head +: Indexicals.`
The clause consists of propagating indexicals for solving C.
`Head -: Indexicals.`
The clause consists of propagating indexicals for solving the negation of C.
`Head +? Indexical.`
The clause consists of a single checking indexical for testing entailment of C.
`Head -? Indexical.`
The clause consists of a single checking indexical for testing entailment of the negation of C.

When a constraint is reified, the solver spawns two reactive agents corresponding to detecting entailment and disentailment. Eventually, one of them will succeed in this and consequently will bind B to 0 or 1. A third agent is spawned, waiting for B to become assigned, at which time the constraint (or its negation) is posted. In the mean time, the constraint may have been detected as (dis)entailed, in which case the third agent is dismissed. The waiting is implemented by means of the coroutining facilities of SICStus Prolog.

As an example of a constraint with all methods defined, consider the following library constraint defining a disequation between two domain variables:

```
'x\\=y'(X,Y) +:
        X in \{Y},
        Y in \{X}.
'x\\=y'(X,Y) -:
        X in dom(Y),
        Y in dom(X).
'x\\=y'(X,Y) +?
        X in \dom(Y).
'x\\=y'(X,Y) -?
        X in {Y}.
```

The following sections provide more precise coding rules and operational details for indexicals. `X in R` denotes an indexical corresponding to a constraint C. S denotes the current store.

### Execution of Propagating Indexicals

Consider the definition of a constraint C containing a propagating indexical `X in R`. Let TV(X,C,S) denote the set of values for X that can make C true in some ground extension of the store S. Then the indexical should obey the following coding rules:

• all arguments of C except X should occur in R
• if R is ground in S, S(R) = TV(X,C,S)

If the coding rules are observed, S(R) can be proven to contain TV(X,C,S) for all stores in which R is monotone. Hence it is natural for the implementation to wait until R becomes monotone before admitting the propagating indexical for execution. The execution of `X in R` thus involves the following:

• If D(X,S) is disjoint from S(R), a contradiction is detected.
• If D(X,S) is contained in S(R), D(X,S) does not contain any values known to be incompatible with C, and the indexical suspends, unless R is ground in S, in which case C is detected as entailed.
• Otherwise, D(X,S) contains some values that are known to be incompatible with C. Hence, `X::S(R)` is added to the store (X is pruned), and the indexical suspends, unless R is ground in S, in which case C is detected as entailed.

A propagating indexical is scheduled for execution as follows:

• it is evaluated initially as soon as it has become monotone
• it is re-evaluated when one of the following conditions occurs:
1. the domain of a variable Y that occurs as `dom(Y)` or `card(Y)` in R has been updated
2. the lower bound of a variable Y that occurs as `min(Y)` in R has been updated
3. the upper bound of a variable Y that occurs as `max(Y)` in R has been updated

### Execution of Checking Indexicals

Consider the definition of a constraint C containing a checking indexical `X in R`. Let FV(X,C,S) denote the set of values for X that can make C false in some ground extension of the store S. Then the indexical should obey the following coding rules:

• all arguments of C except X should occur in R
• if R is ground in S, S(R) = TV(X,C,S)

If the coding rules are observed, S(R) can be proven to exclude FV(X,C,S) for all stores in which R is anti-monotone. Hence it is natural for the implementation to wait until R becomes anti-monotone before admitting the checking indexical for execution. The execution of `X in R` thus involves the following:

• If D(X,S) is contained in S(R), none of the possible values for X can make C false, and so C is detected as entailed.
• Otherwise, if D(X,S) is disjoint from S(R) and R is ground in S, all possible values for X will make C false, and so C is detected as disentailed.
• Otherwise, D(X,S) contains some values that could make C true and some that could make C false, and the indexical suspends.

A checking indexical is scheduled for execution as follows:

• it is evaluated initially as soon as it has become anti-monotone
• it is re-evaluated when one of the following conditions occurs:
1. the domain of X has been pruned, or X has been assigned
2. the domain of a variable Y that occurs as `dom(Y)` or `card(Y)` in R has been pruned
3. the lower bound of a variable Y that occurs as `min(Y)` in R has been increased
4. the upper bound of a variable Y that occurs as `max(Y)` in R has been decreased

### Goal Expanded Constraints

The arithmetic, membership, and propositional constraints described earlier are transformed at compile time to conjunctions of goals of library constraints.

Sometimes it is necessary to postpone the expansion of a constraint until runtime, e.g. if the arguments are not instantiated enough. This can be achieved by wrapping `call/1` around the constraint.
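
For example, if the arithmetic expression itself is not known until runtime, compile-time expansion is impossible. A hypothetical evaluator predicate could defer the expansion as follows:

```
% eval(Expr, Value): Expr is a variable at compile time, so the
% #=/2 goal cannot be expanded until eval/2 is actually called
eval(Expr, Value) :- call(Value #= Expr).
```

A query such as `| ?- eval(2*X+1, 7).` would then post the constraint `2*X+1 #= 7` at runtime, binding X to 3.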

Although space economic (linear in the size of the source code), the expansion of a constraint to library goals can have an overhead compared to expressing the constraint in terms of indexicals. Temporary variables holding intermediate values may have to be introduced, and the grain size of the constraint solver invocations can be rather small. The translation of constraints to library goals has been greatly improved in the current version, so these problems have virtually disappeared. However, for backward compatibility, an implementation by compilation to indexicals of the same constraints is also provided. An FD predicate may be defined by a single clause:

```
    Head +: Constraint.
```

where Constraint is an arithmetic constraint or an `element/3` or a `relation/3` constraint. This translation is only available for `+:` clauses; thus, Head cannot be reified.

In the case of arithmetic constraints, the constraint must be over linear terms (see section Syntax of Indexicals). The memory consumption of the FD predicate will be quadratic in the size of the source code. The alternative version of `sum/8` in section Send More Money illustrates this technique.

In the case of `element(X,L,Y)` or `relation(X,L,Y)`, the memory consumption of the FD predicate will be linear in the size of the source code. The execution time of the initial evaluation of the FD predicate will be linear in the size of the initial domains for X and Y; if these domains are infinite, no propagation will take place.
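
For instance, a hypothetical table constraint mapping X to its square, for X in 1..4, could be compiled to indexicals this way:

```
% square(X, Y): Y = X*X for X in 1..4, expanded at
% compile time into indexicals via element/3
square(X, Y) +: element(X, [1,4,9,16], Y).
```

Posting `square(X, Y), Y #> 5` should then prune X to 3..4 and Y to {9}\/{16}.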

## Example Programs

This section contains a few example programs. The first two programs are included in a benchmark suite that comes with the distribution. The benchmark suite is run by typing:

```
| ?- compile(library('clpfd/examples/bench')).
| ?- bench.
```

### Send More Money

Let us return briefly to the Send More Money problem (see section A Constraint Satisfaction Problem). Its `sum/8` predicate will expand to a space-efficient conjunction of library constraints. A faster but more memory consuming version is defined simply by changing the neck symbol of `sum/8` from `:-` to `+:`, thus turning it into an FD predicate:

```
sum(S, E, N, D, M, O, R, Y) +:
         1000*S + 100*E + 10*N + D
     +   1000*M + 100*O + 10*R + E
    #=  10000*M + 1000*O + 100*N + 10*E + Y.
```

### N Queens

The problem is to place N queens on an NxN chess board so that no queen is threatened by another queen.

The variables of this problem are the N queens. Each queen has a designated row. The problem is to select a column for it.

The main constraint of this problem is that no queen threatens another. This is encoded by the `no_threat/3` constraint, which holds between all pairs `(X,Y)` of queens. It could be defined as

```
no_threat(X, Y, I) :-
        X   #\= Y,
        X+I #\= Y,
        X-I #\= Y.
```

However, this formulation introduces new temporary domain variables and creates twelve fine-grained indexicals. Worse, the arithmetic constraints are only guaranteed to maintain interval-consistency and so may miss some opportunities for pruning elements in the middle of domains.

A better idea is to formulate `no_threat/3` as an FD predicate with two indexicals, as shown in the program below. This constraint will not fire until one of the queens has been assigned (the corresponding indexical does not become monotone until then). Hence, the constraint is still not as strong as it could be.

For example, if the domain of one queen is (2..3), then it will threaten any queen placed in column 2 or 3 on an adjacent row, no matter which of the two open positions is chosen for the first queen. The commented out formulation of the constraint captures this reasoning, and illustrates the use of the `unionof/3` operator. This stronger version of the constraint indeed gives less backtracking, but is computationally more expensive and does not pay off in terms of execution time, except possibly for very large chess boards.

It is clear that `no_threat/3` cannot detect any incompatible values for a queen with domain of size greater than three. This observation is exploited in the third version of the constraint.

The first-fail principle is appropriate in the enumeration part of this problem.

```
:- use_module(library(clpfd)).

queens(N, L, LabelingType) :-
        length(L, N),
        domain(L, 1, N),
        constrain_all(L),
        labeling(LabelingType, L).

constrain_all([]).
constrain_all([X|Xs]) :-
        constrain_between(X, Xs, 1),
        constrain_all(Xs).

constrain_between(_X, [], _N).
constrain_between(X, [Y|Ys], N) :-
        no_threat(X, Y, N),
        N1 is N+1,
        constrain_between(X, Ys, N1).

% version 1: weak but efficient
no_threat(X, Y, I) +:
        X in \({Y} \/ {Y+I} \/ {Y-I}),
        Y in \({X} \/ {X+I} \/ {X-I}).

/*
% version 2: strong but very inefficient version
no_threat(X, Y, I) +:
        X in unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
        Y in unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).

% version 3: strong but somewhat inefficient version
no_threat(X, Y, I) +:
        X in (4..card(Y)) ? (inf..sup) \/
             unionof(B,dom(Y),\({B} \/ {B+I} \/ {B-I})),
        Y in (4..card(X)) ? (inf..sup) \/
             unionof(B,dom(X),\({B} \/ {B+I} \/ {B-I})).
*/

| ?- queens(8, L, [ff]).

L = [1,5,8,6,3,7,2,4] ?

```

### Cumulative Scheduling

This example is a very small scheduling problem. We consider seven tasks where each task has a fixed duration and a fixed amount of used resource:

```
TASK    DURATION        RESOURCE
====    ========        ========
t1            16               2
t2             6               9
t3            13               3
t4             7               7
t5             5              10
t6            18               1
t7             4              11
```

The goal is to find a schedule that minimizes the completion time for the schedule while not exceeding the capacity 13 of the resource. The resource constraint is succinctly captured by a `cumulative/4` constraint. Branch-and-bound search is used to find the minimal completion time.

This example was borrowed from [Beldiceanu & Contejean 94].

```
:- use_module(library(clpfd)).

schedule(Ss, End) :-
        length(Ss, 7),
        Ds = [16, 6,13, 7, 5,18, 4],
        Rs = [ 2, 9, 3, 7,10, 1,11],
        domain(Ss, 1, 30),
        domain([End], 1, 50),
        after(Ss, Ds, End),
        cumulative(Ss, Ds, Rs, 13),
        labeling([minimize(End)], [End|Ss]).

after([], [], _).
after([S|Ss], [D|Ds], E) :- E #>= S+D, after(Ss, Ds, E).

%% End of file

| ?- schedule(Ss, End).

Ss = [1,17,10,10,5,5,1],
End = 23 ?
```

## Syntax Summary

### Syntax of Indexicals

```
X --> variable                { domain variable }

Constant --> integer
|   inf                         { minus infinity }
|   sup                         { plus infinity }

Term --> Constant
|   X                     { suspend until assigned }
|   min(X)                { min. of domain of X }
|   max(X)                { max. of domain of X }
|   card(X)               { size of domain of X }
|   - Term
|   Term + Term
|   Term - Term
|   Term * Term
|   Term /> Term               { division rounded up }
|   Term /< Term               { division rounded down }
|   Term mod Term

TermSet --> {Term,...,Term}

Range --> TermSet
|   dom(X)                          { domain of X }
|   Term..Term                      { interval }
|   Range/\Range                     { intersection }
|   Range\/Range                     { union }
|   \Range                          { complement }
|   - Range                         { pointwise negation }
|   Range + Range                   { pointwise addition }
|   Range - Range                   { pointwise subtraction }
|   Range mod Range                 { pointwise modulo }
|   Range + Term                    { pointwise addition }
|   Range - Term                    { pointwise subtraction }
|   Term - Range                    { pointwise subtraction }
|   Range mod Term                  { pointwise modulo }
|   Range ? Range
|   unionof(X,Range,Range)
|   switch(Term,MapList)

ConstantSet --> {integer,...,integer}

ConstantRange --> ConstantSet
|   Constant..Constant
|   ConstantRange/\ConstantRange
|   ConstantRange\/ConstantRange
|   \ConstantRange

MapList --> []
|   [integer-ConstantRange|MapList]

CList --> []
|   [integer|CList]

Indexical --> X in Range

Indexicals --> Indexical
|   Indexical, Indexicals

ConstraintBody --> Indexicals
|  LinExpr RelOp LinExpr
|  element(X,CList,X)
|  relation(X,MapList,X)

Head --> term  { a compound term with unique variable args }

ConstraintDef --> Head +: ConstraintBody.
|   Head -: Indexicals.
|   Head +? Indexical.
|   Head -? Indexical.
```

### Syntax of Arithmetic Expressions

```
X --> variable                { domain variable }

N --> integer

LinExpr --> N                { linear expression }
|   X
|   N * X
|   N * N
|   LinExpr + LinExpr
|   LinExpr - LinExpr

Expr --> LinExpr
|   Expr + Expr
|   Expr - Expr
|   Expr * Expr
|   Expr / Expr                { integer division }
|   Expr mod Expr
|   min(Expr,Expr)
|   max(Expr,Expr)
|   abs(Expr)

RelOp --> #= | #\= | #< | #=< | #> | #>=
```

### Operator Declarations

```
:- op(1200, xfx, [+:,-:,+?,-?]).
:- op(760,  yfx, #<=>).
:- op(750,  xfy, #=>).
:- op(750,  yfx, #<=).
:- op(740,  yfx, #\/).
:- op(730,  yfx, #\).
:- op(720,  yfx, #/\).
:- op(710,   fy, #\).
:- op(700,  xfx, [in,in_set]).
:- op(700,  xfx, [#=,#\=,#<,#=<,#>,#>=]).
:- op(550,  xfx, ..).
:- op(500,   fy, \).
:- op(490,  yfx, ?).
:- op(400,  yfx, [/>,/<]).
```