As is usually the case with finite domain constraint solvers, this solver is not complete. That is, it does not ensure that the set of posted constraints is satisfiable. One must resort to search (enumeration) to check satisfiability and get particular solutions.
The following predicates provide several variants of search:
indomain(?X)
where X is a domain variable with a bounded domain. Assigns, in increasing order via backtracking, a feasible value to X.
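For example, assuming a variable constrained to a small interval, indomain/1 enumerates its feasible values in increasing order on backtracking:

| ?- X in 1..3, indomain(X).
X = 1 ? ;
X = 2 ? ;
X = 3 ? ;
no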
labeling(:Options, +Variables)
where Variables is a list of domain variables and Options is a list of search options. The domain variables must all have bounded domains. True if an assignment of the variables can be found that satisfies the posted constraints.
first_bound(+BB0, -BB)
later_bound(+BB0, -BB)
Provides an auxiliary service for the value(Enum) option (see below).
minimize(:Goal,?X)
minimize(:Goal,?X,+Options) since release 4.3
maximize(:Goal,?X)
maximize(:Goal,?X,+Options) since release 4.3
Uses a restart algorithm to find an assignment that minimizes (maximizes) the domain variable X. Goal should be a Prolog goal that constrains X to become assigned, and could be a labeling/2 goal. The algorithm calls Goal repeatedly with a progressively tighter upper (lower) bound on X until a proof of optimality is obtained.
Whether to enumerate every solution that improves the objective function, or only the optimal one after optimality has been proved, is controlled by Options. If given, it should be a list containing a single atomic value, one of:
best since release 4.3
Return the optimal solution after proving its optimality. This is the default.
all since release 4.3
Enumerate all improving solutions; on backtracking, seek the next improving solution. Merely fail after proving optimality.
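As an illustrative sketch (the constraints shown are hypothetical), a typical use wraps a labeling/2 goal and lets minimize/2 drive it to an optimum:

| ?- X in 1..10, Y in 1..10, X+Y #>= 12, Cost #= X+2*Y,
     minimize(labeling([],[X,Y]), Cost).

This binds X = 10, Y = 2, Cost = 14, the assignment with the smallest cost: since Cost = X+2*Y >= 24-X >= 14, no cheaper assignment exists.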
The Options argument of labeling/2 controls the order in which variables are selected for assignment (variable choice heuristic), the way in which choices are made for the selected variable (value choice heuristic), whether the problem is a satisfaction one or an optimization one, and whether all solutions or only the optimal one should be returned. The options are divided into six groups. One option may be selected per group. Also, the number of assumptions (choices) made during the search can be counted. Finally, limits on the execution time and discrepancy of the search can be imposed:
The following options control the order in which the next variable is selected for assignment.
leftmost
input_order
The leftmost variable is selected. This is the default.
min
smallest
The leftmost variable with the smallest lower bound is selected.
max
largest
The leftmost variable with the greatest upper bound is selected.
ff
first_fail
The first-fail principle is used: the leftmost variable with the smallest domain is selected.
anti_first_fail since release 4.3
The leftmost variable with the largest domain is selected.
occurrence since release 4.3
The leftmost variable among those that have the most constraints suspended on it is selected.
ffc
most_constrained
The most constrained heuristic is used: a variable with the smallest domain is selected, breaking ties by (a) selecting the variable that has the most constraints suspended on it and (b) selecting the leftmost one.
max_regret since release 4.3
The variable with the largest difference between its first two domain elements is selected. Ties are broken by selecting the leftmost variable.
variable(Sel)
Sel is a predicate to select the next variable. Given Vars, the variables that remain to label, it will be called as Sel(Vars,Selected,Rest).
Sel is expected to succeed determinately, unifying Selected and Rest with the selected variable and the remaining list, respectively.
Sel should be a callable term, optionally with a module prefix, and the arguments Vars,Selected,Rest will be appended to it. For example, if Sel is mod:sel(Param), then it will be called as mod:sel(Param,Vars,Selected,Rest).
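As a hypothetical sketch, a selection predicate that always picks the last variable of the list (the mirror image of leftmost) could be written as:

sel_last([X], X, []) :- !.
sel_last([X|Xs], Selected, [X|Rest]) :-
        sel_last(Xs, Selected, Rest).

and passed as labeling([variable(sel_last)], Variables). The cut in the first clause makes the predicate succeed determinately, as required.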
The following options control the way in which choices are made for the selected variable X:
step
Makes a binary choice between X #= B and X #\= B, where B is the lower or upper bound of X. This is the default.
enum
Makes a multiple choice for X corresponding to the values in its domain.
bisect
Makes a binary choice between X #=< M and X #> M, where M is the middle of the domain of X, i.e. the mean of min(X) and max(X) rounded down to the nearest integer. This strategy is also known as domain splitting.
median since release 4.3
Makes a binary choice between X #= M and X #\= M, where M is the median of the domain of X. If the domain has an even number of elements, then the smaller middle value is used.
middle since release 4.3
Makes a binary choice between X #= M and X #\= M, where M is the middle of the domain of X, i.e. the mean of min(X) and max(X) rounded down to the nearest integer.
value(Enum)
Enum is a predicate that should prune the domain of X, possibly but not necessarily to a singleton. It will be called as Enum(X,Rest,BB0,BB) where Rest is the list of variables that need labeling except X, and BB0 and BB are parameters described below.
Enum is expected to succeed nondeterminately, pruning the domain of X, and to backtrack one or more times, providing alternative prunings. To ensure that branch-and-bound search works correctly, it must call the auxiliary predicate first_bound(BB0,BB) in its first solution. Similarly, it must call the auxiliary predicate later_bound(BB0,BB) in any alternative solution.
Enum should be a callable term, optionally with a module prefix, and the arguments X,Rest,BB0,BB will be appended to it. For example, if Enum is mod:enum(Param), then it will be called as mod:enum(Param,X,Rest,BB0,BB).
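As a hypothetical sketch, a value choice predicate that mimics the default step,up strategy could look as follows, using the reflection predicate fd_min/2 to obtain the lower bound of X:

enum_up(X, _Rest, BB0, BB) :-
        fd_min(X, Min),
        (   first_bound(BB0, BB), X #= Min
        ;   later_bound(BB0, BB), X #\= Min
        ).

It would be used as labeling([value(enum_up)], Variables). Note that the second branch merely prunes X without assigning it; in that case X remains a candidate for later selection.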
The following options control the order in which the choices are made for the selected variable X. Not useful with the value(Enum) option:
up
The domain is explored in ascending order. This is the default.
down
The domain is explored in descending order.
The following options tell the solver whether the given problem is a satisfaction problem or an optimization problem. In a satisfaction problem, we wish to find values for the domain variables, but we do not care about which values. In an optimization problem, we wish to find values that minimize or maximize some objective function reflected in a domain variable:
satisfy since release 4.3
We have a satisfaction problem. Its solutions are enumerated by backtracking. This is the default.
minimize(X)
maximize(X)
We have an optimization problem, seeking an assignment that minimizes (maximizes) the domain variable X. The labeling should constrain X to become assigned for all assignments of Variables. It is useful to combine these options with the time_out/2, best, and all options (see below). If these options occur more than once, then the last occurrence overrides previous ones.
The following options are only meaningful for optimization problems. They tell the solver whether to enumerate every solution that improves the objective function, or only the optimal one after optimality has been proved:
best since release 4.3
Return the optimal solution after proving its optimality. This is the default.
all since release 4.3
Enumerate all improving solutions; on backtracking, seek the next improving solution. Merely fail after proving optimality.
The following options are only meaningful for optimization problems. They tell the solver what search scheme to use, but have no effect on the semantics or on the meaning of other options:
bab since release 4.3
Use a branch-and-bound scheme, which incrementally tightens the bound on the objective as more and more solutions are found. This is the default, and is usually the more efficient scheme.
restart since release 4.3
Use a scheme that restarts the search with a tighter bound on the objective each time a solution is found.
The following option counts the number of assumptions (choices) made during the search:
assumptions(K)
When a solution is found, K is unified with the number of choices made.
Finally, limits on the discrepancy of the search and the execution time can be imposed:
discrepancy(D)
On the path leading to the solution there are at most D choicepoints in which a non-leftmost branch was taken.
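For example, assuming the same hypothetical constraints/1 as in the examples below, a limited-discrepancy search allowing at most two non-leftmost branches could be posted as:

| ?- constraints(Variables), labeling([ff,discrepancy(2)], Variables).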
time_out(Time,Flag)
See lib-timeout. Time should be an integer number of milliseconds. If the search is exhausted within this time and no solution is found, then the search merely fails, as usual. Otherwise, Flag is bound to a value reflecting the outcome:
optimality since release 4.4
If best was selected in an optimization problem, then the search space was exhausted, having found the optimal solution. The variables are bound to the corresponding values. If best was not selected, this flag value is not used.
success since release 4.4
If best was selected in an optimization problem, then the search timed out before the search space was exhausted, having found at least one solution. If best was not selected, then a solution was simply found before the time limit. In any case, the variables are bound to the values corresponding to the latest solution found.
time_out since release 4.4
If best was selected in an optimization problem, then the search timed out before any solution was found. If best was not selected, then the search timed out while searching for the next solution. The variables are left unbound.
For example, to enumerate solutions using a static variable ordering, use:
| ?- constraints(Variables), labeling([], Variables). % same as [leftmost,step,up,satisfy]
To minimize a cost function using branch-and-bound search, computing the best solution only, with a dynamic variable ordering using the first-fail principle, and domain splitting exploring the upper part of domains first, use:
| ?- constraints(Variables, Cost), labeling([ff,bisect,down,minimize(Cost)], Variables).
To give a time budget and collect the solutions of a satisfiability problem up to the time limit, use:
| ?- constraints(Variables), findall(Variables, labeling([time_out(Budget,Flag)|Options], Variables), Solutions).
where Flag=success will hold if all solutions were found, and Flag=time_out will hold if the time expired.
The file library('clpfd/examples/tsp.pl') contains an example of user-defined variable and value choice heuristics.
Note that, when used for optimization, labeling/2 has a limitation compared to minimize/[2,3] and maximize/[2,3]: the variable and value choice heuristics specified by labeling/2 must apply to the whole set of variables, with no provision for different heuristics for different subsets. As of release 4.3, this limitation has been lifted by the following predicate:
solve(:Options, :Searches) since release 4.3
where Options is a list of options of the same shape as taken by labeling/2, and Searches is a list of labeling/2 and indomain/1 goals, or a single such goal. The domain variables of Searches must all have bounded domains. True if the conjunction of Searches is true.
The main purpose of this predicate is optimization, allowing different heuristics to be used in the different Searches. For satisfiability problems, a simple sequence of labeling/2 and indomain/1 goals does the trick.
The treatment of the Options, as well as the suboption lists given in the labeling/2 goals of Searches, is a bit special. Some options are global for the whole search, and are ignored if they occur in the suboption lists. Others are local to the given labeling/2 goal, but provide a default value for the whole search if they occur in Options. The following table defines the role of each option as global or local:
all | global |
anti_first_fail | local |
assumptions/1 | global |
bab | global |
best | global |
bisect | local |
discrepancy/1 | local |
down | local |
enum | local |
ffc | local |
ff | local |
first_fail | local |
input_order | local |
largest | local |
leftmost | local |
maximize/1 | global |
max | local |
max_regret | local |
median | local |
middle | local |
minimize/1 | global |
min | local |
most_constrained | local |
occurrence | local |
restart | global |
satisfy | global |
smallest | local |
step | local |
time_out/2 | global |
up | local |
value/1 | local |
variable/1 | local |
For example, suppose that you want to minimize a cost function using branch-and-bound search, enumerating every improving solution, using left-to-right search on some variables followed by first-fail domain splitting search on some other variables. This can be expressed as:
| ?- constraints([X1,X2,X3,Y1,Y2,Y3], Cost), solve([minimize(Cost),all], [labeling([leftmost],[X1,X2,X3]), labeling([ff,bisect],[Y1,Y2,Y3])]).