Reputation: 21
I am trying to research, evaluate, and compare some search methods in ECLiPSe-CLP. My key measure of the complexity and efficacy of a method is the number of backtracks, as reported by search/6 from lib(ic_search). However, in my testing I noticed that ECLiPSe automatically applies something similar to domain propagation before entering the search, which I have no way of controlling, and which apparently solves certain problems without any backtracks. This hinders my evaluation of search methods, because I have no control over the domains and/or the problem statement at the point where it is passed to search. I would like to know what this process is, and whether it can be disabled or worked around. Below are a very primitive sample program and its trace. Problems such as SEND + MORE = MONEY and the Australian map coloring problem can similarly be solved with 0 backtracks.
Thanks in advance!
ECLiPSe code
:- lib(ic), lib(ic_global), lib(ic_search).

go :-
    simpleproblem(Vars).

simpleproblem(Vars) :-
    Vars = [A,B],
    Vars :: [0..1],
    A + B #= 2,
    search(Vars, 0, input_order, indomain, complete, [backtrack(Backtracks)]),
    writeln(backtracks:Backtracks),
    writeln(Vars).
Tracing log
(1) 1 CALL go
(2) 2 CALL simpleproblem(_598)
(3) 3 CALL ... = ...
(3) 3 EXIT ... = ...
(4) 3 CALL '::_body'([_1028, _1030], [0 .. 1], eclipse)
(4) 3 EXIT '::_body'([_1476{[0, ...]}, _1490{[...]}], [0 .. 1], eclipse)
(5) 3 CALL _1490{[0, ...]} + _1476{[0, ...]} #= 2
(9) 4 CALL wake
(6) 5 RESUME<2> 0 #= 0
(6) 5 EXIT<2> 0 #= 0
(9) 4 EXIT wake
(5) 3 EXIT 1 + 1 #= 2
(10) 3 CALL search_body([1, 1], 0, input_order, indomain, complete, [backtrack(_3046)], eclipse)
Upvotes: 1
Views: 66
Reputation: 5034
The very reason for the effectiveness of Constraint Programming is the principle of interleaving propagation with search. Because this is really the essence of CP, I am not sure it makes much sense for you to look at search methods in isolation.
Propagation is performed in a data-driven way by the implementation of each individual constraint: whenever a variable domain changes, e.g. during the initial constraint setup or by a search decision, the constraint will try to propagate the consequences. This may lead to more domain changes, which cause further propagation, and so on, until a fixpoint is reached.
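To make this concrete, here is a minimal sketch using only lib(ic) (propagation_demo is just an illustrative name, not anything from the library): posting the constraint already triggers this propagation, before search/6 is ever called, which is exactly what happens in your example.

:- lib(ic).

% Illustrative example: posting A + B #= 2 with both variables in 0..1
% immediately propagates A = 1 and B = 1, because no other combination
% can satisfy the constraint. No search is involved at this point.
propagation_demo :-
    [A, B] :: 0..1,
    A + B #= 2,
    writeln(after_posting:[A, B]).   % A and B are already instantiated to 1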
Once no further propagation is possible, the search control takes over again, looks at the constraint network and the current variable domains, and decides on the next guess. When a guess is made (in the form of a variable instantiation, domain splitting, etc.), propagation is triggered again.
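As a sketch of this interleaving (again just lib(ic) and lib(ic_search); interleave_demo is a made-up name): with domains 0..3 the constraint A + B #= 3 cannot prune anything on its own, so the first guess made by search/6 is what wakes the constraint and lets it propagate a value for the other variable.

:- lib(ic), lib(ic_search).

% Illustrative example: propagation alone cannot reduce the domains here,
% so search/6 has to make a guess, and that guess triggers propagation.
interleave_demo :-
    [A, B] :: 0..3,
    A + B #= 3,                      % no pruning: every value of A has a partner
    writeln(before_search:[A, B]),
    % input_order/indomain instantiates A first (trying 0);
    % this wakes #=/2, which then propagates B = 3.
    search([A, B], 0, input_order, indomain, complete, [backtrack(BT)]),
    writeln(backtracks:BT),
    writeln(after_search:[A, B]).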
Implementation-wise, there is a clear separation: propagation is done by the individual constraints (in your example by #=/2 only), while the search control algorithm is in the search/6 predicate. However, there is a strong interdependency between them, because they communicate back and forth via the constraint network.
Note that most search techniques, such as the popular first-fail heuristics, heavily rely on the result of propagation, otherwise they could not make their informed guesses. While in principle it would be possible for you to use non-propagating implementations for all constraints (e.g. wait for all their variables to be instantiated, and only then test for satisfaction), this would make many search options pointless, and you would not learn much by counting backtracks.
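If you still want to experiment in that direction, one possible workaround (a sketch under the assumption that lib(suspend)'s arithmetic comparisons behave as documented, i.e. they delay until all their variables are instantiated and then merely test; no_propagation_demo is an illustrative name) is to keep the ic domains for the labelling, but post the arithmetic as such a passive test. The search then has to find the solution by pure enumeration, and the backtrack counter reflects that.

:- lib(ic), lib(ic_search), lib(suspend).

% Illustrative example: the arithmetic is posted as a passive test from
% lib(suspend). It suspends until A and B are both instantiated and only
% then checks the sum, so it never prunes the domains before the search.
no_propagation_demo :-
    [A, B] :: 0..1,                  % ic domains, needed by indomain
    suspend:(A + B =:= 2),           % delays instead of propagating
    search([A, B], 0, input_order, indomain, complete, [backtrack(BT)]),
    writeln(backtracks:BT),          % expect a non-zero backtrack count here
    writeln([A, B]).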
Upvotes: 0