Guide to Constraint Programming © Roman Barták, 1998

# Constraint Satisfaction

Constraint Satisfaction Problems (CSPs) have been a subject of research in Artificial Intelligence for many years. The pioneering works on networks of constraints were motivated mainly by problems arising in the field of picture processing [Waltz, Montanari]. AI research concentrating on difficult combinatorial problems dates back to the sixties and seventies and has contributed to considerable progress in constraint-based reasoning. Many powerful algorithms were designed that became the basis of current constraint satisfaction algorithms.

A Constraint Satisfaction Problem (CSP) consists of:

• a set of variables X={x1,...,xn},
• for each variable xi, a finite set Di of possible values (its domain),
• and a set of constraints restricting the values that the variables can simultaneously take.

Note that the values need not form a set of consecutive integers (although they often do); they need not even be numeric.
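The definition above can be sketched directly in code. The following is a minimal illustration (the variable names, domains, and constraints are invented for this example, not taken from the tutorial): variables, finite domains, and constraints stored as relations over pairs of variables.

```python
# A small CSP: three variables, finite domains, two binary constraints.
variables = ["x1", "x2", "x3"]
domains = {
    "x1": {1, 2, 3},
    "x2": {1, 2, 3},
    "x3": {1, 2, 3},
}
# Each constraint is a pair of variables plus a relation they must satisfy.
constraints = [
    (("x1", "x2"), lambda a, b: a < b),   # x1 < x2
    (("x2", "x3"), lambda a, b: a != b),  # x2 != x3
]

def satisfies(assignment):
    """Check that a complete assignment satisfies every constraint."""
    return all(rel(assignment[u], assignment[v])
               for (u, v), rel in constraints)

print(satisfies({"x1": 1, "x2": 2, "x3": 3}))  # True: 1 < 2 and 2 != 3
```

A solution, in the sense defined above, is any complete assignment for which `satisfies` returns `True`.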

A solution to a CSP is an assignment of a value from its domain to every variable, in such a way that every constraint is satisfied. We may want to find:

• just one solution, with no preference as to which one,
• all solutions,
• an optimal, or at least a good solution, given some objective function defined in terms of some or all of the variables.

Solutions to CSPs can be found by searching systematically through the possible assignments of values to variables. Search methods divide into two broad classes, those that traverse the space of partial solutions (or partial value assignments), and those that explore the space of complete value assignments (to all variables) stochastically.

The reasons for choosing to represent and solve a problem as a CSP rather than, say, as a mathematical programming problem are twofold.

• Firstly, the representation as a CSP is often much closer to the original problem: the variables of the CSP directly correspond to problem entities, and the constraints can be expressed without having to be translated into linear inequalities. This makes the formulation simpler, the solution easier to understand, and the choice of good heuristics to guide the solution strategy more straightforward.
• Secondly, although CSP algorithms are essentially very simple, they can sometimes find a solution more quickly than if integer programming methods are used.

This tutorial is intended to give a basic grounding in constraint satisfaction problems and some of the algorithms used to solve them. In general, the tasks posed in the constraint satisfaction problem paradigm are computationally intractable (NP-hard).

### Binarization of Constraints

A constraint can affect any number of variables from 1 to n (n is the number of variables in the problem). It is useful to distinguish two particular cases: unary and binary constraints. Since unary constraints are dealt with by preprocessing the domains of the affected variables, they can be ignored thereafter. If all the constraints of a CSP are binary, the variables and constraints can be represented in a constraint graph and the constraint satisfaction algorithm can exploit graph search techniques. This is interesting because any constraint of higher arity can be expressed in terms of binary constraints. Hence, binary CSPs are representative of all CSPs.
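One standard way to express a higher-arity constraint in binary terms is the so-called hidden variable encoding. The sketch below illustrates it for a ternary constraint x + y = z (the domains and names are assumptions for this example): a new "hidden" variable takes the satisfying triples as its domain, and three binary constraints tie each original variable to one component of the triple.

```python
from itertools import product

dx = dy = dz = range(1, 4)  # all three domains are {1, 2, 3}

# Domain of the hidden variable h: all triples satisfying x + y == z.
h_domain = [(x, y, z) for x, y, z in product(dx, dy, dz) if x + y == z]

# Three binary constraints, each linking h to one original variable.
binary = {
    ("h", "x"): lambda t, v: t[0] == v,
    ("h", "y"): lambda t, v: t[1] == v,
    ("h", "z"): lambda t, v: t[2] == v,
}

print(h_domain)  # [(1, 1, 2), (1, 2, 3), (2, 1, 3)]
```

The ternary relation is now captured entirely by the hidden variable's domain, at the cost of that domain being the set of satisfying tuples.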

### Systematic Search Algorithms

A CSP can be solved using the generate-and-test paradigm (GT), which systematically generates each possible value assignment and then tests whether it satisfies all the constraints. A more efficient method uses the backtracking paradigm (BT), the most common algorithm for performing systematic search. Backtracking incrementally attempts to extend a partial solution toward a complete solution, by repeatedly choosing a value for another variable.
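The backtracking idea can be sketched in a few lines (the data layout, with constraints as relations over ordered variable pairs, is an assumption of this sketch, not code from the tutorial): extend a partial assignment one variable at a time, and undo a choice as soon as it violates a constraint.

```python
def consistent(var, value, assignment, constraints):
    """Does assigning var=value respect all constraints with assigned vars?"""
    for (u, v), rel in constraints:
        if u == var and v in assignment and not rel(value, assignment[v]):
            return False
        if v == var and u in assignment and not rel(assignment[u], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment                      # complete solution found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]                # undo the choice and try next value
    return None                                # no value works: backtrack

# Example: x1 < x2, both in {1, 2}.
sol = backtrack(["x1", "x2"], {"x1": [1, 2], "x2": [1, 2]},
                [(("x1", "x2"), lambda a, b: a < b)])
print(sol)  # {'x1': 1, 'x2': 2}
```

Unlike generate-and-test, this prunes an assignment as soon as a partial extension fails, rather than only after a complete assignment has been generated.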

### Consistency Techniques

The late detection of inconsistency is the disadvantage of the GT and BT paradigms. Therefore, various consistency techniques for constraint graphs were introduced to prune the search space. A consistency-enforcing algorithm makes any partial solution of a small subnetwork extensible to some surrounding network, so inconsistency is detected as soon as possible. The consistency techniques range from simple node-consistency and the very popular arc-consistency to full, but expensive, path-consistency.

### Constraint Propagation

By integrating systematic search algorithms with consistency techniques, it is possible to get more efficient constraint satisfaction algorithms. Improvements of the backtracking algorithm have focused on two phases of the algorithm: moving forward (forward checking and look-ahead schemes) and backtracking (look-back schemes).
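Forward checking, the simplest look-ahead scheme, can be sketched as follows (the helper name and data layout are assumptions of this sketch): after each assignment, prune values incompatible with the chosen value from the domains of not-yet-assigned neighbours, detecting dead ends before they are reached by search.

```python
def forward_check(var, value, domains, constraints, assignment):
    """Return pruned copies of the domains, or None if some domain is wiped out."""
    new_domains = {v: set(d) for v, d in domains.items()}
    for (u, w), rel in constraints.items():
        if u == var and w not in assignment:
            # Keep only neighbour values compatible with var=value.
            new_domains[w] = {b for b in new_domains[w] if rel(value, b)}
            if not new_domains[w]:
                return None            # dead end detected before searching w
    return new_domains

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b}
print(forward_check("x", 2, doms, cons, {}))  # {'x': {1, 2, 3}, 'y': {3}}
```

A full look-ahead scheme would go further and re-establish arc-consistency among the unassigned variables after each assignment; forward checking only checks arcs touching the just-assigned variable.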

### Variable and Value Ordering

The efficiency of search algorithms which incrementally attempt to extend a partial solution depends considerably on the order in which variables are considered for instantiation. Having selected the next variable to assign a value to, a search algorithm has to select a value to assign. Again, this ordering affects the efficiency of the algorithm. There exist various heuristics for dynamic or static ordering of values and variables.
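As one concrete illustration, a widely used dynamic variable-ordering heuristic picks the unassigned variable with the fewest remaining values ("minimum remaining values", or first-fail); the function below is an illustrative sketch, not code from the tutorial.

```python
def select_variable(variables, domains, assignment):
    """MRV / first-fail: choose the unassigned variable with the
    smallest remaining domain, so likely failures are met early."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

doms = {"x": {1, 2, 3}, "y": {2}, "z": {1, 2}}
print(select_variable(["x", "y", "z"], doms, {}))  # 'y' (domain size 1)
```

Value ordering works the other way around: a common heuristic tries the least-constraining value first, keeping as many options open as possible for the remaining variables.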

### Reducing Search

The problem of most systematic search algorithms based on backtracking is the occurrence of many "backtracks" to alternative choices, which degrades the efficiency of the system. In some special cases, it is possible to completely eliminate the need for backtracking. Also, there exist algorithms which reduce backtracking by choosing a special variable ordering.

### Heuristics and Stochastic Algorithms

In the last few years, greedy local search strategies have become popular again. These algorithms incrementally alter inconsistent value assignments to all the variables. They use a "repair" or "hill climbing" metaphor to move towards more and more complete solutions. To avoid getting stuck at "local maxima", they are equipped with various heuristics for randomizing the search. Their stochastic nature generally voids the guarantee of "completeness" provided by the systematic search methods.
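A representative repair-based local search is the min-conflicts heuristic, sketched below (the step limit, seeding, and function names are assumptions of this sketch): start from a complete random assignment and repeatedly reassign a conflicted variable to its least-conflicting value.

```python
import random

def conflicts(var, value, assignment, constraints):
    """Count constraints violated if var takes the given value."""
    count = 0
    for (u, v), rel in constraints:
        if u == var and not rel(value, assignment[v]):
            count += 1
        elif v == var and not rel(assignment[u], value):
            count += 1
    return count

def min_conflicts(variables, domains, constraints, max_steps=1000, seed=0):
    rng = random.Random(seed)
    # Start from a complete (possibly inconsistent) random assignment.
    assignment = {v: rng.choice(sorted(domains[v])) for v in variables}
    for _ in range(max_steps):
        bad = [v for v in variables
               if conflicts(v, assignment[v], assignment, constraints) > 0]
        if not bad:
            return assignment              # all constraints satisfied
        var = rng.choice(bad)              # pick a random conflicted variable
        assignment[var] = min(sorted(domains[var]),
                              key=lambda val: conflicts(var, val,
                                                        assignment, constraints))
    return None                            # step budget exhausted: incomplete

sol = min_conflicts(["x", "y"], {"x": {1, 2}, "y": {1, 2}},
                    [(("x", "y"), lambda a, b: a != b)])
print(sol is not None)  # True
```

The `max_steps` cut-off makes the incompleteness concrete: the algorithm may give up on a solvable problem, which a systematic search never does.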

### Benchmarking and Algorithm Analysis

It is possible to analyze the efficiency of constraint satisfaction algorithms by traditional approaches that compute the worst-case, average, etc. complexity of the algorithm. However, a methodology for experimental evaluation of the algorithms has also been developed. This methodology is based on analyzing populations of randomly generated binary constraint satisfaction problems, and it enables one to analyze an algorithm on a given class of problems. It also helps to identify where the extremely hard problems occur.
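Such experiments typically draw problems from a four-parameter random model: n variables, uniform domain size m, constraint density p1, and constraint tightness p2. The generator below is an illustrative sketch of one such model (the exact interpretation of p1 and p2 varies between studies; this version includes each constraint with probability p1 and forbids each value pair independently with probability p2).

```python
import random
from itertools import combinations, product

def random_csp(n, m, p1, p2, seed=0):
    """Generate a random binary CSP in a <n, m, p1, p2>-style model."""
    rng = random.Random(seed)
    domains = {i: set(range(m)) for i in range(n)}
    constraints = {}
    for u, v in combinations(range(n), 2):
        if rng.random() < p1:                      # include this constraint?
            # Allowed pairs: each value pair survives with probability 1 - p2.
            allowed = {pair for pair in product(range(m), repeat=2)
                       if rng.random() >= p2}
            constraints[(u, v)] = allowed
    return domains, constraints

doms, cons = random_csp(n=10, m=5, p1=0.5, p2=0.3)
print(len(doms))  # 10
```

Sweeping p2 for fixed n, m, p1 and measuring solver cost on each population is what reveals the region where the extremely hard problems concentrate.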
