It turns out that my insights about search were not so far off the mark... but the elimination procedure is the real deal (in some cases, it can entirely solve an easy problem on its own). In fact, the real power is unleashed when the two are combined. The way I understand it, elimination is like a mini-search, where the consequences of a move are carried to their logical conclusion, revealing, many steps ahead, whether it's good or not. It is more than a heuristic; it is a solution space simplifier, and a very efficient one at that.
My reaction when I understood how it worked was to ask myself if there was a way I could adapt it to my current Python implementation, without modifying it too much. It is not exactly trivial, because the two problem representations are quite different: Peter Norvig's explicitly models the choices for a given square, while mine only does so implicitly. This meant that I couldn't merely translate the elimination algorithm in terms of my implementation: I'd have to find some correspondence, a way to express one in terms of the other.
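To make that difference concrete, here is a rough sketch of the two kinds of representation (the names and types are illustrative only, not taken from either program): an explicit one keeps a per-square collection of remaining candidates that elimination shrinks directly, while an implicit one stores only the placed values and recomputes a square's candidates on demand from a validity test.

    # Explicit (in the spirit of Norvig's solver): every square maps to the
    # candidate digits still possible there; elimination shrinks these directly.
    candidates = {(i, j): "123456789" for i in range(9) for j in range(9)}

    # Implicit (in the spirit of my solver): only placed values are stored,
    # and a square's options are recomputed on demand via a validity check.
    def options(grid, is_valid, i, j):
        return [v for v in "123456789" if not grid[i][j] and is_valid(i, j, v)]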
After some tinkering, what I got is a drop-in replacement for my Sudoku.set method:
# requires: from collections import defaultdict
def set(self, i, j, v, propagate_constraints):
    # place v at (i, j) and record it in the row/col/box bookkeeping sets
    self.grid[i][j] = v
    self.rows[i].add(v)
    self.cols[j].add(v)
    self.boxes[self.box(i, j)].add(v)
    if propagate_constraints:
        # scan every row, column and box for forced moves and contradictions
        for a in range(self.size):
            row_places = defaultdict(list)   # value -> columns where it can go in row a
            row_available = set(self.values)
            col_places = defaultdict(list)   # value -> rows where it can go in column a
            col_available = set(self.values)
            box_places = defaultdict(list)   # value -> squares where it can go in box a
            box_available = set(self.values)
            for b in range(self.size):
                options = []
                row_available.discard(self.grid[a][b])
                col_available.discard(self.grid[b][a])
                bi, bj = self.box_coords[a][b]
                box_available.discard(self.grid[bi][bj])
                for vv in self.values:
                    if not self.grid[a][b] and self.isValid(a, b, vv):
                        options.append(vv)
                        row_places[vv].append(b)
                    if not self.grid[b][a] and self.isValid(b, a, vv):
                        col_places[vv].append(b)
                    if not self.grid[bi][bj] and self.isValid(bi, bj, vv):
                        box_places[vv].append((bi, bj))
                if not self.grid[a][b]:
                    if len(options) == 0:
                        return False
                    elif len(options) == 1:
                        # square with a single choice found
                        return self.set(a, b, options[0], propagate_constraints)
            if row_available != set(row_places.keys()): return False
            if col_available != set(col_places.keys()): return False
            if box_available != set(box_places.keys()): return False
            for vv, cols in row_places.items():
                if len(cols) == 1:
                    # row with a single-place value found
                    return self.set(a, cols[0], vv, propagate_constraints)
            for vv, rows in col_places.items():
                if len(rows) == 1:
                    # column with a single-place value found
                    return self.set(rows[0], a, vv, propagate_constraints)
            for vv, boxes in box_places.items():
                if len(boxes) == 1:
                    # box with a single-place value found
                    return self.set(boxes[0][0], boxes[0][1], vv, propagate_constraints)
    return True
The idea is to extend the set method to make it seek two types of things:
- a square with a single possible value
- a row/column/box with a value that has only one place to go
Whenever one of these is found, it is set right away (through a recursive call to set). While doing that, the method checks for certain conditions that would make this whole chain of moves (triggered by the first call to set) invalid:
- a square with no possible value
- a row/column/box with a set of unused values that is not equal to the set of values having a place to go (this one was a bit tricky! see the sketch below)
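To make that tricky condition concrete, here is a minimal sketch of the check for a single row, written against hypothetical grid, values and is_valid helpers rather than the real Sudoku class: the values the row still needs must coincide exactly with the values that still have at least one legal empty square in that row.

    # Minimal sketch of the row consistency check (hypothetical helpers:
    # `grid` with falsy entries for empty squares, `values`, `is_valid(i, j, v)`).
    def row_is_consistent(grid, values, is_valid, i):
        size = len(values)
        placed = {grid[i][j] for j in range(size) if grid[i][j]}
        unused = set(values) - placed           # values the row still needs
        placeable = {v for v in values          # values with somewhere to go
                     if any(not grid[i][j] and is_valid(i, j, v)
                            for j in range(size))}
        # a value already in the row is never valid elsewhere in it, so
        # placeable is a subset of unused; equality fails exactly when some
        # still-needed value has no legal square left
        return unused == placeable

This is essentially what the row_available != set(row_places.keys()) comparison in set does, and likewise for columns and boxes.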
Finally, we need to adapt the solver (or search method). The major difference from the previous greedy solver (the non-eliminating one) is that a move is no longer a single change to the grid (which could easily be undone when backtracking). This time, an elimination call can change many squares, which is a problem for this method: we cannot do all the work on the same Sudoku instance (for backtracking purposes), and such an instance is not as efficiently copied as a dict of strings. There are probably many other ways, but to keep the program object-oriented, here is what I found:
# requires: from copy import deepcopy
def solveGreedilyWithConstraintPropagation(self):
    nopts = {} # n options -> (opts, (i,j))
    for i in range(self.size):
        for j in range(self.size):
            if self.grid[i][j]: continue
            opts_ij = []
            for v in self.values:
                if self.isValid(i, j, v):
                    opts_ij.append(v)
            n = len(opts_ij)
            if n == 0: return None
            nopts[n] = (opts_ij, (i,j))
    if nopts:
        # pick the most constrained empty square (fewest options)
        opts_ij, (i,j) = min(nopts.items())[1]
        for v in opts_ij:
            # try the move on a copy, so self stays untouched if the
            # elimination chain fails and we need to backtrack
            S = deepcopy(self)
            if S.set(i, j, v, propagate_constraints=True):
                T = S.solveGreedilyWithConstraintPropagation()
                if T: return T
        return None
    return self
Instead of making a candidate move directly on the current grid (with set), we deepcopy the current Sudoku instance (self) and perform the elimination on the copy instead. If it succeeds, we carry the recursion on with the copy. When a solution is found, that instance is returned, which is why this method has to be called like this:
S = S.solveGreedilyWithConstraintPropagation()
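For completeness, here is a hedged usage sketch; the Sudoku constructor shown below is a hypothetical placeholder, since the actual interface lives in the repository:

    S = Sudoku(puzzle)  # hypothetical constructor; the real signature may differ
    S = S.solveGreedilyWithConstraintPropagation()  # note the reassignment
    if S is None:
        print("no solution found")
    else:
        print(S.grid)  # the solved grid lives on the returned instance

Since the solver works on deepcopies, the original instance is in general left untouched; the solution lives on the returned object.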
Again, the code for this exercise is available on my GitHub.