AI taught to beat Sudoku puzzles. Now how about a time machine to 2005?

Y'know, back when this would have been useful. Naw, just kidding. This is neat


AI can now solve some of the hardest Sudoku puzzles to a high degree of accuracy, according to new research that teaches machines to logically reason.

Sudoku was, if you recall, all the rage in the West at least a decade ago. Sure, there are conventional algorithms that can crack the puzzles within a second; however, it's an interesting problem for deep-learning software to tackle, as it gives neural networks an exercise in complex reasoning.

A paper describing the method, which uses recurrent relational networks, was published on arXiv earlier this month. It builds upon DeepMind’s previous work with relational networks (RNs), a type of neural network that focuses on the relationships between pairs of objects.

Traditional networks used in deep learning may excel at recognition tasks but they often struggle with reasoning.

DeepMind’s RN, described as a “plug-and-play” module, was tacked onto larger architectures such as convolutional neural networks to allow computers to solve visual reasoning tasks. Such tasks require studying a scene carefully and working out how each object is associated with the others.

But Sudoku, the popular number puzzle in which a 9x9 grid is filled with the digits one to nine so that each digit appears only once in each row, each column, and each 3x3 box, is much more challenging.

Completing it requires performing several sequences of mathematical deduction, and trial and error with different numbers. It’s too complex for RNs since they can only work out the relationships between objects or events that are directly related to one another.
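To make those rules concrete, here's a small, illustrative helper – not part of the paper's model – that checks whether placing a digit in a cell would break any of the three constraints:

```python
def valid_placement(grid, row, col, digit):
    """Return True if `digit` can legally go at (row, col).

    `grid` is a 9x9 list of lists, with 0 marking an empty cell.
    """
    # Row constraint: the digit must not already appear in this row.
    if digit in grid[row]:
        return False
    # Column constraint: the digit must not already appear in this column.
    if any(grid[r][col] == digit for r in range(9)):
        return False
    # 3x3 box constraint: locate the box's top-left corner and scan it.
    br, bc = 3 * (row // 3), 3 * (col // 3)
    for r in range(br, br + 3):
        for c in range(bc, bc + 3):
            if grid[r][c] == digit:
                return False
    return True
```

A human solver applies these checks implicitly; the point of the research is to get a neural network to learn that kind of deduction rather than have it hand-coded.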

Rasmus Berg Palm, co-author of the paper and a PhD student at the Technical University of Denmark, explained that if “object A [affects] object B, which in turn affects object C, and so on. [For] the RN, object A must directly affect object C, or not at all. Going through the interaction with object B is not an option.”

But recurrent RNs can handle multiple steps, where the result depends on the previous state. “This allows interactions to propagate from one object to the next, forming complex chains of interactions,” Palm said.

Psst, there's already a 7 here!

The neural network treats the 9x9 Sudoku grid as a graph in which each cell is a node. There are 81 nodes in total, and two nodes are connected if their cells lie in the same row, column, or 3x3 box.
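That graph is easy to build in a few lines. The sketch below – an assumption of how the edge set looks, not the authors' code – numbers the cells 0 to 80 in row-major order and connects every pair sharing a row, column, or box:

```python
def sudoku_edges():
    """Return the edge set of the 81-node Sudoku constraint graph."""
    edges = set()
    for a in range(81):
        ra, ca = divmod(a, 9)  # row and column of cell a
        for b in range(a + 1, 81):
            rb, cb = divmod(b, 9)
            same_row = ra == rb
            same_col = ca == cb
            same_box = (ra // 3, ca // 3) == (rb // 3, cb // 3)
            if same_row or same_col or same_box:
                edges.add((a, b))
    return edges
```

Each cell ends up with 20 neighbours – eight in its row, eight in its column, and four more in its box – giving 81 × 20 / 2 = 810 edges in total.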

A recurrent RN is trained to send and receive messages between nodes that roughly translate to things like “I’m a 7, hence you can’t also be a 7”, so that it can work out by deduction what numbers fit where on the grid.
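A hand-coded analogue of that deduction is classic constraint propagation: each filled cell "tells" its neighbours which digit it holds, and the neighbours strike that digit from their candidate lists. The real model learns its messages with a neural network; this toy version only captures the flavour of the idea.

```python
def propagate(grid, steps=32):
    """Toy constraint propagation over the Sudoku graph (not the paper's model)."""
    # Each cell starts with all nine candidates, or just its given digit.
    cands = [[set(range(1, 10)) if grid[r][c] == 0 else {grid[r][c]}
              for c in range(9)] for r in range(9)]

    def neighbours(r, c):
        for rr in range(9):
            for cc in range(9):
                if (rr, cc) != (r, c) and (
                        rr == r or cc == c or
                        (rr // 3, cc // 3) == (r // 3, c // 3)):
                    yield rr, cc

    for _ in range(steps):
        for r in range(9):
            for c in range(9):
                # A solved cell sends "I'm d, so you can't be d" to neighbours.
                if len(cands[r][c]) == 1:
                    d = next(iter(cands[r][c]))
                    for rr, cc in neighbours(r, c):
                        if len(cands[rr][cc]) > 1:
                            cands[rr][cc].discard(d)

    # Cells reduced to a single candidate become filled; the rest stay 0.
    return [[next(iter(cands[r][c])) if len(cands[r][c]) == 1 else 0
             for c in range(9)] for r in range(9)]
```

This simple elimination only cracks easy puzzles; the hardest 17-clue grids need the deeper, learned chains of reasoning the paper describes.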

It spits out, for each cell, a probability distribution over the nine digits to work out which number is most likely the correct fit. To train the model, the researchers generated a dataset containing 216,000 puzzles of varying difficulty. The easiest level had 34 out of 81 numbers already filled in, and the hardest level had only 17 digits given.

The puzzles were all solved and split into a training, validation and test set. The results show that the recurrent RN learns to solve 94.1 per cent of the most difficult Sudokus after 32 steps; running it for 64 steps increases the accuracy to 96.6 per cent. For more straightforward puzzles, the accuracy reaches 100 per cent.

Although the recurrent RN is pretty good at cracking Sudoku puzzles, it’s designed to be a general-purpose module that can be added to different types of neural network model to perform many-step relational reasoning.

It’s still early stages and the applications have yet to be explored. It’s an important area for machines since logical reasoning is key to making them smarter in the future. ®

Updated to add

Palm has been in touch to say the software may be useful for self-driving cars. They’ll make better decisions if they can reason about how objects such as vehicles, cyclists, and stop signs affect one another, we're told.

Another area is language. Palm and his colleagues showed that recurrent RNs can be used to answer basic comprehension questions from Facebook’s bAbI dataset, often used to benchmark text understanding and reasoning – an important skill for chatbots.

He's also posted in the article comments, responding to reader feedback.

