
Google's AlphaGo Artificial Intelligence Beats Go Champion 5-0 In Big Step Forward For AI

The Ancient Chinese Board Game Is More Complex Than Chess


Games have long been one of the key testing grounds for the development of artificial intelligence. Their complex but controlled systems are excellent platforms with which to test the limits of an AI. To draw attention to the research, the programs are often pitted against human players who are masters of certain games, with the chess computer Deep Blue playing Garry Kasparov in the late 1990s and more recent battles pitting top poker pros against AIs like Polaris, Cepheus, and Claudico.

The latest breakthrough in the battle of man vs. machine took place this week when Google’s AlphaGo AI beat European Go champion Fan Hui 5-0. The 2,500-year-old Chinese board game is one of the most complex in the world, both in terms of strategy and in its “state space,” or how many total positions are possible in the game. According to an article on statistics website FiveThirtyEight, tic-tac-toe has a state space of 10^3 and has been solved. Checkers, with a state space of 10^20, is another of several less complex games that have been solved.

Chess is much more complex at 10^50. These figures offer little intuition for just how many possible positions we are talking about, so to put it in perspective: the total number of atoms in the universe is estimated to be around 10^80.

Go played on the standard 19-by-19 grid has a state space of 10^171.

Written out that is 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
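
To make those magnitudes concrete, here is a short Python sketch (ours, not the article's) that prints the orders of magnitude quoted above. For Go it uses the naive upper bound of 3^361 board colorings, since each of the 361 intersections is either empty, black, or white; that count lands near 10^172, a touch above the 10^171 figure, because not every coloring is a legal position.

```python
# Illustrative sketch: order-of-magnitude comparison of the state
# spaces discussed above. The Go entry is a naive upper bound.

sizes = {
    "tic-tac-toe": 10**3,
    "checkers": 10**20,
    "chess": 10**50,
    "atoms in the universe (est.)": 10**80,
    "Go (naive 3^361 bound)": 3**361,  # 361 intersections, 3 states each
}

for name, n in sizes.items():
    # len(str(n)) - 1 equals floor(log10(n)) for positive integers and
    # works on arbitrarily large Python ints without float overflow.
    print(f"{name}: ~10^{len(str(n)) - 1}")
```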

Demis Hassabis of Google’s DeepMind artificial intelligence team explained a bit about how the complex game was approached.

“We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.”
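
To give a feel for the self-play reinforcement-learning loop Hassabis describes, here is a minimal sketch of the idea. It substitutes tic-tac-toe and a tabular softmax policy for Go and deep neural networks, and every name, constant, and design choice in it is an illustrative assumption rather than AlphaGo's actual implementation.

```python
# Minimal self-play REINFORCE sketch (illustrative only, not AlphaGo):
# a shared tabular policy plays both sides of tic-tac-toe and is nudged
# toward winners' moves and away from losers' moves, game after game.

import math
import random
from collections import defaultdict

logits = defaultdict(float)  # one logit per (board string, move index)

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == "."]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def sample_move(board):
    # Softmax over the legal moves' logits, then sample one move.
    moves = legal_moves(board)
    weights = [math.exp(logits[(board, m)]) for m in moves]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(moves, probs)[0], moves, probs

def play_game():
    # One self-play game; record every decision for the later update.
    board, player = "." * 9, "X"
    history = {"X": [], "O": []}
    while True:
        move, moves, probs = sample_move(board)
        history[player].append((board, move, moves, probs))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or "." not in board:
            return history, w
        player = "O" if player == "X" else "X"

def reinforce(history, w, lr=0.1):
    # REINFORCE: d log softmax(a) / d logit(m) = 1[m == a] - p(m).
    # Winners' moves get reward +1, losers' -1, draws 0 (no update).
    for player, steps in history.items():
        reward = 0 if w is None else (1 if w == player else -1)
        for board, move, moves, probs in steps:
            for m, p in zip(moves, probs):
                grad = (1.0 if m == move else 0.0) - p
                logits[(board, m)] += lr * reward * grad

for _ in range(20000):  # "thousands of games," per the quote above
    history, w = play_game()
    reinforce(history, w)
```

Both sides share one policy table, so every game improves the same player, which is the core of the self-play trick; AlphaGo applies the same trial-and-error principle with deep networks, a vastly larger game, and a huge amount of computing power.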

Go, like chess and many variants of poker, is still far from being solved, but this was a huge step forward for artificial intelligence researchers.

Check out a video about the match released by Google below: