Google’s New AI Can Beat Human Champions at the Game of Go
Google appears to have won the long race to develop a Go-winning artificial intelligence, considered a major step towards more human-like AIs
Nearly 20 years ago, the IBM computer Deep Blue beat World Chess Champion Garry Kasparov at his own game. It was a pivotal moment in the history of artificial intelligence—the first time a computer had roundly defeated a human chess champion.
But to all those who saw this as a sign that the AI revolution was afoot, critics said “not so fast.” Chess was relatively simple to crack, they said. The true test of AI would be a computer that could beat a human champion at Go, the complex ancient Chinese strategy game thought to involve intuition and an understanding of aesthetics. And that day was unlikely to come any time soon.
“It may be a hundred years before a computer beats humans at Go—maybe even longer,” astrophysicist and Go fan Piet Hut told The New York Times in 1997. “If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don't have to be a Kasparov.”
If a computer defeated a Go champion, the Times opined, it would be “a sign that artificial intelligence is truly beginning to become as good as the real thing.”
Well, folks, that moment has arrived, a hundred years or so ahead of schedule. AlphaGo, a program developed by Google's DeepMind artificial intelligence team, has beaten European Go champion Fan Hui 5 to 0.
The findings were reported today in the journal Nature.
Go starts off simply, with a 19-by-19 grid and two colors of pieces (called stones): black for one player, white for the other. Players take turns placing their stones on empty intersections—the crossing points of two grid lines. Slowly, each player attempts to encircle the other player’s stones; once a group is fully surrounded, it is captured and removed from the board. Several encirclements can be in progress on the board at any given time, and it’s often difficult to tell who is about to capture whom.
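The capture rule is simple enough to sketch in code. The Python below is our own illustration, not anything from DeepMind, and not a full rules engine (it ignores ko, suicide and scoring): a group of stones is captured when it has no “liberties,” meaning no empty intersection adjacent to any of its stones.

```python
# A toy illustration (not a full rules engine) of capture in Go. The board
# is a dictionary mapping (row, column) to 'black' or 'white'; anything
# absent is an empty intersection. A group is captured when it has no
# liberties: no empty point adjacent to any stone in the group.

SIZE = 19

def neighbors(point):
    """The up-to-four intersections orthogonally adjacent to `point`."""
    r, c = point
    return [(r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def is_captured(board, point):
    """Flood-fill the group containing `point`; True if it has no liberties."""
    color = board[point]
    group, frontier = {point}, [point]
    while frontier:
        for n in neighbors(frontier.pop()):
            if n not in board:
                return False            # an empty neighbor is a liberty
            if board[n] == color and n not in group:
                group.add(n)
                frontier.append(n)
    return True                         # no liberties anywhere in the group

# A lone white stone encircled on all four sides by black is captured.
board = {(3, 3): 'white',
         (2, 3): 'black', (4, 3): 'black', (3, 2): 'black', (3, 4): 'black'}
print(is_captured(board, (3, 3)))       # True
```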
“The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves,” the paper’s authors write.
AlphaGo “learns” both from training on games played by human experts and from practice playing against itself. Since Go has far too many possible moves for a computer to simply brute-force its next decision—a major sticking point for past Go-playing AI efforts—AlphaGo instead uses two different “deep neural networks.” One, the “policy network,” gives the computer a handful of promising moves to consider, based on past games, so it doesn’t have to crunch through every possible move. The other, the “value network,” reduces the depth of the search: instead of searching all the way to the end of the game, hundreds of moves away, the program estimates who is winning just a handful of moves ahead.
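To make that division of labor concrete, here is a toy sketch in Python of how a policy network can prune the breadth of a game-tree search while a value network caps its depth. It is an illustration of the general idea, not DeepMind's code: policy_net, value_net, legal_moves and apply_move are hypothetical stand-ins (random numbers here) for trained networks and a real rules engine.

```python
# A toy game-tree search (not DeepMind's code) showing how a policy
# network narrows breadth and a value network caps depth. All four
# helper functions are hypothetical stand-ins using random numbers.

import random

def legal_moves(position):
    # Stand-in rules engine: pretend every position has 200 legal moves,
    # Go's average branching factor.
    return list(range(200))

def apply_move(position, move):
    # Stand-in: a "position" is just the list of moves played so far.
    return position + [move]

def policy_net(position, moves):
    # Stand-in for the policy network: score how promising each move looks.
    return {m: random.random() for m in moves}

def value_net(position):
    # Stand-in for the value network: estimate the chance of winning from
    # here, so the search can stop instead of playing out the whole game.
    return random.random()

def search(position, depth, breadth=5, max_depth=3):
    """Estimate the value of `position` for the player to move.

    The policy network prunes breadth (only `breadth` candidate moves are
    explored per turn); the value network prunes depth (positions at
    `max_depth` are evaluated directly rather than expanded further).
    """
    if depth == max_depth:
        return value_net(position)
    scores = policy_net(position, legal_moves(position))
    # Keep only the most promising moves instead of all ~200.
    candidates = sorted(scores, key=scores.get, reverse=True)[:breadth]
    # The opponent moves next, so negate each child's value and take the
    # best (the standard negamax convention).
    return max(-search(apply_move(position, m), depth + 1, breadth, max_depth)
               for m in candidates)

print(search(position=[], depth=0))
```

With a breadth of five and a depth cap of three, the search visits 125 positions instead of the eight million it would take to enumerate 200 moves three plies deep.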
This is a big deal: in addition to being a test of AI’s powers, creating a Go-playing program capable of beating human champions has been something of an arms race. For years, various programmers and companies have vied to create the Go version of Deep Blue. Some have gotten close. A French program called Crazy Stone beat five-time Japanese Go champion Yoshio Ishida in 2013, though Crazy Stone was given a handicap (AlphaGo was not) and Ishida hadn't been considered a top player in several decades. So far, AlphaGo has beaten other Go programs 99.8 percent of the time.
Just hours before Google officially released its news, Facebook, no doubt peeved at being beaten to the punch, dropped the announcement that its own AI was “getting close” to beating human Go champions.
So why is Go considered such a powerful test of AI? It would be too reductive to say simply that Go is harder than chess, though by most measures it is.
“The game reflects the skills of the players in balancing attack and defence, making stones work efficiently, remaining flexible in response to changing situations, timing, analysing accurately and recognising the strengths and weaknesses of the opponent,” explains the British Go Association on their website, accounting for Go’s complex appeal.
While chess has an average of 35 legal moves per turn, Go has an average of 200. And while there are some 10⁴³ possible configurations of a chess board, a Go board has at least 2.08 × 10¹⁷⁰—more configurations than there are atoms in the observable universe. Unlike chess, where the number of pieces on the board is a very good indicator of who is winning, it’s very hard to know who is ahead in Go.
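The arithmetic behind those figures is easy to verify. Taking the average branching factors above and plugging in typical game lengths (roughly 80 moves for chess and 150 for Go, figures we are assuming for illustration), a few lines of Python show how quickly the game trees diverge:

```python
# Rough game-tree sizes: (average legal moves per turn) ** (typical game
# length in moves). The branching factors come from the article; the game
# lengths (80 for chess, 150 for Go) are our assumption for illustration.
chess_tree = 35 ** 80
go_tree = 200 ** 150

print(f"chess: ~10^{len(str(chess_tree)) - 1} possible games")  # ~10^123
print(f"go:    ~10^{len(str(go_tree)) - 1} possible games")     # ~10^345
```

Even the chess number dwarfs anything a computer could enumerate; the Go number is why brute force was never an option.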
“There’s no good heuristic for determining whether a position is good or bad for a player,” explains British Go Association president Jon Diamond. “It’s partly analysis, and it’s partly pattern recognition. You assess the board in some complicated way we haven’t worked out how to replicate in computers.”
Diamond says he was quite surprised to hear of AlphaGo’s success. “I guess I wasn’t expecting this for between five and ten years, to be honest,” he says. “They’ve done a hell of a good job.”
The success of AlphaGo may mean we’re much closer than previously thought to having AIs that can perform at human levels in other areas. AlphaGo may be a “stepping stone” to other kinds of AIs, say its developers. An AI that can make the kinds of complex, intuitive-seeming decisions necessary to win Go might be able to, for example, diagnose a sick patient and prescribe an individualized course of treatment, according to the developers.
In March, AlphaGo will have its mettle tested again, when it goes head-to-head with South Korea’s Lee Sedol, considered the world’s top Go player.
“Regardless of the result, it will be a meaningful event in baduk (Go) history,” says Lee in a press release. “I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time.”