For the first time in history, a computer program is poised to beat one of the world's best human players at the 2,500-year-old game of Go — widely considered one of the most difficult games ever invented.
AlphaGo, a software program developed by the British AI company Google DeepMind, has defeated South Korean Go champion Lee Sedol in the first two of their five matches. It will play its third match at 10:30 p.m. EST Friday night, streamed live on YouTube. (By tradition, they will play all five matches regardless of the outcome.)
If AlphaGo beats Lee Sedol in this tournament, it will cement its place in the annals of AI history.
But how long will it be before machines can match human-level intelligence in the real world? We asked an expert on artificial intelligence, computer scientist Richard Sutton of the University of Alberta in Canada.
An 'unprecedented' feat
There's no question that AlphaGo's achievement — and the speed with which it improved — was "unprecedented," Sutton told Business Insider. When IBM's Deep Blue computer beat chess champion Garry Kasparov in 1997, it had been expected for at least a decade, he said. By contrast, AlphaGo went from playing at an amateur level to beating a world champion within a year.
And this victory is all the more impressive because Go has exponentially more possible moves than chess, making it a much harder problem for a machine to solve, even with today's computing power.
Go is played with black and white game pieces, or "stones," on a 19 x 19 grid board. Players take turns placing stones on the board, trying to surround territory and capture the opponent's pieces. The player who controls the larger share of the board wins; the game ends when neither player wishes to make another move.
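To make the "surrounding" rule concrete, here is a minimal sketch of how a program might check whether a group of stones has been captured. The representation (a dictionary mapping board coordinates to stone colors) and the function name are illustrative, not taken from any real Go engine: a group is captured when it has no empty adjacent points, called "liberties."

```python
from collections import deque

def group_and_liberties(board, start, size=19):
    """Flood-fill the group of same-colored stones containing `start`;
    return (set of stones in the group, number of liberties).
    `board` maps (row, col) -> "B" or "W"; absent points are empty."""
    color = board[start]
    group, liberties = {start}, set()
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            if (nr, nc) not in board:
                liberties.add((nr, nc))  # empty neighbor = a liberty
            elif board[(nr, nc)] == color and (nr, nc) not in group:
                group.add((nr, nc))
                frontier.append((nr, nc))
    return group, len(liberties)

# A lone white stone surrounded on all four sides has zero liberties:
board = {(3, 3): "W", (2, 3): "B", (4, 3): "B", (3, 2): "B", (3, 4): "B"}
print(group_and_liberties(board, (3, 3))[1])  # → 0: the stone is captured
```

The flood fill treats a connected group as a unit, which matches the rule that stones live or die together.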
According to Sutton, AlphaGo's success can largely be traced to a combination of the following two powerful technologies:
- Monte Carlo tree search: choosing candidate moves, playing each game out to the very end with random moves, and using the results of many such simulated games to estimate which move is most promising
- Deep reinforcement learning: multi-layered neural networks, loosely modeled on connections in the brain, comprising a "policy network" that selects the next move and a "value network" that predicts the winner of the game
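The random-playout idea behind the first technique can be sketched in a few lines. This toy example uses the simple game of Nim (players alternately take 1–3 stones; whoever takes the last stone wins) rather than Go, and flat Monte Carlo sampling rather than a full tree search with neural-network guidance, but the core principle is the same: estimate each move's strength by simulating many random games to the end.

```python
import random

def legal_moves(stones):
    """Available moves: take 1, 2, or 3 stones (never more than remain)."""
    return list(range(1, min(3, stones) + 1))

def rollout(stones, player):
    """Play random moves to the very end; return the winner (0 or 1)."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player  # this player took the last stone and wins
        player = 1 - player

def best_move(stones, player, simulations=5000):
    """Estimate each move's win rate via random playouts; pick the best."""
    scores = {}
    for move in legal_moves(stones):
        wins = 0
        for _ in range(simulations):
            remaining = stones - move
            if remaining == 0:
                wins += 1  # taking the last stone wins immediately
            elif rollout(remaining, 1 - player) == player:
                wins += 1
        scores[move] = wins / simulations
    return max(scores, key=scores.get)

# With 10 stones, taking 2 leaves the opponent on a losing multiple of 4;
# random playouts discover this without knowing any Nim theory.
print(best_move(10, player=0))
```

AlphaGo improves on this in two ways: the policy network focuses the simulations on plausible moves instead of purely random ones, and the value network can judge a position without always playing to the end.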
But when it comes to big-picture intelligence, Sutton said, AlphaGo is missing one key thing: the ability to learn how the world works — such as an understanding of the laws of physics and the consequences of one's actions.
The missing piece
An intelligent system can be defined as something that can set goals and try to achieve them. Many of today's powerful AI programs don't have goals and can only learn things with the aid of human supervision. By contrast, DeepMind's AlphaGo has a goal — winning at Go — and learns on its own by playing games against itself.
However, games like Go have a clear set of rules, so AlphaGo can follow those rules to achieve its goal. "But in real life, we don't have the rules of the game, and we don't know the consequences of our actions," Sutton said.
That said, Sutton doesn't think we're that far away from developing AIs that can function at a human level in the world.
"There's a 50% chance we figure out [human-level] intelligence by 2040 — and it could well happen by 2030," he said.
It's something we need to prepare for, he added, though he didn't specify how.
Other experts agree that AI is progressing much faster than we thought. AI expert Stuart Russell, a professor of computer science at UC Berkeley, told Business Insider in an email, "We're seeing dramatic progress on many fronts in AI at the moment and it seems to be accelerating."
But that's not a reason to panic about AI. "I don't think people should be scared," Sutton said, "but I do think people should be paying attention."
You can watch AlphaGo's third match against Lee Sedol live on YouTube.