Google has made massive strides in refining its DeepMind artificial intelligence in just the last year.
The most recent example came in late January, when DeepMind's program beat a professional human player for the first time at the complex board game Go.
But last Thursday Google showed yet another indicator of how far its AI has advanced: the ability to master computer games much as a human would.
Here's a breakdown of what the AI mastered and what it means for the future:
Google's AI first made waves in February 2015 when it learned to play and win games on the Atari 2600 without any prior instructions on how to play.
The computer outperformed human players in 29 Atari games, and beat every other known computer algorithm in 43.
AI researchers have told Tech Insider multiple times that this was the most impressive technology demonstration they've ever seen.
The AI was able to master the Atari games by combining reinforcement learning with a deep neural network.
In reinforcement learning, the AI is rewarded for taking actions that improve its score. The deep neural network lets it analyze the game screen and learn the patterns that lead to those rewards. Combining the two techniques allowed DeepMind to master the Atari games.
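To make the idea concrete, here is a minimal sketch of reinforcement learning on a toy "game": a five-cell corridor where moving right eventually scores a point. The corridor, the learning constants, and the table of scores are all illustrative inventions; DeepMind's actual system replaced the simple table below with a deep neural network that reads the raw game screen.

```python
import random

# Toy stand-in for an Atari game: a 5-cell corridor.
# The agent starts in the middle; reaching the right end
# scores +1, falling off the left end scores 0. Either ends the game.
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    if nxt == GOAL:
        return nxt, 1.0, True
    if nxt == 0:
        return nxt, 0.0, True
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q[state][action_index]: the AI's estimate of each move's payoff.
    # DeepMind swapped this table for a deep neural network.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = N_STATES // 2, False
        while not done:
            # Explore occasionally; otherwise play the best known move.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[state][0] > Q[state][1] else 1
            nxt, reward, done = step(state, ACTIONS[a])
            # The reinforcement-learning update: nudge the estimate
            # toward the reward plus the value of what comes next.
            target = reward + (0.0 if done else gamma * max(Q[nxt]))
            Q[state][a] += alpha * (target - Q[state][a])
            state = nxt
    return Q

Q = train()
# After training, every interior cell should prefer moving right.
policy = ["right" if q[1] > q[0] else "left" for q in Q[1:GOAL]]
print(policy)
```

The agent is never told the rules; it discovers that moving right pays off purely by being rewarded for it, which is the core of the technique.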
But that technique is difficult to scale to more complex problems, so the researchers devised a new approach.
The AI instead uses asynchronous reinforcement learning, in which multiple copies of the AI tackle the problem in parallel and share what they learn, so the best-performing methods win out.
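The parallel idea can be sketched with the same toy corridor game: several workers explore at the same time and all write their updates into one shared table. Everything here is a simplified illustration; DeepMind's actual system ran many actor-learners across CPU threads, each updating a shared deep neural network rather than a table.

```python
import random
import threading

# Toy sketch of asynchronous reinforcement learning: four workers
# explore the same 5-cell corridor in parallel and pool what they
# learn in one shared value table.
N_STATES, GOAL = 5, 4

shared_Q = [[0.0, 0.0] for _ in range(N_STATES)]

def worker(worker_id, episodes=200, alpha=0.3, gamma=0.9, epsilon=0.2):
    rng = random.Random(worker_id)  # each worker explores differently
    for _ in range(episodes):
        state, done = N_STATES // 2, False
        while not done:
            # Occasionally explore; otherwise take the best shared move.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: shared_Q[state][i])
            nxt = max(0, min(GOAL, state + (1 if a else -1)))
            reward = 1.0 if nxt == GOAL else 0.0
            done = nxt in (0, GOAL)
            target = reward + (0.0 if done else gamma * max(shared_Q[nxt]))
            # Lock-free update: workers race with each other, but
            # their updates all push the shared estimates the same way.
            shared_Q[state][a] += alpha * (target - shared_Q[state][a])
            state = nxt

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All workers contributed to a single learned policy: head right.
policy = ["right" if q[1] > q[0] else "left" for q in shared_Q[1:GOAL]]
print(policy)
```

Because each worker starts from a different random seed, the copies explore different parts of the game at once, which is what lets the asynchronous version learn faster than a single agent.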
Here we see that tactic being used in a driving computer game.
See the rest of the story at Business Insider