Full transcript below
Tim O'Reilly: There are people who express the worry that AI is going to become more intelligent than humans. I’m not really that worried about it. I actually have an alternate theory of artificial intelligence: that we’re already building AIs. Facebook is an AI, Google is an AI.
And the question really is what are the rules that we use to construct this organism?
Because already these AIs are potentially hostile to humanity.
All of our vast algorithmic systems, like Google, like Facebook, like our financial markets, also have this runaway objective function: there is a thing we ask them to do, and doing it doesn’t always have the consequences we expected.
Our algorithmic systems, whether they’re simply big data systems or true AI, all have this characteristic: we give them something to optimise, and that optimisation function can get out of control.
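The point can be made concrete with a toy sketch. The items, scores, and ranking function below are entirely hypothetical, not Facebook’s or Google’s actual systems: a feed ranker that optimises a single proxy metric, predicted engagement, with no term for quality or accuracy, will promote whatever inflates that proxy.

```python
# Toy illustration (hypothetical data): a feed ranker whose only objective
# is a predicted-engagement score.
items = [
    {"title": "Local charity update",        "predicted_engagement": 0.02},
    {"title": "Balanced policy analysis",    "predicted_engagement": 0.05},
    {"title": "Outrage-bait partisan rumor", "predicted_engagement": 0.31},
    {"title": "Spammy miracle-cure ad",      "predicted_engagement": 0.27},
]

def rank_feed(items):
    # The optimisation function: sort purely by the engagement proxy.
    # Nothing here penalises misinformation or spam, so anything that
    # inflates the proxy rises to the top -- the "runaway" behaviour.
    return sorted(items, key=lambda i: i["predicted_engagement"], reverse=True)

for item in rank_feed(items):
    print(f'{item["predicted_engagement"]:.2f}  {item["title"]}')
```

The objective is satisfied perfectly, yet the outcome is the opposite of what the designers wanted: the rumor and the spam rank first, the careful analysis last.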
Facebook’s creators thought their optimisation function of engagement would simply show people more of what they liked and more of what they shared. They didn’t expect it to lead to the amplification of partisan divides, or to be an invitation for spammers.
Our algorithmic systems are a little bit like the genies in Arabian mythology. We ask them to do something but if we don’t express the wish quite right, they misinterpret it and give us unexpected and often alarming results.