
Here's The One Problem We Need To Solve To Create Computers With Human-Like Intelligence


(Image: Watson on Jeopardy)

Computers have been making trades on Wall Street, diagnosing patients like doctors, and even composing music that moves and inspires. But no matter how convincing the movie "Her" was, they don't yet have human-like intelligence.

Why is that?

Well, there's one big problem we need to solve, according to David Deutsch, an Oxford physicist widely regarded as the father of quantum computing. The problem? We can't even define how human intelligence operates.

In Aeon, Deutsch argues that artificial general intelligence, or AGI — the creation of a mind that can truly think like a human mind, not merely perform some of the same tasks — "cannot possibly be defined purely behaviourally," meaning we won't be able to tell if AI is human-like just based on the computer's output.

The definition of artificial general intelligence depends on how thinking as we know it works on the inside, not on what comes out of it. AGI is a stricter standard than artificial intelligence generally, and is sometimes called "Strong AI," as opposed to "Weak AI," which refers to AI that can mimic some human capacities but doesn't attempt to capture the whole range of what our minds can do.
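To see why output alone can't settle the question, consider a toy sketch (our illustration, not Deutsch's; the function names and the canned test are invented): a bare lookup table and a stand-in for a genuine reasoner give identical answers on any fixed behavioral test, so the test can't tell them apart.

    # Toy sketch: two "minds" with identical observable behavior on a
    # fixed test, but nothing alike on the inside. Everything here is
    # invented for illustration.

    CANNED_ANSWERS = {
        "What is 2 + 2?": "4",
        "Is the moon made of cheese?": "No.",
    }

    def lookup_table_ai(question: str) -> str:
        # "Weak AI" at its weakest: no thinking, just memorized pairs.
        return CANNED_ANSWERS.get(question, "I don't know.")

    def hypothetical_agi(question: str) -> str:
        # Stand-in for a genuine reasoner. Nobody knows how to write the
        # real body of this function; on the fixed test it happens to
        # give the same answers, which is exactly the point.
        return CANNED_ANSWERS.get(question, "I don't know.")

    # On the fixed test, the two are behaviorally indistinguishable:
    for q in CANNED_ANSWERS:
        assert lookup_table_ai(q) == hypothetical_agi(q)
    print("Same outputs; any difference is internal, invisible to the test.")

The lookup table passes the test without a trace of thought, which is the point: behavior on any finite test underdetermines what is happening inside.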

He invokes a classic thought experiment about a brain in a vat, which invites us to consider a human brain kept alive and alert, but disconnected from outside stimuli. This has never been done and wouldn't work in reality, but it illustrates a point.

In Deutsch's version of the thought experiment, the brain has no sensory data coming in, and cannot express itself, but nevertheless, the brain itself continues "thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI," Deutsch writes. "So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs."

Even without a way of talking to anyone, we can imagine the brain is still doing what brains do: coming up with ideas and explanations for what's happening in its world. (In this case, trying to answer the question: "How did I get into this vat?")

(Image: Brain in a vat)

Because we can't define exactly how our minds work, we are stuck saying about AGI what Supreme Court Justice Potter Stewart said about obscenity: "I know it when I see it."

In other words, we can't define AGI simply by what it produces, whether that's billions in trading profits, life-saving medical diagnoses, or soaring musical compositions. Impressive as they are, these AI creations aren't enough to tell us that the intelligence behind them is human-like.

As Deutsch writes:

What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

In other words, before we can think seriously about creating anything that could be called an AGI, we need to know how our brains generate theories about how things work, even without complete information, and how to capture that process in a program. Simply put, we can't even agree on how our brains work, which is a pretty important thing to figure out before we can translate that process into a machine.

Deutsch elaborates:

[I]t is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI's thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place.

It's worth mentioning that the thinking process Deutsch describes closely resembles the thinking of the very scientists and engineers who are working to make AGI into a reality. But human minds don't need all of the information before coming up with a theory — which is good, because we rarely, if ever, have all the information about anything.
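To make Deutsch's distinction concrete, here is a toy Bayesian sketch (ours, not his; the candidate theories and the numbers are made up): scoring explanations against evidence is mechanical, but nothing in the calculation says where the candidates come from.

    # Toy sketch, invented for illustration: justifying theories is the
    # mechanical part; generating them is the unsolved part.

    # Hand-written candidate explanations for "the grass is wet".
    # A program that *conjectures* these is exactly what no one knows
    # how to write.
    priors = {
        "it rained overnight": 0.3,
        "the sprinkler ran": 0.5,
        "a pipe burst": 0.2,
    }
    likelihood_wet_grass = {
        "it rained overnight": 0.9,
        "the sprinkler ran": 0.8,
        "a pipe burst": 0.99,
    }

    # Mechanical Bayesian update: P(theory | wet grass).
    evidence = sum(priors[t] * likelihood_wet_grass[t] for t in priors)
    posteriors = {t: priors[t] * likelihood_wet_grass[t] / evidence
                  for t in priors}

    for theory, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"{theory}: {p:.2f}")
    # The update ranks the candidates it was handed; it cannot invent a
    # better fourth explanation, and that gap is Deutsch's point.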

Even more importantly, human minds don't rely solely, or even mostly, on justified, true information. A human who thinks the Earth is flat or that the moon is made of cheese isn't any less human for it, nor would we consider a computer that has the correct information in a database to be more intelligent than that human.

Based on how little we currently know about how our brains work, a theory of how we invent theories "is beyond present-day knowledge," Deutsch says.

Until we can explain how our brains actually come up with theories, how are we supposed to recreate that process in a computer? Without that understanding, we are no closer to creating real artificial intelligence than we were 50 years ago, when the first supercomputer was built. Computers will keep getting faster and better at all kinds of tasks, but until we solve the problem of what it means to think, they won't be truly intelligent the way a human being is.

SEE ALSO: The Most Advanced Artificial Intelligence In Existence Is Only As Smart As A Preschooler

READ MORE: The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years
