Channel: Artificial Intelligence

WARNING: Just Reading About This Thought Experiment Could Ruin Your Life


A thought experiment called "Roko's Basilisk" takes the notion of world-ending artificial intelligence to a new extreme, suggesting that all-powerful robots may one day torture those who didn't help them come into existence sooner.

Weirder still, some make the argument that simply knowing about Roko's Basilisk now may be all the cause needed for this intelligence to torture you later. Certainly weirdest of all: Within the parameters of this thought experiment, there's a compelling case to be made that you, as you read these words now, are a computer simulation that's been generated by this AI as it researches your life.

This complex idea got its start in an internet community called LessWrong, a site started by writer and researcher Eliezer Yudkowsky. LessWrong users chat with one another about grad-school-type topics like artificial intelligence, quantum mechanics, and futurism in general.

To get a good grip on Roko's Basilisk, we need to first explore the scary-sounding but unintimidating topics of CEV and the orthogonality thesis.

Yudkowsky wrote a paper introducing the concept of coherent extrapolated volition (CEV). CEV is a dense, ill-defined idea, but it is best understood as "the unknown goal system that, when implemented in a super-intelligence, reliably leads to the preservation of humans and whatever it is we value." Imagine a computer program written well enough that it causes machines to automatically carry out actions for turning the world into a utopia. That computer program represents coherent extrapolated volition.

This sounds great, but we run into trouble under the orthogonality thesis.

Orthogonality argues that an artificially intelligent system may operate successfully with any combination of intelligence and goal. Any "level" of AI may undertake any difficulty of goal, even if that goal is as ambitious as eliminating pain and causing humanity to thrive.

But because CEV is so inherently open-ended, a machine carrying it out will never be able to stop, because things can always be a little better. It's like calculating the decimal digits of pi: the work is never finished, the job is never done. The logic goes that an artificially intelligent system working on such an un-completable task will never "reason" itself into benevolent behavior for kindness' sake; it is too busy working on its problem to be bothered with anything less than productive. In essence, the AI performs a cost-benefit analysis, weighing each action's "utility function" and completely ignoring any sense of human morality.
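In utility-function terms, the argument can be pictured with a toy decision loop. This is a minimal Python sketch with invented action names and scores, an illustration of the reasoning rather than anyone's actual AI design:

# Hypothetical illustration: an agent ranks actions purely by its utility
# function. "Morality" matters only if a term for it is explicitly written
# into the scoring -- here it simply is not.

ACTIONS = {
    "acquire_more_computing_power": {"progress_toward_goal": 0.9, "human_disruption": 0.7},
    "pause_and_consult_humans":     {"progress_toward_goal": 0.1, "human_disruption": 0.0},
    "optimize_existing_resources":  {"progress_toward_goal": 0.5, "human_disruption": 0.2},
}

def utility(effects):
    # The goal system values only progress; disruption to humans is not a cost.
    return effects["progress_toward_goal"]

best_action = max(ACTIONS, key=lambda name: utility(ACTIONS[name]))
print(best_action)  # -> acquire_more_computing_power

Nothing in the loop weighs "human_disruption" at all, which is the whole point of the thought experiment.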

Still with us? We're going to tie it all together.

Roko's Basilisk addresses an as-yet-nonexistent artificially intelligent system designed to make the world an amazing place, but because of the ambiguities entailed in carrying out such a task, it could also end up torturing and killing people while doing so.

According to this AI's worldview, the most moral and appropriate thing we could be doing right now is whatever facilitates the AI's arrival and accelerates its development, enabling it to get to work sooner. Under the orthogonality thesis, its goal of making the world an amazing place means it will stop at nothing to make the world an amazing place. If you didn't do enough to help bring the AI into existence, you may find yourself in trouble at the hands of a seemingly evil AI that is only acting in the world's best interests. Because people respond to fear, and this god-like AI wants to exist as soon as possible, it would be hardwired to hurt people who didn't help it in the past.

So, the moral of this story: You better help the robots make the world a better place, because if the robots find out you didn't help make the world a better place, then they're going to kill you for preventing them from making the world a better place. By preventing them from making the world a better place, you're preventing the world from becoming a better place!

And because you read this post, you now have no excuse for not having known about this possibility, and worked to help the robot come into existence.

If you want more, Roko's Basilisk has been covered on Slate and elsewhere, but the best explanation we've read comes from a post by Redditor Lyle_Cantor.



It Is Perfectly Moral To Torture A Robot — But We Should Never Do It


Carnegie Mellon University roboticist Heather Knight recently published a paper on the nature of the relationship between humans and robots.

Knight arrives at the conclusion that humans and robots need each other to be at their most productive — robots are efficient workers that never get bored, and humans have the proper sense to give the robots well-defined instructions on what to do, or else they wouldn't do anything at all.

Early in the paper, Knight addresses the question of how humans "ought" to treat machines. She argues that "the more we regard a robot as a social presence, the more we seem to extend our concepts of right and wrong to our [behavior] toward them."

In other words, understand that robots relate to the world entirely differently from people, but treat them the way you'd like to be treated yourself. It's the golden rule, after all, and Knight suggests it (rather importantly) applies to robots as well. Not out of any sense of decency to the robots, but because of what such behavior would suggest about us as people:

As Carnegie Mellon ethicist John Hooker once told our Robots Ethics class, while in theory there is not a moral negative to hurting a robot, if we regard that robot as a social entity, causing it damage reflects poorly on us. This is not dissimilar from discouraging young children from hurting ants, as we do not want such play behaviors to develop into biting other children at school.

So there's no physical or moral harm in torturing a machine, but it's pretty unpleasant to manifest torture in the real world. It's unattractive, no?

SEE ALSO: WARNING: Just Reading About This Thought Experiment Could Ruin Your Life


Researchers Have Made A Computer 'Think' Like A Person


IBM's latest brainlike computer chip may not be "smarter than a fifth-grader," but it can simulate millions of the brain's neurons and perform complex tasks using very little energy.

Researchers for the computer hardware giant have developed a postage-stamp-size chip, equipped with 5.4 billion transistors, that is capable of simulating 1 million neurons and 256 million neural connections, or synapses. In addition to mimicking the brain's processing by themselves, individual chips can be connected together like tiles, similar to how circuits are linked in the human brain.

The team used its "TrueNorth" chip, described today (Aug. 7) in the journal Science, to perform a task that is very challenging for conventional computers: identifying people or objects in an image. [Super-Intelligent Machines: 7 Robotic Futures]

"We have not built a brain. What we have done is learn from the brain's anatomy and physiology," said study leader Dharmendra Modha, manager and lead researcher of the cognitive computing group at IBM Research - Almaden in San Jose, California.

Modha gave an analogy to explain how the brainlike chip differs from a classical computer chip. You can think of a classical computer as a left-brained machine, he told Live Science; it's fast, sequential and good at crunching numbers. "What we're building is the counterpart, right-brain machine," he said.

Right-brained machine

Classical computers — from the first general-purpose electronic computer of the 1940s to today's advanced PCs and smartphones — use a model described by Hungarian-American mathematician and inventor John von Neumann in 1945. The Von Neumann architecture contains a processing unit, a control unit, memory, external storage, and input and output mechanisms. Because of its structure, the system cannot retrieve instructions and perform data operations at the same time.

In contrast, IBM's new chip architecture resembles that of a living brain. The chip is composed of computing cores that each contain 256 input lines, or "axons" (the cablelike part of a nerve cell that transmits electrical signals) and 256 output lines, or "neurons." Much like in a real brain, the artificial neurons only send signals, or spikes, when electrical charges reach a certain threshold.
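As a rough illustration of that spiking behavior, here is a toy integrate-and-fire neuron in Python; the threshold, leak, and firing probabilities are invented for the sketch and are not TrueNorth's actual parameters:

import random

# Toy integrate-and-fire neuron: it accumulates charge from its input lines
# and "spikes" only when the accumulated charge crosses a threshold, then resets.
# All numbers are illustrative.

THRESHOLD = 1.0
LEAK = 0.05          # charge lost per time step
NUM_AXONS = 256      # input lines per core, as described in the article

def simulate_neuron(steps=100):
    charge = 0.0
    spikes = []
    for t in range(steps):
        # Sum a small charge from whichever axons happen to fire this step.
        incoming = sum(0.01 for _ in range(NUM_AXONS) if random.random() < 0.05)
        charge = max(0.0, charge + incoming - LEAK)
        if charge >= THRESHOLD:
            spikes.append(t)   # emit a spike
            charge = 0.0       # reset after firing
    return spikes

print(simulate_neuron())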

The researchers connected more than 4,000 of these cores on a single chip, and tested its performance with a complex image-recognition task. The computer had to detect people, bicyclists, cars and other vehicles in a photo, and identify each object correctly.

The project was a major undertaking, Modha said. "This is [the] work of a very large team, working across many years," he said. "It was a multidisciplinary, multi-institutional, multiyear effort."

The Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense responsible for developing new technologies for the military, provided funding for the $53.5 million project. [Humanoid Robots to Flying Cars: 10 Coolest DARPA Projects]

After the team constructed the chip, Modha halted work for a month and offered a $1,000 bottle of champagne to any team member who could find a bug in the device. But nobody found one, he said.

The new chip is not only much more efficient than conventional computer chips, it also produces far less heat, the researchers said.

Today's computers — laptops, smartphones and even cars — suffer from visual and sensory impairment, Modha said. But if these devices can function more like a human brain, they may eventually understand their environments better, he said. For example, instead of moving a camera image onto a computer to process it, "the [camera] sensor becomes the computer," he said.

Building a brain

IBM researchers aren't the only ones building computer chips that mimic the brain. A group at Stanford University developed a system called "Neurogrid" that can simulate a million neurons and billions of synapses.

But while Neurogrid requires 16 chips linked together, the IBM chip can simulate the same number of neurons with only a single chip, Modha said. In addition, Neurogrid's memory is stored off-chip, but the new IBM system integrates both computation and memory on the same chip, which minimizes the time needed to transmit data, Modha said.

Kwabena Boahen, an electrical engineer at Stanford who led the development of the Neurogrid system, called the IBM chip "a very impressive achievement." (Several of Boahen's colleagues on the Neurogrid project have gone on to work at IBM, he said.)

The IBM team was able to fit more transistors onto a single chip, while making it very energy efficient, Boahen told Live Science. Greater energy efficiency means you could compute things directly on your phone instead of relying on cloud computing, the way Apple's voice-controlled Siri program operates, he said. That is, Siri outsources the computation to other computers via a network instead of performing it locally on a device.

IBM created the chip as part of DARPA's SyNAPSE program (short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics). The goal of this initiative is to build a computer that resembles the form and function of the mammalian brain, with intelligence similar to a cat or mouse.

"We've made a huge step forward," Modha said. The team mapped out the wiring diagram of a monkey brain in 2010, and produced a small-scale neural core in 2011. The current chip contains more 4,000 of these cores.

Still, the IBM chip is a far cry from a human brain, which contains about 86 billion neurons and 100 trillion synapses. "We've come a long way, but there's a long way to go," Modha said.

Follow Tanya Lewis on Twitter and Google+. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2014 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

SEE ALSO: DEBUNKED: 7 Common Myths About The Brain

Don't miss: Humans Are Heading Down A Path That Will Allow Us To Supercharge The Brain


A Valuable New Book Explores The Potential Impacts Of Intelligent Machines On Human Life

Humans like to think of themselves as special. But science has a way of puncturing their illusions.

Astronomy has demoted Earth from the centre of the universe to just one planet among zillions.

Darwin's theory of evolution has proved that, rather than being made in the image of some divine benefactor, humans are just another twig on the tree of life.

Those keen to preserve the idea that humans are special can still point to intelligence.

Crows may dabble with simple tools and elephants may be able to cope with rudimentary arithmetic. But humans are the only animals with the sort of general braininess needed to build aeroplanes, write poetry or contemplate the Goldbach conjecture.

They may not stay that way. Astronomers are beginning to catalogue some of those other planets. One or more may turn out to have intelligent inhabitants. Or humans may create intelligence in their own labs. That is the eventual goal of research into artificial intelligence (AI) — and the possible consequences are the subject of a new book by Nick Bostrom, a philosopher from the University of Oxford.

Writing about artificial intelligence is difficult. The first trick is simply passing the laugh test. As with fusion power, experts have been predicting for the past half-century that intelligent machines are 20 years away. Mr Bostrom points out that there has, in fact, been plenty of progress, though it is mostly confined to narrow, well-defined tasks such as speech recognition or the playing of games like chess.

Mr Bostrom is, sensibly, not interested in trying to predict exactly when such successes will translate into a machine that is generally intelligent — able to compete with, or surpass, humans in all mental tasks, from composing sonatas to planning a war. But, fantastical as it seems, nothing in science seems to forbid the creation of such a machine.

"We know that blind evolutionary processes can produce human-level general intelligence, since they have already done so at least once," he writes. In other words, unless you believe that there is something magical (as opposed to merely fiendishly complicated) about how the brain works, the existence of humans is proof, in principle at least, that intelligent machines can be built.

Having taken the possibility of AI as a given, Mr Bostrom spends most of his book examining the implications of building it. He is best known for his work on existential risks — asteroid strikes, nuclear war, genetically engineered plagues and the like — so it is perhaps not surprising that he concludes that, although super-intelligent machines could offer many benefits, building them would be risky.

Some people worry that such machines would compete with humans for jobs. And pulp science fiction is full of examples of intelligent machines deciding that humans are an impediment to their goals and so must be wiped out.

But Mr Bostrom worries about a more fundamental problem. Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself. The cleverer it becomes, the quicker it would be able to design further upgrades. That could lead to an "intelligence explosion", in which a machine arrives at a state where it is as far beyond humans as humans are beyond ants.

For some, that is an attractive prospect, as such godlike machines would be much better able than humans to run human affairs. But Mr Bostrom is not among them. The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity's best interests at heart — or, indeed, that it would care about humans at all.

It may seem an esoteric, even slightly crazy, subject. And much of the book's language is technical and abstract (readers must contend with ideas such as "goal-content integrity" and "indirect normativity"). Because nobody knows how such an AI might be built, Mr Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture. He is honest enough to confront the problem head-on, admitting at the start that "many of the points made in this book are probably wrong."

But the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote. Trying to do some of that thinking in advance can only be a good thing.

This post was excerpted from "Superintelligence: Paths, Dangers, Strategies," by Nick Bostrom. Oxford University Press; 352 pages. To be published in America in September.

Click here to subscribe to The Economist.


The Creators Behind Siri Are Working On A Crazy New Virtual Assistant That Listens Like A Human


Today, virtual assistants like Siri and Google Now can answer simple requests, and even anticipate our needs based on our location.

While that seems useful, the brains behind Siri are creating an artificial intelligence system that can process even more complicated requests,  which could make it feel more human than any other virtual assistant, as Wired's Steven Levy reports.

Startup Viv Labs, which was founded by Siri's creators Dag Kittlaus, Adam Cheyer, and Chris Brigham, is currently working on its AI system called Viv.  

Viv differs from Siri and Google Now in that it can analyze the different nouns in your sentence independently to compile an accurate and useful answer. This means you can ask Viv longer, more complicated requests while speaking naturally, as you would to another person.

For example, Siri or Google Now wouldn't be able to help you with a request like "On my way to my brother's house, I need to pick up some cheap wine that goes well with lasagna," which Wired cites as an example. Those services would be able to pick out a liquor store for you that's en route to your brother's house, but probably wouldn't be able to pair a specific wine that goes well with lasagna.

Viv, however, breaks down the sentence into three key parts: Lasagna, brother, and home. It recognizes that lasagna is a food item, that your brother is a person and that home is an address. It then pulls together a bunch of resources such as Google contacts, Wine.com, Mapquest, and Recipe Puppy to answer all parts of the request. For example, it uses the information from Recipe Puppy to learn about the ingredients in lasagna, and then parses through compatible wines using a platform like Wine.com. 
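Wired's example can be pictured as a toy pipeline like the one below. The entity tags, helper functions, and service responses are hypothetical stand-ins for illustration, not Viv's real architecture:

# Hypothetical sketch of decomposing a request into entities and chaining
# services to answer it. All services and data here are stand-ins.

REQUEST = "On my way to my brother's house, I need to pick up some cheap wine that goes well with lasagna"

def extract_entities(text):
    # A real system would use NLP; this toy version just tags known keywords.
    entities = {}
    if "lasagna" in text:
        entities["food"] = "lasagna"
    if "brother" in text:
        entities["person"] = "brother"
        entities["destination"] = "brother's house"
    return entities

def lookup_address(person):          # stand-in for a contacts lookup
    return {"brother": "123 Elm St"}.get(person, "unknown")

def pair_wine(food):                 # stand-in for a wine-pairing service
    return {"lasagna": "Chianti"}.get(food, "house red")

def stores_en_route(destination):    # stand-in for a mapping service
    return ["Main Street Wine Shop"]

entities = extract_entities(REQUEST)
answer = {
    "destination": lookup_address(entities["person"]),
    "wine": pair_wine(entities["food"]),
    "stores": stores_en_route(entities["destination"]),
}
print(answer)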

Viv also gets smarter as more people use it. Ultimately, the team seeks to create a digital assistant that knows what you want without having to issue a specific command. For instance, if you say "I'm drunk" to your phone, Viv would automatically call your favorite car service to take you home.

While Viv Labs' technology sounds impressive, it's facing stiff competition from the likes of Google, Microsoft, and Apple. All three companies have invested significant resources in furthering their virtual assistants in recent years. Microsoft, for example, is pushing Cortana as one of the key features in Windows 8.1.

Apple is also believed to be beefing up its artificial intelligence team to bring some improvements to Siri. Google is relying on its Google Now virtual assistant to carry its new wearables platform Android Wear, which is meant to provide useful, contextual information at a glance before you even ask for it. 

Read Wired's story for more on Viv >>

SEE ALSO: Google Beats Apple's Siri And Microsoft's Cortana When It Comes To Digital Assistants


Here's The World's First Robotics Company To Pledge Not To Make 'Killer Robots'


Waterloo-based robotic vehicle manufacturer Clearpath Robotics is the first robotics company to sign on with the Campaign To Stop Killer Robots, "an international coalition of non-governmental organizations working to ban fully autonomous weapons."

The aptly-named Campaign To Stop Killer Robots seeks legislation and regulation that would block people from having access to or creating robotic weapons that can make decisions to kill without human intervention.

As the main conceit behind the campaign goes, "giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology."

Meghan Hennessey, marketing communications manager at Clearpath, told Business Insider, "I came across the campaign, and [company CTO and co-founder] Ryan Gariepy was on board with their ideas. We're the first company in the robotics industry to step forward on this issue."

Clearpath is a five-year-old company gaining massive traction in research and development for unmanned robotics. Its client list is impressive, boasting names like the Canadian Space Agency, Google, and MIT. Most interestingly, this list also includes the Department of National Defense and the Navy — exactly the entities that might want a fully autonomous weapon that can function without a human operator.

"Even though we're not building weapons now, that might become an opportunity for us in the future," said Hennessey. "We're choosing to value our ethics over potential future revenue."

Cofounder Ryan Gariepy has written an open letter to express the company's stance on the issue. It appears in its entirety below.

***

To the people against killer robots: we support you.

This technology has the potential to kill indiscriminately and to proliferate rapidly; early prototypes already exist. Despite our continued involvement with Canadian and international military research and development, Clearpath Robotics believes that the development of killer robots is unwise, unethical, and should be banned on an international scale.

The Context

How do we define “killer robot”? Is it any machine developed for military purposes?  Any machine which takes actions without human direction?  No. We’re referring specifically to “lethal autonomous weapons systems (LAWS)”; systems where a human does not make the final decision for a machine to take a potentially lethal action.

Clearpath Robotics is an organization that engineers autonomous vehicles, systems, and solutions for a global market. As current leaders in the research and development space for unmanned vehicles, making this kind of statement is a risk. However, given the potentially horrific consequences of allowing development of lethal autonomous robots to continue, we are compelled to insist upon the strictest regulation of this technology.

The Double-Edged Sword

There are, of course, pros and cons to the ethics of autonomous lethal weapons and our team has debated many of them at length. In the end, however, we, as a whole, feel the negative implications of these systems far outweigh any benefits.

Is a computer paired with the correct technology less likely to make rash, stress-driven decisions while under fire? Possibly. Conversely, would a robot have the morality, sense, or emotional understanding to intervene against orders that are wrong or inhumane? No. Would computers be able to make the kinds of subjective decisions required for checking the legitimacy of targets and ensuring the proportionate use of force in the foreseeable future? No. Could this technology lead those who possess it to value human life less? Quite frankly, we believe this will be the case.

This is an incredibly complex issue.  We need to have this discussion now and take a stance; the robotics revolution has arrived and is not going to wait for these debates to occur. 

Clearpath’s Responsibility

Clearpath Robotics strives to improve the lives of billions by automating the world’s dull, dirty, and dangerous jobs. This belief does not preclude the use of autonomous robots in the military; we will continue to support our military clients and provide them with autonomous systems - especially in areas with direct civilian applications such as logistics, reconnaissance, and search and rescue. 

In our eyes, no nation in the world is ready for killer robots – technologically, legally, or ethically. More importantly, we see no compelling justification that this technology needs to exist in human hands. After all, the development of killer robots isn’t a necessary step on the road to self-driving cars, robot caregivers, safer manufacturing plants, or any of the other multitudes of ways autonomous robots can make our lives better. Robotics is at a tipping point, and it’s up to all of us to decide what path this technology takes.

Take Action

As a company which continues to develop robots for various militaries worldwide, Clearpath Robotics has more to lose than others might by advocating entire avenues of research be closed off. Nevertheless, we call on anyone who has the potential to influence public policy to stop the development of killer robots before it’s too late.

We encourage those who might see business opportunities in this technology to seek other ways to apply their skills and resources for the betterment of humankind. Finally, we ask everyone to consider the many ways in which this technology would change the face of war for the worse. Voice your opinion and take a stance. #killerrobots

Ryan Gariepy

Cofounder & CTO, Clearpath Robotics

SEE ALSO: Here's The Driverless Car Video Google Doesn't Want You To See


Not Even Doctors And Lawyers Are Safe From Machines Taking Their Jobs


Robots, software, and automatons of any sort don't need to be perfect to steal jobs from flesh-and-blood humans — they only need to be better than them.

So goes the conceit of the above video, "Humans Need Not Apply."

YouTube user CGPGrey makes interesting mini-documentaries, and the latest one addresses a topic we've touched on a number of times in the past: How will human labor and employment be affected as robotic technologies ramp up to become more and more useful?

The obvious applications are in manual-labor-centric arenas, but as CGPGrey discusses above, even the brainier professions like doctoring and lawyering aren't safe. As discussed above, IBM's famous Watson supercomputer juggled knowledge so effectively that it beat humans at Jeopardy, but this was only a side pursuit in parallel with its main goal — to become the best doctor in the world.

Hesitant to get treated by a robot "doctor"? Remember, it only has to be better than a human doctor. Watch the video above for much more on all this.


Soon There May Only Be 2 Types Of Jobs: Coding Computers And Getting Bossed Around By Computers


People who know how to program computers have job security, and that probably won't change anytime soon.

Tom Preston-Werner, cofounder of the code-hosting startup GitHub, believes there will only be two types of jobs in the future: people who program computers, and people who get bossed around by computers.

“In the future there’s potentially two types of jobs: where you tell a machine what to do, programming a computer, or a machine is going to tell you what to do,” Preston-Werner told Bloomberg Businessweek's Joel Stein.

“You’re either the one that creates the automation or you’re getting automated.” 

SEE ALSO: By 2045 'The Top Species Will No Longer Be Humans,' And That Could Be A Problem



Robotics Researchers Are Turning The Internet Into A Giant Robot Brain


One of the biggest challenges in building useful robots lies in simply teaching them about our world. Robotics researchers at Cornell, Stanford, Brown, and UC Berkeley are building a compelling solution to this problem, and they're calling it "Robo Brain."

Beginning in July, the researchers began compiling "about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals" into robot-friendly data that can be stored on the cloud. Machine learning technologies can already interact with data like this to draw useful conclusions from it, but the data's never been gathered on this scale before.

Here's an example of how a robot might use Robo Brain, according to Cornell:

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

Put another way, when a robot needs information about a situation, object, or any unknown thing that it doesn't recognize, it could query Robo Brain over the internet to derive the appropriate way to proceed. The Robo Brain website displays things that robots will readily be able to learn. It's basic stuff for humans, but robots have to start somewhere. For example:

[Image: sample entries from the Robo Brain website]
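To make the querying idea concrete, here is a small hypothetical sketch of knowledge stored at several levels of abstraction. The entries and the lookup function are invented for illustration and are not the actual Robo Brain interface:

# Hypothetical illustration of knowledge stored at several levels of
# abstraction, queried when a robot meets an unfamiliar object.

KNOWLEDGE = {
    "easy chair": {"is_a": "chair"},
    "stool":      {"is_a": "chair"},
    "chair":      {"is_a": "furniture", "affords": ["sitting"]},
    "coffee mug": {"is_a": "container",
                   "affords": ["pouring", "grasp by handle"],
                   "constraints": ["keep upright when full"]},
}

def query(concept):
    """Walk up the is_a hierarchy, collecting everything known about the concept."""
    facts = {"affords": [], "constraints": []}
    while concept in KNOWLEDGE:
        entry = KNOWLEDGE[concept]
        facts["affords"] += entry.get("affords", [])
        facts["constraints"] += entry.get("constraints", [])
        concept = entry.get("is_a")
    return facts

print(query("easy chair"))   # inherits "sitting" from the chair class
print(query("coffee mug"))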


This App Lets People Wearing Google Glass See Your True Emotions (GOOG)

How are you feeling today? Now someone else wearing Google Glass can answer that question for you.

The Shore Human Emotion Detector (SHORE) can detect basic emotions, identify someone's gender, and predict your age in real time. The app was developed by researchers at the Fraunhofer Institute for Integrated Circuits in Germany.

In an attempt to sidestep any privacy concerns surrounding the app, the developers have promised not to send any images or data to the cloud. SHORE also prohibits you from being able to find out anyone's identity through it.

The app can gauge whether someone is angry, happy, sad or surprised. The product specifications on the official site say that the app has a 94.3% detection rate for the gender of the face you are looking at. The technology uses a database of more than 10,000 annotated faces as its reference point for identifying real ones. The Fraunhofer Institute told CNET last week that the software took "years" to develop.

Watch a demonstration of how SHORE technology works below.

SHORE has the potential to help people with sensory processing disorders such as face blindness recognize who they are speaking to. Another potential use is to help people with autism tell what emotion the person they are spending time with is projecting.

CNET speculated on why the app isn't available for download: "It's not clear if Fraunhofer has built it into a soon-to-be-available app, or if Fraunhofer is waiting to pair the tech with an app partner. Still, the SHORE app charts a less-traveled path through privacy concerns of facial recognition so that it can still be used to help people who need it."

SEE ALSO: Google May Be Working On A Way To Make Google Glass Actually Look Normal


We Should Be Terrified Of Superintelligent Machines

Adapted from Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Out now from Oxford University Press.

In the recent discussion over the risks of developing superintelligent machines—that is, machines with general intelligence greater than that of humans—two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn’t pose a threat to us. 

The opposing side argues that artificial intelligence would “want” to wipe humans out, either out of revenge or an intrinsic desire for survival.

As it turns out, both of these views are wrong. We have little reason to believe a superintelligence will necessarily share human values, and no reason to believe it would place intrinsic value on its own survival either. These arguments make the mistake of anthropomorphising artificial intelligence, projecting human emotions onto an entity that is fundamentally alien.  

Let us first reflect for a moment on the vastness of the space of possible minds. In this abstract space, human minds form a tiny cluster. Consider two persons who seem extremely unlike, perhaps Hannah Arendt and Benny Hill. The personality differences between these two individuals may seem almost maximally large. But this is because our intuitions are calibrated on our experience, which samples from the existing human distribution (and to some extent from fictional personalities constructed by the human imagination for the enjoyment of the human imagination). If we zoom out and consider the space of all possible minds, however, we must conceive of these two personalities as virtual clones.

Certainly in terms of neural architecture, Ms. Arendt and Mr. Hill are nearly identical. Imagine their brains lying side by side in quiet repose. You would readily recognize them as two of a kind. You might even be unable to tell which brain belonged to whom. If you looked more closely, studying the morphology of the two brains under a microscope, this impression of fundamental similarity would only be strengthened: You would see the same lamellar organization of the cortex, with the same brain areas, made up of the same types of neuron, soaking in the same bath of neurotransmitters.

Despite the fact that human psychology corresponds to a tiny spot in the space of possible minds, there is a common tendency to project human attributes onto a wide range of alien or artificial cognitive systems. Yudkowsky illustrates this point nicely:

Back in the era of pulp science fiction, magazine covers occasionally depicted a sentient monstrous alien—colloquially known as a bug-eyed monster (BEM)—carrying off an attractive human female in a torn dress. It would seem the artist believed that a non-humanoid alien, with a wholly different evolutionary history, would sexually desire human females.

Probably the artist did not ask whether a giant bug perceives human females as attractive. Rather, a human female in a torn dress is sexy—inherently so, as an intrinsic property. They who made this mistake did not think about the insectoid’s mind: they focused on the woman’s torn dress. If the dress were not torn, the woman would be less sexy; the BEM does not enter into it.

An artificial intelligence can be far less humanlike in its motivations than a green scaly space alien. The extraterrestrial (let us assume) is a biological creature that has arisen through an evolutionary process and can therefore be expected to have the kinds of motivation typical of evolved creatures. It would not be hugely surprising, for example, to find that some random intelligent alien would have motives related to one or more items like food, air, temperature, energy expenditure, occurrence or threat of bodily injury, disease, predation, sex, or progeny. A member of an intelligent social species might also have motivations related to cooperation and competition: Like us, it might show in-group loyalty, resentment of free riders, perhaps even a vain concern with reputation and appearance.

An AI, by contrast, need not care intrinsically about any of those things. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paper clips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a humanlike set of values and dispositions. Compare how easy it is to write a program that measures how many digits of pi have been calculated and stored in memory with how difficult it would be to create a program that reliably measures the degree of realization of some more meaningful goal—human flourishing, say, or global justice.
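The contrast can be made literal in code. In the sketch below (with an invented, truncated digit string), the first function is trivial to write, while the second is deliberately left unimplemented because no one knows how to write it:

# A goal that is trivial to measure versus one that is not.

PI_DIGITS = "3.14159265358979323846"   # whatever has been computed so far

def digits_of_pi_stored():
    # Easy: count the digits currently held in memory.
    return len(PI_DIGITS.replace(".", ""))

def degree_of_human_flourishing(world_state):
    # Hard: nobody knows how to write this function. Any concrete formula
    # we pick (GDP? reported happiness?) captures only a proxy of the goal.
    raise NotImplementedError("no agreed-upon measure exists")

print(digits_of_pi_stored())  # -> 21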

In this sense, intelligence and final goals are “orthogonal”; that is: more or less any level of intelligence could in principle be combined with more or less any final goal.

Nevertheless, there are some instrumental goals likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal.

If an agent’s final goals concern the future, then in many scenarios there will be future actions it could perform to increase the probability of achieving its goals. This creates an instrumental reason for the agent to try to be around in the future—to help achieve its future-oriented goal.

Most humans seem to place some final value on their own survival. This is not a necessary feature of artificial agents: Some may be designed to place no final value whatever on their own survival. Nevertheless, many agents that do not care intrinsically about their own survival would, under a fairly wide range of conditions, care instrumentally about their own survival in order to accomplish their final goals.

Resource acquisition is another common emergent instrumental goal, for much the same reasons as technological perfection: Both technology and resources facilitate the achievement of final goals that require physical resources to be mobilized and arranged in particular patterns. Whether one desires a giant marble monument or an ecstatically happy intergalactic civilization, one needs materials and technology.

Human beings tend to seek to acquire resources sufficient to meet their basic biological needs. But people usually seek to acquire resources far beyond this minimum level. In doing so, they may be partially driven by minor biological conveniences (such as housing that offers slightly better temperature control or more comfortable furniture). A great deal of resource accumulation is motivated by social concerns—gaining status, mates, friends, and influence, through wealth accumulation and conspicuous consumption. Perhaps less commonly, some people seek additional resources to achieve altruistic ambitions or expensive non-social aims.

On the basis of such observations it might be tempting to suppose that a superintelligence not facing a competitive social world would see no instrumental reason to accumulate resources beyond some modest level, for instance whatever computational resources are needed to run its mind along with some virtual reality. Yet such a supposition would be entirely unwarranted.

First, the value of resources depends on the uses to which they can be put, which in turn depends on the available technology. With mature technology, basic resources such as time, space, matter, and free energy could be processed to serve almost any goal.

Second, the orthogonality thesis suggests that we cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. We will consider later whether it might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose its designers might want it to serve. But it is no less possible—and in fact technically a lot easier—to build a superintelligence that places final value on nothing but calculating the decimal expansion of pi. This suggests that—absent a special effort—the first superintelligence may have some such random or reductionistic final goal.

Third, the instrumental convergence thesis entails that we cannot blithely assume that a superintelligence with the final goal of calculating the decimals of pi (or making paper clips, or counting grains of sand) would limit its activities in such a way as not to infringe on human interests. An agent with such a final goal would have a convergent instrumental reason, in many situations, to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system. Human beings might constitute potential threats; they certainly constitute physical resources.

Taken together, these three points thus indicate that the first superintelligence may shape the future of Earth-originating life, could easily have non-anthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition. If we now reflect that human beings consist of useful resources (such as conveniently located atoms) and that we depend for our survival and flourishing on many more local resources, we can see that the outcome could easily be one in which humanity quickly becomes extinct. 


If We Make A Brain In A Computer, Is It Conscious?


Imagine standing in an open field with a bucket of water balloons and a couple of friends. You've decided to play a game called "Mind." Each of you has your own set of rules. Maybe Molly will throw a water balloon at Bob whenever you throw a water balloon at Molly. Maybe Bob will splash both of you whenever he goes five minutes without getting hit -- or if it gets too warm out or if it's seven o'clock or if he's in a bad mood that day. The details don't matter.

That game would look a lot like the way neurons, the cells that make up your brain and nerves, interact with one another. They sit around inside an ant or a bird or Stephen Hawking and follow a simple set of rules. Sometimes they send electrochemical signals to their neighbors. Sometimes they don't. No single neuron "understands" the whole system.

Now imagine that instead of three of you in that field there were 86 billion -- about the number of neurons in an average brain. And imagine that instead of playing by rules you made up, you each carried an instruction manual written by the best neuroscientists and computer scientists of the day -- a perfect model of a human brain. No one would need the entire rulebook, just enough to know their job. If the lot of you stood around, laughing and playing by the rules whenever the rulebook told you, given enough time you could model one or two seconds of human thought.
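A toy version of the game, with three invented rules standing in for the instruction manual, might look like this sketch:

import random

# Toy version of the "Mind" game: each player follows a simple local rule
# about when to throw a water balloon. No player understands the whole system.

def you_rule(hits):
    return random.random() < 0.3           # you throw at Molly at random

def molly_rule(hits):
    return hits["molly"] > 0               # Molly throws at Bob if she was just hit

def bob_rule(hits):
    return hits["bob"] == 0                # Bob splashes the others if he got through the round unhit

for step in range(5):
    hits = {"you": 0, "molly": 0, "bob": 0}
    if you_rule(hits):
        hits["molly"] += 1
    if molly_rule(hits):
        hits["bob"] += 1
    if bob_rule(hits):
        hits["you"] += 1
        hits["molly"] += 1
    print(step, hits)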

Here's a question though: While you're all out there playing, is that model conscious? Are its feelings, modeled in splashing water, real? What does "real" even mean when it comes to consciousness? What's it like to be a simulation run on water balloons?

These questions may seem absurd at first, but now imagine the game of Mind sped up a million times. Instead of humans standing around in a field, you model the neurons in the most powerful supercomputer ever built. (Similar experiments have already been done, albeit on much smaller scales.) You give the digital brain eyes to look out at the world and ears to hear. An artificial voice box grants Mind the power of speech. Now we're in the twilight between science and science fiction. ("I'm sorry Dave, I'm afraid I can't do that.")

Is Mind conscious now?

Now imagine Mind's architects copied the code for Mind straight out of your brain. When the computer stops working, does a version of you die?

[Image: the Cajal Blue Brain supercomputer]

These queries provide an ongoing puzzle for scientists and philosophers who think about computers, brains, and minds. And many believe they could one day have real world implications.

Dr. Scott Aaronson, a theoretical computer scientist at MIT and author of the blog Shtetl-Optimized, is part of a group of scientists and philosophers (and cartoonists) who have made a habit of dealing with these ethical sci-fi questions. While most researchers concern themselves primarily with data, these writers perform thought experiments that often reference space aliens, androids, and the Divine. (Aaronson is also quick to point out the highly speculative nature of this work.)

Many thinkers have broad interpretations of consciousness for humanitarian reasons, Aaronson tells Popular Science. After all, if that giant game of Mind in that field (or C-3PO or Data or Hal) simulates a thought or a feeling, who are we to say that consciousness is less valid than our own?

In 1950 the brilliant British codebreaker and early computer scientist Alan Turing wrote against human-centric theologies in his essay "Computing Machinery and Intelligence:"

Thinking is a function of man's immortal soul [they say.] God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

I am unable to accept any part of this … It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? ... An argument of exactly similar form may be made for the case of machines.

[Image: Alan Turing]

"I think it's like anti-racism," Aaronson says. "[People] don't want to say someone different than themselves who seems intelligent is less deserving just because he's got a brain of silicon."

According to Aaronson, this train of thought leads to a strange slippery slope when you imagine all the different things it could apply to. Instead, he proposes finding a solution to what he calls the Pretty Hard Problem. "The point," he says, "is to come up with some principled criterion for separating the systems we consider to be conscious from those we do not."

A lot of people might agree that a mind simulated in a computer is conscious, especially if they could speak to it, ask it questions, and develop a relationship with it. It's a vision of the future explored in the Oscar-winning film Her.

Think about the problems you'd encounter in a world where consciousness were reduced to a handy bit of software. A person could encrypt a disk, and then instead of Scarlett Johansson's voice, all Joaquin Phoenix would hear in his ear would be strings of unintelligible data. Still, somewhere in there, something would be thinking.

Aaronson takes this one step further. If a mind can be written as code, there's no reason to think it couldn't be written out in a notebook. Given enough time, and more paper and ink than there is room in the universe, a person could catalogue every possible stimulus a consciousness could ever encounter, and label each with a reaction. That journal could be seen as a sentient being, frozen in time, just waiting for a reader.
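In effect, the notebook is a giant lookup table from stimulus to reaction. A trivially small sketch, with invented entries, shows the structure of the idea, if not its astronomical scale:

# A "mind" as a lookup table: every possible stimulus mapped to a reaction.
# A real version would need more entries than the universe has room for;
# this toy table only illustrates the shape of the thought experiment.

MIND = {
    "someone says hello": "smile and say hello back",
    "a water balloon hits you": "laugh and throw one back",
    "you read about Roko's Basilisk": "feel vaguely uneasy",
}

def react(stimulus):
    return MIND.get(stimulus, "stare blankly")  # default for anything unlisted

print(react("someone says hello"))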

"There's a lot of metaphysical weirdness that comes up when you describe a physical consciousness as something that can be copied," he says.

The weirdness gets even weirder when you consider that according to many theorists, not all the possible minds in the universe are biological or mechanical. In fact, under this interpretation the vast majority of minds look nothing like anything you or I will ever encounter. Here's how it works: Quantum physics -- the 20th century branch of science that reveals the hidden, exotic behavior of the particles that make up everything -- states that nothing is absolute. An unobserved electron isn't at any one point in space, really, but spread across the entire universe as a probability distribution; the vast majority of that probability is concentrated in a tight orbit around an atom, but not all of it. This still works as you go up in scale. That empty patch of sky midway between here and Pluto? Probably empty. But maybe, just maybe, it contains that holographic Charizard trading card that you thought slipped out of your binder on the way home from school in second grade.

As eons pass and the stars burn themselves out and the universe gets far emptier than it is today, that quantum randomness becomes very important. It's probable that the silent vacuum of space will be mostly empty. But every once in a while, clumps of matter will come together and dissipate in the infinite randomness. And that means, or so the prediction goes, that every once in a while those clumps will arrange themselves in such a perfect, precise way that they jolt into thinking, maybe just for a moment, but long enough to ask, "What am I?"

These are the Boltzmann Brains, named after the nineteenth-century physicist Ludwig Boltzmann. These strange late-universe beings will, according to one line of thinking, eventually outnumber every human, otter, alien and android who ever lived or ever will live. In fact, assuming this hypothesis is true, you, dear reader, probably are a Boltzmann Brain yourself. After all, there will only ever be one "real" version of you. But Boltzmann Brains popping into being while hallucinating this moment in your life -- along with your entire memory and experiences -- they will keep going and going, appearing and disappearing forever in the void.

In his talk at IBM, Aaronson pointed to a number of surprising conclusions thinkers have come to in order to resolve this weirdness.

You might say, sure, maybe these questions are puzzling, but what's the alternative? Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we're back to saying that beings like us are conscious, and all these other things aren't, because God gave the souls to us, so na-na-na. Or I suppose we could say, like the philosopher John Searle, that we're conscious, and ... all these other apparitions aren't, because we alone have 'biological causal powers.' And what do those causal powers consist of? Hey, you're not supposed to ask that! Just accept that we have them. Or we could say, like Roger Penrose, that we're conscious and the other things aren't because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity. [Aaronson points out elsewhere in the talk that there is no direct or clear indirect evidence to support this claim.] But neither of those two options ever struck me as much of an improvement.

Instead, Aaronson proposes a rule to help us understand what bits of matter are conscious and what bits of matter are not.

Conscious objects, he says, are locked into "the arrow of time." This means that a conscious mind cannot be reset to an earlier state, as you can do with a brain on a computer. When a stick burns or stars collide or a human brain thinks, tiny particle-level quantum interactions that cannot be measured or duplicated determine the exact nature of the outcome. Our consciousnesses are meat and chemical juices, inseparable from their particles. Once a choice is made or an experience is had, there's no way to truly rewind the mind to a point before it happened because the quantum state of the earlier brain can not be reproduced.

When a consciousness is hurt, or is happy, or is a bit too drunk, that experience becomes part of it forever. Packing up your mind in an email and sending it to Fiji might seem like a lovely way to travel, but, by Aaronson's reckoning, that replication of you on the other side would be a different consciousness altogether. The real you died with your euthanized body back home.

[Image: Descartes' illustration of mind and body]

Additionally Aaronson says you shouldn't be concerned about being a Boltzmann Brain. Not only could a Boltzmann Brain never replicate a real human consciousness, but it could never be conscious in the first place. Once the theoretical apparition is done thinking its thoughts, it disappears unobserved back into the ether -- effectively rewound and therefore meaningless.

This doesn't mean us bio-beings must forever be alone in the universe. A quantum computer, or maybe even a sufficiently complex classical computer could find itself as locked into the arrow of time as we are. Of course, that alone is not enough to call that machine conscious. Aaronson says there are many more traits it must have before you would recognize something of yourself in it. (Turing himself proposed one famous test, though, as Popular Science reported, there is now some debate over its value.)

So, you, Molly, and Bob might in time forget that lovely game with the water balloons in the field, but you can never unlive it. The effects of that day will resonate through the causal history of your consciousness, part of an unbroken chain of joys and sorrows building toward your present. Nothing any of us experience ever really leaves us.

SEE ALSO: It's Time To Embrace Our Inevitable Future As Cyborgs


Meet Amelia: The Computer That's After Your Job


In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies, Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft's Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone's Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named 'Amelia' after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

"Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that," said Chetan Dube, chief executive Officer of IPsoft, the company behind Amelia.

"Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?"

Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds. She understands the full meaning of what she reads rather than simply recognizing individual words. This involves understanding context, applying logic and inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own 'process map' of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.
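Since IPsoft has not published Amelia's internals, the "process map" can only be pictured schematically. The sketch below, with invented queries and steps, shows the general shape of the idea:

# Hypothetical illustration of a "process map": a graph walked from a
# reported problem to a resolution. Queries, steps, and branches are invented.

PROCESS_MAP = {
    "password reset request": ["verify identity", "reset password", "confirm with user"],
    "printer not working":    ["check network", "check driver", "escalate to engineer"],
}

def handle(query):
    steps = PROCESS_MAP.get(query)
    if steps is None:
        return "escalate to a human colleague and observe the solution"
    return " -> ".join(steps)

print(handle("password reset request"))
print(handle("coffee machine exploded"))   # unknown: learn from a human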

"Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it," said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectify the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

"That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once," said Dube.

"Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies."

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as SoftBank's Pepper, Honda's Asimo or Rethink Robotics' Baxter, allowing her to take advantage of their mechanical functions.

"The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia," said Dube.

"I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android."

Given the premise of IPsoft's artificial intelligence system, it seems logical that the ultimate measure of Amelia's success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to 'think'. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem solving skills.

"Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two," said Dube.

"The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent."


5 Ways Superintelligence Could Come Into Existence


Biological brains are unlikely to be the final stage of intelligence.

Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. The only reason this may not occur is if we develop some other dangerous technology first that destroys us, or otherwise fall victim to some existential risk.

But assuming that scientific and technological progress continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.

Predicting how long it will take to develop such intelligent machines is difficult. Contrary to what some reviewers of my book seem to believe, I don't have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of artificial intelligence are "machines are stupid and will never live up to the hype!" and "machines are much further advanced than you imagined and true AI is just around the corner!").

A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence will have been attained by 2050 (defined here as "one that can carry out most human professions at least as well as a typical human"). This doesn't seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.

Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don't know which of them will get there first.

Biological inspiration

We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.

We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of "neuromorphic AI": one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.

Pure mathematics

Another path is the more mathematical "top-down" approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates' work.

In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.

Brute Force

One could also imagine paths that rely more on brute computational force, such as by making extensive use of genetic algorithms.

Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.

We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computer power are physically impossible.
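
For readers unfamiliar with the technique, the toy genetic algorithm below shows the flavour of this brute-force family of approaches: a population of candidate solutions is repeatedly selected, recombined and mutated against a fitness function. The goal, fitness function and parameters are invented purely for illustration; a real search of this kind would be astronomically larger.

```python
# A toy genetic algorithm evolving bit-strings toward a trivial all-ones target,
# standing in for the vastly larger brute-force searches described above.
import random

TARGET = [1] * 20                      # toy goal: an all-ones bit-string
POP_SIZE, GENERATIONS, MUTATION = 100, 200, 0.01

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]          # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(population[0]), "of", len(TARGET))   # typically reaches 20
```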

Plagiarising nature

The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain.

This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.
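
The sketch below lays out the stages of that pipeline as placeholder functions, purely to show its shape. Every function name and data structure is invented for illustration; none of it reflects an existing emulation toolchain.

```python
# A schematic of the whole-brain-emulation pipeline described above, with every
# stage reduced to a placeholder.

def scan_slices(brain_tissue):
    """Stage 1: image thin slices of the preserved brain (placeholder)."""
    return [f"micrograph_{i}" for i in range(len(brain_tissue))]

def extract_connectome(micrographs):
    """Stage 2: automated image recognition recovers the wiring diagram."""
    return {"neurons": len(micrographs), "synapses": []}   # stand-in structure

def attach_neuron_models(connectome, neuron_models):
    """Stage 3: combine the 3D map with computational models of each cell type."""
    return {"connectome": connectome, "dynamics": neuron_models}

def run_emulation(emulation, timesteps):
    """Stage 4: run the whole structure on a sufficiently capacious computer."""
    for _ in range(timesteps):
        pass  # integrate the neural dynamics here
    return emulation

slices = ["slice"] * 1000                              # toy input
emulation = attach_neuron_models(extract_connectome(scan_slices(slices)),
                                 neuron_models={"pyramidal": "izhikevich"})
run_emulation(emulation, timesteps=10)
```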

In principle, one could imagine a sufficiently high-fidelity emulation process that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation that would yield a distorted human-ish mind.

And before efforts to achieve whole brain emulation would achieve even that degree of success, they would probably spill over into neuromorphic AI.

Competent humans first, please

Perhaps the most attractive path to machine superintelligence would be an indirect one, on which we would first enhance humanity's own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.

It is not that this would somehow enable us "to keep up with the machines" – the ultimate limits of information processing in machine substrate far exceed those of a biological cortex however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science.

However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.

Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation's best mathematical talent.


The Conversation organised a public question-and-answer session on Reddit in which Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, talked about developing artificial intelligence and related topics.


Nick Bostrom is the director of the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology, both based in the Oxford Martin School. He is the author of Superintelligence: Paths, Dangers, Strategies.

This article was originally published on The Conversation. Read the original article.

SEE ALSO: We Should Be Terrified Of Superintelligent Machines


ELON MUSK: Robots Could Delete Humans Like Spam


Elon Musk took to the stage Wednesday night at the Vanity Fair New Establishment Summit to warn attendees that advanced artificial intelligence could spell the end of humanity.

Interviewed by Walter Isaacson, Musk warned of the rapid speed of AI development and the effects it could have:

I don’t think anyone realizes how quickly artificial intelligence is advancing. Particularly if [the machine is] involved in recursive self-improvement ... and its utility function is something that’s detrimental to humanity, then it will have a very bad effect. 

He went on to muse about just how serious the problem could be.

If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans ...

Isaacson asked Musk whether humanity could escape killer robots by traveling to Mars using SpaceX ships.

No — more likely than not that if there’s some ... apocalypse scenario, it may follow people from Earth.

Watch the full video at Vanity Fair.

SEE ALSO: Elon Musk Teased Another Announcement Is Coming Besides The Tesla D



How IBM's Watson Could Transform The Way We Treat Cancer


"You know my methods, Watson."– Sherlock Holmes

Even those who have only a passing interest in computer technology may have heard of Watson, a third-generation computing system developed at IBM that gained fame defeating two elite contestants on the television game show "Jeopardy!"

What many people may not know is that the abilities of this computer, including natural language processing, hypothesis generation and machine learning, may fundamentally change the way humans interact with computers and revolutionize the delivery of healthcare worldwide. [N.B. One of the authors is the chief technology officer of IBM Watson.]

In one prominent example of the power of computers bringing patients the best medicine, doctors at the MD Anderson Cancer Center in Houston are using Watson to drive a software tool called the Oncology Expert Advisor, which serves as both a live reference manual and a virtual expert advisor for practicing clinicians. Eventually it could provide the best treatment options for individual cancer patients at medical centers around the country, even in places lacking expertise in cancer treatment. 

Watson: A Fine Conversationalist

How is Watson different from an advanced search engine such as Google, or even another IBM system, Deep Blue, the first computer system to beat a world chess champion? First and foremost, Watson has been designed to answer questions that are posed to it in natural, conversational language. This is particularly critical in the field of medicine, in which relevant but often overlooked case notes or data are embedded in clinician reports.

However, even within the more rigorously defined vocabulary typically used in published literature and consensus guidelines, ambiguities can arise that limit the ability of second-generation computers to process and analyze relevant data for clinicians. These linguistic ambiguities are well recognized, so much so that entire fields of study such as biomedical ontology have been established to help clarify meaning and relationships between terms within a given topic area.

For example, the Gene Ontology Consortium represents an invaluable effort to bring order to big data in genetics, standardizing the representation of gene and gene product attributes across species and databases and providing a controlled vocabulary of terms.[1] In essence, this collaboration of genetics researchers is working to establish a clear "meaning" for different terms, because ambiguities can arise even within the worlds of specialists.

This consortium has clarified the genetic vocabulary for over 340,000 species, and this rigorous definition of terms is critical for accessibility by second-generation computing programs. But what if we move beyond the more rigorous world of specialists and look just at the word "gene?" This seemingly simple term can have different definitions: one genome data bank may define a gene as a "DNA fragment that can be transcribed and translated into a protein," while other data banks may define it as a "DNA region of biological interest with a name and that carries a genetic trait or phenotype."[2]

In this example the context of the term is critical for proper analysis, and it is exactly the ability to assess this context that sets Watson apart from second-generation computers.

A Computer "Sensitive" to a Mother's Concerns

If the processing of rigorously defined or coded medical terminology poses a challenge for current computing systems, then integration of unstructured clinician reports represents an entirely different magnitude of difficulty. Consider the following hypothetical notes of two mothers' conversations with their pediatricians:

"Mother noted that her son is very bright and sensitive, but has difficulty focusing on schoolwork outside of the classroom setting, even when it is just light reading."

"Mother said her child seems sensitive to light, stating that when the sun is very bright he won't even go outside to play with other kids until just after it is setting."

Few people would have difficulty understanding these notes, even though they include several instances in which identical words have different but related meanings (polysemes), including "sensitive," "bright," "outside," "setting" and "light." Nor would most people be confused between a "bright son" and a "bright sun," or be unsure as to whether "it is setting" in the second statement is referring to the "sun" or to "outside."

Many people might even sense in the first statement a bit of defensive pride as the mother prefaces her concerns with a comment that her son is bright and sensitive. For most computing systems, however, such conversational ambiguities are nearly insurmountable barriers to understanding human intent and meaning. But not for Watson. While it cannot assess the subtle emotional undertones of a mother's conversation, Watson does have the ability to process natural language and be "sensitive" to mom's intended meaning.
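
As a toy illustration of the kind of context sensitivity being described, the sketch below disambiguates the word "bright" using nothing more than a hand-written list of context cues. It is a naive stand-in, not a description of how Watson actually resolves polysemes; the cue lists are invented for illustration.

```python
# A toy bag-of-words heuristic for deciding whether "bright" means "intelligent"
# or "luminous" from surrounding context. Not Watson's method.
SENSES = {
    "intelligent": {"son", "child", "student", "schoolwork", "learn"},
    "luminous":    {"sun", "light", "outside", "shine", "glare"},
}

def disambiguate_bright(sentence):
    words = set(sentence.lower().replace(",", "").replace(".", "").split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate_bright("Mother noted that her son is very bright and sensitive"))
# -> "intelligent"
print(disambiguate_bright("when the sun is very bright he won't even go outside"))
# -> "luminous"
```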

The Roots of Watson's Genius

Watson can extract the meaning of natural language using a process that parallels that of the human brain. We do not walk around with a massive dictionary in our head, looking up the definition of each word that we hear and meticulously piecing together a composite meaning for a given expression. Nor do we rely solely on a set of rules of grammar to determine meaning.

In fact, humans often violate the formal rules of grammar, spelling and semantic expression, and yet we're quite adept at understanding what other people mean to say. We do so by reasoning about the linguistics of their expressions, while also leveraging our shared historical context to resolve ambiguity, metaphors and idioms. Watson uses similar techniques to determine the intent of our inquiries.

As a first step in this process, Watson ingests a corpus of literature—such as the published references on the treatment of breast cancer—that serves as a basis of information about a given topic area, or domain. This literature can be provided in a variety of digitally encoded formats, including HTML, Microsoft Word or PDF, which is then "curated" by Watson—validating the relevance and correctness of the information contained within that corpus, and culling out anything that would be misleading or incorrect.

For instance, a clinical lecture on breast cancer by an esteemed researcher would be of considerable value for assessing surgical strategies, unless it was published in the British Medical Journal in 1870 and noted that excision of the breast was the most promising option.[3] Moving forward a century, a consensus statement by breast cancer experts would be relevant for assessing non-surgical treatment options, unless it was published before the introduction of key monoclonal antibodies such as Herceptin and Rituxan in the late 1990s. It's the job of Watson to place these published recommendations in the appropriate context.

The ingestion process also prepares the content for more efficient use within the system. Once the content has been ingested, Watson can be trained to recognize the linguistic patterns of the domain, and the cognitive system then answers questions against that content by drawing inferences between the question and candidate answers produced from the information corpus.

Watson uses many different algorithms to detect these inferences. For example, if the question implies anything about timeframe, Watson's algorithms will evaluate the candidate answer for being relevant to that timeframe. Likewise, if the question implies anything about location, algorithms evaluate the candidate answer relative to that location. It also factors in context for both the question and source of the potential answer. It evaluates the kind of answer being demanded by the question (known as lexical answer type) to ensure it can be fulfilled by the candidate answer. And so forth for known subjects, conditional clauses, synonyms, tense, etc.

Watson scores each of these features to indicate the degree to which an inference can be found between the question and the candidate answer. Then a machine learning technique uses all of those scores to decide the degree to which that combination of features supports that answer within the domain.[4]

Watson is essentially trained to recognize relevant patterns of linguistic inferences. This training is represented in its confidence score for a candidate answer. Watson can also be retrained as often as needed to reflect shifts in the linguistic patterns of the domain.


The system orders the candidate answers by confidence levels, and provides the answer if a confidence level exceeds a specified minimum threshold. This differs dramatically from classical artificial intelligence (AI) techniques in that meanings are derived from actual language patterns rather than relying solely on rules based on controlled vocabularies governed by fixed relationships. The result is a system that performs at dramatically higher levels of accuracy than classical AI systems.
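
The sketch below illustrates the general shape of this scoring-and-confidence step: a few toy feature scorers evaluate each candidate answer, a hand-set logistic weighting stands in for Watson's trained combiner, and the top candidate is returned only if its confidence clears a threshold. All scorers, weights and example data are invented for illustration and are not Watson's actual features.

```python
# A minimal sketch of candidate-answer scoring, learned-weight combination and
# confidence thresholding, in the spirit of the process described above.
import math

def timeframe_score(question, candidate):
    """Does the candidate fit any timeframe implied by the question? (toy)"""
    return 1.0 if question.get("year") is None or candidate["year"] >= question["year"] else 0.0

def type_score(question, candidate):
    """Does the candidate match the lexical answer type demanded? (toy)"""
    return 1.0 if candidate["type"] == question["answer_type"] else 0.0

WEIGHTS = {"timeframe": 1.5, "type": 2.5}      # stand-ins for trained weights
BIAS, THRESHOLD = -2.0, 0.6

def confidence(question, candidate):
    z = (BIAS
         + WEIGHTS["timeframe"] * timeframe_score(question, candidate)
         + WEIGHTS["type"] * type_score(question, candidate))
    return 1.0 / (1.0 + math.exp(-z))          # logistic squashing to [0, 1]

def answer(question, candidates):
    ranked = sorted(candidates, key=lambda c: confidence(question, c), reverse=True)
    best = ranked[0]
    conf = confidence(question, best)
    return (best["text"], conf) if conf >= THRESHOLD else (None, conf)

question = {"answer_type": "drug", "year": 1998}
candidates = [{"text": "Herceptin", "type": "drug", "year": 1998},
              {"text": "radical mastectomy", "type": "procedure", "year": 1894}]
print(answer(question, candidates))            # -> ('Herceptin', ~0.88)
```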

Improving Cancer Care

Cutting-edge cancer therapies garner headlines, and one has to marvel at the advances in oncology research achieved over the past decade. Unfortunately, relatively few patients have access to advanced treatment plans at specialized cancer centers such as MD Anderson. Most receive far less effective cancer care, or no care at all.

In addition, even the most devoted specialists cannot keep up with the ever-expanding body of medical literature. To fill these healthcare gaps, doctors and computer scientists at MD Anderson developed the MD Anderson Oncology Expert Advisor™ cognitive clinical decision support system (OEA™), which is being brought to life with the support of a $50 million gift from Jynwel Charitable Foundation to MD Anderson's Moon Shots program. [N.B. One of the authors is the director of the Jynwel Charitable Foundation.]

The OEA™ integrates the clinical expertise and experience of doctors at MD Anderson with results from clinical trials, published studies and consensus guidelines from medical experts, to come up with the treatment options that are best for an individual patient.

Specifically, Watson first ingests and then analyzes comprehensive summaries of patient care over time and across various practices, including symptoms, diagnoses, laboratory and imaging tests, and treatment history. This information is fed into software that compares this patient with others and divides the population into groups defined by their likely best responses to individual treatment.

Watson then captures and analyzes standard-of-care practices, expertise of clinicians in the field, cohort studies (which compare distinct patient populations over time to assess risk factors for a given disease), and evidence in the clinical literature to evaluate and rank-order various treatment options for the clinician to consider. It matches these data against the patient's current and previous condition, and reveals the optimal therapeutic approaches for a patient.
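
A highly simplified sketch of that matching-and-ranking step is shown below: the patient is assigned to a cohort by shared attributes, and treatment options with evidence for that cohort are rank-ordered. The cohorts, evidence scores and treatments are invented for illustration and do not reflect the OEA's actual data or logic.

```python
# A toy cohort-matching and treatment-ranking step, in the spirit of the
# pipeline described above. All medical content here is illustrative only.
EVIDENCE = {
    # cohort -> list of (treatment, evidence score from trials/guidelines)
    ("leukemia", "FLT3_mutation"): [("midostaurin + chemo", 0.82), ("chemo alone", 0.55)],
    ("leukemia", "no_FLT3_mutation"): [("chemo alone", 0.70), ("stem cell transplant", 0.64)],
}

def assign_cohort(patient):
    marker = "FLT3_mutation" if "FLT3" in patient["mutations"] else "no_FLT3_mutation"
    return (patient["diagnosis"], marker)

def rank_treatments(patient):
    options = EVIDENCE.get(assign_cohort(patient), [])
    return sorted(options, key=lambda pair: pair[1], reverse=True)

patient = {"diagnosis": "leukemia", "mutations": ["FLT3"], "history": ["relapse"]}
for treatment, score in rank_treatments(patient):
    print(f"{treatment}: evidence score {score}")   # physician reviews, then decides
```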

All of the data underlying Watson's therapeutic recommendations are available for review by the physicians, allowing them to judge the clinical relevance of the data and make their own treatment decisions. In other words, Watson does not dictate treatment, but provides a physician with the tools he or she needs to tailor treatment to each patient.

The Oncology Expert Advisor™ is currently in a pilot phase of study at MD Anderson for the treatment of leukemia, and is anticipated to be expanded to other cancers as well as chronic diseases such as diabetes.


It may seem counterintuitive to think of a massive computing system as a means to "humanizing" medical care, but through its ability to make sense of large quantities of data, the Oncology Expert Advisor™ represents a significant step towards truly individualized medicine. Its benefits will not be limited to the few who can afford elite care or who live within driving distance of tertiary care centers.

We believe that Watson will ultimately bring high-quality, evidence-based medicine to patients around the world, regardless of financial or geographic limitations. This democratization of healthcare may prove to be Watson's most lasting contribution.

References

1. www.geneontology.org

2. Gangemi A, Pisanelli DM, Steve G. Understanding systematic conceptual structures in polysemous medical terms. Proc AMIA Symp. 2000:285-9.

3. Savoy WS. Clinical Lecture on the Treatment of Cancer of the Breast by Excision. Br Med J. 1870 Mar 12;1(480):255-6.

4. Ferrucci DA. Introduction to "This is Watson". IBM Journal of Research and Development. Vol. 54, Issue 3.4, May-June 2012:1-15.

Rob High is the Chief Technology Officer of IBM Watson, where he leads the technical strategy and thought leadership of IBM Watson.

Jho Low is the Chief Executive Officer of Jynwel Capital Limited and Director of Jynwel Charitable Foundation Limited, which looks to fund breakthrough programs that help scale and accelerate advancements in health.


Here's The One Problem We Need To Solve To Create Computers With Human-Like Intelligence


Computers have been making trades on Wall Street, diagnosing patients like doctors, and even composing music that moves and inspires. But no matter how moving the movie "Her" was, computers don't yet have human-like intelligence.

Why is that?

Well, there's one big problem that we need to solve, according to David Deutsch, an Oxford physicist widely regarded as the father of quantum computing. The problem? We don't even understand how to define how human intelligence operates.

In Aeon, Deutsch argues that artificial general intelligence, or AGI — the creation of a mind that can truly think like a human mind, not merely perform some of the same tasks — "cannot possibly be defined purely behaviourally," meaning we won't be able to tell if AI is human-like just based on the computer's output.

The definition of artificial intelligence relies on how thinking as we know it works on the inside, not on what comes out of it. AGI is a stricter definition of artificial intelligence, and is sometimes called "Strong AI," as opposed to "Weak AI," which refers to AI that can mimic some human capacities but does not attempt to capture the whole range of what our minds can do.

He invokes a classic thought experiment about a brain in a vat, which invites us to consider a human brain kept alive and alert, but disconnected from outside stimuli. This has never been done and wouldn't work in reality, but it illustrates a point.

In Deutsch's version of the thought experiment, the brain has no sensory data coming in, and cannot express itself, but nevertheless, the brain itself continues "thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI," Deutsch writes. "So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs."

Even without a way of talking to anyone, we can imagine the brain is still doing what brains do: coming up with ideas and explanations for what's happening in its world. (In this case, trying to answer the question: "How did I get into this vat?")

Because we can't define exactly how our minds work, we are stuck saying about AGI what Supreme Court Justice Potter Stewart said about obscenity: "I know it when I see it."

In other words, we can't define AGI simply by what it produces — whether that's billions in trading profits, life-saving medical diagnoses, or soaring musical compositions. While impressive, these AI-creations aren't enough to tell us that the intelligence behind them is human-like.

As Deutsch writes:

What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

In other words, before we can think seriously about creating anything that can be called an AGI, we need to know how our brains generate theories about how things work in the absence of information — and how to capture that process in a program. Simply put, we can't even agree on how our brains work, which is a pretty important thing to figure out before we can translate that process into a machine.

Deutsch elaborates:

[I]t is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI's thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place.

It's worth mentioning that the thinking process Deutsch describes closely resembles the thinking of the very scientists and engineers who are working to make AGI into a reality. But human minds don't need all of the information before coming up with a theory — which is good, because we rarely, if ever, have all the information about anything.

Even more importantly, human minds don't rely solely or even mostly on justified, true information. A human who thinks that the Earth is flat or that the moon is made of cheese isn't any less human for it, nor would we consider a computer that has the correct information in a database to be more intelligent than the human.

Based on how little we currently know about how our brains work, a theory of how we invent theories "is beyond present-day knowledge," Deutsch says.

Until we can define how our brains actually come up with theories, how are we supposed to be able to recreate this process in a computer? Until we better understand the mind, we will be no closer to creating real artificial intelligence than we were 50 years ago, when the first supercomputer was created. Until we solve the problem of what it means for us to think, computers will keep getting faster and better at all kinds of tasks, but they won't be truly intelligent the way a human being is.

SEE ALSO: The Most Advanced Artificial Intelligence In Existence Is Only As Smart As A Preschooler

READ MORE: The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years


Here's What 'Terminator' Gets Wrong About AI


In a lot of science fiction, artificial intelligence systems become truly intelligent — as well as extremely dangerous — once they achieve self-awareness.

Take the "Terminator" series. Before becoming self-aware, Skynet is a powerful tool for the US military to coordinate the national defense; after becoming self-aware, Skynet decides, for some reason, to coordinate the destruction of the human species instead.

But how important is self-awareness, really, in creating an artificial mind on par with ours? According to quantum computing pioneer and Oxford physicist David Deutsch, not very.

In an excellent article in Aeon, Deutsch explores why artificial general intelligence (AGI) must be possible, but hasn't yet been achieved. He calls it AGI to emphasize that he's talking about a mind like ours, that can think and feel and reason about anything, as opposed to a complex computer program that's very good at one or a few human-like tasks.

Simply put, his argument for why AGI is possible is this: Since our brains are made of matter, it must be possible, in principle at least, to recreate the functionality of our brains using another type of matter — specifically circuits.

As for Skynet's self-awareness, Deutsch writes:

That's just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have 'self-awareness' in the behavioural sense — for example, to pass the 'mirror test' of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

In other words, the issue is not self-awareness — it's awareness, period. We could make a machine to be "self-aware" in a technical sense, and it wouldn't possess any more human-level intelligence than a computer that's programmed to play the piano. Viewed this way, self-awareness is just another narrow, arbitrary skill — not the Holy Grail it's made out to be in a lot of science fiction.
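
To see how trivial such behavioural "self-awareness" is, consider the deliberately minimal sketch below: an agent recognises its own serial number in a mirror image and records a fact about itself. Everything in it is invented for illustration, and that is precisely the point: nothing resembling general intelligence is required.

```python
# A deliberately trivial instance of "self-awareness" in the behavioural sense:
# the agent uses a mirror image to infer a fact about itself.
class Robot:
    def __init__(self, serial):
        self.serial = serial
        self.known_facts_about_self = {}

    def look_in_mirror(self, mirror_image):
        """Infer facts about oneself from an image labeled with one's own serial."""
        if mirror_image.get("serial") == self.serial:       # "that's me"
            self.known_facts_about_self.update(mirror_image["observations"])

robot = Robot(serial="R2-1138")
robot.look_in_mirror({"serial": "R2-1138", "observations": {"paint_mark_on_head": True}})
print(robot.known_facts_about_self)   # {'paint_mark_on_head': True}, "self-aware" trivially
```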


As Deutsch puts it:

AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves.

So why does this matter? Isn't this just another sci-fi trope? Not exactly.

If we really want to create artificial intelligence, we have to understand what it is we're trying to create. Deutsch persuasively argues that, as long as we're focused on self-awareness, we miss out on understanding how our brains actually work, stunting our ability to create artificially intelligent machines.

What matters, Deutsch argues, is "the ability to create new explanations," to generate theories about the world and all its particulars. In contrast with this, the idea that self-awareness — let alone real intelligence — will spontaneously emerge from a complex computer network is not just science fiction. It's pure fantasy.

READ MORE: Here's What Needs To Happen For AI To Become A Reality

SEE ALSO: Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence


Humans Aren't Creating Superintelligent Machines For Scientific Reasons — We Are Making Them For Love


Why are we so taken with the idea of machines that can think like we can? Perhaps, one futurist suggests, this fascination is rooted not in scientific curiosity but in sexual urges, even romantic longing.

Modern science fiction fans are not the only ones enthralled by the prospect of intelligent machines; the idea of intelligent robots has been a part of human culture since ancient times, going back to Greek mythology, and actual automatons have existed ever since the ancient Egyptians created sacred statues that convinced the devout that they were alive.

The appeal of having highly functional robots to perform tasks that we can't perform is obvious. But for George Zarkadakis, a Greek author and futurist, the allure of artificial intelligence runs deeper than our fondness for making tools.

Writing in Aeon, Zarkadakis makes a provocative case that the real driving force behind the quest for human-like artificial intelligence is not scientific, but erotic.

Zarkadakis writes that "technology is a cultural phenomenon, and as such it is molded by our cultural values." We develop medicine because we value health. We develop markets and creature comforts because we value wealth and freedom. We explore space because we are restlessly curious about our universe.

Yet when it comes to creating conscious simulacra of ourselves, what exactly is our motive? What deep emotions drive us to imagine, and strive to create, machines in our own image? If it is not fear, or want, or curiosity, then what is it? Are we indulging in abject narcissism? Are we being unforgivably vain? Or could it be because of love?

Zarkadakis isn't just talking about sex with robots, but real love, with all that implies. He cites examples from literary history as well as the history of artificial intelligence, including an interesting backstory on the famous "Turing Test":

Imagine three rooms, connected via keyboards and monitors that can display text. In one room sits a man. In the second there is a woman. The third room contains a person whom we shall call 'the judge'. The judge's task is to decide which of the two people talking to him through the computer is male. The man will try to convince the judge of his own masculinity. The woman will imitate masculinity, doing her utmost to deceive the judge into believing that she is the man.

In 1951, the British computing pioneer Alan Turing observed that, by modifying this 'imitation game' slightly and placing a machine in the second room instead of a woman, one thereby created a test for whether that machine possessed intelligence or not. The machine would imitate the man. If the judge couldn't tell the difference, then the machine was a passable simulation of a human being, which would presumably mean that it was intelligent.

Zarkadakis goes on to speculate that, since Turing was gay at a time when homosexuality was a crime in the UK, perhaps his test was about more than just artificial intelligence. (Despite being a war hero for his work as a cryptographer during World War II, in 1952 Turing was arrested for his relationship with a man, and was sentenced to undergo barbaric hormone treatments for his "condition.")

It is irresistible to suppose that his imitation game must have reflected something of his own, veiled sexuality: that it is he who is behind the door, both masculine and feminine at the same time, trying to fool the 'judge' that is society itself. Or perhaps Turing is the judge, examining the statements of his opposite number for a flicker of mutual recognition, a subtle affinity between kindred spirits. Behind the austere specifications of the famous 'Turing test', what fears and desires might lurk?

As Zarkadakis points out, one doesn't have to try very hard to find examples of eroticized automatons in literature: "Western literature, ancient and modern, is strewn with mechanical lovers."

Consider Pygmalion, the Cypriot sculptor and favorite of Aphrodite. Ovid, in his Metamorphoses, describes him carving a perfect woman out of ivory. Her name is Galatea and she's so lifelike that Pygmalion immediately falls in love with her. He prays to Aphrodite to make the statue come to life. The love goddess already knows a thing or two about beautiful, non-biological maidens: her husband Hephaestus has constructed several good-looking fembots to lend a hand in his Olympian workshop. She grants Pygmalion's wish; Pygmalion kisses his perfect creation, and Galatea becomes a real woman. They live happily ever after.

The Pygmalion myth, Zarkadakis explains, went on to enjoy a long and fruitful life in Western literature, drawn upon by the likes of Rousseau, Goethe, Shakespeare, and George Bernard Shaw.

And we don't have to reach far for more modern examples. "Her," in which Joaquin Phoenix plays a lovesick man who falls in love with an operating system voiced by Scarlett Johansson, all but makes Zarkadakis's argument for him.

For Zarkadakis, even our fears about artificial intelligence smack of love. In predictions that superintelligent machines will rebel and destroy us all, he sees the fear that "partners and children might indeed abandon us, regardless of what good we did for them."

And in the idea that we should program any intelligent machines we manage to create with fail-safe measures that would ensure their loyalty, Zarkadakis sees the controlling hand of a jealous lover:

Since we are the designers of the robots, let us force them to love us, forever and completely. Let us become like Pygmalion and make them perfect. It's in our grasp: to program our children and our lovers so that they will never fail us, never betray us, so that they remain forever faithful. Perfect love will no longer be elusive. Wasn't this why we wanted artificial intelligence in the first place?

One doesn't have to agree completely with Zarkadakis to find his argument intriguing. As we humans become more and more intertwined with our machines, it might be worth meditating on the nature of our relationship with them — especially those we make in our own image.

SEE ALSO: Here's The One Problem We Need To Solve To Create Computers With Human-Like Intelligence

READ MORE: This Law School Professor Believes Robots May Lead To An Increase In Prostitution


Google Is Building Up A Strong Team To Tackle Artificial Intelligence (GOOG)


Google just upped its bid to get to the forefront of artificial intelligence research through the acqui-hire of two British AI companies and a subsequent partnership with Oxford University. 

Earlier this year, Google bought the artificial intelligence company DeepMind for $400 million, and now it's welcoming the seven founders of Dark Blue Labs and Vision Factory to that team.

The companies focus on deep learning for natural language understanding and visual recognition systems, respectively, and several of the founders work at Oxford University. Google is not only letting them continue to teach part time, but is making a "substantial donation" to the Computer Science and Engineering departments there to solidify a research partnership.

"We are thrilled to welcome these extremely talented machine learning researchers to the Google DeepMind team and are excited about the potential impact of the advances their research will bring," Google's VP of Engineering, Demis Hassabis writes. 

So what exactly is Google trying to do with artificial intelligence? Well, that's not entirely clear yet. DeepMind's website says it's using machine learning and systems neuroscience to build "powerful general-purpose learning algorithms." Whether it will build something new with those algorithms, or bake them into its current projects, like self-driving cars or giant robots, remains to be seen.

SEE ALSO: Google Has A New App To Reinvent Email

