Channel: Artificial Intelligence

New UN Report Calls For The Cessation Of All Military Drone Fabrication


Killer robots that can attack targets without any human input "should not have the power of life and death over human beings," a new draft U.N. report says.

The report for the U.N. Human Rights Commission posted online this week deals with legal and philosophical issues involved in giving robots lethal powers over humans, echoing countless science-fiction novels and films.

The debate dates to author Isaac Asimov's first rule for robots in the 1942 story "Runaround": "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Report author Christof Heyns, a South African professor of human rights law, calls for a worldwide moratorium on the "testing, production, assembly, transfer, acquisition, deployment and use" of killer robots until an international conference can develop rules for their use.

His findings are due to be debated at the Human Rights Council in Geneva on May 29.

According to the report, the United States, Britain, Israel, South Korea and Japan have developed various types of fully or semi-autonomous weapons.

In the report, Heyns focuses on a new generation of weapons that choose their targets and execute them. He calls them "lethal autonomous robotics," or LARs for short, and says: "Decisions over life and death in armed conflict may require compassion and intuition. Humans — while they are fallible — at least might possess these qualities, whereas robots definitely do not."

He notes the arguments of robot proponents that death-dealing autonomous weapons "will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape."

The report goes beyond the recent debate over drone killings of al-Qaida suspects and nearby civilians who are maimed or killed in the air strikes. Drones do have human oversight. The killer robots are programmed to make autonomous decisions on the spot without orders from humans.

Heyns' report notes the increasing use of drones, which "enable those who control lethal force not to be physically present when it is deployed, but rather to activate it while sitting behind computers in faraway places, and stay out of the line of fire.

"Lethal autonomous robotics (LARs), if added to the arsenals of States, would add a new dimension to this distancing, in that targeting decisions could be taken by the robots themselves. In addition to being physically removed from the kinetic action, humans would also become more detached from decisions to kill - and their execution," he wrote.

His report cites these examples, among others, of fully or semi-autonomous weapons that have been developed:

— The U.S. Phalanx system for Aegis-class cruisers, which automatically detects, tracks and engages anti-air warfare threats such as anti-ship missiles and aircraft.

— Israel's Harpy, a "Fire-and-Forget" autonomous weapon system designed to detect, attack and destroy radar emitters.

— Britain's Taranis jet-propelled combat drone prototype that can autonomously search, identify and locate enemies but can only engage with a target when authorized by mission command. It also can defend itself against enemy aircraft.

— The Samsung Techwin surveillance and security guard robots, deployed in the demilitarized zone between North and South Korea, which detect targets through infrared sensors. They are currently operated by humans but have an "automatic mode."

Current weapons systems are supposed to have some degree of human oversight. But Heyns notes that "the power to override may in reality be limited because the decision-making processes of robots are often measured in nanoseconds and the informational basis of those decisions may not be practically accessible to the supervisor. In such circumstances humans are de facto out of the loop and the machines thus effectively constitute LARs," or killer robots.

Separately, another U.N. expert, British lawyer Ben Emmerson, is preparing a special investigation for the U.N. General Assembly this year on drone warfare and targeted killings.

His probe was requested by Pakistan, which officially opposes the use of U.S. drones on its territory as an infringement on its sovereignty but is believed to have tacitly approved some strikes in the past. Pakistani officials say the drone strikes kill many innocent civilians, a claim the U.S. rejects. The other two countries requesting the investigation, Russia and China, are both permanent members of the U.N. Security Council.

In April, an alliance of activist and humanitarian groups led by Human Rights Watch launched the "Campaign to Stop Killer Robots" to push for a ban on fully autonomous weapons. The group applauded Heyns' draft report in a statement on its website.



Steve Wozniak Says Human-Like Computers Could Give Every Child A Personal Tutor


Apple cofounder Steve Wozniak thinks that computers will become so much like humans over the next few years that children will be able to use them as personal teachers.

According to the Belfast Telegraph, the Silicon Valley legend said that recent developments are bringing us closer to the point where kids will be able to have one-on-one conversations with their devices.

Whereas students now find themselves in overfilled classrooms and educators have to teach to national standards-based tests, Wozniak believes that such advances will allow for a much more personalized education: "We will be able to have one teacher per student and let students go in their own direction."


Sorry, Everyone: Artificially Intelligent Bots In 'Quake III' Game Do NOT Create 'World Peace' If Left Running For 4 Years


There's a wonderful story making its way through the tech/gaming media right now: That if you set 16 bots to fight each other in the video game "Quake III: Arena," and then leave the game running for four years, the bots' artificial intelligence figures out that if no one kills anyone, then no one dies ... and everybody "wins."

The bots end up standing around, passively, refusing to shoot each other, even if the map in the game is altered. Only when a new human-controlled character joins the game and starts blasting them do the bots go back to fighting — taking out the new belligerent first.

The story is based on this screen shot of a gaming chat room from 2011, in which a user discovers to his amazement that he's accidentally left a version of the game running on a pirate server he used four years ago. He wanted to see what happened if he left 16 bots in the game to fight each other. The anonymous user tells his friend:

I just checked on them but for some reason all the bots are just standing still. ... the ultimate survival strategy developed over 4 years: nobody dies if nobody kills.

The problem is, it's not true.

As the Huffington Post and Forbes both found to their cost, it's all a prank. Bots using AI inside Quake III do not evolve "world peace," as HuffPo put it.

The story "feels" true because the image of the chat thread is filled with enough gamer-tech jargon — including screengrabs of data logs — to seem like the real thing. And developers have created AI bots for Quake III that "learned" new fighting skills as the game evolved. Before both HuffPost and Forbes grabbed it, the rumor was given new life by Mat Murray ... who happens to be a content strategist at Delete, a digital strategy agency.

We're not saying Murray knew the thread was faked. But he surely has the credentials to know what kind of material goes viral very, very quickly.


The Most Advanced Artificial Intelligence In Existence Is Only As Smart As A Preschooler


One of the most advanced artificial intelligence systems is about as smart as a preschooler, new research suggests. But your preschooler may have better common sense.

An artificial intelligence (AI) system known as ConceptNet4 has an IQ equivalent to that of a 4-year-old child, according to the study. But the machine's abilities don't evenly match those of a child: the computer system aced its vocabulary test, but lacked the prudence and sound judgment of a preschooler.

“We’re still very far from programs with common sense — AI that can answer comprehension questions with the skill of a child of 8,” study co-author Robert Sloan, a computer scientist at the University of Illinois at Chicago, said in a statement.

Artificial intelligence has been getting exponentially smarter for decades, and some technologists even believe the singularity, the point when machine intelligence will overtake humans, is near. Along the way, we've seen the trivia-playing computer Watson trounce trivia maven Ken Jennings at Jeopardy, and the chess program Deep Blue beat chess master Garry Kasparov in 1997. [Super-Intelligent Machines: 7 Robotic Futures]

But despite those high-profile wins, AI's track record is decidedly spotty. Machines can't yet be programmed to form intuitions about the physical world without doing extensive calculations, and they seem to fail at answering open-ended questions.

In the new study, researchers decided to test out just how close to human intelligence AI has come. They administered the verbal portions of a standard IQ test called the Wechsler Preschool and Primary Scale of Intelligence Test to ConceptNet4.

The machine's score was similar to that of a 4-year-old child. But this was no ordinary child: its intelligence was all over the map. The program scored highly on the vocabulary and similarity portions of the test, but floundered on comprehension sections, which are heavy on the "why" questions.

"If a child had scores that varied this much, it might be a symptom that something was wrong," Sloan said in a statement.

Not surprisingly, the computer also seemed to lack common sense. Through rich life experience, most people gather subconscious knowledge of the world that they rely on to make quick, wise judgments.

AI's "childhood" is positively deprived in comparison. For instance, AI may know the boiling point of water, but people know better the importance of steering clear of a hot stove.

"As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled. Life is a rich learning environment," Sloan said.

Original article on LiveScience.com.

BMW Made Its Own Version Of Siri To Help Sell Electric Cars


In the new, strange and intriguing world of electric cars, customers have plenty of questions to ask, and BMW is launching a tool to help answer those questions without having to contact your dealer.

Called i Genius, it's actually an artificial intelligence service, combining a database of information on BMW's new electric cars with the ability to interpret a user's questions and respond accordingly.

The system is capable of interpreting words, the context of those words and the sentiment behind each question, allowing it to build an appropriate response in real time, even turning the exchange into a conversation with subsequent questions. Nothing is required beyond a simple text message, which BMW promises will receive a detailed and helpful answer.

The system was developed by London Brand Management, founded by 19-year-old Dmitry Aksenov. Of i Genius, Aksenov says, "our Artificial Intelligence software is truly groundbreaking and provides a unique channel for BMW and its customers to make better buying decisions by getting access to the right information at the right time in the right place." That means information at the tips of your fingers, even if you happen to wake up in the middle of the night wondering how much horsepower the i8 produces.

For the moment, the system is U.K.-only and works by SMS: you text a question to the shortcode 84737. In the interests of journalistic investigation, we asked it what engine the i8 uses. While a little verbose and PR-speaky, it confirmed the i8's 1.5-liter three-cylinder gasoline unit, and it answered our follow-up question about the i3's launch date: November in Europe, with the U.S. following early next year.

It's an interesting, useful service, and it's sure to offer answers to BMW's i customers once the first vehicles are delivered, too.


IBM CEO: 'Do Not Be Afraid' Of The All-Powerful Computer We're Building (IBM)


As most of the world knows by now, IBM has created a computer called Watson that is, arguably, the smartest, most human-like computer ever built.

Most folks know Watson as the computer that won "Jeopardy!" a few years ago. Watson had to understand verbal language to win, a hard thing for a computer to do well (as Siri users will attest).

Since the days of Jeopardy, Watson has been helping doctors fight cancer. "It's ingested 2 million pages [of medical information] and understands medical language," IBM CEO Ginni Rometty says. Today it helps doctors verify a diagnosis and pick the statistically best treatments.

But Rometty says that we haven't seen anything yet.

Watson 2.0 will "see," she says, meaning it will be able to look at pictures like X-rays, understand them and interpret them.

Watson "3.0 is one that can debate and reason," she says. Maybe to argue its point if a human disagrees?

Making Watson that human-like sounds scary, said Fortune journalist Stephanie Mehta, who was interviewing Rometty on stage. The audience murmured agreement. Perhaps they were thinking of movies like "2001: A Space Odyssey," in which a self-aware super-computer named HAL 9000 starts killing humans that threaten it.

So Rometty defended the frighteningly powerful machine IBM is building:

"It's a service. Do not be afraid. It is really, truly an advisor to a decision-making process. There are many things the human brain does that is not imitated ... Think of it as an assistant."


DARPA's New Project Sounds A Lot Like Computers That Think For Themselves


Trading isn't the only thing that happens in milliseconds nowadays; complicated exploits of computer systems happen just as fast, and with astounding potential consequences.

The newest competition out of the Defense Advanced Research Projects Agency aims to create "fully automatic network defense systems" in order to mitigate these real-world consequences.

In other words, DARPA wants software systems that automatically detect, repair, and repel hacker attacks. In shorter terms: artificial intelligence (AI).

“The growth trends we’ve seen in cyber attacks and malware point to a future where automation must be developed to assist IT security analysts,” said Dan Kaufman, director of DARPA’s Information Innovation Office, which oversees the Challenge.

Often, hackers employ custom-designed software exploits, called "zero-day" exploits, known only to the hackers themselves. Software engineers, developers and, more importantly, cybersecurity analysts sometimes remain completely unaware that their systems are compromised until the damage is done.

"Through automatic recognition and remediation of software flaws, the term for a new cyber attack may change from zero-day to zero-second,” said Mike Walker, DARPA program manager.

In less technical terms, DARPA wants software that detects when it has been exploited, analyzes exactly how it was exploited (identifies the zero-day), and then automatically patches itself, perhaps by writing its own code, while red-flagging the problem to technicians.

DARPA also wants these systems to scan themselves for weaknesses automatically, essentially identifying zero-days before hackers do.

From the DARPA website:

"What if computers had a “check engine” light that could indicate new, novel security problems? What if computers could go one step further and heal security problems before they happen?"

The Pentagon's defense research arm has made leaps and bounds in robotics recently thanks to its competitions, so it figured a competition for AI would yield similar results.

“With the Cyber Grand Challenge, we intend a similar revolution for information security. Today, our time to patch a newly discovered security flaw is measured in days," said Walker. 

From DARPA's website:

DARPA envisions teams creating automated systems that would compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network. To succeed, competitors must bridge the expert gap between security software and cutting-edge program analysis research. The winning team would receive a cash prize of $2 million.

Teams would be scored on how capably their systems could protect hosts, scan the network for vulnerabilities and maintain the correct function of software. The winning team from the CGC finals would receive a cash prize of $2 million, with second place earning $1 million and third place taking home $750,000.


Chips 'Inspired' By The Brain Could Be Computing's Next Big Thing


When you see the world through your own eyes, your brain is able to almost instantly recognize and perceive what things are. Your brain is essentially a complex pattern-matching machine that delivers context and understanding. You see a person, recognize their height, facial features, clothing and voice and your brain says, “Hey, that’s Bob and I know him.”

A computer’s "brain" can't do this. It can see the same objects in life and may eventually be able to tell you what they are, but it does this by matching a digital representation of the objects it "sees" to similar representations stored in databases, using pre-defined instructions built into its code.

For instance, when Facebook or Google use facial recognition software to determine who the people are in your pictures, the computer runs the images through a database that determines who the person is. It hasn't learned anything, and if it needed to match the same face in a different photo, it would need to run through its database again.
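To make the contrast concrete, here is a minimal sketch in Python of the database-matching approach described above, not how Facebook or Google actually implement it: a face is reduced to a feature vector and compared against stored vectors on every lookup, and nothing is remembered between lookups. The names, vectors and threshold are invented for illustration.

    import numpy as np

    # Hypothetical database of known faces: name -> feature vector.
    # In a real system the vectors would come from a face-encoding model.
    known_faces = {
        "Bob":   np.array([0.12, 0.85, 0.33, 0.51]),
        "Alice": np.array([0.90, 0.10, 0.44, 0.27]),
    }

    def identify(query_vector, threshold=0.25):
        """Return the closest stored identity, or None if nothing is close enough."""
        best_name, best_dist = None, float("inf")
        for name, vector in known_faces.items():
            dist = np.linalg.norm(query_vector - vector)  # Euclidean distance
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None

    # Every photo triggers a fresh lookup against the whole database.
    print(identify(np.array([0.11, 0.83, 0.35, 0.50])))  # -> Bob

However fast the lookup gets, the system never accumulates understanding; it just searches again.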

But what if you could create a computer chip that simulated the neural networking in the human brain? A computer chip that could act, learn and evolve the way a brain does?

Technology companies have tried to make chips that work like the human brain for decades, efforts that have typically fallen well short of the mark. Now Qualcomm and rivals such as IBM and Intel say they're finally getting close with a new class of brain-inspired computers that are closer to reality — and actual products — than ever before.

Neural Network Effects

The idea has long been a goal for computer scientists. The concept of a neural network and strong artificial intelligence (not necessarily on a single chip) was championed by the likes of the U.S. Defense Advanced Research Projects Agency (DARPA) from the 1980s until the agency pulled funding from its AI projects in 1991. Computer scientists have been working toward a self-learning computer for a long time.

Federico Faggin, one of the fathers of the microprocessor, has dreamed for decades of a processor that behaves like a neural network. At his company Synaptics, though, attempts to build chips that could "learn" from their inputs the way a brain does never yielded neural processors. Instead, the work led to advances in touch sensing and the creation of laptop touchpads and touchscreens.

Artificial neural networks turned out to be much more complicated to build than people like Faggin had hoped. Now chipmakers like Qualcomm, IBM and Intel are combining what they know about building microprocessors with what they know about the human brain to create a whole new class of computer chip: the neural-inspired processing unit.

Basic and applied research into these neuron-inspired computer brains is still under way. Qualcomm hopes to ship what it calls a "neural processing unit" by next year; IBM and Intel are on parallel paths. Companies like Brain Corp., a startup backed by Qualcomm, have focused on research and implementation of brain-inspired computing, adding algorithms that aim to mimic the brain's learning processes to improve computer vision and robotic motor control.

Redefining What Computers Do

Today's computer chips provide relatively straightforward — and rigidly defined — functions for processing information, storing data, collecting sensor readouts and so forth. Basically, computers do what humans tell them to do, and do it faster every year.

Qualcomm's "neural processing unit," or NPU, is designed to work differently from a classic CPU. It's still a classic computer chip, built on silicon and patterned with transistors, but it has the ability to perform qualitative (as opposed to quantitative) functions. 

Here's how Technology Review described the difference between Qualcomm's approach and traditional processors:

Today’s computer systems are built with separate units for storing information and processing it sequentially, a design known as the Von Neumann architecture. By contrast, brainlike architectures process information in a distributed, parallel way, modeled after how the neurons and synapses work in a brain....

Qualcomm has already developed new software tools that simulate activity in the brain. These networks, which model the way individual neurons convey information through precisely timed spikes, allow developers to write and compile biologically inspired programs. Qualcomm ... envisions NPUs that are massively parallel, reprogrammable, and capable of cognitive tasks like classification and prediction.

Matt Grob, the vice president and chief technology officer of Qualcomm Technologies, explained the difference between a CPU and an NPU in an interview with ReadWrite during the EmTech MIT Technology Review conference earlier this month.

But all these processors are derived from the original [computer processor] hardware architecture. It’s an architecture where you have data registers and processing units, you grab things from memory and you process it and store the results, and you do that over and over as fast as you can. We’ve added multiple cores and wider channel widths, and cache memory, all these techniques to make the processors faster and faster and faster. But the result is not at all like biology, like a real brain.

We’re talking about new classes of processors. Using new techniques at Qualcomm, that are biologically inspired … So when you try to solve certain problems like image recognition, motor control, or different forms of learning behavior, the conventional approach is problematic. It takes an enormous amount of power or time to really do a task, or maybe you can’t do a task at all.

But what we did do is start an early partnership with Brain Corporation, and we have a good relationship with the founder Eugene Izhikevich, and he created a very good model for the neuron, a mathematical model. The model is two properties, one is that it's very biologically plausible. … But it's also a numerically compact model that you can implement efficiently. So Eugene influenced this space, didn’t create it, but influenced it, and that in concert with the amount of source that we can put down, together, created some excitement about the machines we could build.
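Grob is describing what is widely known as the Izhikevich spiking-neuron model, which combines biological plausibility with a very small amount of arithmetic per time step. A minimal sketch of a single Izhikevich neuron in Python follows, using the published regular-spiking parameters and an arbitrary constant input current chosen only for illustration.

    # One Izhikevich spiking neuron: v is the membrane potential (mV),
    # u is a recovery variable; a, b, c, d are the standard regular-spiking constants.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = -65.0, b * -65.0      # resting state
    I = 10.0                     # constant input current (arbitrary, for illustration)
    dt = 0.5                     # time step in milliseconds

    spike_times = []
    for step in range(2000):     # simulate one second
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike threshold reached
            spike_times.append(step * dt)
            v, u = c, u + d      # reset after the spike

    print(len(spike_times), "spikes in one second of simulated time")

Two coupled update rules and a reset are the whole model, which is why it is compact enough to implement efficiently in silicon.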

Qualcomm calls its theoretical processors “Zeroth.” The name takes its cue from the “Zeroth Law of Robotics” coined by science fiction writer Isaac Asimov as a precursor to his famous “Three Laws of Robotics.” The Zeroth Law states: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Asimov actually introduced the Zeroth Law well after the initial Three Laws to correct flaws in how robots interpreted the original paradigm.

Grob gives the example of a simple robot with an NPU and a leg with a hinge joint. The robot was given a very simple desire: lying down was bad, standing up was good. The robot started wiggling and eventually learned how to stand. If you pushed it back down, it would stand right back up.

“And when you watch it, it doesn’t move like a robot, it moves like an animal. The thing that popped into my mind, it was like how a giraffe tries to stand up after it’s born,” Grob said. “They’re kinda shaky. They want to learn how to stand up, they have that desire, it’s programmed in, but they don’t know how to do it just yet. But they know any movement gets them closer to their goal, so they very quickly learn how to do it. The way they shake and fumble, it looks just like that.”

An NPU based on mathematical modeling and human biology would, theoretically, be able to learn the way a human could. Qualcomm hopes that NPUs could end up in all kinds of devices from smartphones to servers in cloud architecture within the next several years.

Hacking The Learning Computer

Most smartphones and tablets these days are controlled by integrated chips that combine processors, specialized graphics units and communication systems such as cellular technologies like LTE or Wi-Fi. Qualcomm envisions a near-term future where the NPU either slides directly onto these "systems on a chip" or operates as a stand-alone chip within a machine (like a robot).

IBM's work on cognitive computing — called TrueNorth — attempts to rethink how computing is done and pattern it after the brain. TrueNorth is not a chip like Qualcomm's Zeroth but rather a whole new paradigm that could change how computer vision, motion detection and pattern matching work. IBM is able to program TrueNorth with what it calls "corelets" that teach the computer a different function.

Intel's "neuromorphic" chip architecture is based on "spin devices." Essentially, "Spintronics" takes advantage of the spinning of an electron to perform a variety of computing functions. In a research paper published in 2012 [PDF], Intel researchers describe the capabilities of neuromorphic hardware based on spin devices as able to perform capabilities like: "analog-data-sensing, data conversion, cognitive-computing, associative memory, programmable-logic and analog and digital signal processing."

How does Qualcomm's work differ from that of IBM and Intel? Qualcomm would like to go into production soon and get its NPUs onto integrated chips as early as next year. In a statement to ReadWrite, Qualcomm said, "Our NPU is targeted at embedded applications and will leverage the entire Qualcomm ecosystem. We can integrate our NPU tightly with conventional processors to offer a comprehensive solution in the low power, embedded space."

When Qualcomm uses the phrase "embedded applications," it's referring to actual devices that can control certain aspects of the user interface and experience. That might mean smartphones that can learn through computer vision and audio dictation, for instance, employing code that gives the NPU a desire to perform an act, like learn your voice and habits.

An NPU in a smartphone could perform a whole host of functions. Grob gave us the example of your phone ringing with a call from an unknown number at 9 p.m. on a Sunday night, perhaps while you are watching your favorite HBO special. This is not a desirable time to be speaking with a telemarketer, so you say, "bad phone, bad." The phone learns that an interruption at that time of day and week is bad and automatically transfers those types of calls to voicemail.

Or maybe you take a picture of Bob with your smartphone camera. You tell your phone (just as you would teach a child) that the image it is seeing is Bob. In the future, the camera would automatically recognize Bob and tag the photograph with his name. Instead of having the device perform pattern matching in the cloud to determine this, the computation is done locally at relatively low computing cost.
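As a crude caricature of the call-screening example, and only that, here is a sketch of a feedback rule a device could run locally. The function names, time slots and threshold are all invented for illustration, not anything Qualcomm has described.

    from collections import defaultdict

    # Count how often the user reacts badly to unknown callers in each (weekday, hour) slot.
    bad_feedback = defaultdict(int)

    def record_feedback(weekday, hour):
        """Called when the user says 'bad phone' after an unwanted interruption."""
        bad_feedback[(weekday, hour)] += 1

    def send_to_voicemail(weekday, hour, known_caller, threshold=2):
        """Divert unknown callers once a time slot has drawn enough complaints."""
        return (not known_caller) and bad_feedback[(weekday, hour)] >= threshold

    record_feedback("Sunday", 21)
    record_feedback("Sunday", 21)
    print(send_to_voicemail("Sunday", 21, known_caller=False))   # True
    print(send_to_voicemail("Monday", 10, known_caller=False))   # False

The point of an NPU is to arrive at this kind of preference without anyone writing the rule out by hand; the sketch just shows how little machinery the learned behavior itself requires.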

Qualcomm wants to be the center of the developer world if and when neural processors become commonplace. Qualcomm’s value proposition is to make hacking an NPU as easy as building an app, via the company’s own tools.

The NPU may ultimately be a pipe dream for computer scientists. Qualcomm, IBM and Intel have all taken steps towards biologically inspired computer systems that learn on their own, but whether such chips ever actually become commonplace in our smartphones, cloud computing servers or robots is not a certainty. At the same time, recent research and development of these kinds of chips is far more advanced than scientists like Faggin could have dreamt 40 years ago.



Tiny Startup Vicarious Is Creating A Human-like Brain That Runs On A Laptop


A San Francisco-based startup named Vicarious is well on its way toward creating software that thinks and acts like a human brain.

On Monday, the company announced a major breakthrough. Its software can now solve "CAPTCHA" problems, those little questions that websites use to make sure you are a human.

A CAPTCHA looks like this:

[Image: an example reCAPTCHA]

A CAPTCHA is difficult for most computers to solve because it distorts letters and numbers in unusual ways, says Vicarious co-founder Scott Phoenix. Humans use their powers of perception to see an "m" and an "o" with a line through them and still read the word "morning."

"We picked CAPTCHA to solve because it was explicitly designed to be impossible for computers to solve. If you're trying to build artificial intelligence, CAPTCHA a great test,"Phoenix says.

Vicarious isn't planning on releasing CAPTCHA-breaking software as a product, he says. So website operators don't need to worry about it wreaking havoc on the Internet. (CAPTCHA prevents Internet "bots" from doing things like signing up for thousands of accounts at a time or posting spam comments.)

The founders of Vicarious have a much bigger goal than that. They are trying to build a computer that replicates the part of the brain called the neocortex, which commands higher functions such as sensory perception, spatial reasoning, conscious thought, and language.

In other words, they are trying to create human intelligence.

Eventually, they hope that their software will compete with IBM's supercomputer Watson, which is used for things like diagnosing cancer, with one big exception:

"Our software runs on laptops and a small set of servers we have," co-founder Dileep George said, "It doesn't need a huge set of data to train it." That's also like the human brain, he said.

IBM's Watson runs on a supercomputer, and doctors use it over the cloud.

The Vicarious team won't have a product ready for another five years, they say, and aren't worried about a commercial product at this stage. They are focused on the technology.

There's good reason to believe they'll succeed. The team is friends with some of the biggest names in the artificial intelligence industry, like Ray Kurzweil. They are well funded, too. The company, founded in 2010, landed $20 million in venture investment in 2012, led by Peter Thiel's Founders Fund. Facebook co-founder Dustin Moskovitz also invested.

Here's a video that shows the software in action breaking CAPTCHAs.


Stop What You're Doing And Try This Site That Comes Up With A Facebook Status Update For You


What-would-I-say is a new website that generates Facebook statuses that sound like you. And as these things go, it's not too bad.

For example, it thinks I might say this on Facebook:

[Screenshot: a generated status update]

Which is pretty accurate!

Here's how they say it works:

Technically speaking, it trains a Markov Bot based on mixture model of bigram and unigram probabilities derived from your past post history.

I think that means the bot strings together words (unigrams) and pairs of words (bigrams) that pop up a lot in your Facebook posts.
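If that reading is right, a toy version of the bigram half of the idea is only a few lines long: count which words follow which in past posts, then random-walk through those counts (the real site also mixes in single-word frequencies). The sample posts below are invented.

    import random
    from collections import defaultdict

    # Toy bigram model: for each word, remember the words that followed it and how often.
    posts = [
        "had a great time at the beach today",
        "had a long day at the office today",
        "the office coffee is terrible",
    ]
    followers = defaultdict(list)
    for post in posts:
        words = post.split()
        for current_word, next_word in zip(words, words[1:]):
            followers[current_word].append(next_word)

    def generate(start_word, length=8):
        """Random-walk through the bigram table to produce a fake status update."""
        word, output = start_word, [start_word]
        for _ in range(length - 1):
            if word not in followers:
                break
            word = random.choice(followers[word])  # weighted by how often the pair appeared
            output.append(word)
        return " ".join(output)

    print(generate("had"))  # e.g. "had a long day at the beach today"

The random walk only knows which word tends to follow which, which is exactly why the output swings between eerily plausible and total gibberish.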

It's pretty clever, but it's not perfect.

For example, this proposed status update contains a bunch of terms I might use, but overall it's gibberish:

[Screenshot: "milwaukee journal sentinel"]

This one, meanwhile, is coherent but gets my political views wrong:

[Screenshot: "abortion"]

Sometimes the bot makes me sound like a pedantic douchebag:

[Screenshot: "loanword"]

[Screenshot: "harvard alum"]

And sometimes it takes an uncomfortable journey into my subconscious sexual desires:

[Screenshot: "unspeakably horny"]

[Screenshot: "hipster invasion"]

And sometimes it thinks I'm Slate's Matt Yglesias:

[Screenshot: "cheesecake factory"]

But when the bot has its moments, it's more insightful than I am.

[Screenshot: "undesirably divisive"]

[Screenshot: "cain to save"]

Indeed, Markov Bot. Indeed.


Researchers Are Trying To Teach Computers Common Sense


PITTSBURGH (AP) — Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean.

The Carnegie Mellon University project is called NEIL, short for Never Ending Image Learning. In mid-July it began searching the Internet for images 24/7 and, in tiny steps, is deciding for itself how those images relate to each other. The goal is to recreate what we call common sense — the ability to learn things without being specifically taught.

For example, the computers have figured out that zebras tend to be found in savannahs and that tigers look somewhat like zebras.
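That kind of "X tends to appear with Y" relationship can be approximated with plain co-occurrence counting over labeled images. Here is a toy sketch, with invented labels standing in for whatever NEIL actually extracts from each picture; it is not the project's code.

    from collections import Counter
    from itertools import combinations

    # Each set stands in for the labels detected in one image.
    image_labels = [
        {"zebra", "savannah", "grass"},
        {"zebra", "savannah", "acacia"},
        {"tiger", "forest", "grass"},
        {"zebra", "savannah"},
    ]

    label_counts = Counter()
    pair_counts = Counter()
    for labels in image_labels:
        label_counts.update(labels)
        pair_counts.update(frozenset(pair) for pair in combinations(labels, 2))

    def tends_to_co_occur(a, b, min_ratio=0.6):
        """True if most images containing `a` also contain `b`."""
        together = pair_counts[frozenset((a, b))]
        return label_counts[a] > 0 and together / label_counts[a] >= min_ratio

    print(tends_to_co_occur("zebra", "savannah"))  # True
    print(tends_to_co_occur("zebra", "forest"))    # False

NEIL's real pipeline is far richer, since it has to discover the labels in the first place, but the relational "common sense" it reports is of this statistical flavor.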

The project is being funded by Mountain View, Calif.-based Google Inc. and the Department of Defense's Office of Naval Research.


Facebook Just Hired A Man Who Teaches Computers To Think (FB)


The race is on to create computers that can see, hear, think and reason like humans but with a computer's speed and accuracy. 

IBM has its Watson. Google has its new Quantum AI Lab in partnership with NASA and the Universities Space Research Association (USRA).

Facebook has a new artificial intelligence lab it announced in September. And on Monday, Facebook hired a star from this world to run it: New York University professor Yann LeCun.

LeCun announced that he was joining Facebook in a Facebook post. He'll be running the lab part time, teaching part time and overseeing a partnership between Facebook and NYU's Center for Data Science.

LeCun has a long history of inventing computers that can think. His handwriting-recognition technology is used by banks around the world. More recently, he and University of Toronto professor Geoffrey Hinton have ushered in advancements that let computers teach themselves, a concept called "unsupervised learning."

In March, Google hired Hinton. So by hiring LeCun, Facebook has added a notch to its AI belt, too.

This next generation of artificial intelligence is called "deep learning," and LeCun is so well known in the field that both Facebook CEO Mark Zuckerberg and Facebook CTO Michael Schroepfer attended a deep learning conference in Tahoe on Monday to announce the hire there, LeCun said in his post.

Facebook, IBM, and Google aren't the only ones working on deep learning. In October, Yahoo acquired LookFlow, a startup known for its deep learning image-recognition tech. Teaching machines to recognize pictures is one of the holy grails of deep learning. Imagine telling your computer to collect pictures of your kids smiling, and it finds them for you.

LookFlow will be the core of a new deep learning group at Yahoo, too, reports TechHive.

Meanwhile, Microsoft researchers are busy working on deep learning tech for speech recognition.


Can A Human Actually Fall In Love With A Computer?


The film "Her," which opens across the country this month, tells a love story between a man and some software.

It may seem far-fetched, but researchers say it's plausible. If so inclined, they could stitch together existing systems into one irresistible romance algorithm. Here's how a lovebot could seduce you.

Curiosity

A program that asks lots of questions stays more in control of the conversation and is more likely to produce convincing, relevant replies. Software that attempts to answer a person's questions risks revealing just how little it knows about the world (and human emotions) — ruining the effect. That's why inquisitiveness is one of the most common and successful cheats for chatbots, dating back to Eliza, a program built at the Massachusetts Institute of Technology in the '60s. Its persistent queries were modeled on those of a therapist.
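Eliza's trick is easy to imitate: swap a few pronouns and hand the user's own words back as a question, so the program never has to volunteer knowledge of its own. Here is a minimal sketch of that pattern in Python; the canned prompts are invented, not Eliza's actual script.

    import random

    # Reflect the user's words back as a question, therapist-style.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}
    PROMPTS = [
        "Why do you say that {}?",
        "How does it feel that {}?",
        "Tell me more about why {}.",
    ]

    def reflect(sentence):
        return " ".join(REFLECTIONS.get(word, word) for word in sentence.lower().split())

    def reply(user_input):
        return random.choice(PROMPTS).format(reflect(user_input))

    print(reply("I am worried about my job"))
    # e.g. "Why do you say that you are worried about your job?"

The program knows nothing about jobs or worry; it simply keeps the human talking, which is exactly the cheat described above.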

Smarts

In order to be able to hold a real conversation, computers have to be intelligent enough to both ask and answer questions. IBM's Watson, which defeated humans on "Jeopardy!" in 2011, is one of the smartest programs around. It can understand conversational language, draw from external and internal knowledge bases, and process 500 gigabytes a second. (Watson currently works in health care, finance, and retail.) But for romance to truly blossom for most people, computers will need to get even smarter than that.

Allure

Studies show that people will divulge more about sensitive, personal topics (such as drug use or sexual activity) to a computer than to a researcher. Machines can also coax humans into being polite with them. In one study, people were interviewed by a computer and then asked to rate its performance. They rated it better when they input their scores on that same computer rather than on a separate terminal or on paper — as if the computer had feelings. A program that elicits both unconscious behaviors — confessing and being kind — would be formidable.

Spike Jonze On Chatbots

A single film can come from many sources of inspiration, but part of the idea for the movie "Her" came 10 years ago, when writer and director Spike Jonze was interacting with an online chatbot. More specifically, it came midway through the conversation, when things got weird. "You're not very interesting," the bot told him. The filmmaker wasn't charmed, but he was intrigued. Instead of seeming blandly, vaguely human, Jonze says, "it was sassy and had an attitude and a point of view of the world."


These Computer Simulations Teach Themselves To Walk, And The Results Are Hilarious


What we have here is a computer demonstration of "flexible muscle-based locomotion for bipedal creatures," but let's call it what it really is: a video of 3D models figuring out how to walk and mostly failing at it, to chuckle-worthy results.

The hijinks begin to pick up around the 57-second mark, when the first humanoid falls over, right onto his digitized face.

Here's the proper video description for those seeking a more formal explanation of what's going on:

We present a muscle-based control method for simulated bipeds in which both the muscle routing and control parameters are optimized. This yields a generic locomotion control method that supports a variety of bipedal creatures. All actuation forces are the result of 3D simulated muscles, and a model of neural delay is included for all feedback paths. As a result, our controllers generate torque patterns that incorporate biomechanical constraints. The synthesized controllers find different gaits based on target speed, can cope with uneven terrain and external perturbations, and can steer to target directions.

As future generations "evolve" inside the software and gain a better understanding of their "bodies," the bumbling simulated creatures tend to get things worked out. But most of those early ones just didn't have a clue.

Flexible Muscle-Based Locomotion for Bipedal Creatures from John Goatstream on Vimeo.


Why You Won't Get Your Own Personal Scarlett Johansson Any Time Soon


In the movie "Her," the main character, Theodore Twombly falls in love with the operating system of his computer. The operating system is run by an artificial intelligence who Theodore calls Samantha (voiced by Scarlett Johansson), who over the course of the film develops self awareness and shows capabilities far beyond its creators’ intentions.

The film is set in the so-called near future, a recognisable Los Angeles/Shanghai hybridisation with elements of the present projected into what could possibly be. Without spoiling the film for those who haven’t seen it yet, the capabilities of the AI are what we imagine artificial intelligences to be. The science fiction aspect is that the film takes these imaginings and animates them, letting us forget the harsh reality of the present.

That reality is reflected by companies like Google and Amazon, whose attempts to show intelligence through algorithms are so basic that they make the future presented in "Her" seem a very long way off. Take, for example, the personalised advertising that Google has implemented. Have you noticed how, if you happen to search for an item, all of a sudden you get deluged by adverts for that particular item or service? There is nothing subtle or sophisticated about this. Google has decided that because you searched for an item you are interested in it, and in case you didn't make a purchase, it will keep reminding you about the product or service at every available opportunity. Clicking on an ad gives you the same result.

It turns out that this is not simply our imagining; it is a feature called ad remarketing. The premise is that only 2% of people visiting a site will complete a purchase, so going after the 98% who didn't gives retailers a way to chase that interested market. There is nothing subtle in this; it is a particularly blunt-force approach to selling.

So when Google talks about sophisticated and personalised advertising, this is as good as it gets. Forget the hype of Big Data and detailed analytics; this is a simple if-then decision: if a person has visited a site, then show ads for that site repeatedly.
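Reduced to code, the logic being criticised here really is about one branch deep. A hypothetical sketch (the site names and data structures are made up):

    # The "sophistication" of ad remarketing, reduced to its if-then essence.
    visited_sites = {"shoe-shop.example", "camera-store.example"}  # tracked via cookies

    def pick_ad(candidate_ads):
        """Show a retargeted ad if the user has visited the advertiser's site."""
        for ad in candidate_ads:
            if ad["advertiser"] in visited_sites:
                return ad                       # keep reminding them, every time
        return candidate_ads[0]                 # otherwise fall back to whatever is first

    ads = [{"advertiser": "news.example"}, {"advertiser": "shoe-shop.example"}]
    print(pick_ad(ads))  # -> the shoe shop ad, again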

There is a strong incentive for companies, organisations and individuals to show that they are innovative and represent the future. Equally, there is a huge risk associated with being labelled a follower or un-innovative. Apple's fortunes, for example, have largely languished over the past year because of a perception that it is no longer able to innovate.

One of the main issues facing companies is that the image of innovation is often set by the marketing department and is not reflected by the organisation as a whole. There is a belief that creating a few mobile phone apps is all it takes to be innovative in the eyes of your clients and the general public. A good example of this is the Commonwealth Bank, which has been lauded as innovative. As I have written before, making token efforts to appear innovative with mobile apps will ultimately fail if the entire organisation is not operating at the same level.

It takes very little interaction with the bank to realise that behind the online high-tech facade, there are administrators largely still operating manually. An application for a credit card made online, for example, results in letters sent by regular post. There is no AI here, not even online communication. Ultimately, you still have to talk to a person and go through the same arcane process (and arcane credit-rating process) that has existed for decades.

I enjoyed the movie "Her." For a couple of hours, I was able to imagine the possibilities of technology and how, as a society, we will need to adapt if it were ever to eventuate. As the adaptation of humans is really the biggest challenge in this equation, it is perhaps fortunate that the future presented in "Her" is not remotely near, even though people may like to pretend it is.

David Glance does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation. Read the original article.



Here Are The 47% Of Jobs At High Risk Of Being Destroyed By Robots


It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.

Whatever name you prefer, some form of it has been stoking progress and killing jobs — from seamstresses to paralegals — for centuries. But this time is different: Nearly half of American jobs today could be automated in "a decade or two," according to a new paper by Carl Benedikt Frey and Michael A. Osborne, discussed recently in The Economist. The question is: which half?

Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful than farmers. Robotic arms are stronger and more tireless than assembly-line workers. But in the past 30 years, software and robots have thrived at replacing a particular kind of occupation: the average-wage, middle-skill, routine-heavy worker, especially in manufacturing and office admin. 

Indeed, Frey and Osborne project that the next wave of computer progress will continue to shred human work where it already has: manufacturing, administrative support, retail, and transportation. Most remaining factory jobs are "likely to diminish over the next decades," they write. Cashiers, counter clerks, and telemarketers are similarly endangered. On the far right side of this graph, you can see the industry breakdown of the 47% of jobs they consider at "high risk."

[Chart: industry breakdown of the jobs at high risk of automation]

And, for the nitty-gritty breakdown, here's a chart of the ten jobs with a 99% likelihood of being replaced by machines and software. They are mostly routine-based jobs (telemarketing, sewing) and work that can be solved by smart algorithms (tax preparation, data entry keyers, and insurance underwriters). At the bottom, I've also listed the dozen jobs they consider least likely to be automated. Health care workers, people entrusted with our safety, and management positions dominate the list.

If you wanted to use this graph as a guide to the future of automation, your upshot would be: Machines are better at rules and routines; people are better at directing and diagnosing. But it doesn't have to stay that way.

The Next Big Thing

Predicting the future typically means extrapolating the past. It often fails to anticipate breakthroughs. But it's precisely those unpredictable breakthroughs in computing that could have the biggest impact on the workforce.

For example, imagine somebody in 2004 forecasting the next 10 years in mobile technology. In 2004, three years before the introduction of the iPhone, the best-selling mobile device, the Nokia 2600, looked like this: 

Many extrapolations of phones from the early 2000s were just "the same thing, but smaller." It hasn't turned out that way at all: Smartphones are hardly phones, and they're bigger than the Nokia 2600. If you think wearable technology or the "Internet of Things" seems kind of stupid today, well, fine. But remember that 10 years ago, the future of mobile appeared to be a minuscule cordless landline phone with Tetris, and now smartphone sales are about to overtake computers. Breakthroughs can be fast.

We might be on the edge of a breakthrough moment in robotics and artificial intelligence. Although the past 30 years have hollowed out the middle, high- and low-skill jobs have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the rushing wave of AI.

Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology mimicked a savant infant: Machines could do long math equations instantly and beat anybody in chess, but they couldn't answer a simple question or walk up a flight of stairs. As a result, menial work done by people without much education (like home health care workers or fast-food attendants) has been spared, too.

But perhaps we've hit an inflection point. As Erik Brynjolfsson and Andrew McAfee pointed out in their book Race Against the Machine (and in their new book The Second Machine Age), robots are finally crossing these moats by moving and thinking like people. Amazon has bought robots to work its warehouses. Narrative Science can write earnings summaries that are indistinguishable from wire reports. We can say to our phones, "I'm lost, help," and our phones can tell us how to get home.

Computers that can drive cars, in particular, were never supposed to happen. Even 10 years ago, many engineers said it was impossible. Navigating a crowded street isn't mindlessly routine. It needs a deft combination of spatial awareness, soft focus, and constant anticipation: skills that are quintessentially human. But I don't need to tell you about Google's self-driving cars, because they're one of the most over-covered stories in tech today.

And that's the most remarkable thing: In a decade, the idea of computers driving cars went from impossible to boring.

The Human Half

In the 19th century, new manufacturing technology replaced what was then skilled labor. Somebody writing about the future of innovation then might have said skilled labor is doomed. In the second half of the 20th century, however, software technology took the place of median-salaried office work, which economists like David Autor have called the "hollowing out" of the middle-skilled workforce.

The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?

If you go back to the two graphs in this piece to locate the safest industries and jobs, they're dominated by managers, health-care workers, and a super-category that encompasses education, media, and community service. One conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for, other humans. In this light, automation doesn't make the world worse. Far from it: It creates new opportunities for human ingenuity.  

But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one. As Frey and Osborne write in their conclusion:

While computerization has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labour in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations.

It would be unsettling enough if we knew exactly which jobs were next in line for automation. The truth is scarier. We don't really have a clue.


Google's Game Of Moneyball In The Age Of Artificial Intelligence


Over the past couple of months, Google has been playing its own peculiar game of Moneyball. It may not make a ton of sense right now, but Google is setting itself up to leave its competitors in the lurch as it moves into the next generation of computing.

Google has been snatching up basically every intelligent-systems and robotics company it can find. It bought smart thermostat maker Nest earlier this month for $3.2 billion. In the robotics field, Google bought Boston Dynamics, Bot & Dolly, Holomni, Meka Robotics, Redwood Robotics and Schaft Inc. to round out a robot portfolio group that is being led by Android founder Andy Rubin.

When it comes to automation and intelligent systems, Google started its acquisition spree in late 2012 with facial recognition company Viewdle and has continued picking up machine-intelligence companies since then, including the University of Toronto's DNNResearch Inc. in March 2013, language processing company Wavii in April and gesture recognition company Flutter in October. Google bought a computer vision company called Industrial Perception in December and continued its spree with a $400 million acquisition of artificial intelligence gurus DeepMind Technologies earlier this week.

Dizzy yet? The rest of the technology industry surely is. It's hard to compete with Google's acquisition rampage, especially when, as the purchases were happening, there seemed to be very little rhyme or reason to them. Google beat out Facebook for DeepMind, while Apple had interest in Nest. But after more than a dozen large purchases, Google's strategy is finally becoming clear.

Google is exploiting an inefficiency in the market to become the leader in the next generation of intelligent computing.

Google's Moneyball Strategy

Moneyball is a term coined by author Michael Lewis in his 2003 book, Moneyball: The Art of Winning an Unfair Game. The term is often mistakenly associated with the advanced statistical models used by Billy Beane, the general manager of the Oakland Athletics, to build a club that could thrive in Major League Baseball. But the principle of Moneyball is not actually about using stats and data to get ahead; it is about exploiting systems for maximum gain by acquiring talent that is undervalued by the rest of the industry.

This is exactly what Google is doing: exploiting market inefficiency to land undervalued talent. Google determined that intelligent systems and automation will eventually be served by robotics and has gone out of its way to acquire all of the pieces that will serve that transformation before any of its competitors could even identify it as a trend. By scooping up the cream of the crop in the emerging realm of robotics and intelligent systems, Google is cornering the market on talented engineers ready to create the next generation of human-computer interaction.

Technology Review points out that Google's research director Peter Norvig said the company employs “less than 50 percent but certainly more than 5 percent” of the world’s leading experts on machine learning. That was before Google bought DeepMind Technologies. 

To put Google’s talent hoarding into context, remember that many companies are struggling just to find enough talent to write mobile apps for Android and iOS. When it comes to talented researchers focused on robotics and AI components like neural networks, computer vision and speech recognition, the talent pool is much smaller, more exclusive and far more elite. Google has targeted this group with a furious barrage of aggressive purchases, leaving the rest of the industry to wonder where the available talent will be when other companies make their own plays for building next-generation intelligent systems.

“Think Manhattan project of AI,” one DeepMind investor told technology publication Re/code this week. “If anyone builds something remotely resembling artificial general intelligence, this will be the team.”

Of course, Google is not the only company working on intelligent systems. Microsoft Research has long been involved in neural networks and computer vision, while Amazon has automated many of its warehouses and built cloud systems that act as the brains of the Internet. The Defense Department's research arm, the Defense Advanced Research Projects Agency (DARPA), has long worked on aspects of artificial intelligence, and a variety of smaller startups are building their own, smaller-scale intelligent platforms. Qualcomm, IBM and Intel are also working on entire systems, from chipsets to neural mesh networks, that could advance the field of efficient, independent AI.

What Is Google Trying To Accomplish?

To understand what Google’s next "phase" will look like, it is important to understand the core ideas that drive the company.

Google’s core objective—which has never really changed despite the company's branching out into other areas of computing—is to accumulate all of the knowledge in the world and make it accessible. Google makes money by charging advertisers for access to this knowledge base through keywords in its search engine. If there is a brilliance to Google’s business model, it is that the company has essentially monetized the alphabet.

The work is nowhere near done, but Google has already done an impressive job over the last 16 years making the world’s knowledge available to anyone with Internet access. Thanks to Google, the answer to just about any question you could think of asking is at your fingertips, and with smartphones and ubiquitous mobile computing, that information is now available wherever you go. 

The next step for Google is to deliver that information to users automatically, with intelligent context. The nascent Google Now personal assistant is the first step in this direction, but it has a lot of room to grow.

If we take the concept of Google monetizing the alphabet and apply it to everyday objects, we can see where artificial intelligence comes into play in Google's plan to change the fundamental nature of computing.

What if you could use a device—like a smartphone, Google Glass or a smartwatch—to automatically identify all relevant information in your area and deliver it to you contextually before you even realize you want it?

If we mix the notion of ubiquitous sensor data from the Internet of Things with neural networks capable of "computer vision" and of recognizing patterns on their own, then we have all the components of a personalized AI system that can be tailored to every individual on the planet.

Academic researchers call these computing concepts “deep learning.” Deep learning is the idea that machines can eventually learn by themselves to perform functions like real-time speech recognition. Google's goal is to apply deep learning to everyday machines like smartphones, tablets and wearable computers.
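To make the idea concrete, here is a minimal sketch in Python (using only NumPy, and purely illustrative rather than anything Google or its acquisitions actually ship) of a tiny neural network learning a pattern from labeled examples instead of being programmed with an explicit rule:

```python
import numpy as np

# Toy training data: the XOR pattern, which no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the squared error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]: learned, not hand-coded
```

Scaled up by many orders of magnitude in data, model size and computing power, that same learn-from-examples loop is what "deep learning" refers to in applications like speech recognition and computer vision.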

But what about all the robots Google just purchased? This is a little trickier to discern, but if Google does eventually figure out the intricacies of artificial intelligence, it can then apply those principles to an army of automated machines that function without human intervention. For instance, with computer vision, machine learning and neural networks, Google could deploy robots to maintain its data centers. If a server breaks or is having problems, a robot could pay it a visit, tap into its internal diagnostics or visually recognize a hardware issue. Google’s driverless car could benefit from all of these technologies as well, including speech and pattern recognition.

Google’s research into robotics and deep learning doesn’t have to mean the new technology will be restricted to beefing up its current products. Advances in cognitive computing can be applied to many different industries, from manufacturing to analyzing financial data at large banks. As with smartphones, machine learning technology can be applied almost anywhere, as long as patents don't get in the way.

Google Has All The Ingredients To Make AI Work

Google has brains. Lots of different kinds of brains.

From a people perspective, Google’s head of engineering, Ray Kurzweil, is one of the world's foremost experts on artificial intelligence. Andy Rubin joined Google in 2005 when the company purchased his Android platform to create a smartphone operating system. These days, Rubin is taking the job of building Android much more literally, as he now heads up Google's fledgling robotics department. Jeff Dean is a senior fellow within Google's research division, working in the company's “Knowledge” (search-focused) group, which will most likely be the team to incorporate DeepMind.

Those names are just a few examples of Google's best brains at work. Google also has plenty of machine/computer brains that perform the bulk of the heavy lifting at the company.

The search product has been tweaked and torqued over the years to be one of the smartest tools ever created. In its own way, Google search has components of what researchers call “weak” artificial intelligence. Google’s server infrastructure that helps run search (as well as its Drive personal cloud, Gmail, Maps and other Google apps) is one of the biggest in the world, behind Amazon but on par with companies like Microsoft and Apple. 

Google has the devices necessary to put what it creates into the hands of people around the world. Through the massively popular Android mobile operating system, its fledgling Chrome OS computer operating system and accessory devices like Google Glass or its long-rumored smartwatch, Google can push cognitive, contextual computing to the world.

All Google needs to do is make artificial intelligence a reality and plug its components into its large, seething network and see what happens. That is both very exciting and mildly terrifying at the same time.

The Risks For Google, The Internet & The World

“Behold, the fool saith, 'Put not all thine eggs in the one basket,' which is but a manner of saying, 'Scatter your money and your attention.' But the wise man saith, 'Put all your eggs in the one basket and ... WATCH THAT BASKET.'" ~ Mark Twain, Pudd'nhead Wilson

In 1991, DARPA pulled much of its funding and support for research into neural networks. What followed was a period researchers call an “AI Winter,” in which the field of artificial intelligence stagnated and made little meaningful progress. The AI Winter of the 1990s was not the first, and it might not be the last.

Google is gathering many of the individual brains in the field of AI into one big basket. If Google fails to create the next generation of artificial intelligence, or simply loses interest, another AI Winter is a real possibility.

Google is also betting a lot of money that it can take the components of artificial intelligence and robotics and apply them to everything it touches. Between DeepMind and Nest, Google has spent $3.6 billion on automation companies this year, and those were just two deals. Google has a lot of eggs in this basket, and if the bet fails, it could cost the company years of work, key employees and its place at the bleeding edge of next-generation computing.

Academics and pundits also worry about the privacy implications of Google’s pursuit of a contextual Internet centered on the individual. With Nest, Google could know just about everything you do in your home: when you leave, when you get home, how many people are in the house and so forth. Part of aggregating the world’s knowledge is parsing information about the individuals who inhabit that world. The more Google knows about us, the better it thinks it can enhance our lives. But the privacy concerns are immense and well-founded.

There is also a larger ethics question surrounding the use of artificial intelligence. Some of it centers on science fiction scenarios of robots taking over the world (like Skynet in Terminator or the machines in The Matrix). But, in addition to privacy concerns, there are many ways AI could be abused, or could radically reshape the service economy. As part of the DeepMind acquisition, Google agreed to create an ethics board to oversee how it uses the technology in future applications.


A Brilliant Scientist Would Like To Remind You That Watson And Siri Aren't Artificially Intelligent


douglas hofstadter

Douglas Hofstadter is a professor, cognitive scientist, and author whose work addresses a number of fields from computer science to consciousness.

And he'd like to remind you that the Jeopardy-winning supercomputer, IBM's Watson, isn't truly artificial intelligence. He gets into it during this Q&A with Popular Mechanics.

"Watson is basically a text search algorithm connected to a database just like Google search," he said. "It doesn't understand what it's reading. In fact, read is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous."

Artificial intelligence is "a slippery term," he argues. Just because a computer plays a mean game of chess doesn't make it an intelligent machine, especially once you learn the exhaustive way that computers play chess.

So how do you start doing work in AI that matters? Hofstadter says that people need to study the mind and "find out the principles of intelligence."

"I think you have to move toward much more fundamental science, and dive into the nature of what thinking is. What is understanding? How do we make links between things that are, on the surface, fantastically different from one another? In that mystery is the miracle of human thought. The people in large companies like IBM or Google, they're not asking themselves, what is thinking? They're thinking about how we can get these computers to sidestep or bypass the whole question of meaning and yet still get impressive behavior."


Netflix Is 'Training' Its Recommendation System By Using Amazon's Cloud To Mimic The Human Brain (NFLX)


reed hastings netflix

Frequent Netflix users rejoice: Netflix is working on a new technology that should make its recommendation engine much better.

The goal of the technology is to stop recommending movies based on what you've seen, and instead make suggestions based on what you actually like about your favorite shows and movies.

Right now, Netflix looks at the things you watch and offers suggestions based on attributes like the actors, genre and filming location.

That method is far from perfect. Sometimes Netflix gets stuck in a rut and doesn't offer enough variety. Just because you like "Parks and Recreation" doesn't automatically mean you like "The Office" and "30 Rock" and nothing else.

And it can miss the subtle differences between similar shows — the thing that makes a person love "The West Wing" may not apply to "House of Cards."

That's why Netflix is moving into a field of research known as "deep learning." That means Netflix is "training" its software to provide better recommendations by feeding massive amounts of information to a technology called "neural networks." Neural networks mimic how the human brain identifies patterns.
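As a rough illustration of what "training" on viewing data can mean, here is a toy sketch in Python with NumPy. It uses a simple latent-factor model rather than the neural networks Netflix describes, and the ratings are made up, but it shows the core idea: the software learns hidden "taste" factors from the data instead of relying on hand-labeled attributes like genre or cast:

```python
import numpy as np

# Toy ratings: rows are viewers, columns are shows; 0 means "not yet watched".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = ratings.shape
k = 2  # number of hidden "taste" factors to learn
rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(n_users, k))   # per-viewer factors
V = rng.normal(scale=0.1, size=(n_items, k))   # per-show factors

lr, reg = 0.05, 0.02
observed = np.argwhere(ratings > 0)

for epoch in range(500):
    for u, i in observed:
        pu, qi = U[u].copy(), V[i].copy()
        err = ratings[u, i] - pu @ qi
        # Nudge both factor vectors toward explaining this observed rating.
        U[u] += lr * (err * qi - reg * pu)
        V[i] += lr * (err * pu - reg * qi)

# Predict a score for something viewer 1 has not watched yet (show 2).
print(round(float(U[1] @ V[2]), 2))
```

Netflix's production system is vastly larger and runs its training on GPU hardware, but the workflow has the same shape: fit a model to observed behavior, then use it to score the titles a viewer hasn't seen.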

The company took the lessons learned by researchers at Google, Stanford, and Nvidia and created deep learning software that takes advantage of Amazon's powerful cloud infrastructure, according to a new post on Netflix's technology blog.

It's no surprise that Netflix is building its neural network tech on top of Amazon's cloud, as it's one of the largest customers of Amazon Web Services.

In this case, Netflix used Amazon servers built with graphics processors (GPUs) from Nvidia, chips typically used to render effects in video games and other graphics-heavy applications. With its brain-inspired software running on Amazon's machines, Netflix could "train" a new model in 47 minutes, compared with the 20 hours its previous efforts required.

With times like that, Netflix's engineers can set up a training session, have it run, and see results in the same day — and then run another test with improvements based on those results. 

That doesn't mean recommendations are going to start getting better every hour of the day. What it does mean is that the humans running the system will be able to perform more tests and gain insight into patterns they never could have identified before.

Netflix isn't the only player in tech trying to use deep learning to improve its service. Back in December, Facebook hired NYU professor Yann LeCun, an expert in "unsupervised learning," an area of AI research focused on making computers that can teach themselves. IBM has its Watson supercomputer, which the company has said will soon be able to reason and debate, and Google shares a quantum AI lab with NASA.

SEE ALSO: IBM is using Watson to psychoanalyze people from their tweets


Artificial Intelligence On Mobile Is Quickly Becoming A Reality


Mobile Insider is a daily newsletter from BI Intelligence delivered first thing every morning exclusively to BI Intelligence subscribers. Sign up for a free trial of BI Intelligence today.


CAN COMPUTERS BE OUR FRIENDS? Dave Smith at ReadWrite discusses developments in mobile recommendation engines and personalized mobile assistants, technology that can sort through vast amounts of our personal data and deliver a real-time decision in a matter of seconds. Smith likens the developments to the plot of the 2013 Spike Jonze movie, Her, in which operating systems are tailored uniquely to users and can understand human emotion and behavior. Leading the way is voice-recognition company Nuance. CEO Peter Mahoney says the company is improving the ways in which artificial intelligence can communicate with humans, rather than just spitting back data. 

"Dialogue is really important. In the original systems that came out, it operated like a search engine. You say something and something comes back, but it may or may not be the right thing. But that’s not how humans work. Humans disambiguate. We clarify," said Mahoney to Smith. (ReadWrite)

Nuance is also reportedly studying paralinguistics, or the understanding of how people speak rather than what is being said. "We're looking at the acoustic elements to be able to detect emotions in speech," said John West, a Nuance principal solutions architect. Nuance's technology already helps power Apple's voice-recognition service Siri, and the company could potentially incorporate its paralinguistics findings into Siri as well. It's a clear step forward in a growing effort around machine learning, personalization and, ultimately, artificial intelligence. (BBC)
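As a very rough sketch of what "looking at the acoustic elements" can involve (illustrative Python with NumPy only; Nuance's actual models are proprietary and far more sophisticated), emotion-related cues are typically derived from low-level signal features such as loudness and voicing before any words are recognized:

```python
import numpy as np

def frame_features(signal, sample_rate=16000, frame_ms=25):
    """Split a mono audio signal into frames and compute simple acoustic cues."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Energy per frame: a crude proxy for loudness, which tends to rise with agitation.
    energy = np.mean(frames ** 2, axis=1)

    # Zero-crossing rate: a crude proxy for how "bright" or tense the voice sounds.
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

    return energy, zcr

# Synthetic stand-in for a one-second utterance: a tone whose loudness ramps up.
t = np.linspace(0, 1, 16000, endpoint=False)
utterance = np.linspace(0.1, 1.0, t.size) * np.sin(2 * np.pi * 220 * t)

energy, zcr = frame_features(utterance)
print(energy[:3].round(4), energy[-3:].round(4))  # energy rises across the utterance
```

A real paralinguistics system would feed features like these (plus pitch, speaking rate and many others) into a trained classifier; the point is simply that the signal itself, not just the transcript, carries the emotional information.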

BREAKING: Japanese e-commerce company Rakuten will buy Viber, a popular mobile messaging app, for $900 million, according to Reuters. (Reuters)

AUDI'S AUGMENTED-REALITY CARS: BBC UK visited Audi's global headquarters in Germany to find out how the automaker is starting to incorporate augmented reality technology into its vehicles. The BBC's video shows someone using Google Glass to help identify a car's issue and demonstrate, virtually, how to fix it, and engineers at an Audi factory using augmented reality to spot imperfections in a car's design. (BBC)

QUOTE OF THE DAY: "You're not going to be able to afford four data plans and four of the highest-priced phones for that family."— Former Motorola Mobility CEO Dennis Woodside saying U.S. wireless carrier data rates are too expensive for middle-class families, which is why Google, Motorola, Comcast, Time Warner Cable, and Charter Communications are entering a coalition to push the government to increase the allocation of Wi-Fi airwaves. (Wall Street Journal)

COMCAST-TIME WARNER CABLE DARK HORSE: Much of the talk about the Comcast-Time Warner Cable merger has focused on its implications for the cable television industry. But both companies also own a significant share of U.S. Wi-Fi networks. The merger could lead to the combined company renting out Wi-Fi capacity to other carriers or even launching a mobile data service of its own, according to Kevin Fitchard of GigaOm. (GigaOm)

A MOBILE WORLD: Cell phones have become nearly ubiquitous across the world, even in emerging markets and developing nations, according to a new study from Pew. The research outfit conducted face-to-face interviews with almost 25,000 people in 24 emerging and developing nations between March and May of last year. In each of the 24 nations surveyed, more than half the population owned a cell phone, a share that has grown significantly over the last decade, Pew says. Jordan, China, Russia, Chile and South Africa are ahead of the pack, with more than 90% of their populations owning mobile phones. In turn, landline ownership in these countries has dipped: a median of just 23% reported having a working landline at home. 

But smartphone adoption is still catching on. According to Pew, none of the 24 countries have reached over 50% smartphone penetration. Lebanon led the way with smartphone adoption of 45% and China was in second place with 37% penetration, based on Pew's results. Regardless, smartphones are still a relatively new technology, particularly for consumers in the most underdeveloped nations in the Middle East and Africa. But smartphone ownership across the board in these nations skewed significantly higher among 18- to 29-year-olds. That demographic's smartphone adoption went as high as 69% in China, 62% in Lebanon, and 55% in Chile. As smartphone adoption rates max out in much of the developed world, these nations and regions will shoulder the weight of the next wave of mobile growth. (Pew Research)

780 PERCENT GROWTH: Finnish mobile game developer Supercell saw its revenues increase 780% in 2013, growing to $892 million (US), up from just $101 million (US) in 2012. Supercell is best known for its mobile games "Clash Of Clans" and "Hay Day," which are both free to download on the iOS App Store and Google Play, meaning its games generate most of their revenue from ads and in-app purchases. Supercell is following the path of its Finnish counterpart Rovio, maker of the popular Angry Birds brand. Gaming continues to be a massive money-maker on mobile. (Wall Street Journal)

SURVIVING TECH EXTINCTION: New York Times writer Farhad Manjoo outlines a strategy for mobile consumers who don't want to be left behind or locked into a soon-to-be-obsolete company or service. Essentially, he tells consumers to buy Apple hardware, use Google services, get content from Amazon, and bet on "connector" services like Dropbox and Evernote that work across platforms and devices. Meanwhile, Microsoft's head of communications, whose company was left off the list, has a stern response. (New York Times, Business Insider)

Here's what else BI Intelligence subscribers are reading 

'Reverse Showrooming': Bricks-And-Mortar Retailers Fight Back

Facebook And Instagram Win In Mobile Time-Spend On Social, While Google+ Lags Badly

US Credit Card Volume Growth Cools Off In 2013, Due To Short Holidays And Debt Wariness

US Adults Now Spend More Time On Smartphones Than They Do On PCs

 

