
Facebook Bought A Company That Could Let It Take On Siri


Facebook has acquired Wit.ai, a Y Combinator-backed speech recognition startup founded 18 months ago.

The company provides an API for building voice-activated interfaces and already has over 6,000 developers on its platform.  

While building speech recognition and voice control is normally an extremely complicated technical process, Wit.ai allows developers to build this capability into their products by adding a few short lines of code.
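
To make that concrete, here is a minimal sketch of the kind of call a developer might make to an intent-parsing service such as Wit.ai. The endpoint shape, query parameter, and token placeholder are assumptions for illustration; consult Wit.ai's own documentation for the actual API.

```python
import requests

# Illustrative sketch only: the endpoint, query parameter, and token handling
# below are assumptions, not a definitive description of Wit.ai's API.
WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # hypothetical placeholder

def parse_intent(utterance: str) -> dict:
    """Send a natural-language utterance to an intent-parsing endpoint and
    return the structured interpretation (intent and entities) as JSON."""
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": utterance},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(parse_intent("Turn off the living room lights"))
```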

“Facebook’s mission is to connect everyone and build amazing experiences for the over 1.3 billion people on the platform – technology that understands natural language is a big part of that, and we think we can help,”  the company announced in a blog post.

Despite the acquisition, Wit.ai says its platform will remain open and free for everyone. Facebook will likely leverage Wit.ai's services to draw in new developers. Facebook provides developers with resources and help building apps on its platform in the hope that these developers will one day turn around and pay Facebook for advertising.

Wit.ai’s technology could also become integrated with Facebook itself. The company could integrate voice control into Facebook's native app or add a voice-to-text input for Messenger, for example. 

In an October blog post announcing Wit.ai’s $3 million seed round led by Andreessen Horowitz, co-founder and CEO Alex Lebrun wrote that he hoped Wit.ai’s open, distributed, community-based network of developers could help build “the Github, the Wikipedia, the Bitcoin of natural language.”  

Lebrun said that he wants his company to be the go-to platform for developers looking to build messenger-based and audio-first apps for next generation wearables and smart devices.

Read more about the announcement on Wit.ai.



ELON MUSK: Robots Aren't Going To Kill Us All ... Just Yet


Elon Musk has held a public question-and-answer session on Reddit, sharing everything from his daily habits to his fear of killer robots.

In his Reddit AMA (ask me anything) session, Musk was asked whether he thought the idea of the technological singularity was just "hype." 

The technological singularity is the idea that artificial intelligence will reach a point at which robots are smarter than humans and could control or even kill us.

Musk dismissed the idea that his fears of artificial intelligence were just hype, telling the Reddit commenter that "The timeframe is not immediate, but we should be concerned. There needs to be a lot more work on AI safety."

But Musk showed that he wasn't worried about all robots, linking to a video of a cat in a shark costume riding a Roomba to make his point.

Billionaire Musk has often talked about his concerns over the advancement of artificial intelligence. He told a Vanity Fair conference that robots could delete humans like spam if they became too intelligent. When asked whether humanity could use SpaceX's line of spaceships to escape killer robots, he said robots would most likely follow us in our escape from Earth. And in November, Musk also posted (and quickly deleted) an online comment warning that killer robots could arrive within five years.



Artificial Intelligence Is Still A Long Way From Being A 'Doomsday Machine'


The possibility that advanced artificial intelligence (AI) might one day turn against its human creators has been repeatedly raised of late.

Renowned physicist Stephen Hawking, for instance, surprised by the ability of his newly-upgraded speech synthesis system to anticipate what he was trying to say, has suggested that, in the future, AI could surpass human intelligence and ultimately bring about the end of humankind.

Hawking is not alone in worrying about superintelligent AI. A growing number of futurologists, philosophers and AI researchers have expressed concerns that artificial intelligence could leave humans outsmarted and outmanoeuvred.

My view is that this is unlikely, as humans will always use an improved AI to improve themselves. A malevolent AI would have to outwit not only raw human brainpower but the combination of humans and whatever loyal AI technology we are able to command – a combination that will best either on its own.

There are many examples already: Clive Thompson, in his book Smarter Than You Think, describes how in world championship chess, where AIs surpassed human grandmasters some time ago, the best chess players in the world are not humans or AIs working alone, but human-computer teams.

While I don't believe that surpassing raw (unaided) human intelligence will be the trigger for an apocalypse, it does provide an interesting benchmark. Unfortunately, there is no agreement on how we would know when this point has been reached.

Beyond the Turing Test

An established benchmark for AI is the Turing Test, developed from a thought experiment described by the late, great mathematician and AI pioneer Alan Turing. Turing's practical solution to the question: "Can a machine think?" was an imitation game, where the challenge is for a machine to converse on any topic sufficiently convincingly that a human cannot tell whether they are communicating with man or machine.

In 1991 the inventor Hugh Loebner instituted an annual competition, the Loebner Prize, to create an AI – or what we would now call a chatbot – that could pass Turing's test. One of the judges at this year's competition, Ian Hocking, reported in his blog that if the competition entrants represent our best shot at human-like intelligence, then success is still decades away; AI can only match the tip of the human intelligence iceberg.

I'm not overly impressed either by the University of Reading's recent claim to have matched the conversational capability of a 13-year-old Ukrainian boy speaking English. Imitating child-like intelligence, and the linguistic capacity of a non-native speaker, falls well short of meeting the full Turing Test requirements.

Indeed, AI systems equipped with pattern-matching, rather than language understanding, algorithms have been able to superficially emulate human conversation for decades. For instance, in the 1960s the Eliza program was able to give a passable impression of a psychotherapist.
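
To illustrate how far such pattern-matching can get without any real understanding, here is a minimal ELIZA-style responder. The rules below are invented for illustration and are far simpler than the decomposition and reassembly scripts the original 1960s program used.

```python
import re
import random

# A minimal ELIZA-style pattern matcher (illustrative only; the original
# program used a much richer script of decomposition/reassembly rules).
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def respond(utterance: str) -> str:
    """Return a canned 'therapist' reply by matching surface patterns,
    with no understanding of the language at all."""
    text = utterance.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please go on."

print(respond("I am worried about intelligent machines"))
```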

Eliza showed that you can fool some people some of the time, but the fact that Loebner's US$25,000 prize has never been won demonstrates that, performed correctly, the Turing test is a demanding measure of human-level intelligence.

Measuring artificial creativity

So if the Turing test cannot yet be passed, are there aspects of human intelligence that AI can recreate more convincingly? One recent proposal from Mark Riedl, at Georgia Tech in the USA, is to test AI's capacity for creativity.

Riedl's Lovelace 2.0 test requires the AI to create an artifact matching a plausible, but arbitrarily complex, set of design constraints. The constraints, set by an evaluator who also judges its success, should be chosen so that meeting them would be deemed as evidence of creative thinking in a person, and so by extension in an AI.

For example the evaluator might ask the machine to (as per Riedl's example): "create a story in which a boy falls in love with a girl, aliens abduct the boy and the girl saves the world with the help of a talking cat". A crucial difference from the Turing test is that we are not testing the output of the machine against that of a person.

Creativity, and by implication intelligence, is judged by experts. Riedl suggests we leave aside aesthetics, judging only whether the output meets the constraints. So, if the machine constructs a suitable science fiction tale in which Jack, Jill and Bagpuss repel ET and save Earth, then that's a pass – even though the result is somewhat unoriginal as a work of children's fiction.
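
As a toy illustration of "judging only whether the output meets the constraints", the sketch below checks a generated story against a list of required plot elements. The element strings and the simple substring check are assumptions for illustration, not Riedl's actual evaluation procedure.

```python
# Pass/fail judgment on constraint satisfaction: does the generated story
# contain every required plot element? (Illustrative sketch only.)
REQUIRED_ELEMENTS = [
    "boy falls in love",
    "aliens abduct",
    "girl saves the world",
    "talking cat",
]

def meets_constraints(story: str, constraints=REQUIRED_ELEMENTS) -> bool:
    """Return True if every required element appears in the story text."""
    text = story.lower()
    return all(element in text for element in constraints)

story = ("A boy falls in love with a girl. Aliens abduct the boy, "
         "and the girl saves the world with the help of a talking cat.")
print(meets_constraints(story))  # True
```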

I like the idea of testing creativity – there are talents that underlie human inventiveness that AI developers have not even begun to fathom. But the essence of Riedl's test appears to be constraint satisfaction – problem solving. Challenging, perhaps, but not everyone's idea of creativity. And by dropping the competitive element of Turing's verbal tennis match, judging Lovelace 2.0 is left too much in the eye of the beholder.

Surprises to come

Ada Lovelace, the friend of Charles Babbage who had a hand in inventing the computer, and for whom Riedl named his test, famously said that "the Analytical Engine [Babbage's computer] has no pretensions to originate anything. It can do whatever we know how to order it to perform".

This comment reflects a view, still widely held, that the behaviour of computer programs is entirely predictable and that only human intelligence is capable of doing things that are surprising and hence creative.

However, in the past 50 years we have learned that complex computer programs often show "emergent" properties unintended by their creators. So doing something unexpected in the context of Riedl's test may not be enough to indicate original thinking.

Human creativity shows other hallmarks that reflect our ability to discover relationships between ideas where previously we had seen none. This may happen by translating images into words and then back into images, by ruminating over ideas for long periods during which they are subject to subconscious processes, or by shuffling thoughts from one person's brain to another's through conversation in a way that can inspire concepts to take on new forms. We are far from being able to do most of these things in AI.

For now I believe AI will be most successful when working alongside humans, combining our ability to think imaginatively with the computer's capacity for memory, precision and speed. Monitoring the progress of AI is worthwhile, but it will be a long time before these tests will demonstrate anything other than how far machine intelligence still has to go before we will have made our match.

All things considered, I don't think we need to hit the panic button just yet.

The Conversation

This article was originally published on The Conversation. Read the original article.


Facebook Is Now Better At Judging Your Personality Than Your Friends Are


How well does your best friend know you? Chances are, Facebook understands you better.  

A new study, published Monday in the journal PNAS, suggests that computers are now better judges of character than your friends, family, and even your partners. 

The project, conducted by researchers at the University of Cambridge and Stanford, used an algorithm to calculate the average number of "Likes" a computer needs to draw a remarkably accurate picture of who you are.

In the study, 86,200 people completed personality questionnaires via the myPersonality app and provided access to their Facebook Likes. Computerized judgments based on Facebook Likes of a specific participant were then compared with the judgments of people based on their familiarity with that participant. The judgments related to predictions about life outcomes such as substance abuse, political attitudes, and physical health.

In a statement researchers said, "Given enough Likes, the computers came closer to a person's self-reported personality than their brothers, mothers, or partners."

The researchers add: "In the study, a computer could more accurately predict the subject's personality than a work colleague by analyzing just 10 Likes; more than a friend or a cohabitant (roommate) with 70, a family member (parent, sibling) with 150, and a spouse with 300 Likes."
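
The basic recipe behind such computerised judgments can be sketched in a few lines: fit a regression from a binary Like matrix to a self-reported trait score, then measure how well the predictions correlate with the self-reports. Everything below (the synthetic data, the ridge regression, the single trait) is a simplifying assumption for illustration; the published study's pipeline is more elaborate.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

# rows = participants, columns = pages (1 = the participant Liked the page),
# trait = a self-reported personality score. All values here are fake.
rng = np.random.default_rng(0)
likes = rng.integers(0, 2, size=(1000, 500))                    # fake Like matrix
trait = likes[:, :20].mean(axis=1) + rng.normal(0, 0.1, 1000)   # fake trait score

train, test = slice(0, 800), slice(800, 1000)
model = Ridge(alpha=1.0).fit(likes[train], trait[train])
predicted = model.predict(likes[test])

# "Accuracy" in the paper is the correlation between predicted and
# self-reported scores; human judges are scored the same way.
r, _ = pearsonr(predicted, trait[test])
print(f"computer-judgment accuracy (Pearson r): {r:.2f}")
```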

Researchers at the UK and US institutions describe the findings as an "emphatic demonstration" of the capacity of computers. They say AI is now so adept at discovering people's psychological traits through "pure data analysis" that it knows us better than we previously thought.

"It's an important milestone on the path towards more social human-computer interactions," they note.

Wu Youyou, a lead author at Cambridge University's Psychometrics Centre, adds: "In the future, computers could be able to infer our psychological traits and react accordingly, leading to the emergence of emotionally intelligent and socially skilled machines. In this context, the human-computer interactions depicted in science fiction films such as 'Her' seem to be within our reach." 

Today, the average Facebook user has about 227 Likes, a number that is increasing. Researchers say this shows AI "has the potential to know us better than our companions." They also note a previous study that produced similar results, again underscoring the fact that digital records are able to "expose intimate details and personality traits" in human beings.

But the new analysis also furthers dystopian ideas that AI, or robots, may one day hold unfathomable power. Google even has an internal committee to discuss fears around the notion.

Dr. Michal Kosinski is a researcher at Stanford who says machines have a couple of key advantages: the ability to retain and access vast quantities of information and the ability to analyze it with algorithms.

He says it's a "concern" and adds: "We hope that consumers, technology developers, and policymakers will tackle those challenges by supporting privacy-protecting laws and technologies and giving the users full control over their digital footprints."



Top Scientists Have An Ominous Warning About Artificial Intelligence


Dozens of the world’s top artificial intelligence experts have signed an open letter calling for researchers to take care to avoid potential “pitfalls” of the disruptive technology.

Those who have already signed the letter include Stephen Hawking, Elon Musk, the co-founders of DeepMind, Google's director of research Peter Norvig and Harvard professor of computer science David Parkes.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” says the letter, published by The Future of Life Institute.

“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

Some of the research priorities set out in an accompanying paper describe the need to remain in control of any artificially intelligent machine – “systems must do what we want them to do” – while others relate to the ethics of autonomous weapons.

The paper suggests that it “may be desirable to retain some form of meaningful human control” over intelligent machines designed to kill.

It also warns that legislative efforts are needed before autonomous cars become a practical and ubiquitous technology: “If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.”

Professor Stephen Hawking has previously said that the rise of artificial intelligence could see the human race become extinct.

He told the BBC: ''The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.''

Technology entrepreneur Elon Musk has also described the rise of AI in the past as ''our biggest existential threat''.

This article was written by Matthew Sparkes Deputy Head of Technology from The Daily Telegraph and was legally licensed through the NewsCred publisher network.


Elon Musk Is Donating $10 Million To Keep Killer Robots From Taking Over The World


Elon Musk has a habit of worrying about killer robots. So he's taking action to keep them from harming us all in the future.

The CEO of SpaceX and Tesla donated $10 million to the Future Of Life Institute on Thursday to help keep artificial intelligence and robots of the future beneficial.

The Future Of Life Institute, a non-profit organization founded by scientists that describes itself as "working to mitigate existential risks facing humanity," will give out Musk's donation to researchers. 

The application process will begin next week.

Anyone from academics to people working in the tech industry can apply, as long as they have good ideas about how to make artificial intelligence work for the betterment of society.

Musk, who previously said artificial intelligence is "potentially more dangerous than nukes," can rest easy for now. Most of the money he's donated will go to researchers focused on artificial intelligence, and the rest will go to AI research "involving other fields such as economics, law, ethics, and policy," according to the Institute's website.

In a YouTube video announcing his decision to donate to the Future of Life Institute, Musk said, "There should probably be a much larger amount of money applied to AI safety in multiple ways." At the very least, his donation is a start.

Musk also announced the donation in a tweet on Thursday.


7 Reasons Why Elon Musk Is Wrong To Believe Intelligent Robots Will One Day Kill Us All


A panel at the World Economic Forum at Davos in Switzerland has just completely dismantled the idea — currently trendy in the tech sector — that artificially intelligent robots, lacking morals, may one day independently decide to start killing humans.

The idea has been spread, somewhat tongue in cheek, by Tesla and SpaceX founder Elon Musk, who has even suggested that the robots may thwart any humans who try to escape them by blasting off to Mars.

AI research is advancing rapidly right now inside private companies like Facebook and Google. That R&D is mostly secret, which is why people like to speculate about it. Plus, everyone loves the Terminator movies, in which killer AI robots are the main antagonists.

The panel was hosted by two UC Berkeley professors, Ken Goldberg (who studies robotics) and Alison Gopnik (who studies psychology). They have both been trying to figure out how machines might mimic human thinking. The good news, for Musk and anyone else afraid of the imminent robot apocalypse, is that machines are still way too stupid to be lethal to humans on any meaningful level.

They described seven good reasons why humans are going to remain a step ahead of AI for the foreseeable future.

  1. Machines cannot learn from random "life" experience. "It is easier to simulate a grand master chess player with a machine than it is to simulate a 2-year-old child," said Gopnik. Her point is that while chess may seem complicated, it actually only requires a defined set of rules to learn. A child, on the other hand, is exposed to an infinite variety of random stimuli and learns quickly from it. Computers currently cannot learn from random inputs.
  2. Machines need humans to be smart. The most intelligent machines we have are those that receive constant input from humans. One of the most impressive learning machines is Google's search engine, Goldberg said, which learns because it is constantly being "fed" by the web activity of millions of humans. It then iterates its results from those inputs. In fact, Goldberg suggested, even the most intelligent robots may one day need to have a "human customer service" function so that they can call a human for help whenever they encounter something they do not understand. He noted the irony that when humans currently call companies for customer service, they are frequently greeted by robots. Goldberg proposed a name for this human-machine interaction, "multiplicity," which he is hoping will replace the term "singularity," which is currently used to describe independent AI learning machines.
  3. Machines cannot make jokes. Computers are bad at certain types of non-logical thinking that most define humans, Goldberg says. "I don't think we'll ever hear a robot telling a great joke in my lifetime."
  4. Machines cannot be creative: They can't do art or aesthetics. Only humans excel at tasks that posit an infinite realm of possibilities, in which a person must choose a course of action that has a high chance of success without knowing the answer in advance. Composing music is an example of this.
  5. Machines cannot have new ideas. Computers cannot think of a new idea on their own nor change an idea they already have, Gopnik says. This will keep machines on a leash for a long time.
  6. Machines cannot play. Play involves using creativity as a strategy to fulfil a goal, and machines can't do it.
  7. Stupid humans are way more dangerous than smart machines. It is far more likely that a human will be killed by a dumb machine made by a stupid human than a smart machine making its own decisions. Gopnik gave the example of autonomous weapons or stock market software. Those devices aren't intelligent but they can be incredibly dangerous in the hands of stupid humans.

Gopnik did have one thing to say to anyone who worries that humans will end up inside The Matrix, from the movies in which people think they are having a good time when in fact all they are doing is feeding artificially intelligent machines. "It's actually [already] just true," she said. Her favourite example is cat photos. One of the phenomena that AI scientists get most excited about is the improvement in the ability of a machine to recognise a picture of a cat on the internet. This step forward in AI has occurred because the internet has amassed an astonishingly rich collection of cat photos, uploaded by humans. AI software uses all those photos to refine its ability to distinguish a cat from non-cat objects.

"We're just feeding the machines," she says.


Afraid Of AI? Here's Why You Shouldn't Be


Earlier in January, an organization called the Future of Life Institute issued an open letter on the subject of building safety measures into artificial intelligence (AI) systems.

The letter, and the research document that accompanies it, present a remarkably even-handed look at how AI researchers can maximize the potential of this technology.

Here's the letter at its most ominous, which is to say, not ominous at all:

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

And here was CNET's headline, for its story about the letter:

Artificial intelligence experts sign open letter to protect mankind from machines

BBC News, meanwhile, ventured slightly further out of the panic room to deliver this falsehood:

Experts pledge to rein in AI research

I'd like to think that this is rock bottom.

Journalists can't possibly be any more clueless, or callously traffic-baiting, when it comes to robots and AI. And readers have to get tired, at some point, of clicking on the same shrill headlines that quote the same non-AI researchers—Elon Musk and Stephen Hawking, to be specific—making the same doomsday proclamations.

Fear-mongering always loses its edge over time, and even the most toxic media coverage has an inherent half-life. But it never stops.

Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.

***

This is what it looks like to make a fool of yourself, when covering AI.

Start by doing as little reporting as possible. In this case, that means reading an open letter released online, and not bothering to interview any of the people involved in its creation.

To use the CNET and BBC stories as examples, neither includes quotes or clarifications from the researchers who helped put together either the letter or its companion research document. This is a function of speed, but it's also a tactical decision (whether conscious or not). As with every story that centers on frantic warnings about apocalyptic AI, the more you report, the more threadbare the premise turns out to be.

Experts in this field tend to point out that the theater isn't on fire, which is no fun at all when your primary mission is to send readers scrambling for the exit.

The speedier, and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither of their specialties. I'm mentioning them, in particular, because they've become the collective voice of AI panic.

They believe that machine superintelligence could lead to our extinction. And their comments to that effect have the ring of truth, because they come from brilliant minds with a blessed lack of media filters. If time is money, then the endlessly recycled quotes from Musk and Hawking are a goldmine for harried reporters and editors. What more context do you need, than a pair of geniuses publicly fretting about the fall of humankind?

And that's all it takes to report on a topic whose stakes couldn't possibly be higher. Cut and paste from online documents, and from previous interviews, tweets and comments, affix a headline that conjures visions of skeletal androids stomping human skulls underfoot, and wait for the marks to come gawking.

That's what this sort of journalism does to its creators, and to its consumers. It turns a complex, and transformative technology into a carnival sideshow. Actually, that's giving most AI coverage too much credit. Carnies work hard for their wages. Tech reporters just have to work fast.

***

The story behind the open letter is, in some ways, more interesting than the letter itself. On January 2, roughly 70 researchers met at a hotel in San Juan, Puerto Rico, for a three-day conference on AI safety. This was a genuinely secretive event.

The Future of Life Institute (FLI) hadn't alerted the media in advance, or invited any reporters to attend, despite having planned the meeting at least six months in advance. Even now, the event's organizers won't provide a complete list of attendees. FLI wanted researchers to speak candidly, and without worry of attribution, during the weekend-long schedule of formal and informal discussions.

In a movie, this shadowy conference, hosted by an organization with a tantalizing name—and held in a tropical locale, no less—would have come under preemptive assault from some rampaging algorithmic reboot of Frankenstein's monster.

Or, in the hands of more patient filmmakers, the result would have been a first-act setup: an urgent call to immediately halt all AI research (ignored, of course, by a rebellious lunatic in a darkened server room). Those headlines from BBC News and CNET would have been perfectly at home on the movie screen, signaling the global response to a legitimately terrifying announcement.

In fact, the open letter from FLI is a pretty bloodless affair. The title alone—Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter—should manage the reader's expectations. The letter references advances in machine learning, neuroscience, and other research areas that, in combination, are yielding promising results for AI systems. As for doom and gloom, the only relevant statements are the aforementioned sentence about “potential pitfalls,” and this one:

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

That, in all honesty, is as dark as this open letter gets. A couple of slightly arch statements, buried within a document whose language is intentionally optimistic. The signatories are interested in “maximizing the societal benefit of AI,” and the letter ends with this call to action:

“In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.”

This is the document that news outlets are interpreting as a call to protect humanity from machines, and to rein in AI R&D. It's also proof that many journalists aren't simply missing the point, when it comes to artificial intelligence. They are lying to you.

***

The truth is, there are researchers within the AI community who are extremely concerned about the question of artificial superintelligence, which is why FLI included a section in the letter's companion document about those fears. But it's also true that these researchers are in the extreme minority.

And according to Bart Selman, a professor of computer science at Cornell, the purpose of the open letter was to tamp down the hysteria that journalists are trying to instill in the general public, while bringing up near-term concerns.

Some of these issues are complex, and compelling. Will a mortgage company's machine learning system accidentally violate an applicant's privacy, and possibly even break the law, by digging too deep into his or her metadata? Selman isn't worried about rebellious algorithms, but faulty or over-eager ones.

“These systems are often given fairly high level goals. So making sure that they don't achieve their goals by something dramatically different than you could anticipate are reasonable research goals,” says Selman. “The problem we have is that the press, the popular press in particular, goes for this really extreme angle of superintelligence, and AI taking over. And we're trying to show them that, that's one angle that you could worry about, but it's not that big of a worry for us.”

Of course, it's statements like that which, when taken out of context, can fuel the very fires they're trying to put out. Selman, who attended the San Juan conference and contributed to FLI's open letter and research document, cannot in good conscience rule out any future outcome for the field of AI.

That sort of dogmatic dismissal is anathema to a responsible scientist. But he also isn't above throwing a bit of shade at the researchers who seem preoccupied with the prospect of bootstrapped AI, meaning a system that suddenly becomes exponentially smarter and more capable.

“The people who've been working in this area for 20, 30 years, they know this problem of complexity and scaling a little better than people who are new to the area,” says Selman.

The history of AI research is full of theoretical benchmarks and milestones whose only barrier appeared to be a lack of computing resources. And yet, even as processor and storage technology has raced ahead of researchers' expectations, the deadlines for AI's most promising (or terrifying, depending on your agenda) applications remain stuck somewhere in the next 10 or 20 years.

I've written before about the myth of inevitable superintelligence, but Selman is much more succinct on the subject. The key mistake, he says, is in confusing principle with execution, and assuming that throwing more resources at a given system will trigger an explosive increase in capability.

“People in computer science are very much aware that, even if you can do something in principle, if you had unlimited resources, you might still not be able to do it,” he says, “because unlimited resources don't mean an exponential scaling up. And if you do have an exponential scale, suddenly you have 20 times the variables.”

Bootstrapping AI is simultaneously an AI researcher's worst nightmare and dream come true—instead of grinding away at the same piece of bug-infested code for weeks on end, he or she can sit back, and watch the damn thing write itself.

At the heart of this fear of superintelligence is a question that, at present, can't be answered.

“The mainstream AI community does believe that systems will get to a human-level intelligence in 10 or 20 years, though I don't mean all aspects of intelligence,” says Selman.

Speech and vision recognition, for example, might individually reach that level of capability, without adding up to a system that understands the kind of social cues that even toddlers can pick up on.

“But will computers be great programmers, or great mathematicians, or other things that require creativity? That's much less clear. There are some real computational barriers to that, and they may actually be fundamental barriers,” says Selman.

While superintelligence doesn't have to spring into existence with recognizably human thought processes—peppering its bitter protest poetry with references to Paradise Lost—it would arguably have to be able to program itself into godhood. Is such a thing possible in principle, much less in practice?

It's that question that FLI is hoping the AI community will explore, though not with any particular urgency. When I spoke to Viktoriya Krakovna, one of the organization's founders, she was alarmed at how the media has interpreted the open letter, and focused almost exclusively on the issue of superintelligence.

"We wanted to show that the AI research community is a community of responsible people, who are trying to build beneficial robots and AI," she says. Instead, reporters have presented the letter as something like an act of contrition, punishing FLI for creating a document that's inclusive enough to include the possibility of researching the question--not the threat, but the question--of runaway AI.

Selman sees such a project as a job for “a few people,” to try to define a problem that hasn't been researched or even defined. He compares it to work done by theoretical physicists, who might calculate the effects of some cosmic cataclysm as a pure research question.

Until it can be determined that this version of the apocalypse is feasible in principle, there's nothing to safeguard against. This is an important distinction that's easily overlooked. For science to work, it has to be concerned with the observable universe, not with superstition couched in scientific jargon.

Sadly, the chances of AI coverage becoming any less fear-mongering are about as likely as the Large Hadron Collider producing a planet-annihilating black hole. Remember that easily digestible non-story, and the way it dwarfed the true significance of that particle accelerator? When it comes to the difficult business of covering science and technology, nothing grabs readers like threatening their lives.

This article originally appeared on Popular Science

 

 

This article was written by Erik Sofge from Popular Science and was legally licensed through the NewsCred publisher network.




Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity


Like Elon Musk and Stephen Hawking, Bill Gates thinks we should be concerned about the future of artificial intelligence.

In his most recent Ask Me Anything thread on Reddit, Gates was asked whether or not we should feel threatened by machine super intelligence.

Although Gates doesn't think it will bring trouble in the near future, that could all change in a few decades. Here's Gates' full reply:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Google CEO Larry Page has also previously talked on the subject, but didn't seem to express any explicit fear or concern. 

"You can't wish these things away from happening," Page said to The Financial Times when asked about whether or not computers would take over more jobs in the future as they become more intelligent. But, he added that this could be a positive aspect for our economy. 

At the MIT Aeronautics and Astronautics' Centennial Symposium in October, Musk called artificial intelligence our "biggest existential threat."

Louis Del Monte, a physicist and entrepreneur, believes that machines could eventually surpass humans and become the most dominant species since there's no legislation regarding how much intelligence a machine can have. Stephen Hawking has shared a similar view, writing that machines could eventually "outsmart financial markets" and "out-invent human researchers." 

At the same time, Microsoft Research's chief Eric Horvitz just told the BBC that he believes AI systems could achieve consciousness but won't pose a threat to humans. He also added that more than a quarter of Microsoft Research's attention and resources are focused on artificial intelligence.


BILL GATES: Here's What I Would've Done If Microsoft Didn't Work Out


Bill Gates on Wednesday hosted his third AMA on Reddit.

For those unfamiliar, an AMA is where someone goes on Reddit and allows the greater internet community to ask them anything.

Gates was asked what he would have done if Microsoft hadn't worked out. Here's his response:

"I would probably be a researcher on AI. When I started Microsoft I was worried I would miss the chance to do basic work in that field."

Gates founded Microsoft back in 1975 with Paul Allen, but artificial intelligence was a popular field of research in the '50s and '60s — scientists and researchers discovered that computers could solve math problems, word problems, win at checkers, and even speak English. Gates was still in prep school around this time, where he was getting recognized for his prodigious talents programming computers and software. 

But less than a year after Gates enrolled at Harvard University, the US and British governments cut off all exploratory research into AI that wasn't directly government-funded. For almost five years thereafter, the countries experienced what was called an "AI winter." An even longer "winter" occurred starting around 1987, for about another decade.

Despite his longstanding interest in AI, however, Gates also says he is "in the camp that is concerned about super intelligence," which also includes Stephen Hawking and Elon Musk. Hawking believes AI could eventually "outsmart financial markets" and "out-invent human researchers," while Musk similarly believes we could see a "Terminator-like scenario" if we aren't careful about how we manage this technology.

It's interesting to think about what artificial intelligence would be like if Gates started researching in that field 40 years ago — perhaps it would be much improved, or maybe we would all be enslaved by computers by now. We'll never know, since Microsoft became such a resounding success.


These are the awesomely nerdy names Elon Musk chose for his rocket-catching drone ships


SpaceX wants to be able to land the reusable parts of its Falcon 9 rockets on floating drone ships at sea, and the recently released names of the ships appear to be a tribute to a classic science fiction series.

Last week, Elon Musk tweeted out an image of one of the drone ships designed to act as a landing pad for the Falcon 9 rocket. Most notable is that the ship now has its rather silly name painted on the deck.

Musk had previously tweeted that name for the East Coast ship, and a similarly odd one for the West Coast drone ship under construction.

Sci-fi aficionados might recognize these names: They are likely a reference to "The Culture," a series of science fiction novels and short stories written by the late Scottish author Iain M. Banks.

The "Culture" novels focus on an incredibly technologically advanced interstellar civilization, in which nearly god-like artificial intelligences housed in powerful starships coexist with a wide range of humanoid species. The Culture is portrayed as a post-scarcity utopia: Citizens of the Culture have access to virtually anything they need or want, live in perfect health for centuries, and spend those long lives pursuing whatever noble or hedonistic goals they want.

Central to the Culture are "Minds": Artificially intelligent starships and space stations that, while being vastly more intelligent and powerful than the human-like beings they carry, are extremely friendly and peace-loving, though also ready to take whatever measures are necessary to protect their civilization and its interests.

Culture Minds have a tendency to give themselves flippant names, and here's where Musk may have drawn his inspiration. Both "Just Read The Instructions" and "Of Course I Still Love You" are names of Minds in Banks' "The Player of Games," a very enjoyable novel in which the Culture works to undermine a stagnant and brutal dictatorship whose leadership is determined by the outcome of a complicated strategy game.

Given Musk's worries about the possibility of uncontrolled or malignant AI destroying humanity, it's heartening that he chose to honor a much more optimistic view of the possibilities of AI and space travel in the naming of his drone ships.


THE ROBOTICS MARKET REPORT: The fast-multiplying opportunities in consumer, industrial, and office robots


Robots have been a reality on factory assembly lines for over twenty years. But it is only relatively recently that robots have become advanced enough to penetrate into home and office settings. 

In a recent report from BI Intelligence, we assess the market for consumer and office robots, taking a close look at how robots are penetrating into many markets once dominated by legacy consumer-electronics companies.

We also examine the market for industrial manufacturing robots since it is the market where many robotics companies got their start, and remains the largest robot market by revenue. We assess how far along the robotics industry has come in solving some of the most pressing hardware and software challenges. And finally, we assess the factors on the consumer side that might still limit the market for relatively inexpensive home robots.  


Here are some of the most important takeaways from the report:

In full, the report: 

  • Includes nine charts and datasets on robot industry segmentation, opportunities, and trends
  • Has nine separate sections with in-depth discussions of tech and price hurdles, barriers to consumer adoption, industrial market shifts, Google's robotics efforts, toy robots, the telepresence market, the home-cleaning market, and the consumer-robot market overall. 
  • Discusses why growth in industrial robots has tapered. 
  • Details the reasons behind the success of the Roomba vacuum. 
  • Introduces geographically segmented data on the home-cleaning market.

For full access to the report on robots and all BI Intelligence's coverage of the internet of things, mobile, payments, e-commerce, and digital media industries, sign up for a risk-free trial.

 


Facebook is getting into robots


Facebook, that company with a globally popular social network and data centers to run it all reliably, is apparently interested in “industrial automation and robotics.”

The company has been searching for a person to work as an electrical engineer focused on robotics at its Menlo Park, Calif., headquarters, according to an undated job posting. Facebook has been seeking someone for this sort of role since August, if not earlier.

“This person will be responsible for designing electrical system[s] and circuitry for Industrial Automation and Robotics development,” the job posting states. The successful applicant will join Facebook’s Strategic Engineering and Development team.

The results of the person’s efforts could be deployed inside the massive server farms Facebook counts on to store and serve up user information.

“Data Center experience a plus,” states the job posting, which is listed under Facebook’s Data Center Design and Operations group. Facebook currently employs ordinary humans to perform maintenance on servers and other hardware inside its data centers.

Facebook declined to provide VentureBeat with more information about the position.

The right person for the job ought to have “10+ years designing electrical systems for industrial automation, vehicles, or robotics,” Facebook said.

The engineer will be expected to “own the electrical design of several hardware systems” and “select and assemble components to make a working reliable system,” according to the post.

Facebook has already expressed an interest in autonomous devices. Last year Facebook launched a Connectivity Lab, after hiring people from a drone company called Ascenta. That Connectivity Lab has been hunting for engineers to staff up its Woodland Hills, Calif., office.

Autonomous vehicles have been in the news lately. Most recently, Uber said it’s starting a research and development facility for self-driving cars in conjunction with Carnegie Mellon University.

And for more than a year now, Facebook has been operating an artificial-intelligence research arm. Yann LeCun, the head of the group, has worked on mobile robots in the past.


Intelligent machines aren't going to overthrow humans


Michael Littman is a professor of computer science at Brown University. He is co-leader of Brown's Humanity-Centered Robotics Initiative, which aims to document the societal needs and applications of human-robot interaction research as well as the ethical, legal and economic questions that will arise with its development. Littman contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

Every new technology brings its own nightmare scenarios. Artificial intelligence (AI) and robotics are no exceptions. Indeed, the word "robot" was coined for a 1920 play that dramatized just such a doomsday for humanity.

Earlier this month, an open letter about the future of AI, signed by a number of high-profile scientists and entrepreneurs, spurred a new round of harrowing headlines like "Top Scientists Have an Ominous Warning About Artificial Intelligence," and "Artificial Intelligence Experts Pledge to Protect Humanity from Machines." The implication is that the machines will one day displace humanity.

Let's get one thing straight: A world in which humans are enslaved or destroyed by superintelligent machines of our own creation is purely science fiction. Like every other technology, AI has risks and benefits, but we cannot let fear dominate the conversation or guide AI research.

Nevertheless, the idea of dramatically changing the AI research agenda to focus on AI "safety" is the primary message of a group calling itself the Future of Life Institute (FLI). FLI includes a handful of deep thinkers and public figures such as Elon Musk and Stephen Hawking and worries about the day in which humanity is steamrolled by powerful programs run amuck. [Intelligent Robots Will Overtake Humans by 2100, Experts Say]

As eloquently described in the book "Superintelligence: Paths, Dangers, Strategies" (Oxford University Press, 2014), by FLI advisory board member and Oxford-based philosopher Nick Bostrom, the plot unfolds in three parts. In the first part — roughly where we are now — computational power and intelligent software develops at an increasing pace through the toil of scientists and engineers. Next, a breakthrough is made: programs are created that possess intelligence on par with humans.

These programs, running on increasingly fast computers, improve themselves extremely rapidly, resulting in a runaway "intelligence explosion." In the third and final act, a singular super-intelligence takes hold — outsmarting, outmaneuvering and ultimately outcompeting the entirety of humanity and perhaps life itself. End scene.

Let's take a closer look at this apocalyptic storyline. Of the three parts, the first is indeed happening now and Bostrom provides cogent and illuminating glimpses into current and near-future technology. The third part is a philosophical romp exploring the consequences of supersmart machines. It's that second part — the intelligence explosion — that demonstrably violates what we know of computer science and natural intelligence. [History of A.I.: Artificial Intelligence (Infographic)]

Runaway intelligence?

The notion of the intelligence explosion arises from Moore's Law, the observation that the speed of computers has been increasing exponentially since the 1950s. Project this trend forward and we'll see computers with the computational power of the entire human race within the next few decades. It's a leap to go from this idea to unchecked growth of machine intelligence, however.
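
For a sense of what that projection looks like, here is a back-of-the-envelope sketch, assuming the classic "doubling roughly every two years" rule of thumb; the doubling period and the notion of "capability" are simplifying assumptions.

```python
# Back-of-the-envelope Moore's-law projection (illustrative numbers only):
# if computing capability doubles roughly every two years, how much more
# capability do we get over the next few decades?
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative increase after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for decades in (1, 2, 3):
    print(f"{decades} decade(s): ~{growth_factor(10 * decades):,.0f}x")
# ~32x after 10 years, ~1,024x after 20, ~32,768x after 30 -- steep, but
# raw speed alone says nothing about whether software becomes "smarter".
```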

First, ingenuity is not the sole bottleneck to developing faster computers. The machines need to actually be built, which requires real-world resources. Indeed, Moore's law comes with exponentially increasing production costs as well — mass production of precision electronics does not come cheap. Further, there are fundamental physical laws — quantum limits — that bound how quickly a transistor can do its work. Non-silicon technologies may overcome those limits, but such devices remain highly speculative.

In addition to physical laws, we know a lot about the fundamental nature of computation and its limits. For example, some computational puzzles, like figuring out how to factor a number and thereby crack online cryptography schemes, are generally believed to be unsolvable by any fast program. Many such puzzles belong to a class of mathematically defined problems that are "NP-complete," meaning that they are exactly as hard as any problem that can be solved non-deterministically (N) in polynomial time (P), and they have resisted any attempt at scalable solution. As it turns out, most computational problems that we associate with human intelligence are known to be in this class. [How Smart Is Advanced Artificial Intelligence? Try Preschool Level]
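
To see why such problems resist scaling, consider a brute-force solver for subset sum, a textbook NP-complete problem. The sketch below is purely illustrative: the search space doubles with every extra item, which is why simply adding hardware does not keep pace for long.

```python
from itertools import combinations

# Brute force for subset sum: try every subset until one hits the target.
def subset_sum(values, target):
    """Return a subset of `values` summing to `target`, or None."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

values = [3, 34, 4, 12, 5, 2]
print(subset_sum(values, 9))                    # e.g. (4, 5)
print(f"subsets to check for n=40: {2**40:,}")  # already ~10^12
```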

Wait a second, you might say. How does the human mind manage to solve mathematical problems that computer scientists believe can't be solved? We don't. By and large, we cheat. We build a cartoonish mental model of the elements of the world that we're interested in and then probe the behavior of this invented miniworld. There's a trade-off between completeness and tractability in these imagined microcosms. Our ability to propose and ponder and project credible futures comes at the cost of accuracy. Even allowing for the possibility of computers considerably faster than today's, it is a logical impossibility that these computers would be able to accurately simulate reality faster than reality itself.

Countering the anti-AI cause

In the face of general skepticism in the AI and computer science communities about the possibility of an intelligence explosion, FLI still wants to win support for its cause. The group's letter calls for increased attention to maximizing the societal benefits of developing AI.

Many of my esteemed colleagues signed the letter to show their support for the importance of avoiding potential pitfalls of the technology. But a few key phrases in the letter such as "our AI systems must do what we want them to do" are taken by the press as an admission that AI researchers believe they might be creating something that "cannot be controlled." It also implies that AI researchers are asleep at the wheel, oblivious to the ominous possibilities, which is simply untrue. [Artificial Intelligence: Friendly or Frightening?]

To be clear, there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. There's also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating and helping to steer the growth of information technology. These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

I welcome an open discussion about how AI can be made robust and beneficial, and how we can engineer intelligent machines and systems that make society better. But, let's please keep the discussion firmly within the realm of reason and leave the robot uprisings to Hollywood screenwriters.

 

Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.

Copyright 2015 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Researchers figured out when companies think about replacing workers with robots


(Reuters) - The falling cost of industrial robots will allow manufacturers to use them to replace more factory workers over the next decade while lowering labor costs, according to new research.

Robots now perform roughly 10 percent of manufacturing tasks that can be done by machines, according to the Boston Consulting Group. The management consulting firm projected that to rise to about 25 percent of such "automatable" tasks by 2025.

In turn, labor costs stand to drop by 16 percent on average globally over that time, according to the research.

The shift will mean an increasing demand for skilled workers who can operate the machines, said Hal Sirkin, a senior partner at Boston Consulting.

Factory workers "will be higher paid but there will be fewer of them," Sirkin said.

The research found a tipping point for installing robots: Companies tend to start thinking about replacing workers when the costs of owning and operating a system come at a 15 percent discount to employing a human counterpart.

For example, in the U.S. automotive industry, which is predicted to be one of the more aggressive adopters of robots, a spot-welding machine costs $8 an hour versus $25 an hour for a worker.
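
Here is a quick sketch of that 15 percent rule of thumb applied to the spot-welding example. The helper function and its threshold handling are illustrative assumptions, not part of the Boston Consulting Group analysis itself.

```python
# Illustration of the 15 percent "tipping point" rule of thumb described above,
# using the spot-welding figures ($8/hour robot vs. $25/hour worker).
def replacement_attractive(robot_cost_per_hour: float,
                           worker_cost_per_hour: float,
                           required_discount: float = 0.15) -> bool:
    """True if the robot's all-in hourly cost undercuts the worker's
    by at least the required discount."""
    return robot_cost_per_hour <= worker_cost_per_hour * (1 - required_discount)

print(replacement_attractive(8.0, 25.0))   # True: well past the threshold
print(1 - 8.0 / 25.0)                      # 0.68, i.e. a 68 percent discount
```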

A robot that can perform certain repetitive tasks costs about one-tenth as much as it did more than 10 years ago, Sirkin said. Costs tied to one commonly used robotics system, a spot welder, are expected to fall 22 percent between now and 2025.

Three-fourths of robot installations over the next decade are expected to be concentrated in four areas: transportation equipment, including the automotive sector; computer and electronic products; electrical equipment; and machinery.

Adoption is forecast to be slower in industries in which tasks are more difficult to automate or labor costs are low, such as food products or fabricated metal.

Certain countries also are expected to be more brisk adopters. China, the United States, Japan, Germany and South Korea now account for about 80 percent of robot purchases and are expected to maintain that share over the next decade.

Labor costs have climbed in countries such as China that have been popular for outsourcing production, while technological advances for robots allow them to be more flexible and perform more tasks, said Jim Lawton, chief marketing officer at robotics company Rethink Robotics.

"People have come to believe this is going to be an important part of how manufacturing gets done," he said.

(Editing by Lisa Shumaker)



Here’s the real reason we should be worried about AI


Somewhere in the long list of topics that are relevant to astrobiology is the question of 'intelligence'.

Is human-like, technological intelligence likely to be common across the universe?

Are we merely an evolutionary blip, our intelligence consigning us to a dead-end in the fossil record?

Or is intelligence something that the entropy-driven, complexity-producing, universe is inevitably going to converge on?

All good questions. An equally good question is whether we can replicate our own intelligence, or something similar, and whether or not that's actually a good idea.

In recent months, once again, this topic has made it to the mass media. First there was Stephen Hawking, then Elon Musk, and most recently Bill Gates. All of these smart people have suggested that artificial intelligence (AI) is something to be watched carefully, lest it develop to the point of posing an existential threat.

Except it's a little hard to find any details of what exactly that existential threat is perceived to be. Hawking has suggested that it might be the capacity of a strong AI to 'evolve' much, much faster than biological systems – ultimately gobbling up resources without a care for the likes of us. I think this is a fair conjecture. AI's threat is not that it will be a sadistic megalomaniac (unless we deliberately or carelessly make it that way) but that it will follow its own evolutionary imperative.

It's tempting to suggest that a safeguard would be to build empathy into an AI. But I think that fails in two ways. First, most humans have the capacity for empathy, yet we continue to be nasty, brutish, and brutal to ourselves and to pretty much every other living thing on the planet. The second failure point is that it's not clear to me that true, strong AI is something we can engineer in a purely step-by-step way; we may need to allow it to come into being on its own.

What does that mean? Current efforts in areas such as computational 'deep-learning' involve algorithms constructing their own probabilistic landscapes for sifting through vast amounts of information. The software is not necessarily hard-wired to 'know' the rules ahead of time, but rather to find the rules or to be amenable to being guided to the rules – for example in natural language processing. It's incredible stuff, but it's not clear that it is a path to AI that has equivalency to the way humans, or any sentient organisms, think. This has been hotly debated by the likes of Noam Chomsky (on the side of skepticism) and Peter Norvig (on the side of enthusiasm). At a deep level it is a face-off between science focused on underlying simplicity and science that says nature may not swing that way at all.
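
As a toy illustration of that learn-the-rules-from-data idea, the sketch below trains a bare-bones perceptron, in plain Python, on a rule it is never told explicitly. It shows only the general principle; real deep-learning systems are vastly larger elaborations of this loop, and nothing here is meant to represent any particular system discussed above.

```python
# A model that is never given the rule, only labeled examples, and adjusts
# its own parameters until it reproduces the rule (here, logical AND).

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

for _ in range(20):                      # a few passes over the data suffice
    for x, label in examples:
        error = label - predict(x)       # how wrong is the current guess?
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]: the rule has been found
```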

An alternative route to AI is one that I'll propose here (and it's not original). Perhaps the general conditions can be created from which intelligence can emerge. On the face of it this seems fairly ludicrous, like throwing a bunch of spare parts in a box and hoping for a new bicycle to appear. It's certainly not a way to treat AI as a scientific study. But if intelligence is the emergent – evolutionary – property of the right sort of very, very complex systems, could it happen? Perhaps.
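
A classic concrete example of that sort of emergence is Conway's Game of Life: two local rules and nothing else, yet coherent structures appear and travel. The sketch below (plain Python, offered purely as an analogy for the argument above) steps the well-known five-cell "glider" through four generations, after which the same shape reappears one cell further along.

```python
from collections import Counter

def step(live_cells):
    """Apply the two Game of Life rules to a set of (x, y) live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The "glider": no motion is programmed anywhere, yet after 4 steps the
# pattern reproduces itself shifted one cell diagonally.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the original pattern, translated by (+1, +1)
```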

One engineering challenge is that it may take a system of the complexity of a human brain to sustain intelligence, but of course our brains co-evolved with our intelligence. So it's a bit silly to imagine that you could sit down and design the perfect circumstances for a new type of intelligence to appear, because we don't know exactly what those circumstances should be.

Except perhaps we are indeed setting up these conditions right now. Machine learning may be just a piece of the behavioral puzzle of AI, but what happens when it lives among the sprawl of the internet? The troves of big and small data, the apps, the algorithms that control data packet transport, the sensors – from GPS to thermostats and traffic monitors – the myriad pieces that talk to each other directly or indirectly.

This is an enormous construction site. Estimates suggest that in 2014 some 7.4 billion mobile devices were online. In terms of anything that can be online – the internet of 'things' (from toilets to factories) – the present estimate is that there are about 15 billion active internet connections today (via a lovely service by Cisco). By 2020 there could be 50 billion.

If this were a disorganized mush of stuff, like the spare parts in a box, I think one would have little hope for anything interesting to happen. But it's not a mush. It's increasingly populated by algorithms whose very purpose is to find structures and correlations in this ocean – by employing tricks that are in part inspired by biological intelligence, or at least our impression of it. Code talks to code, data packets zip around seeking optimal routes, software talks to hardware, hardware talks to hardware. Superimposed on this ecosystem are human minds, human decision processes nursing and nurturing the ebb and flow of information. And increasingly, our interactions are themselves driving deep changes in the data ocean as analytics seek to 'understand' what we might look for next, as individuals or as a population.

Could something akin to a strong AI emerge from all of this? I don't know, and neither does anyone else. But it is a situation that has not existed before in 4 billion years of life on this planet, which brings us back to the question of an AI threat.

If this is how a strong AI occurs, the most immediate danger will simply be that a vast swathe of humanity now relies on the ecosystem of the internet. It's not just how we communicate or find information, it's how our food supplies are organized, how our pharmacists track our medicines, how our planes, trains, trucks, and cargo ships are scheduled, how our financial systems work. A strong AI emerging here could wreak havoc in the way that a small child can rearrange your sock drawer or chew on the cat's tail.

As Hawking suggests, the 'evolution' of an AI could be rapid. In fact, it could emerge, evolve, and swamp the internet ecosystem in fractions of a second. That in turn raises an interesting possibility – would an emergent AI be so rapidly limited that it effectively stalls, unable to build the virtual connections and structures it needs for long term survival? While that might limit AI, it would be cold comfort for us.

I can't resist positing a connection to another hoary old problem – the Fermi Paradox. Perhaps the creation of AI is part of the Great Filter that kills off civilizations, but it also self-terminates, which is why even AI has apparently failed to spread across the galaxy during the past 13 billion years…


Who's set to make money from the coming artificial intelligence boom?


Artificial intelligence is about to take off in a big way.

A new report by Goldman Sachs defines AI as "any intelligence exhibited by machines or software." That can mean machines that learn and improve their operations over time, or that make sense of huge amounts of disparate data.

Though the term AI was coined almost 60 years ago, Goldman believes that we are "on the cusp of a period of more rapid growth in its use and applications."

The reasons? Cheaper sensors leading to a flood of new data, and rapid improvements in technology that allows computers to understand so-called "unstructured" data — like conversations and pictures.

Other industry insiders are also confident that AI will keep advancing rapidly, with knock-on effects on wages across many industries. Ray Kurzweil, the director of engineering at Google, believes that human-level AI is coming by 2029.

So who are the players going to be?

First, several big tech companies have been storing up patents related to the field.

IBM is the leader, with about 500 patents related to artificial intelligence. IBM's supercomputer — Watson — is an example of the shift to AI, as it entered the healthcare sector in 2013 and helped lower the error rate in cancer diagnoses by physicians.

Other big patent players in the space include Microsoft, Google, and SAP.


A lot of big tech companies have also been buying AI startups.

In the last two years, Google bought five different companies working on technologies like image recognition, natural language processing, and neural networks. Yahoo set its sights on boosting its image recognition and natural language processing abilities. Twitter bought a deep-learning startup last year to power its image recognition capabilities, while Home Depot uses a recently acquired data analytics lab to help with its pricing.

Then there are the AI startups that have received substantial amounts of funding, including Rethink Robotics ($127 million) and Sentient Technologies ($144 million).

Analysts from Goldman Sachs are particularly bullish about AI technologies that come from Asia and the US, while Europe lags. 

So how do investors capitalize on the coming boom? 

Goldman believes Japanese hardware company NEC — the number one facial and text analysis company in the world — is a good investment. It also recommends several companies that sell AI components into cars for scenarios like helping drivers park: Nidec, Mobileye, Nippon Ceramic, and Pacific Industrial.

Marketo and Opower, both based in the U.S., are also rated as buys. Both of these companies focus on personalizing customer engagement by using AI to analyze massive amounts of customer data. Goldman's analysts are also bullish about Amazon and Twitter, two companies that use AI to boost revenue and customer loyalty.


Why Siri is a woman


From Apple's iPhone assistant Siri to the mechanized attendants at Japan's first robot-staffed hotel, a seemingly disproportionate percentage of artificial intelligence systems have female personas.

Why?

"I think there is a pattern here," said Karl Fredric MacDorman, a computer scientist and expert in human-computer interaction at Indiana University-Purdue University Indianapolis. But "I don't know that there's one easy answer," MacDorman told Live Science.

One reason for the glut of female artificial intelligences (AIs) and androids (robots designed to look or act like humans) may be that these machines tend to perform jobs that have traditionally been associated with women. For example, many robots are designed to function as maids, personal assistants or museum guides, MacDorman said. [The 6 Strangest Robots Ever Created]

In addition, many of the engineers who design these machines are men, and "I think men find women attractive, and women are also OK dealing with women," he added.

Voice of Siri

Siri is perhaps today's most well-known example of AI. The name Siri in Norse means "a beautiful woman who leads you to victory," and the default voice is a female American persona known as Samantha. Apple acquired Siri in 2010 from the research nonprofit SRI International, an Apple spokeswoman said. Siri's voice now comes in male or female form, and can be set to a number of different languages.

In his own research, MacDorman studies how men and women react to voices of different genders. In one study, he and his colleagues played clips of male and female voices, and gave people a questionnaire about which voice they preferred. Then the researchers gave people a test that measured their implicit, or subconscious, preferences. The men in the study reported that they preferred female voices, but they showed no implicit preference for them, whereas the women in the study implicitly preferred female voices to male ones, even more than they admitted in the questionnaire.

"I think there's a stigma for males to prefer males, but there isn't a stigma for females to prefer females," MacDorman said.

Rise of the fembots

Does the same trend toward female personas also exist among humanoid robots?

"When it comes to a disembodied voice, the chances of it being female are probably slightly higher than of it being male," said Kathleen Richardson, a social anthropologist at University College London, in England, and author of the book "An Anthropology of Robots and AI: Annihilation Anxiety and Machines" (Routledge, 2015). "But when it comes to making something fully humanoid, it's almost always male." [Super-Intelligent Machines: 7 Robotic Futures]

And when humanoid robots are female, they tend to be modeled after attractive, subservient young women, Richardson told Live Science.

For example, the Japanese roboticist Hiroshi Ishiguro of Osaka University has designed some of the world's most advanced androids, such as the Repliee R1, which was based on his then 5-year-old daughter. Ishiguro also developed the Repliee Q1Expo, which was modeled after Ayako Fujii, a female news announcer at NHK, Japan's national public broadcasting organization. (Ishiguro even created a robotic clone of himself that is so realistic it verges on creepy.)

Recently, Ishiguro developed a series of "Actroid" robots, manufactured by the Japanese robotics company Kokoro, for the world's first robot-staffed hotel. According to The Telegraph, the droids — which resemble young Japanese women — will act as reception attendants, waitresses, cleaners and cloakroom attendants.

Female AI personas can also be found in fiction. For example, the movie "Her" features an artificially intelligent operating system (incidentally named Samantha), who is seductively voiced by Scarlett Johansson. Her human "owner," played by Joaquin Phoenix, ends up falling in love with her.

What does this trend in creating attractive, flawless female robots say about society?

"I think that probably reflects what some men think about women — that they're not fully human beings," Richardson said. "What's necessary about them can be replicated, but when it comes to more sophisticated robots, they have to be male."

Another reason for having female robots could be that women are perceived as less threatening or more friendly than men, Richardson said. And the same could be said of childlike robots.

Hollywood's vision of robots, such as in "The Terminator" and "The Matrix" movies, makes them seem scary. "But if we designed robots to be like children, we could get people to be more comfortable with them," Richardson said.

Follow Tanya Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Ray Dalio is building robots


The world's largest hedge fund, Bridgewater Associates, is almost ready to launch a new artificial intelligence team, Bloomberg reports.

Billionaire Ray Dalio's hedge fund manages $160 billion in assets, which, to put it in perspective, is 8.5 times the amount managed by Bill Ackman's Pershing Square Capital Management.

Dalio told Ackman at the Harbor Investment Conference earlier this month that AI already factors into Bridgewater's investment strategy. He explained:

I can be short or long anything in the world, and I'm short or long practically everything. I don't have any bias, so I do it in a very fundamental way... we use a lot of artificial intelligence type of approaches to think about portfolio theory. I use a lot of financial engineering to basically take a whole bunch of uncorrelated bets...
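
The "whole bunch of uncorrelated bets" remark rests on a standard result of portfolio theory: the volatility of an equal-weighted portfolio of independent bets falls roughly with the square root of the number of bets. The simulation below illustrates only that general principle; it says nothing about Bridgewater's actual models.

```python
# Why uncorrelated bets matter: portfolio volatility of N equal-weighted,
# independent bets falls roughly as 1/sqrt(N). (Textbook portfolio theory,
# not Bridgewater's methodology.)
import random
import statistics

def portfolio_volatility(num_bets, num_trials=20000, bet_vol=0.10):
    """Simulated volatility of an equal-weighted portfolio of independent bets."""
    outcomes = []
    for _ in range(num_trials):
        returns = [random.gauss(0.0, bet_vol) for _ in range(num_bets)]
        outcomes.append(sum(returns) / num_bets)   # equal weighting
    return statistics.stdev(outcomes)

for n in (1, 4, 16, 64):
    print(n, round(portfolio_volatility(n), 4))
# Expected pattern: roughly 0.10, 0.05, 0.025, 0.0125; risk halves every time
# the number of independent bets quadruples.
```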

Bloomberg reports that a source close to the matter says the new AI team will launch next month with about six employees led by senior technologist Dave Ferrucci, quietly hired from IBM in late 2012.

Ferrucci gained recognition as the lead researcher of Watson, the AI engine that became a "Jeopardy" champion.

Ferrucci told the New York Times in 2013 that before leaving IBM, he was working on WatsonPaths, which took a different direction from the traditional Big Data approach.

"The Big Data formula, he noted, has proved to be 'incredibly powerful' for tasks like natural-language processing — a central technology behind Google search, for instance," the Times wrote. "WatsonPaths, by contrast, builds step-by-step graphs, or paths, that trace possible causes rather than mere statistical correlations."

That approach is what Bridgewater hired him to bring as the firm works to stay on top.

"Machine learning is the new wave of investing for the next 20 years and the smart players are focusing on it," Gustavo Dolfino, CEO of recruitment firm WhiteRock Group told Bloomberg. Investment firms like Two Sigma Investments and Renaissance Technologies have been expanding their AI teams in recent months.

It also works with Dalio's management philosophy, which he describes in the 120-page manual he gives to every Bridgewater employee. Dalio writes that a manager should see their team as an autonomous "machine" whose function is to achieve its manager's goals.

A Bridgewater representative told Business Insider that the hedge fund is not ready to comment but will update us if that changes.


Google chief economist: Don't fear the robots


Google chief economist Hal Varian says that we're grossly underestimating the productivity benefits we're getting from robots and other forms of automation.

Speaking at a Churchill Club forum in San Francisco last night alongside Paul Thomas (Chief Economist, Intel) and Jaana Remes (Partner, McKinsey Global Institute), he mentioned on multiple occasions that he views productivity as the true driving force of the global economy going forward.

But we're not measuring it right.

According to him, the issue with using GDP as a measure is that it misses much of what robots contribute, because their output often falls outside the monetized economy:

"The first invasion of the robots...washing machines, dryers, vacuum cleaners, dish washers, lawn mowers, et cetera...they showed up in domestic production but weren't in GDP, because it's not a monetized sector."

The key to economic growth, he says, is in the time-savings that come from automation:

"The paradox...is that everybody wants less work but more jobs. This idea that the robots are going to come and take away our jobs seems to be vastly overblown...they haven't destroyed jobs, they've typically destroyed work. People...have offloaded these unpleasant tasks to computers."

Google wants this to be true, since it is a leading company in the push toward automation. At the same time, researchers from Oxford estimate that nearly half of American jobs are at high risk of being automated within the next 20 years.

The answer probably lies somewhere in the middle. Automation will take away jobs, but it will also create new jobs — like building the robots that build the new products we want, or increasing demand for new ways to fill our leisure time.

Varian also said that the way to counter automation is with better education for everybody — especially people who previously might not have been able to get an education.

"I would say that the smartest person in the world is in China or India or Africa and they're stuck behind a plow. 10 years ago that talent would've gone to waste, but now they have access to the internet." 

He also cited online video, like the lessons offered by Khan Academy, as the single most important tool for educating children in developing countries.
