Channel: Artificial Intelligence

Elon Musk: I'm Worried About A 'Terminator'-Like Scenario Erupting From Artificial Intelligence (SCTY, TSLA)


Elon Musk believes it's feasible that a "Terminator"-like scenario could erupt from artificial intelligence.

In an interview with CNBC on Tuesday, Musk said he's an investor in an artificial-intelligence company called Vicarious — but not because he's trying to make any money. Rather, it's because he likes to "keep an eye on" various technological developments.

Like killer robots.

Here's the full exchange, with the network's Kelly Evans and Julia Boorstin leading the discussion.

It's wild:

JULIA BOORSTIN: Now, I have to ask you about a company that you invested in. As you said, you make almost no investments outside of SpaceX and Tesla.

ELON MUSK: Yeah I’m not really an investor.

JB: You’re not an investor?

EM: Right. I don’t own any public securities apart from SolarCity and Tesla.

JB: That's amazing. But you did just invest in a company called Vicarious Artificial Intelligence. What is this company?

MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

KE: Dangerous? How so?

EM: Potentially, yes. I mean, there have been movies about this, you know, like "Terminator."

KE: Well, yes, but movies are — even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

MUSK: I don't know.

JB: Well why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?

MUSK: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think — 

JB: So you want to make sure that technology is used for good and not "Terminator"-like evil?

MUSK: Yeah. I mean, I don’t think — in the movie "Terminator," they didn't create A.I. to — they didn't expect, you know some sort of "Terminator"-like outcome. It is sort of like the "Monty Python" thing: Nobody expects the Spanish inquisition. It’s just — you know, but you have to be careful. Yeah, you want to make sure that —

KE: But here is the irony. I mean, the man who is responsible for some of the most advanced technology in this country is worried about the advances in technology that you aren't aware of.

MUSK: Yeah.

KE: I mean, I guess that is why I keep asking, So what can you do? In other words, this stuff is almost inexorable, isn’t it? How if you see that there are these brain-like developments out there can you really do anything to stop it?

MUSK: I don't know.

JB: But what should A.I. be used for? What's its best value?

MUSK: I don't know. But there are some scary outcomes. And we should try to make sure the outcomes are good, not bad. Yeah.

KE: Or escape to Mars if there is no other option.

MUSK: The A.I. will chase us there pretty quickly.

See below for the video. The clip starts at 14:45.

SEE ALSO: Elon Musk has made another gigantic bet

Join the conversation about this story »


3 Big Ways That Social Robots Will Change Your Life


MIT professor and roboticist Cynthia Breazeal has an excellent essay on Robohub about how "social robots" stand to change much of our lives.

A "social robot" is simply a robot that can communicate with you more like a human than the cold, lifeless machine that it is.

Human interactions are complex — we use sarcasm, we vary our tone to convey different feelings behind our words, and the common behavior of gaze aversion has us regularly looking away from a person we might be speaking directly to. This is complex stuff to teach to a robot, but it's exactly what Breazeal attacks in her research.

Breazeal writes that her "fundamental gripe with technology is that it fails to support a more holistic human experience. It falls short in giving people a personally meaningful, emotionally engaging experience." Over the course of attacking these problems, she helped build "Leonardo," a two-and-a-half-foot tall robot that can track objects with its onboard cameras, mimic human facial expressions, and demonstrate basic learning abilities.

In her Robohub post, Breazeal lays out three big ways that sufficiently "social" robots could change the world. A video demo of social robot Leonardo is at the bottom of this post.

Social robots will change the way we educate children:

"Even in the most time-crunched families, parents will have a reliable, high-quality partner in education for all children. Including our youngest learners who shall enter school ready to learn, or children with special needs, receptive to the quality education they need."

Social robots could prove effective health enforcers while remaining stoic and gentle:

"Those people who struggle with health or chronic disease issues will have the right kind of tools to change behavior, and can independently manage their health and improve their treatment."

The elderly could live independently much longer than they do with help from a friendly, human-esque robot:

"Elders will be able to age independently in their homes with the help of a technology that feels much more like an attentive companion than yet another digital tool or a 'Big Brother' monitoring system — relieving pressure on oversubscribed institutions and remaining emotionally connected to their families and loved ones that might live far away."


Machine Learning Can Automatically Cut The Boring Parts Out Of Movies


Just about every device has a camera in it, so we're shooting more video than ever before. If only all of it were worth watching.

The latest application of machine learning was developed by Eric P. Xing, a professor of machine learning at Carnegie Mellon University, and Bin Zhao, a Ph.D. student in the university's Machine Learning Department. It's called LiveLight, and it can help automate the reduction of videos to just their good parts. 

To get a quick idea what it's all about, watch the demo above.

LiveLight takes a long piece of source footage and "evaluates action in the video, looking for visual novelty and ignoring repetitive or eventless sequences, to create a summary that enables a viewer to get the gist of what happened." Put another way, it watches your movie and edits out the boring stuff. This all happens with just one pass through said video — LiveLight never works backwards.

You're left with something more like a highlight reel than the too-long original video. LiveLight is lightweight enough to run on a standard laptop, where it can process an hour of video in one to two hours.
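The single-pass idea can be sketched in a few lines of Python: walk the footage once and keep a frame only when it differs enough from the last frame that made the cut. This is a toy sketch of the general approach, not LiveLight's actual model — the feature vectors, Euclidean distance measure, and threshold are all illustrative assumptions.

```python
def summarize(frames, threshold=1.0):
    """Single-pass video summarization sketch: keep a frame only when it
    differs enough from the last frame that entered the summary.
    `frames` is a list of per-frame feature vectors (lists of floats)."""
    summary, last = [], None
    for frame in frames:
        if last is None:
            novel = True  # the first frame is always novel
        else:
            # Euclidean distance to the last kept frame
            dist = sum((a - b) ** 2 for a, b in zip(frame, last)) ** 0.5
            novel = dist > threshold
        if novel:
            summary.append(frame)
            last = frame
    return summary

# Repetitive footage collapses to one representative frame per scene.
clip = [[0.0, 0.0]] * 50 + [[5.0, 5.0]] * 50
print(len(summarize(clip)))  # 2
```

Because the loop never looks backward, the summary can be built while the video is still streaming in — consistent with the "one pass, never works backwards" property described above.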


This Ice Cream-Serving Robot Is Actually A Huge Breakthrough For Artificial Intelligence


Robots need some very specific instructions in order to successfully accomplish tasks. If you want a robot to bring you a beer while you recline on the couch, it needs to know what a beer is, that a beer is in a refrigerator, what a refrigerator looks like, how to open said fridge, and so on. It's tedious!

Humans can't possibly provide all this context with a simple verbal request, so Ashutosh Saxena and researchers at Cornell University's Robot Learning Lab aim to help bridge this communication gap with a project called "Tell Me Dave."

Saxena's robot, equipped with a 3-D camera, scans its environment and identifies the objects in it, using computer vision software previously developed in Saxena's lab. The robot has been trained to associate objects with their capabilities: A pan can be poured into or poured from; stoves can have other objects set on them, and can heat things. So the robot can identify the pan, locate the water faucet and stove and incorporate that information into its procedure. If you tell it to "heat water" it can use the stove or the microwave, depending on which is available. And it can carry out the same actions tomorrow if you've moved the pan, or even moved the robot to a different kitchen.

Put another way, instead of hard-coding numerous processes that are effortless for a human but quite elaborate for robots (like grabbing a beer from the fridge), Saxena and crew have taught their PR2 bot to recognize various objects after being taught their traits. A microwave can be used to heat things, a bowl can hold water, a tap dispenses water, and so on. 

A quick verbal request to heat up a bowl of water, let's say, is now completely within a robot's purview despite the usually robot-beating lack of info associated with such a statement — a robot now "knows" how to fill in the steps missing from the request.
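One way to picture this is a lookup table from objects to affordances plus a tiny planner that grounds "heat water" in whatever capable objects the camera actually found. The object names, capability table, and plan steps below are hypothetical stand-ins for illustration, not the Cornell system's actual representation.

```python
# Hypothetical affordance table: object -> set of things it can do.
AFFORDANCES = {
    "stove":     {"heat"},
    "microwave": {"heat"},
    "tap":       {"dispense_water"},
    "pan":       {"hold_water"},
    "bowl":      {"hold_water"},
}

def plan_heat_water(objects_in_room):
    """Fill in the steps behind the bare request 'heat water', using
    whatever capable objects were identified in the room."""
    def find(capability):
        for obj in objects_in_room:
            if capability in AFFORDANCES.get(obj, set()):
                return obj
        return None

    container = find("hold_water")
    source = find("dispense_water")
    heater = find("heat")
    if not (container and source and heater):
        return None  # the request cannot be grounded in this room
    return [
        f"carry {container} to {source}",
        f"fill {container} from {source}",
        f"place {container} in/on {heater}",
        f"turn on {heater}",
    ]

print(plan_heat_water(["bowl", "tap", "microwave"])[-1])  # turn on microwave
```

Move the robot to a kitchen with a stove instead of a microwave and the same request grounds to a different plan — the behavior the Cornell demo highlights.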

As a demonstration of this ability, the video above demonstrates a robot filling a rather open-ended ice cream order: "Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture." This is more or less how we imagine we'd speak to a robot in a sci-fi movie. It takes the robot some time, but soon enough: affogato!


Here's Why Robots Will Increasingly Be Less, Not More, Human-Like


Robots have been a reality on factory assembly lines for over twenty years. But it is only relatively recently that robots have become advanced enough to penetrate into home and office settings. 

BI Intelligence estimates that there will be a $1.5 billion market for consumer and business robots by 2019.


But our robot future likely won't be a world in which it becomes increasingly difficult to distinguish human from robot. Instead, robot manufacturers are moving away from human-like robots, as they try to overcome the problem of the 'uncanny valley,' a term coined by Japanese robotics professor Masahiro Mori in 1970.

This refers to the visceral human resistance to any robot that becomes too human-like. Consumers tend to feel empathy toward machines as they gain human attributes, but only up to a point, after which feelings quickly turn to repulsion.  

Some robot designers seem to have consciously avoided any human-like features in their devices (none of iRobot's home cleaning products echo the human anatomy or shape). Others, like Meka, a robot development company recently acquired by Google, have sought to create robot limbs and faces meant to be sympathetic (and allow robots to complete certain actions) but sufficiently non-human in appearance to avoid the uncanny valley problem. 

In a new report from BI Intelligence we assess the market for consumer and office robots, taking a close look at the three distinct categories within this market — home cleaning, telepresence, and home entertainment robots. We also examine the market for industrial manufacturing robots since it is the market where many robotics companies got their start, and remains the largest robot market by revenue. And finally, we assess the factors that might still limit the consumer robot market.  



Emotion-Recognition Technology Could Make Focus Groups A Thing Of The Past


Focus groups for TV shows and movies have been around for decades. But despite the dollars and hours spent trying to figure out how people will react to media, people don't necessarily report what they're actually feeling.

That's Matt Celuszak's take anyway.

Celuszak is the founder and CEO of UK-based startup CrowdEmotion, which has created a software program that can analyze facial expressions. Launched only a few months ago, the company is part of BBC's incubator program, BBC Worldwide Labs. The British broadcaster is already using CrowdEmotion's proprietary tech to gauge how viewers react to shows like "Sherlock." 

“The BBC wants to make high-quality content and they want to make it stick,” Celuszak told BI Intelligence. “That’s the reason why they chose us. They want to see if viewers will like their content or not.”

CrowdEmotion's software can track expressions over time, making it possible to measure mood and specific reactions to parts of a show or story. For producers, this means more intel on how specific scenes or dialogue are received. The tech could also potentially help marketers determine how to craft messages in ads and understand if those messages achieve their intended emotional effect. 

Data from facial expression measurements give a more accurate way of determining mood and emotion over traditional rank and score studies, according to Celuszak. The company is also toying with voice recognition analysis as an additional metric. Here's a screenshot of Celuszak in CrowdEmotion's "realtime feed," as it tracks his reactions to a piece of content: 

Screenshot of CrowdEmotion's mood data output. "Media planning can use this ... [the technology] predicts appreciation of a show, telling a broadcaster where to sell it.  That's our secret sauce," he writes in an email accompanying the image. 

To calculate mood based on expression, CrowdEmotion's software uses algorithms and the latest in machine learning technology, a type of artificial intelligence in which a system can train itself using data it collects. Emotion recognition software has existed in the field of neuroscience, but this is the first time a product like this has been developed and used outside of a research setting, according to Celuszak.
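CrowdEmotion hasn't published its model, but the "track expressions over time" idea reduces to something like the following: score each frame for emotion, then smooth those noisy per-frame scores into a mood curve that can be lined up against scenes. This sketch assumes a single positive/negative score per frame and a simple moving average; the real system's features and learning are far richer.

```python
def mood_timeline(frame_scores, window=3):
    """Aggregate noisy per-frame emotion scores into a smoother mood curve
    by averaging over a sliding window of `window` frames.
    `frame_scores` is a list of floats in [-1, 1] (negative to positive)."""
    timeline = []
    for i in range(len(frame_scores) - window + 1):
        chunk = frame_scores[i:i + window]
        timeline.append(sum(chunk) / window)
    return timeline

# A viewer warming up to a scene: the smoothed curve rises steadily.
scores = [-0.5, -0.2, 0.0, 0.4, 0.8]
print([round(m, 2) for m in mood_timeline(scores)])  # [-0.23, 0.07, 0.4]
```

A producer reading such a curve can map a dip or spike back to the specific scene that caused it, which is exactly the "specific reactions to parts of a show" use case described above.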

"This is an example of data science helping, not fighting, creative content makers," he said.

The software can be used with any device that has a camera, including smartphones and other portable devices. In the future, CrowdEmotion wants to track audience reactions to any type of content, not just TV. "We are looking to expand into broadcast, music, gaming and publishing.” Other possible applications include border-patrol security, where the tech could help detect criminals.  


This Settles The 'Robots Will Take Our Jobs' Argument Once And For All


It's a common dystopian sci-fi trope: the robots get better at various tasks, the humans become useless, and suddenly the world's population is unemployed (or dead) as the robots take our jobs and everything is terrible.

But this simply won't happen, according to several robotics wonks and investors.

Colin Lewis of RobotEnomics says there are simply more jobs available in companies that make use of robots, not fewer: "Our research shows 76 companies that implemented industrial or factory/warehouse robots actually increased the number of employees by 294,000 over the last 3 years."

Most notable among these 76 companies are retail giant Amazon and electric car manufacturer Tesla. Amazon acquired Kiva Systems in 2012 and plans to aggressively deploy some 10,000 autonomous robots to fill customer orders by the end of 2014. Amazon has added 89,000 new staff members since 2011.

Tesla's numerous robots help build its electric cars, completing the assembly of one car part on average every six seconds. But Tesla has also added 6,000 human jobs. (A video tour of the company's robot-aided manufacturing facility is at the bottom of this post.)

Investor Marc Andreessen echoes this in an op-ed in the Financial Times titled "Robots Will Not Eat The Jobs But Will Unleash Our Creativity." He says "to argue that huge numbers of people will be put out of work but we will find nothing for them — for us — to do is to short human creativity dramatically. And I am long on human creativity."

Robots are already at work in mines and oil fields doing work that used to be carried out by humans, so robotics has inherently already had an effect on the job market. But Lewis says this is only change, not replacement:

Research by Lawrence Katz, Professor of Economics at Harvard, also shows the ‘hollowing out’ of middle-skilled jobs due to technological advances. A recent paper by Carl Frey and Michael Osborne of Oxford University concludes that 47% of US jobs are at high risk from automation. It’s not all doom and gloom for those with ‘middle skills,’ and the MIT and Harvard researchers do allude to an increase in jobs and income for the ‘new artisans,’ a term coined by Professor Katz to refer to those who ‘virtuously combine technical and interpersonal tasks.’ Expanding upon this, Professor Autor expects that ”a significant stratum of middle skill, non-college jobs combining specific vocational skills with foundational middle skills – literacy, numeracy, adaptability, problem-solving and common sense – will persist in coming decades.”

If you've got those basic skills, a robot will only make your job easier.


The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near-Secrecy For 30 Years


"We've been keeping a very low profile, mostly intentionally," said Doug Lenat, President and CEO of Cycorp. "No outside investments, no debts. We don't write very many articles or go to conferences, but for the first time, we're close to having this be applicable enough that we want to talk to you."

IBM's Watson and Apple's Siri stirred up a hunger and awareness throughout the United States for something like a Star Trek computer that really worked — an artificially intelligent system that could receive instructions in plain, spoken language, make the appropriate inferences, and carry out its instructions without needing to have millions and millions of subroutines hard-coded into it.

As we've established, that stuff is very hard. But Cycorp's goal is to codify general human knowledge and common sense so that computers might make use of it.

Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans — the knowledge that helps us understand the world — and to represent them in a formal way that machines can use to reason. The company's been working continuously since 1984 and next month marks its 30th anniversary.

"Many of the people are still here from 30 years ago — Mary Shepherd and I started [Cycorp] in August of 1984 and we're both still working on it," Lenat said. "It's the most important project one could work on, which is why this is what we're doing. It will amplify human intelligence."

It's only a slight stretch to say Cycorp is building a brain out of software, and they're doing it from scratch.

"Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it's filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers," Lenat said. "Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they're doing or why someone might like something.

"It's the difference between someone who understands what they're doing and someone going through the motions of performing something."

Cycorp's product, Cyc, isn't "programmed" in the conventional sense. It's much more accurate to say it's being "taught." Lenat told us that most people think of computer programs as "procedural, [like] a flowchart," but building Cyc is "much more like educating a child."

"We're using a consistent language to build a model of the world," he said.

This means Cyc can see "the white space rather than the black space in what everyone reads and writes to each other." An author might explicitly choose certain words and sentences as he's writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences. 

Consider the sentence, "John Smith robbed First National Bank and was sentenced to thirty years in prison." It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually go through all that detail because it's alternately boring, confusing, or insulting. You can safely assume other people know what you're talking about. It's like pronoun use — he, she, it — one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc does both.
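Cyc's actual representation (its CycL language and tens of millions of axioms) is far richer, but the "reading the white space" idea can be illustrated with a toy forward-chaining inference engine: start from the facts a sentence states, then apply commonsense rules until nothing new can be derived. The rules below are invented for the John Smith example, not Cyc's real axioms.

```python
# Toy commonsense rules: (set of premises, conclusion).
RULES = [
    ({"robbed_bank"}, "committed_crime"),
    ({"committed_crime", "sentenced"}, "was_caught"),
    ({"was_caught"}, "was_arrested"),
    ({"was_arrested", "sentenced"}, "stood_trial"),
    ({"stood_trial", "sentenced"}, "found_guilty"),
]

def infer(facts):
    """Forward chaining: apply every rule whose premises all hold,
    repeating until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The sentence states only two facts; inference recovers the unstated ones.
stated = {"robbed_bank", "sentenced"}
print(sorted(infer(stated) - stated))
# ['committed_crime', 'found_guilty', 'stood_trial', 'was_arrested', 'was_caught']
```

The interesting part is everything in the output that was never written down — the arrest, the trial, the conviction — which is precisely the "white space" a human reader fills in automatically.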

"If computers were human," Lenat told us, "they'd present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it's on the horizon for home robots. That's like saying, 'We have an important job to do, but we're going to hire dogs and cats to do it.'"

If you consider the world's current and imagined robots, it's hard to imagine them not benefitting from Cyc-endowed abilities that grant them a more human-like understanding of the world.

Just like computers with operating systems, we might one day install Cyc on a home robot to make it incredibly knowledgeable and useful to us. And because Cycorp started from zero and was built up with knowledge of nearly everything, it could be used for a wide variety of applications. It's already being used to teach math to sixth graders.

Cyc can pretend to be a confused sixth grader, and the user's role is to help the AI agent understand and learn sixth-grade math. There's an emotional investment, a need to think about it, and so on. The program of course understands the math; it is simply listening to what students say and diagnosing their confusion. It figures out what behavior it can carry out that would be most useful to help them understand things. This has the potential to revolutionize sixth-grade math, but also other grade levels and subjects. There's no reason it couldn't be used in Common Core curricula as well.

We asked Lenat what famed author and thinker Douglas Hofstadter might think of Cyc:

[Hofstadter] might know what needs to be done for things to be intelligent, but it has taken someone, unfortunately me, the decades of time to drag that mattress out of the road so we can do the work. It's not done by any means, but it's useful.



By 2045 'The Top Species Will No Longer Be Humans,' And That Could Be A Problem


"Today there's no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you're going to see that the top species will no longer be humans, but machines."

These are the words of Louis Del Monte, physicist, entrepreneur, and author of "The Artificial Intelligence Revolution." Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world's combined human intelligence too.

The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's within roughly three decades.


"It won't be the 'Terminator' scenario, not a war," said Del Monte. "In the early part of the post-singularity world, one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool. Productivity in business based on automation will be increased dramatically in various countries. In China it doubled, just based on GDP per employee due to use of machines."

"By the end of this century," he continued, "most of the human race will have become cyborgs [part human, part tech or machine]. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species."

Del Monte believes machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.

He wrote the book as "a warning." Artificial intelligence is becoming more and more capable, and we're adopting it as quickly as it appears. A pacemaker operation is "quite routine," he said, but "it uses sensors and AI to regulate your heart."

A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, the experiment used robots designed to cooperate in finding beneficial resources like energy and avoiding hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.

"The implication is that they're also learning self-preservation," Del Monte told us. "Whether or not they're conscious is a moot point."

SEE ALSO: The Next 20 Years Are Going To Make The Last 20 Look Like We Accomplished Nothing In Tech


Google Cofounder Sergey Brin: We Will Make Machines That 'Can Reason, Think, And Do Things Better Than We Can' (GOOG)


Google cofounders Larry Page and Sergey Brin sat for an interview with venture capitalist Vinod Khosla

During the interview, Brin was asked about machine learning and artificial intelligence. He says that so far, we haven't come close to replicating human intelligence. However, he thinks it's only a matter of time before that changes:

In the machine learning realm, we have several kinds of efforts going on. There's, for example, the brain project, which is really machine learning focused. It takes input, such as vision. In fact, we've been using it for the self-driving cars. It's been helpful there. It's been helpful for a number of Google services. And then, there's more general intelligence, like the DeepMind acquisition that — in theory — we hope will one day be fully reasoning AI. Obviously, computer scientists have been promising that for decades and not at all delivered. So I think it would be foolish of us to make prognoses about that. But we do have lots of proof points that one can create intelligent things in the world because — all of us around. Therefore, you should presume that someday, we will be able to make machines that can reason, think and do things better than we can.

You can watch the interview here. The stuff about artificial intelligence starts at 12:02:

SEE ALSO: By 2045, 'The Top Species Will No Longer Be Humans'


This Artificial Intelligence Could 'Eradicate The Spreadsheet' And Do The Work Of A $250,000 Consultant


Kris Hammond is chief scientist and co-founder of Narrative Science, a company that uses an artificial intelligence product called Quill to turn boring data and statistics into highly readable stories with a beginning, middle, and end.

The software can write stories with no bias, confusion, or cherry picking, accurately representing the truth based on the data you give it.

Hammond calls it "the most powerful AI system [he's] ever built."

"We looked at the world of media and data as it's growing today," he told us over the phone. "We saw the beginnings of dissatisfaction with big data and we saw ourselves as the solution. You don't want spreadsheets, you want to be told."

Narrative Science spun out of technology developed at Northwestern University, merging engineering and journalism. It's all about content generation from raw data, using narrative structure as the driver.

The company's AI product, Quill, can essentially turn numbers into stories: The box score from a baseball game becomes a written report of that game, for example, detailing player performance as if you were reading a sportswriter's coverage in the newspaper.

Hammond said Quill works so well that it exceeded his team's expectations.

"That's rare," Hammond told Business Insider. "We had a moment of pause and looked at it — what is the scope here?"

In the early days, Hammond said they used their software to "write" about any sport that could be expressed with numbers: baseball, basketball, soccer, and the like. It wasn't long before they branched out from there.


"We very quickly saw that if you're an organization tracking your inventory and inventory waste, what you really want is a report that someone can read to understand waste problems. If you're doing assessment of your mutual funds, you want to be able to take a look at the decisions that were made, compare against benchmarks, and get a report. If you want to know how your sales team is doing, you don't have to look through 50 spreadsheets," he said. "The story is the point, and communication drives analysis of the data. Instead of a report on the data, you can get a report on what the data means."

Quill turns boring numbers into written communication that seems human and natural — a story — and Hammond says the results are guaranteed to adhere to the truth as defined by the data.
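Narrative Science hasn't published Quill's internals, but the numbers-to-story pattern can be illustrated with a toy pipeline: derive a story angle from the data, then render it through a narrative template. Everything here — team names, thresholds, the template wording — is an invented stand-in for illustration.

```python
def box_score_story(home, away, home_runs, away_runs, star, star_hits):
    """Toy Quill-style generator for a baseball box score: pick the most
    newsworthy angle from the numbers, then fill a narrative template."""
    margin = abs(home_runs - away_runs)
    winner, loser = (home, away) if home_runs > away_runs else (away, home)
    # Choose the story angle the way a sportswriter would: from the data.
    if margin >= 5:
        angle = f"{winner} routed {loser}"
    elif margin == 0:
        angle = f"{home} and {away} played to a draw"
    else:
        angle = f"{winner} edged {loser}"
    score = f"{max(home_runs, away_runs)}-{min(home_runs, away_runs)}"
    return f"{angle} {score}, led by {star}'s {star_hits} hits."

print(box_score_story("Cubs", "Mets", 9, 2, "Rizzo", 3))
# Cubs routed Mets 9-2, led by Rizzo's 3 hits.
```

Because the output is constrained to what the numbers actually say, the same machinery generalizes from box scores to inventory reports or fund summaries — the "truth as defined by the data" guarantee described above.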

In the near future, Narrative Science will take aim at the financial services industry. "The data is there, and it's a well-trodden space with respect to data. If you're managing millions of clients, you want to be able to give them something more than a pie chart," Hammond said.

Also on the company's list: the retail industry. How are franchises doing? How are products selling? Hammond says Quill makes the expensive human data scientists interpreting this data today obsolete tomorrow. The company's non-human software system can do an equally effective deep dive on various stats and tell you what they mean in the same written language that a $250,000-per-year employee would.

Despite having created a seemingly magic auto-writer, Narrative Science maintains that conventional journalists will still find employment. "We will make [journalists' jobs] better," said Hammond. "There are things that humans do that are not yet in [purview] of systems like Quill. They hear facts, they chase down compelling ideas, they reconsider and reestablish what they're looking for. Quill doesn't do that, but there will be a time when it does. But by then, you'll be at the deeper, higher end of your game."

As for long-term ambitions, Hammond is steadfast: "In my lifetime, this technology is going to be such that it will eradicate the spreadsheet. Any place where there's data in a table, Quill will be there to look at that data and explain it to you. I look at the spreadsheet and I think it's going to be like the computer punch card. It's what we used to use, and there was a time when it was all we had, but that time is over. Who in the world would ever say, 'I'd rather look at a spreadsheet to find trends and correlations' when Quill can just do it for you?"

SEE ALSO: The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years


Artificial Intelligence Will Write Bestseller Fiction In The Future


We recently spoke to Kris Hammond, chief scientist and co-founder of Narrative Science, a company with an artificial intelligence product called Quill that can turn data into stories that read as if they were written by a human.

This has instantly obvious applications in the business world. Why isn't a certain product selling? Why is a particular retail franchise succeeding or failing? Quill can spin wonky stats into a story that anyone can read so that they can answer these questions without having to dive into alternately boring or scary numbers.

We asked Hammond if artificial intelligence could conceivably write bestseller fiction. Some already predict that machines will be the dominant "species" by 2045, so surely it's not that much of a stretch to keep humans entertained in the meantime?

"Yes, but it will be a different kind of system than Quill," said Hammond. "Consider the movie 'The Invention of Lying.' Before Ricky Gervais's character invents lying, all the world's television shows are documentaries. That's Quill. The stories it tells are based on data in the real world. There are other ways you can tell stories, of course, but this is Quill's way. It will always do 'documentaries,' in some sense, but other technology will create fiction from other sources. Quill will inform them tremendously and components will go into them, but that's not its job. It's a little too businessy."

John Grisham's livelihood is safe for now.

SEE ALSO: A Prominent Lawyer Is Calling For A New Federal Agency To Regulate Robots


Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence


Buried in Maureen Dowd's latest column for The New York Times is a perfect one-sentence explanation of why true artificial intelligence for machines isn't coming any time soon.

Jaron Lanier, a prominent author/thinker/cybersociologist type, lays out why it is currently impossible to build machines with the type of "true" artificial intelligence that sci-fi is nearly founded on:

We’re still pretending that we’re inventing a brain when all we’ve come up with is a giant mash-up of real brains. We don’t yet understand how brains work, so we can’t build one.

We bolded that last sentence because it pretty much explains the predicament for AI. Until we more fundamentally understand that which we're trying to clone, everything else is an impressive attempt up Everest that never totally summits.

This jibes with a sentiment that renowned author and cognitive scientist Douglas Hofstadter expressed earlier this year. He calls current prominent pursuits in the artificial intelligence arena "vacuous":

[IBM's "Jeopardy!"-winning supercomputer] Watson is basically a text search algorithm connected to a database just like Google search. It doesn't understand what it's reading. In fact, "read" is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous.

We've got a ways to go before machines are truly smart.

SEE ALSO: A Brit Explains Why The UK's Electric Plugs Are The Best In The World


If You Don't Think Robots Can Replace Journalists, Check Out This Article Written By A Computer


Earlier this week we reported on Narrative Science, an artificial intelligence company whose product, Quill, can turn tables of data into a natural language story that you can read as if it were a newspaper article.

The benefit here is that the software near-instantaneously renders numerical data as much more digestible units of understanding — words — without requiring any human supervision.

Quill powers the technology at work in GameChanger, an app for amateur sports leagues to manually capture game events (a pop fly in baseball, a three-pointer in basketball, and so on), then instantly have it spit out a written retelling of the game's events.

It's an artificially intelligent sportswriter that summarizes your game as honestly as you record it in the app. When we spoke to Kris Hammond, chief scientist of Narrative Science, he told us that the results are guaranteed to adhere to the truth as defined by the data that you provide it with.

Here's a sample report, in which Quill helps tell the story of a bested baseball team called the Manalapan Braves Red. Check out how naturally it reads, then know that it was all software that ordered and arranged these words, not a human:

Cole Benner did all he could to give Hamilton A's-Forcini a boost, but it wasn't enough to get past the Manalapan Braves Red, as Hamilton A's-Forcini lost 10-5 in six innings at Pecci two on Saturday.

Benner had a good game at the plate for Hamilton A's-Forcini. Benner went 2-3, drove in one and scored one run. Benner singled in the third inning and doubled in the fifth inning.

The Manalapan Braves Red's Gargano was perfect at the dish, going 1-1. Gargano singled in the first inning.

The Manalapan Braves Red tacked on another four runs in the second. The inning got off to a hot start when Bullen singled, bringing home Cappola. That was followed up by that scored Pellecchia.

After pushing across two runs in the top of the third, Hamilton A's-Forcini faced just a 5-2 deficit. An RBI single by Benner and an error sparked Hamilton A's-Forcini's rally. The Hamilton A's-Forcini threat came to an end when Pellecchia finally got Dominic Chiarello to ground out.

The Manalapan Braves Red increased their lead with three runs in the third. A scored Grieco to start the inning. That was followed up by that scored DeAlemeida.

Two runs in the top of the fourth helped Hamilton A's-Forcini close its deficit to 8-4. A two-run double by Joey Sacco gave Hamilton A's-Forcini life. Nick Psomaras grounded out to end the Hamilton A's-Forcini threat.

The Manalapan Braves Red built upon their lead with two runs in the fourth. Cappola started the inning with a single, plating Gargano. That was followed up by that scored Zucker.

One run in the top of the fifth helped Hamilton A's-Forcini close its deficit to 10-5. A groundout by Dominick Gambino triggered Hamilton A's-Forcini's comeback. Pellecchia ended the inning by getting Matt Kozma to strike out.


This Company Spent 9 Years To Build Apps That Mimic How Brains Work


Jeff Hawkins and Donna Dubinsky started Numenta nine years ago to create software that was modeled after the way the human brain processes information. It has taken longer than expected, but the Redwood City, Calif.-based startup recently held an open house to show how much progress it has made.

Hawkins and Dubinsky are tenacious troopers for sticking with it. Hawkins, the creator of the original Palm Pilot, is the brain expert and co-author of the 2004 book “On Intelligence.” Dubinsky and Hawkins had previously started Handspring, but when that ran its course, they pulled together again in 2005 with researcher Dileep George to start Numenta. The company is dedicated to reproducing the processing power of the human brain, and it shipped its first product, Grok, earlier this year to detect unusual patterns in information technology systems. Those anomalies may signal a problem in a computer server, and detecting the problems early could save time.

While that seems like an odd first commercial application, it fits into what the brain is good at: pattern recognition. Numenta built its architecture on Hawkins’ theory of Hierarchical Temporal Memory, about how the brain has layers of memory that store data in time sequences, which explains why we easily remember the words and music of a song. That theory became the underlying foundation for Numenta’s code base, dubbed the Cortical Learning Algorithm (CLA). And that CLA has become the common code that drives all of Numenta’s applications, including Grok.
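The temporal core of HTM can be caricatured in a few lines. The real CLA models thousands of cells using sparse distributed representations; this toy first-order sequence memory is only a sketch of the underlying idea behind the song example, namely learning transitions over time and then predicting what comes next:

```python
# A drastically simplified sketch of time-based sequence memory.
# Numenta's CLA is far more elaborate; this toy version only
# shows the core idea: learn which element tends to follow which,
# then predict the next element from the current one.

from collections import defaultdict, Counter

class ToySequenceMemory:
    def __init__(self):
        # For each element, count what followed it during learning.
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current):
        counts = self.transitions[current]
        return counts.most_common(1)[0][0] if counts else None

memory = ToySequenceMemory()
memory.learn("do re mi fa so".split())   # like remembering a melody
print(memory.predict("mi"))              # → fa
```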

Hawkins and Dubinsky said at the company’s recent open house that they are more excited than ever about new applications, and they are starting to have deeper conversations with potential partners about how to use Numenta’s technology. We attended the open house and interviewed both Hawkins and Dubinsky afterward. Here’s an edited transcript of our conversations.

Numenta cofounders Donna Dubinsky and Jeff Hawkins

Above: Numenta cofounders Donna Dubinsky and Jeff Hawkins

Image Credit: Dean Takahashi

VentureBeat: I enjoyed the event, and I was struck by a couple of things you said in your introduction. Way back when, you wrote the book on intelligence. You started Numenta. You said that you’d been studying the brain for 25 years or so. It seemed to me that you knew an awful lot about the brain already. So I was surprised to hear you say that we didn’t know much of anything at all about the way the brain works.

Jeff Hawkins: Well, in those remarks I gave at the open house, I meant to say that I’d been working at this for more than 30 years, and that when I started, all those years ago, we knew almost nothing about the brain. But it wasn’t that we’ve known nothing in the last 10 years or something. Tremendous progress has been made in the last 30 years.

VB: At the beginning of Numenta, if you look back at what you knew then and compare it to what you know now, what’s different now?

Hawkins: If you go back to the beginning of Numenta, our state of understanding was similar to what I wrote about in On Intelligence. That’s a good snapshot. If you look in the book, you’ll see that we knew the cortex is a hierarchy of thinking. We knew that we had to implement some form of sequence memory. I wrote about that. We knew that we had to form common representations for those sequences. So we knew a lot of this stuff.

What we didn’t know is exactly how the cortex learns and does things. It was like, yeah, we’ve got this big framework, and I wrote about what goes into the framework, but getting the details so you can actually build something, or understand exactly how the neurons are doing this, was very challenging. We didn’t know that at the time. There were other things, like how sensory, motor, these other systems work. But the big thing is we didn’t have a theory that went down to a practical, implementable, testable level.

Numenta cofounder Jeff Hawkins

Above: Numenta cofounder Jeff Hawkins

Image Credit: Dean Takahashi

VB: I remember from the book, you had a very interesting explanation of how things like memory work. You said that you could remember the words to a song better because they were attached to the music. The timing of the music is this kind of temporal memory. Is that still the case as far as how you would describe how the brain works, how you can recall some things more easily?

Hawkins: The memory is a temporal trait, a time-based trait. If you hear one part of something you’ll recognize the whole thing and start playing it back in your head. That’s all still true. Again, we didn’t know exactly how that worked.

It turns out that, in the book, I wrote quite a bit about some of the attributes that this memory must have. You mentioned starting in the middle of a song and picking it up. Or you can hear it in a different key or at a different tempo. We didn’t know exactly how we do that.

When we started Numenta, we had a list of challenges related to sequenced memory and prediction and so on. We worked on them for quite a few years, trying to figure out how you build a system that solves all these constraints. That’s very difficult. I don’t think anyone has done it. We worked on it for almost four years until we had a breakthrough, and then it all came together in one fell swoop, in just a matter of a few weeks. Then we spent a lot of time testing it.

VB: Can you describe that platform in some way, this algorithm?

Hawkins: The terminology we use is a little challenging. HTM refers to the entire overall theory of the cortex. You can take what’s in the book as HTM theory. I didn’t use the term at the time. I called it the memory prediction framework. But I decided to use a more abstract term, “hierarchical temporal memory.” It’s a concept of hierarchy, sequenced memory, and all the concepts that play into that theory.

The implementation, the details of how cells do that – which is a very detailed biology model, plus a computer science model – that we gave a different name to. We call that the Cortical Learning Algorithm. It’s the implementation of cells as the components of the HTM framework. It’s something you can reduce to practice. The CLA, you can describe that. Many people have created that, and it works. While the HTM is a useful framework for understanding what the whole thing is about, it doesn’t tell you how to build one. It’s the difference between saying “We’re going to invent a car that has wheels, a motor, consumes gasoline, and so on” – that’s the HTM version – and figuring out how to build an internal combustion engine that really works and that someone can build.

VB: When you talk about the algorithm there, what are you simulating? Is it a very small part of what the brain does?

Hawkins: If you look at the neocortex, it looks very similar everywhere. Yet it has some basic structure. In any neocortex, in any animal, the first basic structure you see is layers of cells. The layers you see are the same in dogs, cats, humans, mice. They look similar everywhere you go.

What the CLA captures is how a layer of cells can learn. It can learn sequences of different types of things. It can learn sequences of motor behaviors. It can learn sequences of sensory inputs. We’re modeling a section of the layer of cortex. But we’re only modeling a very tiny piece of one. We’re modeling 1,000 to 5,000 nerve cells. But it’s the same process used everywhere. Our models are fairly small, but the theory covers a very large component of what the cortex has to do – the theory of cells in layers. We also now understand how those layers interact. That’s not in the CLA per se, but we now understand how multiple layers interact and what they’re doing with each other. The CLA specifically deals with what a layer of cells does. But we think that’s a pretty big deal.

Numenta's Grok

Above: Numenta’s Grok

Image Credit: Numenta

VentureBeat: It sounded like, when you guys were talking at the outset, that it took longer than you expected to get a commercial business out of all the ideas that you started with.

Donna Dubinsky: That’s fair to say. We knew it would be hard, but I don’t think we anticipated it would take so long to get the first commercial product out there. It’s taken a long time to go through multiple generations of these algorithms and get them to the point where we feel they’re performing well.

VB: Could you explain that, then? The underlying platform is the algorithm. What exactly does it do? It functions like a brain, but what are you feeding into it? What is it crunching?

Dubinsky: It’s modeled after what we believe are the algorithms of the human brain, the human neocortex. What your brain does, and what our algorithms do, is automatically find patterns. Looking at those patterns, they can make predictions as to what may come next and find anomalies in what they’ve predicted. This is what you do every day walking down the street. You find patterns in the world, make predictions, and find anomalies.

We’ve applied this to a first product, which is the detection of anomalies in computer servers under the AWS environment. But as much as anything it’s just an example of the sort of thing you could do, as opposed to an end in itself. We’ve shown several examples. We’re keen on demonstrating how broadly applicable the technology is across a bunch of different domains.

VB: Are there some benefits to taking longer to get to market that you can cash in on? There are things like Amazon Web Services or the cloud that don’t exist when you started the company.

Dubinsky: It’s a good point. Certainly having AWS has been fantastic for us. AWS takes and packages the data in exactly the way that our algorithm likes to read it. It’s streaming data. It flows over time. It’s in five-minute increments. That’s a great velocity for us. We can read it right into our algorithm.

Over the years we’ve talked to lots of companies who want to use this stuff, and their data are simply not in a streaming format like that. Everyone talks about big data. The way I think about that is big databases. Everyone’s taking all this data and piling it up in databases and looking for patterns. That’s not how we do it at all. We do it on streaming data, moving over time, and create a model.

We don’t even have to keep the data. We just keep the last couple thousand data points in case the system goes down or something. But we don’t need to keep the data. We keep the model. That’s substantially different from what other people do in the big data world. One of the reasons we went to AWS was it got us around that problem. It was very easy to connect up our product to AWS.
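The "keep the model, not the data" idea Dubinsky describes can be sketched with ordinary online statistics. The code below is Welford's algorithm, a generic stand-in rather than Numenta's actual learning algorithm; it just shows how a streaming system can discard every data point after updating a tiny summary:

```python
# Illustrates keeping a model instead of a database: each streaming
# value updates a small summary (Welford's online algorithm) and is
# then thrown away. Not Numenta's algorithm; a generic sketch.

class StreamingModel:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

model = StreamingModel()
for reading in [10.0, 12.0, 11.0, 13.0]:   # e.g. five-minute metrics
    model.update(reading)                   # value discarded after this
print(model.mean)   # → 11.5
```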

VB: It almost seems that with the big data movement, a lot of corporations are thinking about tackling bigger problems than before. They seem to need more of the kinds of things that you do.

Dubinsky: More and more people are instrumenting more and more things. When I think about where we’re going to fit, it gets closer to the sensors. You have more and more sensors generating information, including people walking around with mobile devices. Those are nodes in a network, in some sense. The more this data comes in to companies and individuals, the more they’re going to need to find patterns in it.

People don’t know what to do with all this data. When you go read all the internet-of-things stuff and ask the people who write it, “How are you going to use the data that this thing on the internet is generating?” they don’t really have good answers for you.

VB: It seemed like the common thread among them was pattern recognition, which is what the brain is good at, right?

Numenta's Grok recognizes patterns.

Above: Numenta’s Grok recognizes patterns.

Image Credit: Numenta

Dubinsky: Yeah, and that’s distinct from what computers do. Computers today, the classic architecture can do lots of amazing things, but they don’t do what the brain does. A computer can’t tell a cat from a dog. It seems like a simple problem, but it turns out to be a really hard one. Yet even a two-year-old can do it pretty reliably. The brain is just much better at pattern recognition, particularly in the face of noise and ambiguity, cutting through all that and figuring out what’s the true underlying pattern.

VB: Is it as simple at this point as, you have an application, it sits on top of your platform or engine, and it works?

Hawkins: Everything we showed at our open house, including the shipping product and the other applications we showed, are all built on that CLA. It’s all using essentially the same code base, just applied to different problems. We’ve verified and tested it in numerous ways to know that not only does it work, but it works very well.

VB: It seemed like the common thread was pattern recognition.

Hawkins: It’s pattern recognition, but you have to make a distinction. It’s time-based pattern recognition. There’s lots of pattern recognition algorithms out there, but there are very few that deal with time-based patterns. All the applications we showed were what we call streaming data problems. They’re problems where the data is collected in a continuous fashion and fed in a continuous way.

It’s not like you take a batch of data and say, “What’s the pattern here?” If the pattern is coming in, it’s more like a song. I would make that distinction. It’s time-based pattern recognition.

VB: How would you say it’s more efficient than, say, a Von Neumann computer?

Hawkins: I wouldn’t say it’s more efficient. I’d say the Von Neumann computer is a programmed computer. You write new programs to do different things. It executes algorithms. The brain and our algorithm are learning systems, not programs. We write software to run on a traditional computer to emulate a learning system. You can do this on a Von Neumann computer, but you still have to train it. We write this software and then have to stream in data, thousands of records or weeks of data. It’s not a very efficient way of going about it.

What we’re doing today, we’re emulating the brain’s learning ability on a computer using algorithms. It works, but you can’t get very big that way. You can’t make a dog or mouse or cat or human or monkey-sized brain. We can’t do that in software today. It would be too slow. A lot of people recognize this. We have to go hardware with something more designed for this kind of learning. New types of memory have to be designed – ones that are designed to work more like the brain, not like the memory in traditional computers.

VB: If you look at what other progress has been made, are you able to take advantage of some other developments in computer science or hardware out there?

Hawkins: Memory is a very hot area. With semiconductors, guys like it because they can scale really big. You’ve seen that already. Look what’s happened in flash memory over the last 10 years. It’s amazing. That’s just scratching the surface. They like it because it scales well, unlike cores and GPUs and things like that.

There are many different kinds of people. Some people are betting on stacked arrays. Some people are betting on other things. I don’t know which of those are going to play out in the end. We’re looking very closely at a company with some memory technology that they’re trying to apply to our algorithms. I’ve talked to other people, like a disc drive maker that does flash drives. They have a different approach as far as how to invent algorithms like ours. There’s a guy trying to use photonics on chip to solve memory-related problems. A lot of people are trying to do this stuff. We provide a very good target for them. They like our work. They say, “This sounds right. This might be around for decades. How will we try to work with these memory algorithms using our technology?” We’ll see, over time, which of these technologies play out. They can’t all be successful.

Adam Gazzaley of UCSF

Above: Adam Gazzaley of UCSF

Image Credit: Dean Takahashi

VB: I had a chance to listen to Adam Gazzaley from UCSF talk recently about treating brain disorders like Alzheimer’s using things like games to help improve memory. Have you looked at any of those areas to see whether you’re optimistic about the advances people are making?

Hawkins: Mike Merzenich is the guy who invented all the brain plasticity stuff. That’s been very helpful for us in terms of understanding how our model should learn. It’s been helpful from a neuroscience point of view.

I don’t know how much our work will influence that stuff, though. Most of the brain plasticity thing is about getting your brain to release certain chemicals that foster growth. Dopamine is the primary one. These brain exercises are designed to release dopamine so you get a reward system going. It’s pretty far from what we do. We’re still working on the basic, underlying neural tissue. We could explain how neural tissue is plastic, how it learns, how it forms representations, but that’s not much help as far as figuring out how to do better brain exercises. That’s more about game design and motivation and reward and things like that.

VB: The easiest things you might try could have been vision-related. Did you go down that path, exploring vision-related applications, before you came to these other ones?

Dubinsky: Yes, we did. We spent quite a bit of time on vision in earlier years. A couple of things happened with vision. One was that, after a lot of research, we came to believe that there wasn’t a large commercial opportunity in vision. But the other was as important as anything – vision starts to require some very specific algorithmic work that does not apply to all the other sensory domains. We decided we didn’t want to limit ourselves. Going down the vision path would mean becoming a vision company, and we wanted to be broader than that. We felt it was better to have everything else be our focus.

We could go back and do vision. It’s not to say that these algorithms could not do vision. But there are some very specific requirements that would have to be added back in.

VB: Was it because it would take you down a path of having a very specialized engine?

Hawkins: It’s not that it’s specialized. But it takes a huge amount of cortex to do human-level vision. If you look at the human neocortex, the amount of tissue dedicated to just seeing things is monstrous. It’s somewhere around 30 or 40 percent of your brain. Whereas if you look at the areas dedicated to language, spoken and written, it’s tiny by comparison. It turns out that from a neural tissue point of view, that’s far easier than seeing.

The problem we have with vision, if we want to do it the way the brain does it, we’re just not able to today. It’s too much. Our software would take forever to run. We need hardware acceleration for that. Unfortunately, that’s been an area that many people want to focus on. We started investigating it and implementing things and quickly realized it would be very difficult to do something that was practical and impressive. It would take days to run these things. So we said, “What can we do that’s different and that’s practical today?”

It’s the same algorithm, though. It’s not like it’s a different set of algorithms. It’s the same thing.

Brain training

Above: Brain training

Image Credit: Shutterstock

VB: One of the benefits that you still came up with is that even though there are other ways to solve some of these problems, whether it’s search or detecting anomalies in servers, those ways would burn up more computing power.

Dubinsky: It’s just that those ways have a lot of problems, more than just computing power. Let’s take the server anomaly detection as an example. The way they do it today is essentially with a threshold. You have a high threshold or a low threshold and if something goes over that, there’s an alert.

This doesn’t work very well. It generates a lot of false positives. It misses true positives. It can’t find complex patterns. It doesn’t learn. If you hit the threshold because you changed something, like increasing memory in a server so it performs differently, it just keeps hitting that until someone resets it, even though it should learn about the new environment. Thresholds cannot be done well today. It’s one of those unsolved problems, doing a good job on anomaly detection. Human brains do it very well and we think our technology can do it very well.
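The gap between a fixed threshold and a detector that learns its baseline can be shown with a simple rolling z-score. This is a generic statistical stand-in, not Numenta's HTM-based detector; the scenario (a server whose normal load shifts after an upgrade) mirrors Dubinsky's example:

```python
# Static threshold vs. a detector that learns the recent baseline.
# The adaptive detector here is a rolling z-score, NOT Numenta's
# HTM-based detector; it only illustrates why learning the new
# normal beats a fixed cutoff.

from collections import deque
from statistics import mean, pstdev

def static_alert(value, threshold=80.0):
    return value > threshold

def make_adaptive_detector(window=20, sigmas=3.0):
    recent = deque(maxlen=window)
    def alert(value):
        is_anomaly = False
        if len(recent) >= 5:   # need a few points before judging
            mu, sd = mean(recent), pstdev(recent)
            is_anomaly = sd > 0 and abs(value - mu) > sigmas * sd
        recent.append(value)   # keeps re-learning the baseline
        return is_anomaly
    return alert

detect = make_adaptive_detector()
# After an upgrade, the server's normal load shifts to around 90:
readings = [90.0, 91.0, 89.0, 90.5, 90.0, 89.5, 90.2]
print([static_alert(r) for r in readings])   # static: constant false alarms
print([detect(r) for r in readings])         # adaptive: learns the new normal
```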

VB: So it works better, but it’s also more efficient, and that adds up to a lot of saved hardware or computing power.

Dubinsky: It depends on the exact application. I don’t know that I’d say it’s more efficient across the board. It certainly is in some application areas. It’s more that it’s doing things that other systems can’t do. They can’t find patterns the way we can.

The fact that it’s a learning technology is a very important one too. If you think about the resources required to deploy some of these other technologies, if it’s something that has to be manually programmed or tuned in the beginning, sometimes all of the work to do that baselining can take a lot of work and computing power. The fact that this technology automatically learns is a key differentiator that we see amplified in not only the IT analytics space, but we expect in some of these other spaces as well.

Today, if you were a human data scientist with a really big model you wanted to build to discover patterns in the world – for all the windmills in a wind farm, for example – you just don’t have the human resources to separately model each individual windmill. To have an automated system that creates those models is the only way to ever address that. You can throw scientist after scientist at it, but there’s just not enough data scientists in the world to find patterns in all of the sensors and nodes that are generating data in the world today.

VB: Donna also brought up that some of the applications you have are geared toward solving problems that aren’t being solved some other way. It’s not necessarily about doing something more efficiently than a current data center.

Jeff Hawkins of Numenta

Above: Jeff Hawkins of Numenta

Image Credit: Numenta

Hawkins: There are two worlds here. There’s the academic world and the business world. On the academic side, there’s a set of benchmarks that everyone likes to go and test against. People spend literally years trying to get a tenth of a percent better performance over someone else’s algorithm. But very few of these things are practical. They’re academic exercises. They help progress the field somewhat. But we weren’t interested in doing that.

We’re interested in solving problems where we can say, “No one’s done this before. We’re doing a great job at it.” People haven’t done this before because they don’t know how. They’re problems they don’t have any algorithms for it. So from a business point of view, we took that approach.

VB: What applications came as surprises to you in the last year or so?

Dubinsky: It’s been fun to find applications where we can demonstrate the utility. We did the first one on AWS. The other ones we showed at the open house were all things we’re excited about, though. The internal intrusion is very similar to AWS, but using different data streams. That’s very interesting to us, as a way to demonstrate that this algorithm is data-agnostic. It’s not especially tuned for IT services. It could be anything. It could be monitoring your shopping cart on the internet. It could be monitoring the energy use in a building. It’s anything that’s a stream of flowing data.

The one that I’m excited about is the geo-spatial one, the idea of putting 2D and 3D data into the algorithm, finding patterns and making predictions. It’s about thinking about objects moving in the world. You know when they’re moving in unusual ways or ordinary ways. They can learn these ways, as opposed to having them programmed in. You think about trucking fleets going every which way. Instead of programmers with programs determining what is and isn’t normal as they go from point A to point B and where you set a threshold alert, you would just have the GPS data fed into this, and it would automatically figure that out for every individual truck. There’s so much continuity in what you can do with it in a geo-spatial sense.

Text is another area I’m excited about. We’re doing some really interesting work with another group on figuring out how to feed text into these algorithms and find semantic meaning in text.

VB: What else do you envision about the future here?

Numenta's Donna Dubinsky

Above: Numenta’s Donna Dubinsky

Image Credit: Numenta

Dubinsky: We’re just excited about being able to show stuff that works and has interesting results. We feel like what we’re doing is pretty neat. Not many people can do the degree of error-finding and anomaly detection that we’re doing. We think it has broad applicability. We’re looking forward to the next couple years as we prove that out.

VB: To sum up, it sounds like you’re very optimistic about where things are now.

Hawkins: Absolutely. I’m thrilled. This is a very difficult problem and we’re making amazingly good progress at it.

One of the biggest challenges we have is convincing computer scientists that understanding how the brain works is important. For many people, the prevailing attitude is, “I don’t know about brains. Why do I need to study brains?” The classic thing I hear is, “Planes don’t flap their wings. Why do intelligent machines have to be based on brain principles?” To which I reply, “Planes work on the principle of flight. That’s important. Propulsion is not the principle of flight.” Thinking and learning, you have to understand how brains do that before you say you don’t need to know about it. We have decades and decades of people making very little progress in machine learning toward where we want to be, which suggests that we really do need to study the brain.

So one of my biggest challenges is just getting this conversation about machine intelligence and machine learning toward how we can understand how brains work. One way to do that, of course, is to build applications that are successful. It’s growing. More and more people recognize this. But it’s still a challenge.

VB: What would you say keeps you going?

Hawkins: I can’t think of anything in the world more interesting than understanding our own brains. We have to understand our brains if we’re ever going to progress, explore the universe, figure out all the mysteries of science. I think there can be tremendous societal benefit in machines that learn, as much societal benefit as computers have had over the last 70 years. I feel a sense of historical obligation, almost. This is important. How could I not do it?

SEE ALSO: Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence

Join the conversation about this story »


The Robotics Industry Is Undergoing A Major Market Shift


The multibillion-dollar global market for robotics, long dominated by industrial and logistics uses, has begun to see a shift toward new consumer and office applications. 

The market for consumer and office robots will grow at a CAGR of 17% between 2014 and 2019, seven times faster than the market for manufacturing robots. 

By 2019, there will be a $1.5 billion market for consumer and business robots. 

In a recent report from BI Intelligence we assess the market for consumer and office robots, taking a close look at the three distinct categories within this market — home cleaning, telepresence, and home entertainment robots. We also examine the market for industrial manufacturing robots since it is the market where many robotics companies got their start, and remains the largest robot market by revenue. And finally, we assess the factors that might still limit the consumer robot market.  

Access The Full Report And Data By Signing Up >>

Here are some of the most important takeaways from the report:

In full, the report:

For full access to the Robot Report and all BI Intelligence's coverage of the mobile industry, sign up.


Scientists Are Afraid To Talk About The Robot Apocalypse, And That's A Problem


Working roboticists need to indulge the public in sci-fi scenarios.

I thought it'd be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover, but four big names in the area declined to talk to me. A fifth person with robo street cred told me on background that people in the community fear that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough.

This is a problem. A good roboticist should have a finger on the pulse of the public's popular conception of robotics and be able to speak to it. The public doesn't care about "degrees of freedom" or "state estimation and optimization for mobile robot navigation," but give a robot a gun and a mission, and they're enthralled.

More importantly, as I heard from the few roboticists who spoke to me on the record, there are real risks involved going forward, and the time to have a serious discussion about the development and regulation of robots is now.


Most people agree that the robot revolution will have benefits. People disagree about the risks.

Author and physicist Louis Del Monte told us that the robot uprising "won't be the 'Terminator' scenario, not a war. In the early part of the post-singularity world — after robots become smarter than humans — one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool."

But according to Del Monte, the real danger occurs when self-aware machines realize they share the planet with humans. They "might view us the same way we view harmful insects" because humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses."

Frank Tobe, editor and publisher of the business-focused Robot Report, subscribes to the views of Google futurist Ray Kurzweil on the singularity, that we're close to developing machines that can outperform the human mind, perhaps by 2045. He says we shouldn't take this lightly.

"I’ve become concerned that now is the time to set in motion limits, controls, and guidelines for the development and deployment of future robotic-like devices," Tobe told Business Insider.

"It’s time to decide whether future robots will have superpowers — which themselves will be subject to exponential rates of progress — or be limited to services under man’s control," Tobe said. "Superman or valet? I choose the latter, but I’m concerned that politicians and governments, particularly their departments of defense and industry lobbyists, will choose the former."

Kurzweil contends that as various research projects plumb the depths of the human brain with software (such as the Blue Brain Project, the Human Brain Project, and the BRAIN Initiative), humankind itself will be improved by offshoot therapies and implants.

"This seems logical to me," Tobe said. "Nevertheless, until we choose the valet option, we have to be wary that sociopathic behaviors can be programmed into future bots with unimaginable consequences."

Ryan Calo, an assistant professor of law at the University of Washington with an eye on robot ethics and policy, does not see a machine uprising ever happening: "Based on what I read, and on conversations I have had with a wide variety of roboticists and computer scientists, I do not believe machines will surpass human intelligence — in the sense of achieving 'strong' or 'general' AI — in the foreseeable future. Even if processing power continues to advance, we would need an achievement in software on par with the work of Mozart to reproduce consciousness."

Calo adds, however, that we should watch for warnings leading up to a potential singularity moment. If we see robots become more multipurpose and contextually aware then they may then be "on their way to strong AI," says Calo. That will be a tip that they're advancing to the point of danger for humans.

Calo has also recently said that robotic capability needs to be regulated.

Andra Keay, managing director of Silicon Valley Robotics, also doesn't foresee a guns a' blazin' robot war, but she says there are issues we should confront: "I don't believe in a head-on conflict between humans and machines, but I do think that machines may profoundly change the way we live and unless we pay attention to the shifting economical and ethical boundaries, then we will create a worse world for the future," she said. "It's up to us."

In contrast, Jorge Heraud, CEO of agricultural robotics company Blue River Technology, offers a fairly middle-of-the-road point of view: "Yes, someday [robots and machines] will [surpass human intelligence]. Early on, robots/machines will be better at some tasks and (much) worse at others. It'll take a very long while until a single robot/machine will surpass human intelligence in a broad number of tasks. [It will be] much longer until it's better in all."

When asked if the singularity would look like a missing scene from "Terminator" or if it would be more subtle than that, Heraud said, "Much more subtle. Think C-3PO. We don't have anything to worry for a long while."

Regardless of the risk, it shouldn't be controversial that we need to discuss and regulate the future of robotics.

Northwestern Law professor John O. McGinnis makes clear how we can win the robot revolution right now in his paper, "Accelerating AI" [emphasis ours]:

Even a non-anthropomorphic human intelligence still could pose threats to mankind, but they are probably manageable threats.  The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.

Long before any battle scenes ripped from science fiction actually take place, the real battle will be in the hands of the people building and designing artificially intelligent systems. Many of the same people who declined to be interviewed for this story are the ones who must stand up as heroes to save humanity from blockbuster science fiction terror in the real world.

Forget the missiles and lasers — the only weapons of consequence here will be algorithms and the human minds creating them.

***

We asked our interview subjects for book and movie recommendations that pertain to this topic. Their responses are below.

Ryan Calo: "I would recommend 'The Machine Stops' by E.M. Forster for an eerie if exaggerated account of where technology could take the human condition."

Frank Tobe: "The James Martin Institute for Science and Civilization at the University of Oxford produced a video moderated by Michael Douglas entitled 'The Meaning of the 21st Century' and wrote a book with the same title. It might be worth your time to watch the short version: 'Revolution in Oxford'."

Andra Keay: "I enjoy Daniel Wilson's books, but also the sci-fi of Octavia Butler and other writers who delve into the different inner lives that simple changes in biology create, whether human, alien, or robot."

Jorge Heraud: "'Star Wars'."

SEE ALSO: By 2045 'The Top Species Will No Longer Be Humans,' And That Could Be A Problem


Here's What We Know About The Secretive, Elon Musk-Backed Firm Creating Functional Artificial Intelligence


Silicon Valley enjoys something of a monopoly these days on making the most noise in the U.S. economy.

But there's been surprisingly little fanfare surrounding the latest project that seemingly all the most successful Valleyites have been pouring into: an artificial intelligence company called Vicarious.

Founded in 2010, Vicarious' list of investors is dazzling: Elon Musk, Mark Zuckerberg, Jeff Bezos, Ashton Kutcher, and Dustin Moskowitz count among the household names. PayPal cofounder Peter Thiel, along with folks from startup funders Y Combinator and cloud storage group Box, also number among those who've provided funding. 

For Aydin Senkut, whose Felicis Ventures was one of Vicarious' earliest investors, there is agreement among top Valley VCs that now is the time to get into artificial intelligence and machine learning, because it is going to change everyone's lives irrevocably.

"AI and machine learning is a really big deal," he told us by phone recently. "For the longest time no one took it very seriously, but... more and more companies are now seeing how far they've fallen behind in this area, and how it's critically important to have this capability."

What exactly is this capability? Despite its marquee backers, Vicarious has gained a reputation for secrecy. Co-founder Scott Phoenix, a computer scientist and designer, told us his team was not currently doing interviews. The other co-founder, a neuroscientist named Dileep George, did not respond to several requests for comment.

But in previous interviews, George has discussed some of the potential practical applications for artificial intelligence. In an interview with NBC's PressHere in 2012, he described how an "improved Siri" could someday be smart enough to handle complex commands from any speaker, even ones with thick accents, for things like booking air tickets without having to click through a bunch of screens. In the nearer term, which is still measured in years, AI capabilities will be sufficiently advanced to perform medical diagnoses, or to recognize images that don't contain any preexisting text tags.

George and Phoenix call the underlying technology that powers these applications recursive cortical networking. RCN means teaching computers to model brain functions — specifically, those of the neocortex, the part responsible for sensory processing. As George, who left an AI venture created by Palm founder Jeff Hawkins to found Vicarious, told KurzweilAI in 2012: "My goals have always been to embody the computational principles of the brain in a mathematical model, but RCN is a ground-up rethinking of what kind of algorithmic approach is necessary to solve the problem."

Vicarious' breakthroughs are still in their infancy, but they've posted a demonstration of something it can already do: break a CAPTCHA security device. We've GIF'ed how it works here:

In this demonstration, it takes a Gmail CAPTCHA and drops it into its algorithm...

...AND THEN IT READS IT IN NANOSECONDS.
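Vicarious hasn't published how RCN does this, but the bare-bones task of recognizing a character despite distortion can be illustrated with a toy nearest-template classifier. The glyphs and function names below are invented for the demo; a real CAPTCHA breaker is vastly more sophisticated:

```python
# Toy character recognizer: classify a (possibly noisy) 5x5 bitmap by
# counting mismatched pixels against known glyph templates. Purely
# illustrative -- Vicarious's RCN is a far richer model of the visual
# cortex, and these two glyphs are invented for the example.
TEMPLATES = {
    "T": "#####"
         "..#.."
         "..#.."
         "..#.."
         "..#..",
    "L": "#...."
         "#...."
         "#...."
         "#...."
         "#####",
}

def classify(bitmap: str) -> str:
    """Return the template letter with the fewest mismatched pixels."""
    return min(TEMPLATES, key=lambda ch: sum(
        a != b for a, b in zip(TEMPLATES[ch], bitmap)))

# Even with a pixel flipped, simulating CAPTCHA-style distortion,
# the nearest template still wins:
noisy_t = ("#####"
           "..#.."
           ".##.."
           "..#.."
           "..#..")
print(classify(noisy_t))  # "T"
```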

The Valley seems to be hungry for AI in general. But if there's any kind of AI arms race on, the field remains pretty narrow, if only because there are so few people qualified to lead the way. Facebook recently hired its own specialist in charge of AI, NYU's Yann LeCun, but he and his team remain focused on how to improve Facebook's own functions. A more direct rival of sorts to Vicarious is a firm called DeepMind, which Google bought for $400 million earlier this year. Ironically, Vicarious and DeepMind both share Thiel's Founders Fund as a backer, which confirms how narrow the space remains, but also how VCs are attempting to get a piece of as much AI action as they can. Recode reported in January that London-based DeepMind is working on projects similar to Vicarious', like advanced image recognition, though they too are quite cagey about what exactly they're up to.

"The reality is, there are a very limited number of AI and machine learning experts in the world, which is one reason why it's been getting so much attention," Senkut says. "It is such an important field, and [DeepMind] is one of few that are thinking very big and ambitious."

In fact, the most publicly accessible AI projects are coming from the government.  Sometime between 2006 and 2007, the Director of National Intelligence began earmarking funds for IARPA, short for Intelligence Advanced Research Projects Activity. Its goal was to start developing technology for the country's 16 different spy agencies. IARPA is itself looking to accelerate its image-reading capabilities through a program called JANUS. It's also hoping to develop a technology that can "[understand] human interactions that involve trust and trustworthiness." 

Bruno Olshausen, a Vicarious adviser and neuroscience professor at Berkeley, told us that the most exciting research IARPA is conducting is in a field called connectomics. The goal is nothing less than recreating the human brain. The output from the field will make the aforementioned projects look prehistoric.

"Evolution discovered all these secrets — like building an eye — about how to build good, simple processing," he told us. "This is something computers cannot do now. But when you look at a brain under a microscope, you're basically looking at the solution, you're looking at a microchip."

Last month, Elon Musk, who came on board as a backer in a $40 million funding round that also included Zuckerberg and Kutcher,  said one of the reasons he'd invested in Vicarious was to keep an eye on unexpected negative developments in AI — basically, a "SkyNet" scenario.

Olshausen says that scenario remains a remote possibility. Our knowledge of how the brain works is more or less where our knowledge of physics was before Newton: nearly useless.

"Absent a major paradigm shift - something unforeseeable at present - I would not say we are at the point where we should truly be worried about AI going out of control," he said in a follow-up email. "That is not to say that we shouldn’t worry about how *humans* will use machines or engage in warfare via machines - e.g., for domestic spying, foreign espionage, hacking attacks and the like. But in the meantime we can rest easy knowing that computers themselves are not going to take over the world anytime soon, or in the foreseeable future."

The AI crew is playing a very long game — there have been reports that Vicarious makes anyone who comes on sign something that says they will not ask about short-term progress or profits. But Senkut believes that as novel as it sounds now, we will someday be taking AI for granted.

"It's unstoppable," he said. "This thing is going to be here before we know it, like with HTTP distribution coming out in the '70s, I don't think people realized it was going to give birth to the Internet. It's not like, Oh my god, what's the next thing in a few months. I'm just really excited that it's going to be an enabling platform, that’s something I don't even have to speculate about."


Elon Musk: Artificial Intelligence Is 'Potentially More Dangerous Than Nukes'


Back in June, Tesla CEO Elon Musk told CNBC that he'd invested in a company called Vicarious that is developing products and services based on artificial intelligence. But that wasn't why Musk got interested. His impetus for backing the firm was instead "to keep an eye on" unforeseen terrifying scenarios where the products began to threaten humanity. 

He doesn't appear to have been exaggerating. 

In a Tweet last night, Musk said this:

Bostrom is Nick Bostrom, the founder of Oxford’s Future of Humanity Institute. That group recently partnered with a new group at Cambridge, the Centre for the Study of Existential Risk, to study how things like nanotechnology, robotics, artificial intelligence and other innovations could someday wipe us all out, according to PCPro:

At [a] conference, Bostrom was asked if we should be scared by new technology. "Yes," he said, "but scared about the right things. There are huge existential threats, these are threats to the very survival of life on Earth, from machine intelligence – not the way it is today, but if we achieve this sort of super-intelligence in the future," Bostrom said.

"Superintelligence" is set to be published in English next month. In a blurb, Bostrom's colleague Martin Rees of Cambridge says of the work, "Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." 

In our recent profile of Vicarious, the firm backed by Musk, we talked to Bruno Olshausen, a Berkeley professor and one of the firm's advisors. He said we are still way too far behind in our understanding of how the brain works to be able to create something that could turn heel. 

"Absent a major paradigm shift - something unforeseeable at present - I would not say we are at the point where we should truly be worried about AI going out of control," he told us. 

So at a minimum, it sounds like the robot takeover is not imminent. 

But it seems like it's something all of us should "keep an eye on."

SEE ALSO: Elon Musk Says Killer Robots Could Chase Us To Mars


The Rise Of The Roomba Vacuum Shows There Is Huge Business Potential In The Home Robot Industry


iRobot, a Massachusetts company, has shipped more than 10 million Roomba robotic vacuums since the device launched in 2002, and it's shipping over 1 million Roomba units annually.

In a new report from BI Intelligence we reveal that iRobot has shipped over 6 million home robots, including Roombas, in the last four and a half years. We also assess the market for consumer and office robots, taking a close look at the three distinct applications within this market, and how this emerging category now represents nearly all the growth in the increasingly diverse global robotics industry. 

Consider: 

  • We believe the market for home and office robots will grow from $673 million in 2014 to $1.5 billion in 2019, for a five-year CAGR of 17.39%. Note: Our estimate for the size of the consumer/office robot market excludes devices marketed to children as toys. 
  • That means the market for consumer and office robots is growing seven times faster than the one for industrial robots
  • One company, iRobot, is set to break the $500 million revenue mark this year on the back of home robots. 
  • iRobot's success has attracted major manufacturers to the robotic floor cleaning market, including LG, Samsung, Neato, Hoover and others. Robotic vacuums have only achieved ~15% penetration in North America, Europe, and Asia. (See chart, below.)
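As a quick sanity check on the figures above, the five-year compound annual growth rate works out exactly as the report states:

```python
# Verify the report's CAGR claim: $673M (2014) growing to $1.5B (2019).
# CAGR = (end / start) ** (1 / years) - 1.
start, end, years = 673.0, 1500.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")  # 17.39%, matching the report's figure
```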

Access The Full Report And Data Today By Signing Up For BI Intelligence

In full, the report:

Sign up today for full access to the report on robots and all BI Intelligence's coverage of the mobile, payments, e-commerce, and digital media industries.

[Chart: BI Intelligence, robotic vacuum market]

