Channel: Artificial Intelligence

This Google and Apple alum now wants to bring a little 'magic' into your workday



These days, the biggest tech companies are trying to ramp up the way your phone can act as a virtual assistant for dishing out contextual information.

Google, Apple, and Microsoft have pushed the capabilities of their assistants — Google Now, Siri, and Cortana — forward, and even Facebook just announced its own new service called "M."

But while those companies have been trying to find ways to use machine learning and artificial intelligence to answer questions and surface information about local restaurants or how long it'll take you to get to the airport, a London-based startup called Gluru has spent the last year and a half building a system specifically for professionals.

"Gluru is a virtual assistant for your work flow," founder and CEO Tim Porter tells Business Insider. "It organizes your files and then connects them to you at the most important moments."

In short, he wants to bring a little "magic" into users' work routines.

Here's how it works: Download Gluru's free Android and desktop apps and connect them with your Google accounts. Dropbox, Evernote, Microsoft OneDrive, and Box integrations are available, too.

Gluru's machine learning system will start digging through your Google Docs, calendar, and emails, using natural language processing and contextual knowledge of file contents to surface important information. A "daily digest" section each morning shows what meetings you have coming up, which documents you'll likely need for each, and information about the people it thinks you'll be working with that day.

Here's how a daily digest might look:


For example, when Business Insider hopped on the phone with Porter, Gluru had surfaced for him Google Docs with his company roadmaps and revenue model — since it had inferred from the calendar information that he was being interviewed about his startup.

Porter shared other recent examples of how Gluru has worked for him.

Because he had also connected Gluru to his phone, when his mother called, it surfaced photos from his recent vacation, intuiting that "mom" in his address book might want to see personal content that he'd uploaded to Google Drive. When his accountant called, it surfaced a pertinent financial email.

Porter, an alum of Apple and Google, wants Gluru to help businesspeople be more productive, saving them time they would otherwise have wasted digging through files to prepare for conversations or meetings.

Besides the daily digest, the search function acts like a "super-charged file explorer," pulling from all the sources you give it access to. Porter estimates that Gluru's ability to unlock data could save users at least 165 hours — about one full week — per year, and the company will soon launch a tool that tells users exactly how much time Gluru has saved them.

The startup, which had hundreds of companies using it in beta before launch, has raised $1.5 million in seed funding, and the team is working on integrations with Salesforce and other project-management and customer-relationship-management systems. Right now, it's free for individuals, with enterprise pricing negotiated company by company.

Porter makes no show of modesty when he talks about the machine learning and data science required to work Gluru's magic:

This is a really, really hard thing to do. We have a number of patents that we've filed behind this technology, we have several Ph.D.s in machine learning onboard. There's no other product in the market that achieves the use cases that we've created here. We're really excited.



I used IBM's supercomputer to make dinner — and the results surprised me



IBM's supercomputer Watson, famous for obliterating Jeopardy champions, is now taking on culinary experts.

The free Chef Watson app takes the user's requested meal, style and ingredients, and concocts a recipe based on its knowledge of flavors that pair well.

Chef Watson first had to undergo its own culinary training, via machine learning: it analyzed 10,000 Bon Appetit recipes, looking for ingredients that often appeared together and for plausible substitutions.

I'm an adventurous eater, but I don't usually cook elaborate meals. So I wanted to try it for myself. To test Chef Watson, I followed the ingredients and steps as accurately as possible — but encountered a few surprises.

Scroll down to see how the experiment went.

After opening the app, I'm immediately overwhelmed by choices. I look for a difficulty setting to simplify the process, but there isn't one. I'll search by style instead.



Hmm, a "Labor Day" recipe category. There I find the gem "Labor Day Mayonnaise Meat Salad," with pineapple and mayonnaise. But I don't think I'd like that. Let's try Mexican?



Instead I find more meat salads...




Robots will do almost every job better than humans



Of all the changes that sophisticated artificial intelligence (AI) will bring, the biggest may be changing the job market as we know it.

A 2013 Oxford study projected that an estimated 47% of all employment in the United States is at risk of being automated.

But Toby Walsh, a professor of AI at National Information and Communications Technology Australia, told Tech Insider that in the foreseeable future, that number could be much, much worse.

"It's hard to think of a job that a computer ultimately won't be able to do as well if not better than we can do," Walsh told Tech Insider.

Armed with machine learning, a method that allows AI to "learn" from its mistakes, AI systems are getting better at tasks they previously struggled with — vision, translation, and language.

Combined with increasing computing capacity and cheaper costs, Walsh said future AI might be part of a perfect storm of technologies that have the capability to completely transform society.

"There are various forces in play, and one of those forces is technology, and technology is actually a force that is tending to concentrate and widen the inequality gaps," Walsh said. "This is a challenge not for scientists but one for society to address, of how are we going to work through these changes."

While we've bounced back from shifts in labor before, like the mechanization of physical work during the industrial revolution, he said a future automated revolution is likely to happen much more rapidly.

"The changes that we see precipitated by changes in computing are ones that tend to happen very, very quickly," Walsh said. "The challenge there is that society tends to change rather slowly."

Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," agrees. He told Tech Insider that in the near future of automation, even white-collar professions like law won't be robot-proof.

Any job that involves many routine and structured tasks is at risk of automation, Kaplan said: "Even for what you think of as highly-trained, highly-skilled, intuitive personable professions, it is still true that the vast majority of the work is routine."

While there isn't one secret key to unlocking success in the workplace alongside robotic colleagues, Walsh recommends turning to artistic and creative work, which he considers the last stand for human jobs as we know them.

"Go into the most people-facing, artistic, creative places that you can think of," Walsh told Tech Insider. "The artists of the world are, for a long time, still going to be real physical people. The people who are in most people-facing, sociological, empathetic jobs are going to be people."

But creative jobs won't be completely safe from the robotic workforce.

"Even there, you're not completely safe," Walsh said. "But I think you'll be safer for longer, at least."



The real problem with artificial intelligence



Artificial intelligence has gotten a bad rap lately.

Stephen Hawking, Bill Gates and Tesla Motors CEO Elon Musk are just a few notable people who have warned about the perilous consequences associated with developing AI systems.

"The development of full artificial intelligence could spell the end of the human race," Hawking cautioned last year.

Musk has even donated millions of dollars to the Future of Life Institute (FLI) to fund a program with the goal of making sure humans manage AI so it doesn’t destroy us.

But artificial intelligence in itself isn’t really dangerous, Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, said at DARPA’s "Wait, What?" conference on Thursday.

Rather, the real threat stems from making AI systems completely autonomous, he said.

The real threat is autonomy 

AI is basically smart software that enables machines to mimic human behavior. For many people, it is already a part of daily life. Apple's Siri, Google Now, and Skype's Real-Time Translation tool are all examples of artificial intelligence. 

Some AI systems incorporate many different components, like computer vision, speech recognition, and tactile feedback and touch systems. All of these sensory modalities give computers the ability to sense as well as, or even better than, humans. The collected data can then be used to plan or take action.

For example, the autopilot systems used on commercial aircraft are AI systems that help safely fly the plane.

But when people like Musk or Hawking warn about AI, they are cautioning against giving AI systems complete autonomy — which isn’t something that happens naturally, Dietterich said.

“A misconception I think people have is that somehow these systems will develop free will on their own and become autonomous even though we didn’t design them with that in mind... AI systems will not become spontaneously autonomous, they will need to be designed that way,” Dietterich said.  

"So I think the danger of AI is not so much in artificial intelligence itself, in its ability to reason and learn, but in the autonomy. What should we give computers control over?"

In other words, computers won't just take over the world someday, unless we design them to. 

Keeping AI under control

The Future of Life Institute proposes that for safety-critical AI systems, like those in weapons and vehicles, it may be worth considering some form of human control. Dietterich goes even further, saying he believes we should never build fully autonomous AI systems.


“By definition a fully autonomous system is one that we have no control over,” Dietterich said. “And I don't think we ever want to be in that situation.”

But given that many companies are already investing heavily in this technology, it seems inevitable that these systems will come into existence.

The potential applications of autonomous AI are already being proposed and many of them involve high-stakes decision making. Think self-driving cars, autonomous weapons, AI hedge funds, and automated surgical assistants. The risks associated with these AI systems must be addressed before it's safe to completely implement them, Dietterich said. 

As Elon Musk said during an AI panel in Silicon Valley in August: "It's definitely going to happen. So if it's going to happen, what's the best way for it to happen?"


It just got a lot easier to find great supercomputer-created recipes on Chef Watson


IBM's supercomputer chef, built on Watson, is a marvel of artificial intelligence, but it isn't always perfect in actual use.

The free Chef Watson app takes the user's requested meal, style and ingredients, and concocts a recipe based on its knowledge of flavors that pair well. Sounds great!

But when I tried Chef Watson, I had a hard time finding a recipe that used easy-to-find ingredients and wasn't too elaborate to whip up in an hour.

At first glance, I was overwhelmed by the categories and ran into a stumbling block almost immediately when I couldn't find a difficulty setting.

But according to a tip from a friendly commenter on Tech Insider (later revealed to be Florian Pinel, a senior software engineer at the IBM T.J. Watson Research Center), a new feature added just this week might make Chef Watson a lot easier to use, especially for folks like me who guard their time.

Log in to Chef Watson as usual and search for "easy" in the search field.

And voila! A variety of easy and quick recipes pop up.

To get a different combination of ingredients and new dishes, click "more" or search for different ingredients.

On the first try, I found a lovely recipe that would make for a delicious lunch: Easy Dried Mushroom Couscous Dish.

The dish has just two steps, and it's something I'll likely be trying.


Pinel told Tech Insider by email that the IBM team has been working on the easy feature for a couple of weeks.

"This is really a first attempt, to see how people react," Pinel said.

Chef Watson invented the couscous dish after analyzing over 10,000 Bon Appetit recipes. For the easy feature, it scans about 2,000 Bon Appetit recipes tagged with the keywords "easy" or "quick," according to Pinel. Eventually the easy setting may find a different home on the main app page.

Chef Watson cooks up these recipes after a unique kind of culinary education called machine learning. According to the Washington Post, the app "ingests a huge amount of unstructured data — recipes, books, academic studies, tweets — and analyzes it for patterns the human eye wouldn't detect."

It then takes the user's requested ingredients or style and throws together a recipe based on its memory and accumulated knowledge of flavors that work together. In other words, it looked for statistical correlations among ingredients that tended to appear together.
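That correlation-counting step is easy to sketch. Here is a toy version (my own illustration, not IBM's actual pipeline) that tallies how often ingredient pairs show up together across a small recipe corpus:

```python
from collections import Counter
from itertools import combinations

def pair_counts(recipes):
    """Count how often each ingredient pair appears in the same recipe.

    Pairs are stored in sorted order so (a, b) and (b, a) count as one key.
    """
    counts = Counter()
    for ingredients in recipes:
        for a, b in combinations(sorted(set(ingredients)), 2):
            counts[(a, b)] += 1
    return counts

# Tiny hypothetical corpus standing in for Bon Appetit's recipes.
recipes = [
    {"tomato", "basil", "mozzarella"},
    {"tomato", "basil", "garlic"},
    {"garlic", "mushroom", "couscous"},
]

counts = pair_counts(recipes)
print(counts[("basil", "tomato")])  # 2 -- basil and tomato co-occur in two recipes
```

A real system would normalize these counts into association scores (and, per the article, mine substitution patterns too), but co-occurrence tallies like these are the starting point.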

Chef Watson also wrote a cookbook with 65 original recipes after analyzing more than 30,000 recipes. If the recipes in the book are too complicated, give the web app a shot. It won't take you long.


Computers could eventually write and improve their own code — putting programmers out of jobs



Step aside "Terminator"— moviemakers looking to make accurate dystopian movies should instead take cues from "The Grapes of Wrath."

That's because robots won't take our lives — they're more likely to take our livelihoods.

And it's looking more and more likely that these artificially intelligent programs will even take over the job of making themselves — some researchers think it's likely that software engineers will one day be supplanted by intelligent software that can copy, write, and improve programs itself.

"I can envision systems that become better and better at writing software," Bart Selman, a computer scientist at Cornell University, said. "A person complemented with an intelligent system can write maybe ten times as much code, maybe a hundred times as much code. The problem then becomes you need a hundred times fewer human programmers."

An oft-cited 2013 Oxford study estimated that programmers and software engineers have just an 8% chance of automation in the next 20 years. Selman disagrees. He said that number will end up being much bigger, and we should be ready for it to skyrocket in the future.

Given time, some AI systems, especially those using a methodology called machine learning that allows AI to improve over time, will get better and faster than humans at writing software.

"[Software engineer] looks fairly safe right now," Selman said. "But you know, 20 to 30 years from now that might be different."

And of course software engineers aren't the only jobs at risk. The Oxford study projected that 47% of all employment in the US is likely to be automated by 2030.

But Toby Walsh, a professor of artificial intelligence at National Information and Communications Technology Australia, told Tech Insider that eventually those numbers might be much worse.

"It's hard to think of a job that a computer ultimately won't be able to do as well if not better than we can do," Walsh told Tech Insider.

Others have noted that many jobs in, for example, the legal field, could also be disappearing soon.

The idea that computer scientists and lawyers might lose their jobs to the products they built is jarring, but Selman said there's no reason it wouldn't happen once robotic labor becomes cheaper than human labor. That humans will always be smarter or more efficient than the AI systems they build is a misconception.

"Chess is actually a good example," Selman said. "The programs are generally written by people who are fairly bad chess players. As a programmer, you can write a program that can do a task much better than you can."


A new program can recreate how Vincent van Gogh painted the world


Vincent van Gogh's artistic genius took the art scene by storm shortly after his death. The tortured artist's unique vision of the world still captivates today.

Now scientists from the Bethge Lab in Germany have demystified how the influential painter interpreted the world using an artificially intelligent (AI) system that can learn any artist's style, like the swirls and dots that are characteristic of van Gogh's work, and replicate it on other images.

According to Leon Gatys, a PhD student and the lead author of the paper posted to the preprint server arXiv, this is the first "artificial neural system that achieves a separation of image content from style."

To do this, the AI system had to first be "taught" the features of the famous painting before replicating it. The scientists had it analyze van Gogh's most famous painting, "Starry Night."

The scientists fed the painting into the system, which is composed of stacked layers of computing units that imitate the interconnected structure of the cells of the brain. The program analyzed the different layers of color and structure in the painting to discover van Gogh's painting style.

They also had to give the AI the image it was supposed to paint in van Gogh's style. They used a photo of a river in their hometown: the Neckar in Tuebingen, Germany.

The AI works like an assembly line, with each layer responsible for one thing. The lower layers identify the painting's simple details, like dots and strokes. Upper layers recognize more sophisticated features, like the use of color.

The figure below shows how each layer of the AI took the style of van Gogh's painting, detail by detail, and applied it to the photo of the river.

The result looks as though van Gogh was standing at the riverbank in Germany rather than at his window in Saint-Remy-de-Provence, the original setting for "Starry Night."

The AI didn't just take colors and place them in corresponding areas of the photo. The program was able to make sense of the original photo's shadows and highlights and, in a sense, actually "understand" the scene.

Gatys wrote that the AI gives us a mathematical basis for understanding how humans, including one of the most influential artists in the world, perceive and create art because the program can mimic biological vision and the human brain.
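In the Gatys paper, the "style" of a layer is summarized by the correlations between its feature maps, collected in a Gram matrix; matching Gram matrices while separately matching content is what separates style from content. A minimal numpy sketch of the style side, with random arrays standing in for a real network's activations:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature maps.

    `features` has shape (channels, positions): each row is one feature map
    flattened over the image. The Gram matrix is (channels, channels).
    """
    return features @ features.T

def style_loss(gen_features, style_features):
    """Mean squared difference between the two layers' Gram matrices."""
    return np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)

rng = np.random.default_rng(0)
style = rng.standard_normal((4, 100))  # 4 feature maps over 100 spatial positions
print(style_loss(style.copy(), style))  # 0.0 -- identical features, identical style
```

In the actual method this loss is computed at several network layers and minimized (together with a content loss) by gradient descent on the generated image's pixels; the sketch only shows the core statistic.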

What van Gogh actually saw or thought while painting is still elusive, but the algorithm gives an interesting view into the patterns that shaped his creations.



Ethicists are trying to ban sex robots



This summer, thousands of artificial-intelligence researchers called for a ban on robot killers.

And now, in an interesting twist, ethicists have started calling for a ban on robotic lovers — sex robots.

Artificially intelligent (AI) sex robots that could pass for humans don't currently exist, but the Campaign Against Sex Robots thinks researchers and ethicists need to get ahead of the issue before they become a reality.

"Sex robots seem to be a growing focus in the robotics industry, and the models that they — how they will look, what roles they would play — are very disturbing indeed," Kathleen Richardson, robot anthropologist and ethicist at the De Montfort University, told the BBC.

Richardson and Erik Billing, a robotics and cognitive-science researcher from the University of Skovde in Sweden, are leading the charge to ban sex robots. Their new paper, decrying the development of sex robots, is available on their website.

Here's their argument:

  • Sex robots would lead to more objectification of women and children.
  • The relationships between humans and their sex robots would mirror that of the prostitute and the john. The john has all the power and the prostitute is reduced to an object, "just like a robot."
  • People who frequently use sex robots would be deprived of the benefits of relationships with real humans.
  • The widespread use of sex robots wouldn't reduce the exploitation and trafficking of prostitutes. According to the paper, "all the evidence shows how technology and the sex trade coexist"; widespread use could actually create more demand for human prostitutes.

At the rate that AI's ability to mimic human movements and conversational skills is improving, full-size sex toys that look and act like humans aren't all that far away.

Philosopher Nick Bostrom surveyed 550 AI researchers to gauge when they think human-level AI, or AI that has the full breadth of capabilities an average human has, would be possible. The researchers responded that there is a 50% chance that it will be possible between 2040 and 2050, and a 90% chance that it will be built by 2075.

While Bostrom wasn't talking specifically about sex robots, AI and robotics developing that quickly would have widespread consequences for all different areas of our lives.

According to Wired, a company called True Companion claims to have built the first sex robot, slated to be released later this year. Douglas Hines, the president of True Companion, told the BBC that the sex robots would be "a solution for people who are between relationships or someone who has lost a spouse" but aren't meant to replace real human beings.

But David Levy, author of "Love and Sex with Robots," told the BBC that humans and robots in intimate relationships will be a common sight by 2050.

"There is an increasing number of people who find it difficult to form relationships," Levy told the BBC. Sex robots, he said, "will fill a void."


This terrifying tool shows you whether robots will take your job


If artificial-intelligence researchers, entrepreneurs and economists are to be believed, the robotic onslaught that will annihilate our jobs and transform all of society is nigh.

Thankfully, the BBC assembled a handy guide that calculates which jobs are likely to be automated in the next 20 years, based on data from a 2013 Oxford study.

The Oxford study rated how much each profession relies on nine skills — including "social perceptiveness, negotiation, persuasion, assisting and caring for others, originality, fine arts, finger dexterity, manual dexterity, and the need for a cramped work space."
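The study treats those nine skills as "bottlenecks" that resist automation: the more a job depends on them, the lower its estimated risk. As a purely hypothetical illustration (the actual study fit a statistical classifier; these weights and this linear rule are invented), a bottleneck-weighted score might look like:

```python
# Toy illustration only: the nine skill names come from the Oxford study,
# but the weights and this linear scoring rule are hypothetical.
SKILLS = [
    "social perceptiveness", "negotiation", "persuasion",
    "assisting and caring for others", "originality", "fine arts",
    "finger dexterity", "manual dexterity", "cramped work space",
]

def automation_risk(skill_levels, weights):
    """1 minus a weighted sum of bottleneck-skill requirements (each in [0, 1])."""
    protection = sum(weights[s] * skill_levels.get(s, 0.0) for s in SKILLS)
    return max(0.0, 1.0 - protection)

equal_weights = {s: 1 / 9 for s in SKILLS}
# Hypothetical skill-requirement profile for a journalist:
journalist = {"originality": 0.9, "persuasion": 0.8, "social perceptiveness": 0.7}
print(round(automation_risk(journalist, equal_weights), 2))  # 0.73
```

The point of the sketch is only the shape of the reasoning: jobs heavy in these human-centered skills score low risk, and jobs with none of them score high.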

Just enter your profession, and click "Find my automation risk" to see where it falls on the scale.

As an example, I entered journalist, which has an 8% likelihood of automation. Here's part of the report the calculator spits out for that job:

The report also includes details about the current workforce in that job and how it has been trending over time. Check out your job on the calculator.

According to the BBC, the careers most immune to automation require employees to negotiate, help and assist others, and come up with creative ideas. The jobs least likely to be automated include social workers, nurses, therapists, and professions built on creative and original ideas, like artists and engineers.

On the other hand, the jobs most likely to be taken over by AI or robots require people to squeeze into small spaces, assemble objects, or manipulate small objects. The three jobs most likely to be automated are telephone salespersons, typists, and legal secretaries. In fact, some of these jobs are already being automated.

Here's the report for a legal secretary. It's not looking good:

Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," told Tech Insider that anyone who toils through many "repetitive and structured" tasks for a living won't be safe from the bread lines.

"Even for what you think of as highly trained, highly skilled, intuitive, personable professions, it is still true that the vast majority of the work is routine," Kaplan told Tech Insider.


So what's left for the humans?

The study suggests that a lot of jobs are on the line: about 47% of all employment in the US may be automated in the next 20 years.

But Toby Walsh, a professor of AI at National Information and Communications Technology Australia, told Tech Insider that in the foreseeable future, that number could be much, much worse.

"It's hard to think of a job that a computer ultimately won't be able to do as well if not better than we can do," Walsh told Tech Insider.

That being said, Walsh agrees with the Oxford study that artistic and creative work is probably the last stand for human jobs as we know them.

"Go into the most people-facing, artistic, creative places that you can think of," Walsh told Tech Insider. "The artists of the world are, for a long time, still going to be real, physical people. The people who are in most people-facing, sociological, empathetic jobs are going to be people."

Calculate your job's future at the BBC >>


This man thinks he can crack one of the greatest mysteries about what makes us sick



Brendan Frey thinks his company could solve what many consider the biggest problem in genetics: the thing that keeps us from using our genetic blueprint to completely transform the way we understand our health.

The problem is that despite our rapidly improving ability to quickly read and sequence a human genome and despite our even more astounding ability to accurately and cheaply edit the genetic code, we don't know what most of the genome actually says.

We don't yet know how a complex trait like intelligence is explained by our genes. Diseases like Alzheimer's are at least partially explained by genetics, but we're still trying to figure out which genes are involved. The same is true for many other illnesses.

Random genetic mutations happen all the time; any particular one may have a great effect or no effect at all, and its consequences are even more unknown.

"After 110 years of genetics, and 15 years after the $3.8 billion Human Genome Project promised fast cures, after more billions spent and endless hype about results just around the corner, we have few cures," David Dobbs wrote in May, in a much-discussed story for Buzzfeed. "And we basically know diddly-squat."

We can see and change billions of letters, but we simply don't know what many of them mean.


Frey's startup, Deep Genomics, is leveraging artificial intelligence to help decode the meaning of the genome, something he started investigating in his lab at the University of Toronto. Specifically, the company is using deep learning: the process by which a computer takes in data and then, based on its extensive knowledge drawn from analyzing other data, interprets that information.

Its learning software is developing the ability to predict the effects of a particular mutation based on analyses of hundreds of thousands of other example mutations, even when there's no existing record of what those mutations do.
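As a toy illustration of the input side of such a system (my own sketch, not Deep Genomics' actual model), a DNA window around a variant can be one-hot encoded into a matrix that a learning system could consume, and a reference/mutant pair compared position by position:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length, 4) one-hot matrix over A, C, G, T."""
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m

reference = "ACGTTAGC"
mutant    = "ACGTCAGC"  # hypothetical single-base substitution at position 4

# A model would be trained to map encodings like these (or their difference)
# to a predicted effect; here we just locate the change.
diff = one_hot(mutant) - one_hot(reference)
print(int(np.count_nonzero(diff) // 2))  # 1 -- exactly one position changed
```

Each substitution flips two entries in the matrix (the old base's column goes to 0, the new base's to 1), which is why the nonzero count is halved.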

They're trying to build not just a Rosetta Stone that explains a language, but a way to predict how a tiny change in the letters will create something new.


'The ultimate goal'

For Frey, the inspiration behind the company was personal.

Back in 2002, Frey's wife was pregnant. Tests revealed that there was something wrong with the baby, caused by some sort of genetic problem.

But doctors and genetic counselors had no idea what exactly the problem was.

"Living with that uncertainty was exhausting — diagnosticians couldn't make sense of it," Frey tells Tech Insider. "That was really frustrating."

It was hard to hear that doctors knew the problem was somehow explained by genetics but couldn't read that code well enough to pinpoint what was actually happening. Further, he says, he realized that most researchers weren't even trying to understand the full text of a human genome.

"Everyone agreed that being able to understand how DNA generates life is the ultimate goal," Frey says, "but they were pretty pessimistic [about being able to do it], so they didn't try." Jumping from being able to sequence those billions of letters to actually interpreting them like a language was too intimidating.

Using genetics in medicine at that time usually involved seeing if doctors could match an existing problem with a known mutation, Frey explains. But that didn't help answer questions about unknown problems.

Frey specialized in machine learning at that time — not genetics. His work was building computers that could learn.

Based on his experience with artificial intelligence and deep learning, he thought he could perhaps (with the help of people who knew more about genome biology) design a way to help a machine learn to interpret the genetic code.

DNA microscope genes genome

What they still need to figure out

So far, Deep Genomics has used their computational system to develop a database that provides predictions for how more than 300 million genetic variations could affect a genetic code. They've published results from their work in the journal Science, explaining how this has led to new insights into the genetic connections of autism, cancer, and spinal muscular atrophy. That predictive system has revealed things about the genome that weren't known before.

But as Frey explains, the system is still far from complete. Right now, he says, "there could be a mutation that our system predicts is going to be problematic," and while that mutation might well have an effect — most mutations don't do anything, so predicting any effect at all is part of the system's job — "it's just not a disease." Results like that amount to false positives.

The system needs to learn more and get better at interpreting genomic data. Right now, it's still building the tools to find meaning in all that mysterious code.

Deep Genomics

As a company, Deep Genomics is still getting started. Frey explained that they hope to grow from five full-time employees to 12 soon, so they can start working on a bigger scale.

There are tons of companies out there doing fascinating work with genetics and gene sequencing especially, Frey says. But he argues that they're doing something no one else is.

"If you look at what's out there in terms of this field, there's quite a few companies that have emerged," he says, especially companies working in gene sequencing and others that are working on processing that data. But he says "you won't find any that say we're going to connect the genome to what you find in your cells." Having a system that analyzes mutations and predicts what effect they will have is still unique, Frey says.

If that system can predict the effect of mutations, that's something that will help doctors figure out diseases in ways that have never been done before. Combined with new gene editing technology, it could be the thing that helps transform how we use genetics in medicine.

Join the conversation about this story »

NOW WATCH: We're finally getting a better idea about the story driving LEGO's next video game and it looks awesome

A new Barbie model can talk back using artificial intelligence


barbie

A prototype for Mattel's ubiquitous Barbie doll has been developed that incorporates advanced artificial intelligence elements to allow it to process human speech, and even answer profound questions like: "Do you believe in God?"

The new Hello Barbie, unveiled to The New York Times ahead of its November launch, will combine AI software with a microphone, WiFi, and speech recognition in order to communicate through more than 8,000 lines of pre-recorded dialogue.

Through the speech-recognition software, key words are used to trigger certain responses from the Hello Barbie. For example, "good", and "fantastic" would cue the doll to say something like: "Great, me too!" The toy is also able to remember answers − such as being told a relative has died − in order to avoid such topics or draw upon them for future interactions.
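The keyword-triggered behavior described above can be sketched in a few lines. This is a hypothetical toy, not Mattel's or ToyTalk's actual software; the keywords, canned responses, and topic-memory handling below are all illustrative assumptions.

```python
# Toy sketch of keyword-triggered dialogue with simple topic memory.
# Keywords and responses are invented for illustration.

RESPONSES = {
    "good": "Great, me too!",
    "fantastic": "Great, me too!",
}
SENSITIVE_TOPICS = {"died", "funeral"}

class DialogueAgent:
    def __init__(self):
        # Topics the doll has been told about and should avoid later.
        self.avoid_topics = set()

    def respond(self, utterance):
        words = utterance.lower().split()
        # Remember sensitive disclosures so they can be avoided later.
        for word in words:
            if word in SENSITIVE_TOPICS:
                self.avoid_topics.add(word)
                return "I'm sorry to hear that."
        # Trigger a canned response on the first matching keyword.
        for word in words:
            if word in RESPONSES:
                return RESPONSES[word]
        return "Tell me more!"

agent = DialogueAgent()
print(agent.respond("My day was fantastic"))  # Great, me too!
print(agent.respond("My grandma died"))       # I'm sorry to hear that.
```

A real system would run the speech-recognition step on a server before any keyword matching, which is what raised the privacy concerns discussed later in this piece.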

At its current level, the Hello Barbie is reportedly not sophisticated enough to pass the Turing Test − the threshold at which machine intelligence can pass itself off as human intelligence. However, that's not to say it couldn't fool a six-year-old child.

"It is very hard for [young children] to distinguish what is real from what is not real," said Doris Bergen, professor of educational psychology at Miami University in Ohio.

One conversation described by The Times shows the doll's ability to speak about complex concepts, like relationships. It also implies, to the child at least, that Barbie is capable of real emotions and feelings.

"I was wondering if I could get your advice on something," Barbie asked, before explaining that she and her friend were not speaking with each other following an argument. "I really miss her, but I don't know what to say to her now," Barbie said. "What should I do?"

The girl playing with the doll responded: "Say 'I'm sorry'."

"You're right," Barbie said. "I should apologize. I'm not mad anymore. I just want to be friends again."

(In answer to the God question, Barbie replied: "I think a person's beliefs are very personal to them.")

Privacy issues

Mattel came under fire earlier this year after privacy advocates raised concerns about Hello Barbie recording conversations between the doll and its user and transmitting them to a ToyTalk server. The doll − dubbed "Eavesdropping Barbie" − sparked an online petition for its withdrawal, which garnered over 4,000 signatures.

"If I had a young child, I would be very concerned that my child's intimate conversations with her doll were being recorded and analysed," Angela Campbell, from Georgetown University's Center on Privacy and Technology, said at the time.

"In Mattel's demo, Barbie asks many questions that would elicit a great deal of information about a child, her interests and her family. This information could be of great value to advertisers and be used to market unfairly to children."

In response, Mattel released a statement that stated: "The No. 1 request we receive from girls globally is to have a conversation with Barbie, and with Hello Barbie we are making that request a reality."

The rise of AI toys

Barbie dolls are shown in the toy department of a retail store in Encinitas, California October 14, 2014.    REUTERS/Mike Blake

Hello Barbie is arguably the most advanced iteration of artificial intelligence to be found in a children's toy, but Mattel is not the only manufacturer to be working on integrating the technology into its toys. In 2013, UK-based Supertoy Robotics developed a cuddly toy that can listen, learn and interact with its surroundings, while evolving its capabilities through an app described as "Siri on steroids".

More recently, the MiPosaur robotic dinosaur has been on show at toy fairs demonstrating its AI capabilities that allow it to enhance playtime by bringing the computer game experience to the real world. The multi-talented MiPosaur, developed by hi-tech toy firm WowWee, has been described by its creators as "like interacting with a pet" and is capable of altering its mood depending on the interaction.

"Connected toys, robotics and AI is really where the industry is heading I feel," Michael Yanofsky, from WowWee, told IBTimes UK at the annual London Toy Fair earlier this year. "A lot of retailers are actually expanding those categories. For us the AI can enable us to do so many things."


Banning sex robots is a bad idea


Gigolo Joe A.I.

"Ban sex robots!" scream the tech headlines, as if they're heralding the arrival of the latest artificial intelligence threat to humankind since autonomous killer robots.

The campaign, led by academics Kathleen Richardson and Erik Billing, argues that the development of sex robots should be stopped because it reinforces or reproduces existing inequalities.

Yes, society has enough problems with gender stereotypes, entrenched sexism and sexual objectification. But actual opposition to developing sexual robots that aims for an outright ban? That seems shortsighted, even – pardon the pun – undesirable.

Existing research into sex and robots generally centers on a superficial exploration of human attachment, popularized by films such as "Her" and "Ex Machina": a male-dominated, male-gaze approach of machine-as-sex-machine, often without consideration of gender parity. Groundbreaking work by David Levy, which builds on the early research into teledildonics – cybersex toys operable through the internet – describes the increasing likelihood of a society that will welcome sex robots. For Levy, sex work is a model that can be mirrored in human-robot relations.

Carving a new narrative

Richardson does not relish this prospect and to an extent I agree with her misgivings; it is a narrative that should be challenged. I absolutely agree that to do so would require, as Richardson states in her recent paper: "a discussion about the ethics of gender and sex in robotics." Such a discussion is long overdue. In the gendering of robots, and the sexualized personification of machines, digital sexual identity is too often presumed, but to date little-considered.

The relationship between humans and their artificial counterparts runs right back to the myths of ancient Greece, where sculptor Pygmalion's statue was brought to life with a kiss. It is the stuff of legend and of science fiction – part of our written history and a part of our imagined future. The feminist thinker Donna Haraway's renowned A Cyborg Manifesto laid the modern groundwork for seriously considering a post-gendered world where distinction between natural and artificial life is blurred. Written in 1991, it is prescient in terms of thinking about artificial sexuality.

Pygmalion and Galatea

But just as we should avoid importing existing gender and sexual biases into future technology, so we should also be cautious not to import established prudishness. Lack of openness about sex and sexual identities has been a source of great mental and social anguish for many people, even entire societies, for centuries. The politics behind this lack of candor is very damaging.

The campaign seeks to avoid the sexualization of robots, but at the cost of politicizing them, and doing so in a narrow manner. If robots oughtn't to have artificial sexuality, why should they have a narrow and unreflective morality? It's one thing to have a conversation and conclude something about the development of technology; it's another to demand silence before anyone has had the chance to speak.

The scope for sex robots goes far beyond Richardson's definition of them as "machines in the form of women or children for use as sex objects, substitutes for human partners or prostitutes." Yes, we impose our beliefs on these machines: we anthropomorphize and we bring our prejudices and assumptions with us. Sex robots have, like much of the technology we use today, been designed by men, for men. Think of the objects we use everyday: smartphones better suited to a man's larger hands and the pockets of men's clothes, or pacemakers only suitable for 20% of women.

Machines are what we make them

sex robot

But robotics also allows us to explore issues without the restrictions of being human. A machine is a blank slate that offers us the chance to reframe our ideas. The internet has already opened up a world where people can explore their sexual identity and politics, and build communities of those who share their views. Aided by technology, society is rethinking sex/gender dualism. Why should a sex robot be binary?

And sex robots could go beyond sex. What about the scope for therapy? Not just personal therapy (after all, companion and care robots are already in use) but also in terms of therapy for those who break the law. Virtual reality has already been trialled in psychology and has been proposed as a way of treating sex offenders. Subject to ethical considerations, sex robots could be a valid way of progressing with this approach.

To campaign against development is shortsighted. Instead of calling for an outright ban, why not use the topic as a base from which to explore new ideas of inclusivity, legality and social change? It is time for new approaches to artificial sexuality, which includes a move away from the machine-as-sex-machine hegemony and all its associated biases.

Machines are what we make them. At least, for now – if we've lost control of that then we have a whole other set of problems. Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it. A campaign to stop killer robots is one thing, but a campaign against sex robots? Make love, not war.

Kate Devlin, Senior Lecturer, Department of Computing, Goldsmiths, University of London

This article was originally published on The Conversation. Read the original article.


Billionaire investor Yuri Milner says human brains work better with computers


Russian entrepreneur and venture capitalist Yuri Milner arrives on the red carpet during the second annual Breakthrough Prize Awards at the NASA Ames Research Center in Mountain View, California in this November 9, 2014 file photo. REUTERS/Stephen Lam/Files

Elon Musk may be afraid of artificial intelligence overtaking humans, but billionaire investor Yuri Milner disagrees.

Rather, Milner thinks artificial intelligence will develop in a different direction from the one Musk worries about.

"What we see very clearly is that there is a convergence between the human brain and computers," Milner said during an interview at Tech Crunch Disrupt in San Francisco.

He's not talking about chips implanted in our heads, but computers and humans working together — and being better for it. 

"Google is a good example of that. You have a million people feeding the machine, then you have servers that are analyzing this data and feeding it back to our human brains," Milner said.

Chess is another area where computers have shown they can beat humans, even as early as the 1990s. But pair a computer with a human, and the duo can beat the computer alone. It's the combination that's better, and that's how Milner envisions artificial intelligence developing.

"A human brain plus computer is better than just computers," Milner said.



A new computer program does better at SAT geometry than the average high school student


Students Test Classroom Exam

Artificial intelligence is coming for everything you hold dear, and not even your beloved SAT scores are safe.

A new AI system can now tackle geometry questions on the SATs about as well as an average high school junior.

The system answered 49% of the geometry questions from the official SAT tests correctly, and was 61% accurate on practice test questions.

If the computer's geometry scores were extrapolated to the whole math portion of the SAT test, the system would get a score of 500 out of 800, on par with the average high-school student, according to a press release from the Allen Institute for Artificial Intelligence (AI2) and the University of Washington.

Oren Etzioni, the CEO of AI2, said in the written statement that the SATs and other kinds of standardized tests make for fertile ground when testing AI.

"Unlike the Turing Test, standardized tests such as the SAT provide us today with a way to measure a machine's ability to reason and to compare its abilities with that of a human," Etzioni said.

The system, called GeoS, can interpret diagrams, read and understand the text of a question, and choose the most appropriate selection from multiple answers on questions it had never encountered before. This is particularly notable when you consider the fact that the diagrams used in geometry tests include a lot of implicit information not explained in the question.

Here's an example of how it works. GeoS simultaneously reads the question and looks at how the words are related to each other. It matches those phrases to the corresponding areas of the diagram.

GeoS first examines the text and diagram and describes them as equations. It then scores and ranks the accuracy of the equations, and sends them to a geometric solver, which chooses one of the multiple choice answers that best reflects the system's interpretation of the problem.
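That text-to-equations-to-solver pipeline can be illustrated with a deliberately toy sketch. The real GeoS uses natural-language processing and computer vision; the regex "parser," uniform scoring, and one-variable "solver" below are stand-in assumptions, not the actual implementation.

```python
# Toy sketch of a GeoS-style pipeline: derive candidate equations from
# the question text, keep the high-scoring ones, then let a solver pick
# the multiple-choice answer consistent with them.
import re

def parse_text(question):
    # Stand-in "interpretation" step: turn phrases like
    # "x plus 3 equals 10" into equations x + a = b, stored as (a, b).
    return [(int(a), int(b))
            for a, b in re.findall(r"x plus (\d+) equals (\d+)", question)]

def score(candidate):
    # Stand-in confidence score; the real system ranks how well each
    # candidate equation matches the question text and the diagram.
    return 1.0

def solve(candidates, choices, threshold=0.5):
    kept = [c for c in candidates if score(c) >= threshold]
    # The "geometric solver": return the answer choice consistent with
    # every retained equation.
    for choice in choices:
        if kept and all(choice + a == b for a, b in kept):
            return choice
    return None

question = "If x plus 3 equals 10, what is x?"
print(solve(parse_text(question), choices=[5, 6, 7, 8]))  # 7
```

The design point this preserves from the description above is the separation of concerns: interpretation produces scored formal statements, and a separate solver checks each answer choice against them.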

Etzioni said the geometry questions that GeoS encounters on the SAT are a lot like the kinds of information people have to deal with every day, which is why geometry was the perfect subject for the computer to tackle.

"Much of what we understand from text and graphics is not explicitly stated, and requires far more knowledge than we appreciate," he said in the press release. "Creating a system to be able to successfully take these tests is challenging, and we are proud to achieve these unprecedented results."

Computer scientists presented their work at a conference in Lisbon, Portugal. Check out AI2's demo of GeoS and see how well you stack up.



Here's how well an AI scored on the math section of the SAT


student sat test studying

Artificial intelligence has already beaten us at chess and bested us in IQ tests.

Now, scientists say they have created an AI that can solve SAT geometry questions as well as the average American 11th-grade student.

Researchers at the Allen Institute for Artificial Intelligence in Seattle created a program called GeoS, which uses computer vision to see, uses language algorithms to read and understand problems, and uses a mathematical algorithm to solve them.

The AI correctly answered 49% of geometry questions from official SAT tests, and 61% of practice questions, the researchers showed. If you extrapolate its performance on the official questions to the entire SAT math section, the AI would have an approximate SAT score of 500 out of 800 — the same as the average high schooler's math score in 2015.

The findings were presented Monday at the Conference on Empirical Methods in Natural Language Processing (EMNLP) in Lisbon, Portugal.

The best-known test of a machine's intelligence is the Turing test, invented by mathematician Alan Turing in 1950. But this test, which involves fooling a human in a blind conversation, is no longer considered a good measure of artificial intelligence by many scientists today.

"Unlike the Turing Test, standardized tests such as the SAT provide us today with a way to measure a machine’s ability to reason and to compare its abilities with that of a human," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, said in a statement.

A big part of how humans understand text and graphics is not explicitly stated, but requires more subtle knowledge. The team's biggest challenge was converting the SAT questions into language the AI could understand. In order to solve geometry questions, the AI had to have an in-depth understanding of text, diagrams, and reasoning.

The Allen Institute researchers say they are also working on systems that can take science tests, which require a knowledge of unstated facts and common sense that humans develop over the course of their lives.



Facebook's artificial intelligence research director says today's 'best AI systems are dumb'


futurama bender

Economists and computer scientists believe future artificial intelligence (AI) will not only take our jobs but fundamentally change everything we know about work and society.

So it's easy to imagine AI as super-intelligent machines that are leaps and bounds ahead of human intelligence.

But we just aren't there yet.

Not even close, Yann LeCun, the director of Facebook Artificial Intelligence Research (FAIR), told Popular Science. Today's artificial intelligence is, well, quite dumb.

"Right now, even the best AI systems are dumb," LeCun told Popular Science's Dave Gershgorn. "They don't have common sense."

Researchers have created artificial narrow intelligence (ANI) — programs that are very good at very specific tasks, like IBM Watson's Jeopardy-playing supercomputer or a world-champion-beating chess program.

But no computer scientist or AI researcher yet knows how to make artificial general intelligence (AGI), programs that are capable of exhibiting human-level intelligence on multiple tasks.

For AGI to exist, computers would need to be imbued with common sense reasoning, the ability to encounter new situations and make inferences about what's happening and what needs to be done.

LeCun used an everyday human behavior to describe how common sense works. When asked to imagine a person leaving a room with a bottle, a human would describe the scene in a pretty specific way: the person would stand up from the table, pick the bottle up, walk to the door, open the door to the room, and then walk out of the room with the bottle.

LeCun said that a human could deduce all those details "because [we] know the constraints of the real world," but an AI without "common sense" knowledge of how the world works would struggle to envision the scene playing out that way.

The phrase "person leaving a room with a bottle" simply doesn't contain enough information for a machine to come to the same description.

In an email interview, LeCun told Tech Insider that one of the biggest obstacles to endowing machines with this kind of common sense is getting machines to learn in an "unsupervised manner, like babies and animals do."

"Right now, the way we train machines is 'supervised,' a bit like when we show a picture book to a toddler and tell them the name of everything," LeCun said.

This kind of learning can teach the program to make connections, but not the "common sense" of why those connections exist and how they apply to other real-world situations. We still haven't figured out the best way to teach AI that kind of common sense, LeCun told Tech Insider, which means we still don't know how to make our AI any smarter.
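LeCun's picture-book analogy for supervised learning can be made concrete with a toy classifier: the machine is handed (example, label) pairs and can then label new examples, but nothing in the process gives it "common sense" about why the labels apply. The 1-nearest-neighbor model below is purely illustrative; real systems train deep networks on raw data.

```python
# Minimal sketch of "supervised" learning: memorize labeled examples,
# then label new inputs by similarity. Feature vectors and labels are
# invented for illustration.

def train(examples):
    # "Training" here is just storing (feature_vector, label) pairs.
    return list(examples)

def predict(model, features):
    # Label a new example with the label of its closest training example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda pair: dist(pair[0], features))[1]

# The labeled "picture book": feature vectors paired with their names.
book = [((0.9, 0.1), "cat"), ((0.1, 0.9), "dog")]
model = train(book)
print(predict(model, (0.8, 0.2)))  # cat
```

The limitation LeCun describes is visible even here: the model can only echo labels it was shown, and has no notion of what a cat is or how one behaves in the world.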

And that, he said, is "why it's very difficult to make a prediction as to when 'human-level AI' will come about."


It’s impossible for someone to build a super-intelligent robot in their basement


ex machina osscar

Science fiction movies, like "Metropolis" and the more recent "Ex Machina," often depict lonely researchers toiling away in secret labs developing super-intelligent, sophisticated robots.

But the director of Facebook's Artificial Intelligence Research (FAIR) Yann LeCun told Popular Science that this scenario would be impossible:

"The scenario you see in a Hollywood movie, in which some isolated guy in Alaska comes up with a fully-functional AI system that nobody else is anywhere close to, is completely impossible," LeCun said. "This is one of the biggest, most complicated scientific challenges of our time, and not any single entity, even a big company, can solve it by itself. It has to be a collaborative effort between the entire research and development community."

While a lone scientist developing an ultra-smart artificial intelligence makes for a great movie, it hides the fact that real progress in AI comes from hard work done by thousands of researchers over the last six decades.

Researchers have been working on perfecting AI that can beat humans at very narrow tasks, but we're nowhere near the kind of AI that you see in the movies — robots that can do everything an average human can.

These narrowly-useful AI programs are what makes the internet work, silently running behind the curtain of Amazon's recommendation systems and Facebook's news feed. But there are still major sticking points — researchers are still figuring out how to make computers accurately explain what they see.

LeCun knows this better than anyone — he has devoted his career to teaching computers to see images. Since he joined the FAIR team in 2013, Facebook has been wading further into the artificially intelligent waters and expanding its AI capabilities.

In August, they unveiled M, a personal assistant that can make reservations and book tickets with the initial help of a human working at Facebook. Over time, the program learns from its "AI trainers," as Facebook calls them, to complete requests on its own, Popular Science reports.

Even as Facebook trains its AI, it works as part of a community — and its work is no secret. LeCun told Popular Science that all of FAIR's work is open, published either on the team's research site or on arXiv, an open repository for research papers in computer science, mathematics, and physics.

"The research we do, we're doing it in the open," LeCun told Popular Science. "Pretty much everything we do is published, a lot of the code we write is open-sourced."


Here's what Facebook's artificial intelligence expert thinks about the future


facebook AI director yann lecun

Almost everything on Facebook — the posts in your timeline, the ads, and features like speech recognition and automatic image tagging — is driven by artificial intelligence (AI).

And Facebook is just getting started. The social media company's new personal helper, M, can help you make dinner reservations, buy things off Amazon, and even recommend anniversary gifts for your spouse.

Behind the technology that powers M and other Facebook features is a special team led by Yann LeCun, an AI researcher and former New York University professor.

Since LeCun joined Facebook in 2013, the company has opened AI laboratories in California and in Paris, France, where LeCun was born.

LeCun has researched image recognition since the 1980s. He even helped revive an AI field of study called "deep learning," which now powers the self-learning abilities of Facebook's ad targeting, Apple's Siri, Amazon's recommendation services, and more.

Tech Insider emailed LeCun, director of the Facebook Artificial Intelligence Research team, to find out how AI is changing our lives, what most people get wrong about the field, and where the technology is headed.

You can read a version of that conversation below, which we've edited for length, style, and clarity.

TECH INSIDER: What do you think is the most impressive, real-world use of AI?

YANN LECUN: It changes quickly. I'd say the many new applications of image recognition used by Facebook, Google and others.

Some of these are apparent, like keyword-based image search and face recognition, some are acting behind the scenes, e.g. for content filtering.

TI: How did you become interested in AI?

YL: My dad is an engineer (retired now) and got me interested in designing and building things, particularly model airplanes and electronics.

I was fascinated by the mystery of intelligence as a kid, which got me interested in human evolution, epistemology and AI. Because I like building things, I taught myself electronics and programming in high school.

TI: Your field started as a way to study human intelligence by recreating it on computers. Does AI have to imitate how the human brain works?

YL: No, but AI is to the brain as airplanes are to birds. The details are different, but the underlying principles are the same. For airplanes and birds, the underlying principle is aerodynamics.

The question is, what is the equivalent of aerodynamics for intelligence?

TI: So to build AI that's human-like, what obstacles do we need to overcome?

YL: The short answer is: We have no idea. That's why it's very difficult to make a prediction as to when "human-level AI" will come about.

Right now, though, the main obstacle we face is how to get machines to learn in an unsupervised manner, like babies and animals do. Right now, the way we train machines is "supervised," a bit like when we show a picture book to a toddler and tell them the name of everything.

TI: What do you think are the biggest and most common misconceptions about AI?

YL: Misconceptions:

(1) "AIs won't have emotions." They most likely will. Emotions are the effect of low-level/instinctive drives and the anticipation of rewards.

(2) "If AIs have emotions, they will be the same as human emotions." There is no reason for AIs to have self-preservation instincts, jealousy, etc. But we can build into them altruism and other drives that will make them pleasant for humans to interact with and be around.

(3) Most AIs will be specialized and have no emotions. Your car's auto-pilot will just drive your car.

TI: Emotions play a big role in AI in popular culture. Do you have a favorite sci-fi depiction of AI?

YL: Most of them are terrible. But I like HAL in "2001: A Space Odyssey," not because it goes crazy and murderous, but because I watched the movie when I was nine years old, and I became fascinated by the very idea of AI.

I also like "Her." The robot child in the movie "A.I." is interesting. It's a harmless form of AI animated by the best human emotion: love.

hal 2001 a space odyssey

TI: Are any of those sci-fi depictions close to being possible?

YL: Something like "Her" is not entirely implausible, but it's very far from what we can do today. We are decades away.

Almost all Hollywood depictions of AI and robots are implausible. AI either has to be emotionless, or if it does have emotions, they seem to be caricatures of the worst human drives and emotions (jealousy, greed, desire to dominate, becoming murderous when threatened, etc...).

AIs will not have these destructive "emotions" unless we build these emotions into them. I don't see why we would want to do that.

TI: What do you think this technology's greatest impact on society will be in the future?

YL: AI will transform society, but probably not more (relatively speaking) than past technological progress like cars, airplanes, telecommunications, regular computers, modern medicine, etc. Some jobs will disappear, others will be created as with every wave of technological progress.

There will be immediate benefits of AI like better medicine and reduced traffic accidents (due to self-driving cars). The overall wealth of societies will increase.

The questions are (1) how will societies adapt to share the benefits, (2) what human activities will become highly valued?

Number one is for the political process to solve. For number two: Uniquely human activities will become more valuable, for example artistic expression.


Watch this creepy knife-wielding robot make noodles


china robot noodle

We've heard about all kinds of people losing their jobs to robots — from cab drivers to bartenders — but almost any repetitive job can be replaced by a robot.

One fantastically tasty example of this is the Noodlebot, which was originally patented under the name Chef Cui by its inventor Cui Runguan.

Noodlebot burst onto the scene back in 2012, but we just discovered him and we couldn't resist sharing.

Noodlebot is cheap, uncomplicated, and according to some restaurant owners, actually "better than human chefs."

He cuts a specific kind of noodle called dao xiao mian, or "knife cut noodles," according to a CNN post from 2012.

Traditionally, a chef makes and kneads the wheat-based dough by hand, then holds the dough in one hand and cuts with the other.

The stationary robot works much in the same way, but it's faster and more accurate. According to CNN, Noodlebot can slice 150 pieces of noodles a minute, and can be programmed to cut noodles of different widths and lengths.

Noodlebot's knife-wielding arm works like a windshield wiper — slicing noodles in an up and down motion. The cut noodles fire directly into the wok.

The uncut dough sits on a platform in front of the robot. The platform moves up and side to side, allowing the other arm to cut across the dough.

Noodlebot's aim isn't perfect, so it helps to have a more experienced chef standing guard.

Runguan believes Noodlebot will allow entry-level cooks to work on more rewarding tasks in the kitchen.

"Young people don't want to work as chefs slicing noodles because this job is very exhausting," Runguan told Zoomin.TV. "It is a trend that robots will replace men in factories, and it is certainly going to happen in noodle slicing restaurants."

You'd think many restaurant owners would be terrified of Noodlebot. Its menacing unibrow and constantly shifting eyes make it impossible to tell if it'll suddenly turn to you and say "You're next" all while calmly cutting noodles.

Runguan designed the Noodlebot to look like characters from "Ultraman," a famous Japanese show from the 1960s, at the behest of his son, according to CNN.

"The robot chef can slice noodles better than human chefs and it is much cheaper than a real human chef," Liu Maohu told Zoomin.TV in 2012. "It costs more than [$4700] to hire a chef for a year, but the robot just costs me [$1500]."

And Maohu's customers don't seem to mind. According to Zoomin.TV, one customer said he couldn't tell the difference between the human-made and robot-made noodles, and that Noodlebot's noodles "taste good and look great."

In fact, like many food-making robots, watching Noodlebot is strangely mesmerizing. Noodlebot makes for some pretty awesome pre-dinner entertainment and some customers find him irresistible.

Other manufacturers are now building robots just like Noodlebot. Foxconn, the same company that assembles iPhones and iPads, got into the noodle-cutting game in early 2015, according to The Wall Street Journal. Foxconn has built only three noodle-cutting robots so far, and they don't seem to have as much flair as Noodlebot.

Watch Noodlebot in action below.

CHECK OUT: I tried the San Francisco restaurant that serves quinoa delivered via robot cubbies, and it's totally awesome

RELATED: Some academics are proposing a ban on sex robots — here's why it's a bad idea


