
Here's how AI could fix the toxic nature of the internet


Venture outside the Safe For Work confines of the internet and you'll find off-the-scale offensive images that will rattle your brain.

Or you could provoke the ire of strangers on the internet, like Anita Sarkeesian, who runs a blog exposing the way sexist videogames depict and exclude women. On a daily basis, she receives sexual harassment and death threats from anonymous Twitter users. Last October, she had to cancel a Utah State University event because of an anonymous terror threat.

Because of incidents like these, Twitter and SRI International, the company that created Siri, are now deploying algorithms to crack down on online harassment and offensive material, according to Wired.

Online harassment covers a wide range of interactions, including personal attacks, offensive images, and threats on social media platforms. The most extreme forms of online harassment include stalking and doxing, when personal details like someone's home address are publicly revealed online.

According to a 2014 Pew Research Center survey, 73% of adult internet users surveyed had witnessed some form of online harassment, and 40% had personally experienced it, with people ages 18 to 29 bearing the brunt of it. Women between the ages of 18 and 24 were also overwhelmingly the targets of online sexual harassment.

Currently, social networks like Facebook and Twitter require users to flag and report harassment. The problem is that by then the offending messages have already taken their emotional toll.

In some other cases, content moderators are tasked with keeping offensive material from popping up in timelines and profiles. According to a 2014 Wired report by Adrian Chen, they "might very well comprise as much as half the total workforce for social media sites."

Shielding everyday users from offensive images and violent content is an emotionally and psychologically exhausting job. Content moderators, usually employed overseas in countries like the Philippines, spend 8 hours a day, day in and day out, staring at pornographic and gory images, Chen wrote.

With any luck, advances in artificial intelligence (AI) could make these content moderation jobs — and the internet at large — less toxic.

Norman Winarsky, an SRI International vice president, told Wired that their AI could help filter out harassment and offensive images — much like the AI that's responsible for filtering spam in your email inbox — without the need for human moderators to see them.

"Social networks are overwhelmed with these kinds of problems, and human curators can't manage the load," Winarsky said.

SRI's AI systems build on the company's past expertise in how people communicate online, using supervised machine learning. According to Wired, 250 people curated online harassment data from an anonymous social network and fed it into the AI system. Over time, the AI learned which words and phrases constitute harassment.
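The article doesn't describe the system's internals, but the supervised-learning setup described above can be illustrated with a minimal sketch: human-labeled examples of harassing and benign messages are converted into text features, and a classifier learns which words and phrases predict harassment. The toy messages, labels, and model choice below are assumptions for illustration, not SRI's actual pipeline.

```python
# Minimal sketch of a supervised harassment classifier (illustrative only;
# the labeled examples and model choice are assumptions, not SRI's system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human curators would supply thousands of labeled messages; these are toy examples.
messages = [
    "you are a wonderful person",        # benign
    "great point, thanks for sharing",   # benign
    "nobody wants you here, log off",    # harassing
    "I will find out where you live",    # harassing (threat)
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = harassment

# Bag-of-words features plus logistic regression stand in for "learning which
# words and phrases constitute harassment."
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

print(model.predict_proba(["log off, nobody wants you"])[0][1])  # P(harassment)
```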

Eventually, SRI hopes to build a system that can even learn to flag sarcastic statements — which may seem harmless — as bullying by "analyzing patterns of actions," according to Wired.

While SRI's anti-cyberbullying AI may be available for practical use about six months to a year from now, Twitter is currently implementing AI systems to catch offensive images. This is especially important since images are shown natively in the network.

Last year, Twitter purchased MadBits, a company that builds image recognition AI systems, and set them on the herculean task of catching offensive images, according to Wired.

MadBits co-founder Clement Farabet told Wired that the system is pretty effective. When primed to catch 99% of offensive images, Farabet said it will "incorrectly flag perfectly acceptable pics just 7% of the time."

The AI system isn't Twitter's first line of defense yet. Because it relies on supervised machine learning, human employees still have to view and label offensive images before the data is fed into the system. The AI system is built around an artificial neural network, which mimics the interconnected structure of human brain cells.

But as the AI encounters more images, it gets better at distinguishing between what is and isn't offensive. Over time, "the need for this tagging diminishes," and fewer people will need to actually see the images.
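Farabet's figures describe an operating point on a detection trade-off: the stricter the score threshold, the more offensive images slip through, and the looser it is, the more acceptable images get flagged. Below is a hedged sketch of how such a threshold might be chosen on a labeled validation set; the scores, labels, and class balance are invented for illustration and are not Twitter's actual data.

```python
# Illustrative only: pick the classifier score threshold that catches ~99% of
# offensive images, then measure how often acceptable images are wrongly flagged.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical validation data: 1 = offensive, 0 = acceptable, plus model scores.
labels = np.array([1] * 1000 + [0] * 9000)
scores = np.concatenate([rng.normal(0.8, 0.15, 1000), rng.normal(0.2, 0.15, 9000)])

# Lowest threshold that still flags at least 99% of the offensive images.
offensive_scores = np.sort(scores[labels == 1])
threshold = offensive_scores[int(0.01 * len(offensive_scores))]

false_flag_rate = np.mean(scores[labels == 0] >= threshold)
print(f"threshold={threshold:.2f}, acceptable images flagged: {false_flag_rate:.1%}")
```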

SEE ALSO: One of Mark Zuckerberg's mind-blowing predictions about the future already exists


NOW WATCH: Scientists have developed paint that changes color based on heat, light and impact


This robot wakes you up in the morning and checks if you turned off the oven when you leave the house


French tech startup Blue Frog Robotics has created BUDDY, a family-oriented household robot. The design was inspired by robots from pop culture like R2-D2 from "Star Wars" and Disney's "WALL-E." Chief operating officer Franck de Visme says BUDDY is especially helpful for children, the elderly, and those with special needs.

Video courtesy of Reuters



IBM's Watson says it can analyze your personality in seconds — but the results are all over the place


"You are heartfelt, confident and opinionated. You are calm under pressure: you handle unexpected events calmly and effectively."

At least that's what IBM's supercomputer program Watson thinks of me, based on a writing sample I gave it from a story I wrote on how skinny jeans are bad for your health.

Watson, the program that famously beat humans at Jeopardy!, now has a "Personality Insights" service that analyzes your blog posts, tweets, or other text you give it access to and spits out a horoscope-like description of your personality. You can enter text in English or Spanish, and it must be at least 100 words.

The program is amusing, but the results seemed a little inconsistent.

For example, when I plugged in the text of another story I wrote on a neuroscientist who found out he's a psychopath, I got a totally different response:

"You are unconventional and somewhat inconsiderate. You are unstructured: you do not make a lot of time for organization in your daily life. You are laid-back: you appreciate a relaxed pace in life. And you are carefree: you do what you want, disregarding rules and obligations..."

Thanks, Watson!

In addition to a description of your personality, the program gives you scores on the "Big Five" personality traits: openness, conscientiousness, extroversion, agreeableness, and emotional range (sometimes called neuroticism).

It also scores you on various needs (such as love and liberty) and values (such as hedonism). You can view your personality data in a nifty graphic as well.

I also had some fun trying the program on works of Shakespeare ("You are shrewd and somewhat inconsiderate...you are comfortable using every trick in the book to get what you want"), “Harry Potter” ("You are boisterous and social... you are hard to embarrass and are self-confident most of the time"), and Taylor Swift lyrics ("You are a bit compulsive, somewhat shortsighted and can be perceived as dependent").

To give a reliable estimate of personality, the Watson program requires at least 3,500 words, but preferably 6,000 words, according to the product's documentation. IBM representatives say the text should also be "reflective"— in other words, it should reveal the author's personal experiences, thoughts and responses.
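Personality Insights is exposed as a web service that accepts a block of text and returns Big Five, needs, and values scores as JSON. The sketch below shows only the general shape of such a call; the endpoint URL, version string, credentials, and response field names are placeholders rather than IBM's documented values, so check the product documentation before using anything like this.

```python
# Hedged sketch of calling a Personality Insights-style REST service.
# The URL, API version, credentials, and field names are PLACEHOLDERS.
import requests

SERVICE_URL = "https://example-watson-endpoint/personality-insights/v3/profile"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder credential

with open("writing_sample.txt", encoding="utf-8") as f:
    text = f.read()  # should be reflective text, ideally 3,500+ words

response = requests.post(
    SERVICE_URL,
    params={"version": "2016-10-20"},       # assumed versioning scheme
    headers={"Content-Type": "text/plain"},
    auth=("apikey", API_KEY),
    data=text.encode("utf-8"),
)
profile = response.json()

# Print the Big Five scores (field names assumed for illustration).
for trait in profile.get("personality", []):
    print(trait["name"], round(trait["percentile"], 2))
```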

 

SEE ALSO: IBM's supercomputer Watson ingested 2,000 TED Talks and can answer your deepest questions

CHECK OUT: A machine is about to do to cancer treatment what ‘Deep Blue’ did to Garry Kasparov in chess


NOW WATCH: Benedict Cumberbatch And The Cast Of 'The Imitation Game' Have Mixed Feelings About Artificial Intelligence

This robot passed a 'self-awareness' test that only humans could handle until now


Robots can staff eccentric Japanese hotels, make logical decisions by playing Minecraft, and create trippy images through Google. Now the droids may have attained a new milestone by demonstrating a level of self-awareness.

An experiment led by Professor Selmer Bringsjord of New York's Rensselaer Polytechnic Institute used the classic "wise men" logic puzzle to put a group of robots to the test.

The roboticists used a version of this riddle to see if a robot is able to distinguish itself from others. 

Bringsjord and his research squad called the wise men riddle the "ultimate sifter" test because the knowledge game quickly separates people from machines -- only a person is able to pass the test. 

But that is apparently no longer the case. In a demonstration to the press, Bringsjord showed that a robot passed the test. 

The classic riddle presents three wise advisors to a king, each wearing a hat he cannot see. The king informs his men of three facts: the contest is fair, their hats are either blue or white, and the first one to deduce the color on his own head wins.

The contest would only be fair if all three men sported the same color hat. Therefore, the winning wise man would note the color of the hats on the other two and deduce that his own was the same color.

The roboticists used a version of this riddle to test for self-awareness -- all three robots were programmed to believe that two of them had been given a "dumbing pill" that would make them mute. Two robots were actually silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud.

Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.

To claim that the robot is exhibiting "self-awareness", it must have understood the rules, recognized its own voice, and been aware that it is a separate entity from the other robots. Researchers told Digital Trends that, if nothing else, the robot's behavior is a "mathematically verifiable awareness of the self".
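Bringsjord's robots reason with a formal logic engine, but the core inference in the dumbing-pill version of the puzzle can be sketched in a few lines: a robot that cannot rule out having been silenced answers "I don't know," and only the robot that then hears its own voice can update its belief. The toy model below is an illustration of that update, not the team's actual reasoning system.

```python
# Toy model of the "dumbing pill" test: three robots, two silenced.
# Illustrative only; the real demonstration used formal logic, not this script.

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced          # whether it actually got the "pill"
        self.knows_it_can_speak = False   # no robot knows this at the start

    def answer(self):
        """Try to answer 'Which pill did you receive?' out loud."""
        if self.silenced:
            return None                   # mute robots produce no sound
        spoken = f"{self.name}: I don't know"
        # Hearing its own voice tells the robot it was NOT silenced.
        self.knows_it_can_speak = True
        return spoken

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    heard = r.answer()
    if heard and r.knows_it_can_speak:
        print(heard)
        print(f"{r.name}: Sorry, I know now! I was not given the dumbing pill.")
```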

Watch the demonstration below: 

Some Twitter users expressed fear as they watched the new development unfold, while others lauded the discovery.

 

 

SEE ALSO: These trippy images show how Google's AI sees the world


NOW WATCH: This robot wakes you up in the morning and checks if you turned off the oven when you leave the house

The man who created the world's first self-aware robot says this next discovery will change the human-robot relationship forever


Luciano Floridi issued a challenge to scientists around the world in 2005: prove that robots can display the human trait of self-awareness through a knowledge game called the "wise man" test. It was a challenge he never expected to see met in the foreseeable future.

A decade later, the Oxford professor's seemingly unattainable challenge has been met. 

On July 9, a team of researchers led by Professor Selmer Bringsjord helped a robot solve the riddle, displaying a level of self-awareness and satisfying what had until then been considered "the ultimate sifter" test that could separate human from cyborg. 

But the professor says there's a bigger challenge he wants robots to accomplish: self-awareness in real time. If we achieve this milestone, he said, the way we interact with artificial intelligence and robots will drastically change.

"Real time" self-awareness means robots acting upon new situations that they are not pre-programmed for, and translating how to act into physical movements. This is a serious challenge that Bringsjord has not tapped into because self-awareness algorithms are still separate from a robot's body. If robots could work in real time, mind-to-body, he says, we would break through major barriers that could result in scenarios such as droids that act as our personal chauffeurs.

Robot"Think about a self driving vehicle for a moment, say one that's been built by Uber or Apple or Google," Bringsjord told Business Insider. "Three people get into it, and they're all talking at the same time. If the AI doesn't understand who's talking and that it's not him or herself talking -- but only recognizes when itself speaks -- there is no way to engineer a system that would be able to understand this conversation that good old fashioned taxi drivers have no trouble with. That is, unless the driving AI has a concept of itself versus its occupants."

According to the roboticist, this feat requires programming and math so complex that it has not been invented yet. So what's the next step? Bringsjord said there's a lot of low hanging fruit that needs to be grabbed in the near term.

"We're going to see more robots coming on the market soon that can make an intelligent discrimination of its machine body versus the body of humans and possibly animals as well, " Bringsjord said. "You're probably going to see this the most happening in cars, controlled home environments, and office buildings."

If we ever wanted robots to save lives -- give a soldier emergency medication, for example -- this "discrimination" skill is absolutely crucial, Bringsjord said. 

Bringsjord said he welcomes others to come up with new self-awareness tests for robots. 

"If we just design the tests ourselves, then everyone would be skeptical," Bringsjord said. "It's up for other people to give us challenges; we want to be challenged by companies like Uber. 

SEE ALSO: A robot killed a factory worker in Germany


NOW WATCH: Scientists are having robots play 'Minecraft' to learn about human logic

Stephen Hawking is doing a Reddit AMA — now's your chance to ask him your burning questions


Considered one of the greatest scientists of our age, Stephen Hawking is taking to Reddit this week to answer the public's questions on life, the universe, and everything. You can submit your questions via Reddit by clicking here.

The Cambridge University theoretical physicist, best-known for his studies of black holes and overcoming the challenges of living with motor neuron disease (ALS), is doing an "AMA" (ask me anything) session on the popular social media/news forum, to discuss "making the future of technology more human." Because Hawking requires more time to compose answers using his speech synthesizer, Reddit is collecting questions today, and users can vote on the best questions for him to answer this week.

Reddit is accepting questions today (they did not give a deadline for submissions). The thread says that Hawking will answer the questions over the next couple of weeks and then moderators will post his responses into the AMA.

From life in the cosmos to life-threatening robots, the questions already submitted address topics that Hawking is very passionate about. Here's a small sample:

  • "If we discovered a civilization in the universe less advanced than us, would you reveal to them the secrets of the cosmos or let them discover it for themselves?"
  • "If a more advanced civilisation were to contact you personally, would you tell them to reveal the secrets of the cosmos to humanity, or tell them to keep it to themselves?"
  • "What suggestions would you have for the global community when it comes to building an international consensus on the ethical use of AI tools and do we need a new UN agency similar to the International Atomic Energy Agency to ensure the right practices are being implemented for the development and implementation of ethical AI tools?"
  • "I've thought lately about biological organisms' will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind?"

Hawking has previously spoken out against contacting alien civilizations, saying, "I think it would be a disaster. The extraterrestrials would probably be far in advance of us. The history of advanced races meeting more primitive people on this planet is not very happy, and they were the same species."

But he's not opposed to looking for them. Just last week, Hawking joined billionaire investor Yuri Milner in announcing a $100 million investment in the search for extraterrestrial intelligence (SETI), and Hawking attended the announcement at the Royal Society in London. And later in the week, NASA scientists announced the discovery of an Earthlike planet that could potentially host life.

But extraterrestrial intelligence isn't the only kind Hawking thinks we should pay attention to.

He and thousands of other scientists and tech giants just signed an open letter calling for a ban on autonomous weapons, a.k.a. killer robots. Hawking has warned about the dangers of AI before, calling it humanity's "biggest existential threat."

Stay tuned for what Hawking has to say about these important topics of our age.

SEE ALSO: These are the research projects Elon Musk is funding to ensure A.I. doesn’t turn out evil

CHECK OUT: A crazy new theory solves 40-year-old mystery about what happens inside of a black hole


NOW WATCH: Here's the real science behind time travel

Stephen Hawking, Elon Musk, Steve Wozniak and over 1,000 AI researchers co-signed an open letter to ban killer robots


More than a thousand artificial intelligence researchers just co-signed an open letter urging the United Nations to ban the development and use of autonomous weapons.

The letter was presented this week at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking signed the letter, alongside leading AI scientists like Google director of research Peter Norvig, University of California, Berkeley computer scientist Stuart Russell and Microsoft managing director Eric Horvitz.

The letter states that the development of autonomous weapons, or weapons that can target and fire without a human at the controls, could bring about a "third revolution in warfare," much like the creation of guns and nuclear bombs before it.

Even if autonomous weapons were created for use in "legal" warfare, the letter warns that autonomous weapons could become "the Kalashnikovs of tomorrow"— hijacked by terrorists and used against civilians or in rogue assassinations.

To everyday citizens, Kalashnikovs — a series of automatic rifles designed by Mikhail Kalashnikov — are better known as AKs.

"They're likely to be used not just in wars between countries, but the way Kalashnikovs are used now ... in civil wars," Russell told Tech Insider. "[Kalashnikovs are] used to terrorize populations by warlords and guerrillas. They're used by governments to oppress their own people."

A life in fear of terrorists or governments armed with autonomous artificially intelligent weapons "would be a life for many human beings that is not something I would wish for anybody," Russell said.

Unlike nuclear arms, the letter states that lethal autonomous weapons systems, or LAWS, would "require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."

But just how close are we to having usable autonomous weapons? According to Russell, affordable killer robots aren't a distant technology of the future. He wrote in a May 2015 issue of Nature that LAWS could be feasible in just a few years.

In fact, semiautonomous weapons, which have some autonomous functions but not the capability to fire without humans, already exist. As Heather Roff, an ethics professor at the University of Denver, writes in Huffington Post Tech, the line between semiautonomous and fully autonomous is already quite thin, and getting even smaller.

For example, Roff writes, Lockheed Martin's long-range anti-ship missile can pick its own target, though it is fired by humans, at least for now. "The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made," she wrote.

According to the New York Times: "Britain, Israel, and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks, or ships without direct human control."

This open letter calling for a ban on autonomous weapons is one of the latest warnings issued by scientists and entrepreneurs about the existential threats superintelligent AI could pose to humanity.

In December 2014, Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race."

In January, many of the signatories of today's letter signed an open letter from the Future of Life Institute calling for AI research to remain "robust and beneficial." The institute's advisory board includes prominent AI researchers, other scientists, and even Morgan Freeman.

The same month, Musk donated $10 million to the Future of Life Institute to fund a variety of research projects that aim to do just that. The 37 projects include one that hopes to ensure that "the interests of superintelligent AI systems align with human values" and another, headed by Roff, that aims to ensure that "AI-driven weapons are kept under 'meaningful human control'."

Jared Adams, the Director of Media Relations at the Defense Advanced Research Projects Agency, told Tech Insider in an email that the Department of Defense "explicitly precludes the use of lethal autonomous systems," as stated by a 2012 directive.

"The agency is not pursuing any research right now that would permit that kind of autonomy," Adams said. "Right now we're making what are essentially semiautonomous systems because there is a human in the loop."


NOW WATCH: Here's what we know about the new 'Earth' — a planet that could support life

DARPA hired a jazz musician to jam with their artificially intelligent software


Artificial intelligence (AI) can paint hallucinatory images, shut down internet trolls, and critique the most creative paintings in history.

Now, with help from the Defense Advanced Research Projects Agency (DARPA), AI is coming for your saxophones and pianos, too. Jazz musician and computer scientist Kelland Thomas is building an AI program that can learn to play jazz and jam with the best of them, under a DARPA-funded project that aims to improve how we communicate with computers.

"A jazz musician improvises, given certain structures and certain constraints and certain basic guidelines that musicians are all working with," Thomas told Tech Insider. "Our system is going to be an improvisational system. So yeah, it will be able to jam."

Thomas and his team will first build a database of thousands of transcribed musical performances by the best jazz improvisers, including Louis Armstrong, Miles Davis, and Charlie Parker. Then, using machine learning techniques, they'll "train" the AI system with this database.

Eventually the AI will learn to analyze and identify musical patterns from the transcriptions, including Miles Davis's performance of "So What."

The AI could use that knowledge to compose and play live, original music.

"A human musician also builds a knowledge base by practicing and by listening and by learning and studying," Thomas said. "So the thing we're proposing to do is analogous to the way a human learns, but eventually it will be able to do this on a much larger scale. It can scour thousands of transcriptions instead of dozens or hundreds."

Many people might not consider music a form of communication, but Paul Cohen, an AI researcher and head of the Communicating with Computers project, thinks music shares many qualities with the spoken and written word.

"Jazz, as with conversation, is sort of an interesting mixture of creativity and very tightly, ruled-down behavior," Cohen told Tech Insider. "There are strict rules about improvisation, following particular harmonic lines and making sure your timing is right. You can't end a phrase at the wrong place. It has to be done at exactly the right time and place."

Thomas thinks that making computers as convincingly creative as humans will make collaborations between humans and computers smoother and more efficient. For Thomas, jazz is the best way to model human creativity.

"In my mind, jazz and improvisation in music represent a pinnacle of human intellectual and mental achievement," Thomas said. "The ability to, on the fly and in the moment, create melodies that are goal-directed, that are going somewhere, doing something and evincing emotion in the listener, is really, really amazing."

Within five years, Thomas hopes to build an AI system that can improvise an electronic jazz number alongside a human musician. Following that: a robot that can manipulate musical instruments and accompany human musicians on stage.

But you don't have to wait five years to watch intelligent machines play music. Engineers from Japan to Germany are already building robots you can program to play pre-written songs.

Then there's Mason Bretan, a PhD student from Georgia Tech. He's been jamming alongside "Shimi" robots, which can partially improvise.

In the video below, Bretan provided the arrangement of his parts, and a recording of their tracks and cues. "But in between," including the mallet solo, the robots are "doing their own thing based on his chord progressions," according to the Washington Post.


NOW WATCH: California is defying nature by growing coffee beans


Siri and Google Now hold the keys to the future


Soon, your voice will be the best way to search for anything online, period. 

We're already living in a mobile world — sales of both iOS and Android devices have passed Windows PCs, and their ecosystems dwarf anything you'd find on a desktop computer.

But we're still using our hands to find what we're looking for, just as we did with desktop computers, and books before them.

Using our hands to operate computers, while commonplace, is inefficient for mobile devices. And that's because the mobile world we live in comes with several challenges and limitations:

  • Everything is smaller on a smaller screen.
  • Typing is harder on a smaller screen.
  • Finding and using content from all of the mobile apps you've downloaded isn't always easy.
  • Since mobile devices are typically used on the go, you want to get things done as quickly as possible.

That's why voice assistants like Siri and Google Now are so important — and why they'll be vital for the future of computing. 

How advanced digital assistants might work

Right now, people approach digital assistants to answer simple questions:

  • "How many cups in a gallon?"
  • "What's the capital of Nebraska?"
  • "What's the score of the Yankees game?"

Soon, though, digital assistants will be able to accomplish much more complex tasks.

Let's say you want to have a romantic evening with your partner. You can tell Siri or Google Now, "Help me create a romantic evening tonight," and it would complete the following tasks in mere seconds:

  • Find a restaurant with "romantic" properties — maybe mood lighting, or a specific type of cuisine that reviewers have found "romantic"
  • Find a romantic night activity like a movie, or a location that's been reviewed as "romantic," like a park or a promenade
  • Your voice assistant could schedule these things into your calendar, but know to leave enough time between these activities so you can get to each location in time

Another example: Maybe you want to book a vacation to Hawaii. Just ask Google Now or Siri and it could instantly help you find and book the following items, in quick succession:

  • An open week in your calendar far out in advance to save you some money on reservations
  • Cheap roundtrip flights via the travel apps you've downloaded
  • Affordable Airbnb or hotel rentals in Hawaii
  • Attractions like guided snorkeling tours or hula shows

It's not about questions and answers; it's about having a conversation

Ron Kaplan, who leads the natural language artificial intelligence lab at Nuance, a pioneer in voice-enabled technologies, has spent years building a better bridge between humans and machines.

He told Tech Insider that "we're at the cusp" where digital sources like Expedia or OpenTable can be easily accessed and controlled with just our voices.

But if you've ever used Siri, you've probably had this experience: You ask a question, and Siri gets the answer wrong because it A) doesn't know the answer, or B) didn't understand the question.

Unfortunately, when a voice assistant gets something wrong, you're forced to start all over again.

"You can go down these blind alleys, where [the voice assistant] misunderstands it's about to do something that's not as you intended, and there's no easy way out of that other than restarting," Kaplan said. "That's very different than what a human conversation would be."

Soon, voice assistants like Siri and Google Now will learn the principles of conversation: You won't have to restart a conversation if the voice assistant doesn't understand your request. It would just need to identify what it doesn't understand, and ask for clarification. (Of course, it's difficult to teach a computer to know when it doesn't know something!)

"It's never going to be perfect," Kaplan said. "They don't have to get it right all the time as long as it's easy to repair the misunderstandings... I understand 80% of what my wife says to me, but if I get something wrong — I don't take out the garbage, I empty the dishwasher instead — it gets corrected, quickly. This notion of repair, recognizing when things go off track on either side [of the conversation] — crossing that boundary is going to be important."

Of course, having a human-like conversation with our devices is only part of the recipe towards a hands-free future.

Expanding beyond the smartphone

A great personal assistant needs to be an expert at everything: People want their assistants to be great at navigating, finding restaurants, and helping you with day-to-day tasks at work and home. But soon, we might see personal assistants that are unique and specialized for certain tasks, markets and industries.

Mike Thompson, executive VP and general manager of Nuance's mobile division, has witnessed the evolution of his company's development on personal assistants: first on feature phones, and eventually on mobile devices and the mobile web.

Thompson argues that personal assistants need to be specialized: the experience on a smartphone is different from a wearable device, which is different from a virtual reality device, which is different from a car. We want our personal assistants to do things relevant to those devices and designed around their capabilities.

"It's such a complex world out there, we're working with hundreds of companies to build personal assistants for individual use cases," Thompson tells us. "There's no doubt Apple and Google and Microsoft will center on their mobile phone experiences and that consumer wedge, but we're much broader than that: We're reaching deep into the enterprise; that's not typically a place Apple or Google would go with their personal assistant experiences. Similarly, around the TV space, we've seen incredible pick-up of personal assistants that are designed for the TV ecosystem, which is pretty sophisticated."

Google seems to be taking a different approach from Nuance. A Google spokesperson told Tech Insider that the company is focusing on one assistant that can "organize information and make it accessible to you, no matter where you are." But Thompson believes specialized assistants are required to help organize the "tremendous amount of data" from personal assistants.

"The rate of progress personal assistants have made in the last three years is mind-blowing," Thompson said. "We're seeing improvements on a week to week, month to month basis. People have become familiar with the personal assistants that are on their phone, and that familiarity is broadening beyond that to other devices. There's an expanding world of personal assistants and device makers that want to do unique things — unique segments like healthcare and enterprise, where the personal assistants are very unique. We're working on different kinds of applications that cut across different segments. That's been our expansion. That's what we've been working on."


NOW WATCH: We asked Siri the most existential question ever and she had a lot to say

Here’s the real reason researchers issued a dire warning about weapons that can aim and fire all by themselves


An open letter, signed by more than a thousand top artificial intelligence researchers, urges the United Nations to ban autonomous weapons.

The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, includes signatories like Tesla and SpaceX CEO Elon Musk alongside leading AI scientists like Stuart Russell and Google director of research Peter Norvig.

The letter chillingly warns that, if developed, autonomous weapons would soon become the "Kalashnikovs of tomorrow."

"Kaloshnikov" is the Russian slang term for automatic rifles better known by Americans as AKs — cheap, durable, and the "world's most popular gun," according to Mother Jones.

It cautions that autonomous weapons, or weapons that can target and fire without a human at the controls, could be hijacked by warlords and used against civilians or in rogue assassinations, similar to how AKs have flourished in the hands of terrorists.

"[Autonomous intelligent weapons are] likely to be used not just in wars between countries, but the way Kalashnikovs are used now ... in civil wars," Russell told Tech Insider. "[Kalashnikovs are] used to terrorize populations by warlords and guerrillas. They're used by governments to oppress their own people."

A life in fear of terrorists or governments armed with autonomous artificially intelligent weapons "would be a life for many human beings that is not something I would wish for anybody," Russell said.

Heather Roff, an assistant teaching professor at the Josef Korbel School of International Studies at the University of Denver and a contributor to and signatory of the letter, told Tech Insider that autonomous weapons systems could be crudely reproduced on the cheap and would be easy enough to smuggle.

Like AKs, autonomous weapons could easily get into the hands of terrorists, warlords, and criminals, who could wreak havoc on civilian populations with impunity.

"Small arms proliferation is a really big problem in the world," Roff said. "The types of technologies that AI and autonomous weapons can produce are not high-cost systems. You can build rather crude systems rather easily and those can proliferate."

The letter also warned widespread use of autonomous systems by militaries could bring about a "third revolution in warfare," much like the creation of guns and nuclear bombs before it.


NOW WATCH: We found out if the 5-second rule is a real thing

One of Mark Zuckerberg's mind-blowing predictions about the future already exists


From telepathy to total immunity from disease, Facebook CEO Mark Zuckerberg didn't shy away from bold predictions about the future during a Q&A on his Facebook profile on June 30.

But one of Zuckerberg's technology dreams is on the verge of coming true: a computer that can describe images in plain English to users.

Zuckerberg thinks this machine could profoundly change how people, especially the vision-impaired, interact with their computers.

"If we could build computers that could understand what's in an image and could tell a blind person who otherwise couldn't see that image, that would be pretty amazing as well," Zuckerberg wrote. "This is all within our reach and I hope we can deliver it in the next 10 years."

In the past year, teams from the University of Toronto and Universite de Montreal, Stanford University, and Google have been making headway in creating artificial intelligence programs that can look at an image, decide what's important, and accurately, clearly describe it.

This development builds on image recognition algorithms that are already widely available, like Google Images and other facial recognition software. It's just taking it one step further. Not only does it recognize objects, it can put that object into the context of its surroundings.

"The most impressive thing I've seen recently is the ability of these deep learning systems to understand an image and produce a sentence that describes it in natural language," said Yoshua Bengio, an artificial intelligence (AI) researcher from the Universite de Montreal. Bengio and his colleagues, recently developed a machine that could observe and describe images. They presented their findings last week at the International Conference on Machine Learning.

"This is something that has been done in parallel in a bunch of labs in the last less than a year," Bengio told Business Insider. "It started last Fall and we've seen papers coming out, probably more than 10 papers, since then from all the major deep learning labs, including mine. It's really impressive."

Bengio's program could describe images in fairly accurate detail, generating a sentence by looking at the most relevant areas in the image.

It might not sound like a revolutionary undertaking. A young child can describe the pictures above easily enough.

But doing this actually involves several cognitive skills that the child has learned: seeing an object, recognizing what it is, realizing how it's interacting with other objects in its surroundings, and coherently describing what she's seeing.

That's a tough ask for AI because it combines vision and natural language, different specializations within AI research.

It also involves knowing what to focus on. Babies begin to recognize colors and focus on small objects by the time they are five months old, according to the American Optometric Association. Looking at the image above, young children know to focus on the little girl, the main character of that image. But AI doesn't necessarily come with this prior knowledge, especially when it comes to flat, 2D images. The main character could be anywhere within the frame.

In order for the machines to identify the main character, researchers train them with thousands of images. Bengio and his colleagues trained their program with 120,000 images. Their AI could recognize the object in the image, while simultaneously focusing on relevant sections of the image while in the process of describing it.

computer describes image"As the machine develops one word after another in the English sentence, for each word, it chooses to look in different places of the image that it finds to be most relevant in deciding what the next word should be," Bengio said. "We can visualize that. We can see for each word in the sentence where it's looking in the image. It's looking where you would imagine ... just like a human would."
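The word-by-word "looking" Bengio describes is typically implemented as soft attention: at each decoding step the model scores every region of the image against its current state, turns the scores into weights, and builds the next word from the weighted mix. Below is a minimal numpy version of that weighting step; the dimensions, vectors, and projection matrix are arbitrary stand-ins rather than the published model.

```python
# Minimal soft-attention step, in the spirit of attention-based captioning models.
# The feature and state vectors are random stand-ins; this is not Bengio's code.
import numpy as np

rng = np.random.default_rng(1)
regions = rng.normal(size=(14 * 14, 512))   # image split into regions, one feature vector each
decoder_state = rng.normal(size=256)        # the captioner's state while producing the next word
W = rng.normal(size=(512, 256)) * 0.01      # learned projection (random here)

scores = regions @ W @ decoder_state        # how relevant each region is right now
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax: weights over image regions

context = weights @ regions                 # weighted summary the model "looks at" for this word
print(weights.argmax(), context.shape)      # index of the most-attended region, (512,)
```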

According to Scientific American, the system's responses "could be mistaken for that of a human" about 70% of the time.

However, it seemed to falter when the program focused on the wrong thing, when the images had multiple figures or if the images were more visually complicated. In the batch of images below, the program describes the top middle image, for example, as "a woman holding a clock in her hand" because it mistook the logo on her shirt for a clock.

The program also misclassified objects that it had identified correctly in other images. For example, the giraffes in the upper right picture were mistaken for "a large white bird standing in a forest."

It did, however, correctly predict what every man and woman wants: to sit at a table with a large pizza.


NOW WATCH: We tried cryotherapy — the super-cold treatment LeBron James swears by

This startup uses artificial intelligence to predict whether a Hollywood film will be a hit or a flop — just by scanning the script


There’s a lot of literature that claims a successful Hollywood script can be broken down into a formula.

The most famous is the screenplay guide Save the Cat! by guru Blake Snyder, which has swept through the screenwriting world with its minute-by-minute formula for how to wow the audience.

Now one startup is taking a crack at the screenwriting formula from a machine learning perspective. Vault, an Israeli artificial intelligence company, has created a program that claims to be able to tell whether a film will be a hit or a flop, simply by reading the script.

But how?

David Stiff, Vault’s CEO and co-founder, says it hinges on an intensive analysis of 300,000 to 400,000 story “features,” which can be things like themes or level of violence. All these story features are pulled from the script by his program with no human input.

Vault trained its AI using script data from films going back to 1980, when Stiff says there was a shift in Hollywood toward the “blockbuster” model. The team fed the system the script, allowing it to compare data points to the box office performance data.

Stiff now claims his algorithm can predict the box office performance of a film with 65% to 70% accuracy. This is an extremely high percentage given that only 20% of movies make their money back, he says.
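Vault hasn't published its feature set or model, but the general recipe of extracting numeric story features from each script, pairing them with historical box-office outcomes, and training a classifier can be sketched as below. Every feature name, score, and label here is invented for illustration.

```python
# Hedged sketch of a script-to-box-office classifier. Features, scores, and
# labels are invented; this is not Vault's model or data.
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["violence_level", "romance_theme", "redemption_theme", "humor_density"]

# Each row: story features scored from a script (hypothetical values, 0-1 scale).
scripts = [
    [0.8, 0.1, 0.6, 0.2],
    [0.2, 0.9, 0.3, 0.5],
    [0.7, 0.2, 0.8, 0.7],
    [0.1, 0.3, 0.1, 0.1],
]
made_money_back = [1, 0, 1, 0]  # past box-office outcome for each script

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(scripts, made_money_back)

new_script = [[0.6, 0.4, 0.7, 0.6]]
print("hit probability:", model.predict_proba(new_script)[0][1])

# Which story features the model leans on most.
for name, importance in zip(FEATURES, model.feature_importances_):
    print(name, round(importance, 3))
```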

When I press Stiff to reveal what factors are most important to a successful film, he cites themes. “If we take out themes from our predictions, our rates drop dramatically,” he says. This makes sense. There are themes that have been recycled from the time of Ancient Greece to now, and they still move us.

But one aspect Stiff thinks Hollywood puts too much emphasis on is the star power of actors. His AI can also suggest actors a studio could cast, based on the script, but the focus is on saving money. Stars can be useful at the box office, but a series of high-profile flops from actors like Johnny Depp prove that even an acting legend can’t save a sinking script. This thinking runs counter to the prevailing wisdom in Hollywood, which places screenwriters more toward the bottom of the food chain.


Stiff says if he had to write a film based on what he’s learned, he’d use no-name actors in an action comedy with a budget of $30 million. But he stresses that the formula is too complex to be “gamed” in a straightforward way. It would be easier for him to select a portfolio of 10 movies, he says.

This nod to diversification harkens back to the team's background in algorithmic trading. While Vault works with both studios and investors, Stiff envisions the program working particularly well for people who want to wade into the film-funding marketplace.

Vault started its analysis efforts with film because the team thought it would be hard, Stiff says. There are so many elements that go into a film besides the script that, if they could nail this, they could easily move into other industries. To that end, Stiff says the company plans to move into TV and even publishing.

And when has Vault been the most wrong?

Stiff said his AI thought that the latest Terminator movie was going to be a hit, but it’s looking like a disappointment at the box office. Maybe it was just a case of robots favoring robots.  

You can see Vault’s full 2015 predictions, a mixture of indie and studio films, by clicking here.

SEE ALSO: Apple’s new music service could be an epic flop because of this one major issue


NOW WATCH: An incredible true story is brought to life in the riveting first trailer for 'The 33'

I hired a virtual PA for a week and it's made me a far more organized person than I ever was before


It's unusual to meet a journalist who has their own personal assistant. I've never met one. So I was well aware of the task I had ahead of me convincing my friends, colleagues, and contacts that I now had a PA.

And, I didn't even really have a PA. For a week, I decided to try out "Amy Ingram," a virtual PA designed to schedule meetings for busy executives, by taking away all the e-mail ping-pong that goes with organizing a rendezvous.

In practice, all you need to do is link your calendars and then CC "Amy Ingram/amy@x.ai" into an e-mail conversation about setting up a meeting. She then takes over, chats to your contact like a real PA to sort the best time and location, and the next thing you should expect to see is a calendar invite.

Dennis Mortensen, the CEO of the company that created Amy Ingram, x.ai, told me earlier this month that some people have been so convinced she is human, they've even sent her flowers, whiskey, and chocolates.

I had to try it out for myself.

The set-up

It all began with a friendly "hi."


I felt compelled to be polite back. "Hi Amy, nice to meet you. I hate meetings before 11AM — so please try to avoid those as I am usually busy in the mornings. Thank you, Lara."

I had to scold myself: She's not real! Yet it felt odd constructing an email with only the most basic of information, eschewing any sort of salutation. 

Connecting my calendars took just a couple of clicks as I was already signed into my Google accounts.

I was ready. Amy was ready. I had reached the big time.

A rocky start

The first meeting Amy attempted to arrange was via a PR I have known for years. I casually dropped Amy into our e-mail conversation.


Chris was baffled.


So was Amy.


But, Amy: We hadn't decided yet! We were just trying to carve out a time slot! We can sort the finer details later! 

OK Amy, just this once. 

I sent an email asking Chris where we should meet. He replied back with a vague location. I responded asking for the exact address. He sent me the address. I forwarded it over to Amy.

It was starting to feel a lot like email ping-pong.

I didn't hear any more from either of them until two days later. Chris emailed:


Chris hadn't been replying to Amy's polite emails attempting to suggest a time. Instead, he had gone ahead and sorted the meeting himself. The (human) PA of the person I was due to be meeting sent me a calendar invite. Eventually, Chris got back to Amy to let her know it had all been set up without her.

Sure enough, Amy let me know too.


But continuing to use Amy actually made me better at organizing meetings

My first experience using Amy was far from seamless. But it highlighted the biggest mistake I didn't even know I was making when trying to set up meetings myself: I'm not specific enough from the outset. And that's why the email ping-pong occurs.

I asked Mortensen — Amy's creator — for some advice. He suggested I email Amy letting her know my favorite places for breakfast, coffee, lunch, and also my default meeting types: like phone calls, or at my office.

I don't really have a "default." One of the best things about being a journalist is you get to explore other people's offices and different parts of the city. Fortunately, Mortensen said I can also tell Amy that the location is "TBD."

From there on in, organizing meetings became a breeze. Those who didn't know me too well didn't question the fact that I was adding Amy into our conversations. Those who did were just as incredulous as Chris. One contact responded: "Well if I’d known you had people I would have got my people to speak to your people."

Later, I had to admit to a confused contact I had known all my career that I merely had a "virtual PA."

His response: a "HAL" reference:


Nevertheless, despite not being human, Amy was very realistic. Even when meetings needed to be rescheduled (from my end or theirs), Amy seemed to understand, and the next email I'd see from her would be an updated calendar invite.

I'm the first to admit that I'm not the best at managing my inbox, and can often miss emails. My worst habit is "starring" emails and then forgetting to look at them later. Having Amy on board gave me reassurance that I wasn't letting things slide.

Usefully, she would also send me a copy of the conversations she had been having with my colleagues, so I could keep tabs on her. Amy also forwarded me a weekly meeting summary. She said she had scheduled seven meetings for me, adding: "I can happily say that there are no outstanding tasks."

Putting Amy to the test

It was difficult to resist having a little fun. I wondered whether Amy would respond to a general chit-chat email that had nothing to do with setting up meetings.


And what about an "important business meeting" with a friend?


Amy responded with a calendar invite from 5pm to 8pm on the evening I requested. Oh Amy, you don't know me at all.

Would Amy pass probation?

My biggest issue using Amy was her erratic response times. Sometimes meetings would be sorted in minutes flat. Other times — particularly if I emailed her in the morning (GMT) — she could take hours to respond. I mentioned this to my Friday night pub friend, and he became immediately suspicious (pub friends like to conspire) that Amy might be more human than she was letting on, and that I was simply waiting for a real person in the US to wake up and deal with my scheduling.

That thought quickly exited my head, though: X.ai has raised more than $11 million in funding to date, with a $40 million valuation, and Mortensen is a long-time data analytics entrepreneur. He sold his company Visual Revenue to Outbrain, was the COO of Indextools when it sold to Yahoo, and he sold his other company Canvas Interactive to TJ Group. 

I asked him why Amy was sometimes a little tardy. He responded:

We are moving towards a setting where Amy is near instant, but even in her current incarnation she tends to beat most human assistants in response time and working days (given her 24/7 machine setting). However, things can get queued up for multiple reasons, mostly due to verification needed or simply waiting for response on either Guest or Host side, but also due to potential response ambiguity (for her). We operate in a supervised learning environment where we go for accuracy over speed, to ensure high quality in our training data (and product). So a sentence or simple time expression might be pulled aside and that delays things a bit. All while building this verticalized AI.

Another bugbear I had with Amy was her email signature: 


It's a bit of a giveaway! People might question my new (artificial) elevated status! (Mortensen says the paid-for version of Amy Ingram will allow for customization of the email signature.)

And, ultimately, that "status" was my main issue with using Amy. I am busy, and I do set up lots of meetings, but I'm certainly not an executive. I'm sure many of the people requesting meetings with me last week (and didn't notice the email signature) thought I was getting a little ahead of myself — or that Business Insider had some serious cash to burn, and was giving its section editors their own personal lackeys.

So I probably won't continue using Amy, and it'll be bittersweet saying goodbye as she was very efficient — I'd recommend her to actual executives (thousands of executives are already using Amy Ingram, including former Havas CEO David Jones, now the owner of You & Mr Jones.) Despite her not being an actual person, she's taught me a valuable lesson about being a lot more organized and specific from the outset when it comes to arranging meetings.

x.ai is currently in beta. You can sign up to the waiting list here.

SEE ALSO: People are sending flowers and chocolate to thank personal assistant 'Amy Ingram' — what they don't realize is she's a robot


NOW WATCH: This drummer created a whole song by only using the sound of coins

These weapons can find a target all by themselves — and researchers are terrified


On July 27, over a thousand artificial intelligence researchers, including Google director of research Peter Norvig and Microsoft managing director Eric Horvitz, co-signed an open letter urging the United Nations (UN) to ban the development and use of autonomous weapons.

The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, now has 16,000 signatures, according to The Guardian. It also featured signatures from Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking.

The letter's worries about "smart" weapons that can kill on their own sound like a science fiction trope — humans cowering in fear of killer robots they've unknowingly created.

In reality, these killer robots are being knowingly created, under the guise of being "semi-autonomous"— after picking out a target and aiming, they need a human to pull the trigger. To be fully autonomous, they wouldn't need a human to OK the kill shot.

There is a case to be made for these killer robots. According to The Conversation, autonomous and semi-autonomous weapons "are potentially more accurate, more precise" and "preferable to human war fighters."

But the development of semiautonomous weapons is secretive, and it's unclear what part humans play in choosing and firing on targets, Heather Roff, an assistant professor at the University of Denver's Josef Korbel School of International Studies and a contributor to the letter, told Tech Insider.

Without guidance from the UN, Roff believes, the current secrecy may set a dangerous precedent for truly autonomous weapons hiding under a misleading moniker, and the consequences could be disastrous.

Here are some weapons systems that are so advanced they are worrying researchers.

The Samsung SGR-1

The Samsung SGR-1 patrols the border between North and South Korea, called the Demilitarized Zone. South Korea installed the stationary robots, developed by Samsung Techwin and Korea University, according to NBC News.

Roff said the SGR-1 was initially built with the capability to detect, target, and shoot intruders from two miles away.

"In that sense, it's a really sophisticated landmine, it can sense a certain thing and can automatically fire," she said.

But Peter Asaro, the co-founder of the International Committee for Robot Arms Control, told NBC News that South Korea received "a lot of bad press about having autonomous killer robots on their border."

Now the SGR-1 can only detect and target; it requires a human operator to approve the kill shot.

The Long Range Anti-Ship Missile

The long range anti-ship missile, or LRASM, is currently being developed by Lockheed Martin and recently aced its third flight test. The LRASM can be fired from a ship or plane and can autonomously travel to a specified area, avoiding obstacles it might encounter outside the target area, said Roff. The missile will then choose one ship out of several possible options within its destination based on a pre-programmed algorithm.

"The missile does not have an organic ability to choose and prosecute targets on its own," Lockheed Martin said in an email to Tech Insider. "Targets are chosen, modeled and programmed prior to launch by human operators. There are multiple subsystems that ensure the missile prosecutes the intended targets, minimizing collateral damage. While the missile does employ some intelligent navigation and routing capabilities, avoiding enemy defenses, it does not meet the definition of an autonomous weapon. The LRASM missile navigates to a pre-programmed geographical area, searches for its pre-designated targets, and after positive identification, engages the target."

A second email from Lockheed Martin said that "the specifics on how the weapon identifies and acquires the intended target are classified and not releasable to the public."

The vagueness with which the LRASM locks on to its target may leave too much room for error, Roff said.

"Is it the heat? Is it the shape? Is it the radar signature? Is it a weighting of all of these things, that the one in the middle with the most signatures on it is the best target?" she said. "The decision process of how that gets made isn't clear. We also don't know if it's always going to be a military object when it has all of those things."

The Harpy

The Israel Aerospace Industries' (IAI) Harpy is a "fire-and-forget" autonomous drone system mounted on a vehicle that can detect, attack and destroy radar emitters, according to the Times of Israel. The Harpy can "loiter" in the air for a long period of time as it searches for enemy radar emitters before it fires "with a very high hit accuracy," according to the IAI's website.

IAI HARPYBut Roff said the system may not have enough any safeguards as to where the radar is located.

"If you have radar emitters, it's mobile, so you can put it anywhere you want," she said. "The Israeli Harpy is going to fly around for several hours and it's going to try to locate that signature. It's not making a calculation about that signature, if it's next to a school. It's just finding a signal ... to detonate."

The Taranis

The British Aerospace (BAE) Systems' war drone Taranis was named after the Celtic God of Thunder. According to BAE's website, the stealthy Taranis is capable of "undertaking sustained surveillance, marking targets, gathering intelligence, deterring adversaries, and carrying out strikes in hostile territory," with the guiding hand of a human operator.

BAE Systems also wrote that it is collaborating with other companies to develop "full autonomy elements," though the autonomous functions are unclear.

In a 2013 UN report, Christof Heyns, a UN Special Rappeuter, wrote that the Taranis is one of the robotic systems whose "development is shrouded in secrecy."

A human in the loop

That secrecy is a driving force for the open letter, Roff said.

In 2012, the Department of Defense established a five-year directive that all weapons in operation and development will have a human in the loop.

Jared Adams, the director of Media Relations at DARPA told Tech Insider in an email that the DOD "explicitly precludes the use of lethal autonomous systems," as stated by a 2012 directive.

"The agency is not pursuing any research right now that would permit that kind of autonomy," Adams said. "Right now we're making what are essentially semi-autonomous systems because there is a human in the loop."

But Roff said it's unclear exactly what is autonomous and where the human is. Most weapons systems will have had a human in control at some point – whether a human pre-programs the weapon to look for a target that fits specific characteristics or if it's a human pressing the button to fire. To Roff, it's important to determine where humans should come into play early on, before the technology becomes too advanced.

"What does meaningful human control mean? What does select and engage mean and when does that occur?" Roff said. "These are serious questions. How far removed does the human being have to be?"

SEE ALSO: Here's why Alaska's gun problem is so bad

Join the conversation about this story »

NOW WATCH: Why BMI is BS

CVS and IBM are teaming up to use AI to predict when people are getting sick


IBM will begin to use its Watson artificial intelligence system to improve patient care for a growing number of people, especially those with chronic diseases, the company said Thursday, in a statement announcing a new partnership with CVS.

Teaming up with CVS — which has more than 7,800 pharmacies and more than 900 MinuteClinics — will give IBM a chance to unleash its Watson Health platform on the massive population of patients served by the pharmacy giant. The resulting analytics, IBM suggests, could help patients manage chronic diseases like diabetes and hypertension and help predict which patients are getting sicker — sometimes before they even know it.

"We're very excited about the ability to leverage a data-driven [approach] and cognitive analytics to better understand and predict what's going to happen with folks," Kathy McGroddy-Goetz, VP of Partnerships and Solutions for Watson Health, told Tech Insider.

Patients tend to be at their local CVS much more frequently than at the doctor, McGroddy-Goetz pointed out. People are also increasingly using fitness trackers like FitBits, smartwatches, and even Bluetooth-enabled scales that are all collecting data patients can choose to share with a provider. All that data isn't worth much though unless it is carefully interpreted — something Watson can do much more efficiently than a team of people.

"The Watson computing system can access health records, pharmacy information and other resources to help CVS Health employees provide guidance to patients and work with primary care doctors," Jennifer Calfas explained in USA Today.

A drop in activity levels, a sudden change in weight, or prescriptions that aren't being filled are the kinds of things that might be flagged by the system, McGroddy-Goetz suggested, adding that the way it works will vary depending on what information a patient chooses to share and what chronic conditions they are managing. Certain changes could even indicate a developing sickness before someone feels ill — and certainly before someone decides to visit the doctor.
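In spirit, the checks McGroddy-Goetz describes amount to simple screening rules over the data a patient chooses to share. Here is a rough, purely hypothetical sketch of that kind of rule-based flagging; the field names, thresholds, and logic are invented for illustration and are not how Watson actually works.

```python
# Purely illustrative sketch of rule-based flagging like the checks described
# above. Field names and thresholds are invented; Watson's actual models and
# data pipeline are not public at this level of detail.
def flag_patient(record):
    """Return a list of reasons a care team might want to follow up."""
    flags = []
    if record.get("weekly_steps_change_pct", 0) <= -30:
        flags.append("activity dropped sharply")
    if abs(record.get("weight_change_lbs_30d", 0)) >= 10:
        flags.append("sudden change in weight")
    if record.get("days_since_refill_due", 0) > 7:
        flags.append("prescription not refilled on time")
    return flags

patient = {"weekly_steps_change_pct": -42,
           "weight_change_lbs_30d": 3,
           "days_since_refill_due": 12}
print(flag_patient(patient))
# ['activity dropped sharply', 'prescription not refilled on time']
```

A real system would learn such patterns from data rather than hard-coding them, but the output is the same kind of early-warning flag described above.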

McGroddy-Goetz also mentioned the potential usefulness of a Watson-CVS combined system to insurance companies and self-insured employers especially. "It could help address absenteeism and help people have a better quality of life," she said. "That's a win for everyone."

While McGroddy-Goetz emphasized that users will get to decide how much data they want to share, "big data" solutions to healthcare problems often raise some privacy concerns. 

"Federal patient privacy rules ... don't apply to most of the information the gadgets are tracking," Ariana Eunjung Cha pointed out in The Washington Post, in a story on digital health in general, not Watson in particular. "Unless the data is being used by a physician to treat a patient, the companies that help track a person’s information aren’t bound by the same confidentiality, notification and security requirements as a doctor's office or hospital."

That's part of the trade-off in what McGroddy-Goetz calls "empowering [patients] to take care of their own health."

The specifics and logistics of the partnership are still being worked out. IBM hopes to roll out the Watson system in CVS stores starting sometime in 2016.



Japanese robots have now figured out how to comfort sad humans


It's finally come to this: humans are relying on robots for emotional support. 

Pepper, Japan's latest robot to join the humanoid ranks, was built by the telecom giant SoftBank. A $1,650 consumer version, announced in June, comes with cameras, sensors, and accelerometers that can track human emotion (to a point) and even allow Pepper to produce its own "emotions," the creators say.

And, if the marketing materials are to be believed, it can even make you feel better when you're sad. 

A month after the public release to consumers, SoftBank has now set an October launch date for its "Pepper for Biz" model, which will offer a variety of services to the businesses that rent it out: greeting visitors, reciting programmable phrases, assisting customers, conducting product demos, and interacting with people on a personal level. SoftBank can then use the emotional information as part of future marketing efforts.

In 2014, when SoftBank debuted Pepper at its Japanese headquarters, the company said it envisioned Pepper entering all different fields, from high-stakes jobs like nursing and babysitting to more menial roles like party guest.

Beginning October 1, businesses will be able to rent Pepper for $16,000. After the three-year contract expires, they give the robot back to the manufacturer.

If all goes according to plan, however, people might never want to give Pepper back.


Check out all the terrifying semi-autonomous 'killer robot' weapons humans have already created


On July 27, over a thousand artificial intelligence researchers, including Google director of research Peter Norvig and Microsoft managing director Eric Horvitz, co-signed an open letter urging the United Nations (UN) to ban the development and use of autonomous weapons.

The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, now has 16,000 signatures, according to The Guardian. It also features signatures from Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking.

The letter's worries about 'smart' weapons that can kill on their own sound like a science fiction trope — humans cowering in fear of killer robots they've unknowingly created.

In reality, these killer robots are being knowingly created, under the guise of being "semi-autonomous" — after picking out a target and aiming, they need a human to pull the trigger. To be fully autonomous, they wouldn't need a human to OK the kill shot.

There is a case to be made for these killer robots. According to The Conversation, autonomous and semi-autonomous weapons "are potentially more accurate, more precise" and "preferable to human war fighters."

But the development of semi-autonomous weapons is secretive, and it's unclear what part humans play in choosing and firing on targets, Heather Roff, an assistant professor at the University of Denver's Josef Korbel School of International Studies and a contributor to the letter, told Tech Insider.

Without guidance from the UN, Roff believes, the current secrecy may set a dangerous precedent for truly autonomous weapons hiding under a misleading moniker, and the consequences could be disastrous.

Here are some weapons systems that are so advanced they are worrying researchers.

The Samsung SGR-1

The Samsung SGR-1 patrols the border between North and South Korea, called the Demilitarized Zone. South Korea installed the stationary robots, developed by Samsung Techwin and Korea University, according to NBC News.

Roff said the SGR-1 was initially built with the capability to detect, target, and shoot intruders from two miles away.

"In that sense, it's a really sophisticated landmine, it can sense a certain thing and can automatically fire," she said.

But Peter Asaro, the co-founder of the International Committee for Robot Arms Control, told NBC News that South Korea received "a lot of bad press about having autonomous killer robots on their border."

Now the SGR-1 can only detect and target, but requires a human operator to approve the kill shot.

The Long Range Anti-Ship Missile

The long range anti-ship missile, or LRASM, is currently being developed by Lockheed Martin and recently aced its third flight test. The LRASM can be fired from a ship or plane and can autonomously travel to a specified area, avoiding obstacles it might encounter outside the target area, said Roff. The missile will then choose one ship out of several possible options within its destination based on a pre-programmed algorithm.

"The missile does not have an organic ability to choose and prosecute targets on its own," Lockheed Martin said in an email to Tech Insider. "Targets are chosen, modeled and programmed prior to launch by human operators. There are multiple subsystems that ensure the missile prosecutes the intended targets, minimizing collateral damage. While the missile does employ some intelligent navigation and routing capabilities, avoiding enemy defenses, it does not meet the definition of an autonomous weapon. The LRASM missile navigates to a pre-programmed geographical area, searches for its pre-designated targets, and after positive identification, engages the target."

A second email from Lockheed Martin said that "the specifics on how the weapon identifies and acquires the intended target are classified and not releasable to the public."

The vagueness with which the LRASM locks on to its target may leave too much room for error, Roff said.

"Is it the heat? Is it the shape? Is it the radar signature? Is it a weighting of all of these things, that the one in the middle with the most signatures on it is the best target?" she said. "The decision process of how that gets made isn't clear. We also don't know if it's always going to be a military object when it has all of those things."

The Harpy

The Israel Aerospace Industries' (IAI) Harpy is a "fire-and-forget" autonomous drone system mounted on a vehicle that can detect, attack and destroy radar emitters, according to the Times of Israel. The Harpy can "loiter" in the air for a long period of time as it searches for enemy radar emitters before it fires "with a very high hit accuracy," according to the IAI's website.

But Roff said the system may not have any safeguards as to where the radar is located.

"If you have radar emitters, it's mobile, so you can put it anywhere you want," she said. "The Israeli Harpy is going to fly around for several hours and it's going to try to locate that signature. It's not making a calculation about that signature, if it's next to a school. It's just finding a signal ... to detonate."

The Taranis

The British Aerospace (BAE) Systems' war drone Taranis was named after the Celtic God of Thunder. According to BAE's website, the stealthy Taranis is capable of "undertaking sustained surveillance, marking targets, gathering intelligence, deterring adversaries, and carrying out strikes in hostile territory," with the guiding hand of a human operator.

BAE Systems also wrote that it is collaborating with other companies to develop "full autonomy elements," though the autonomous functions are unclear.

In a 2013 UN report, Christof Heyns, a UN Special Rapporteur, wrote that the Taranis is one of the robotic systems whose "development is shrouded in secrecy."

A human in the loop

That secrecy is a driving force for the open letter, Roff said.

In 2012, the Department of Defense established a five-year directive requiring that all weapons in operation and development have a human in the loop.

Jared Adams, the director of Media Relations at DARPA, told Tech Insider in an email that the DOD "explicitly precludes the use of lethal autonomous systems," as stated by a 2012 directive.

"The agency is not pursuing any research right now that would permit that kind of autonomy," Adams said. "Right now we're making what are essentially semi-autonomous systems because there is a human in the loop."

But Roff said it's unclear exactly what is autonomous and where the human is. Most weapons systems will have had a human in control at some point – whether a human pre-programs the weapon to look for a target that fits specific characteristics or if it's a human pressing the button to fire. To Roff, it's important to determine where humans should come into play early on, before the technology becomes too advanced.

"What does meaningful human control mean? What does select and engage mean and when does that occur?" Roff said. "These are serious questions. How far removed does the human being have to be?"


Here's why we should build killer robots


Last Monday, more than a thousand artificial-intelligence researchers cosigned an open letter urging the United Nations to ban the development and use of autonomous weapons.

Presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, the letter features signatures from prominent AI researchers, including Google director of research Peter Norvig, alongside Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking. Since last Monday, more than 16,000 additional people have signed the letter, according to The Guardian.

The letter says the development of autonomous weapons, or weapons that can target and fire without a human at the controls, could bring about a "third revolution in warfare," much as the creation of guns and nuclear bombs did before it.

While killer robots sound terrifying, there are some real reasons that weapons powered by sophisticated AI may even be preferable to humans.

Autonomous weapons would take human soldiers out of the line of fire and potentially reduce the number of casualties in wars. Killer robots would be better soldiers all around — they're faster, more accurate, more powerful, and can take more physical damage than humans.

Stuart Russell, an AI researcher and the coauthor of "Artificial Intelligence: A Modern Approach," is a vocal advocate for the ban on autonomous weapons. Even as he fears that autonomous weapons could fall into the wrong hands, however, he acknowledges that there are some valid arguments for autonomous weapons.

"I've spent quite a long time thinking about what position I should take," Russell told Tech Insider. "They can be incredibly effective, they can have much faster reactions than humans, they can be much more accurate. They don't have bodies so they don't need life support ... I think those are the primary reasons various militaries, not just the UK but the US, are doing this."

Autonomous weapons wouldn't become afraid, freeze up, or lose their tempers. They could do their jobs without allowing their emotions to color their actions. IEEE Spectrum's Evan Ackerman wrote that autonomous weapons could be programmed to follow the rules of engagement and other laws that govern war.

"If a hostile target has a weapon and that weapon is pointed at you, you can engage before the weapon is fired rather than after in the interests of self-protection," he wrote. "Robots could be even more cautious than this, you could program them to not engage a hostile target with deadly force unless they confirm with whatever level of certainty that you want that the target is actively engaging them already."

Robot ethicist Sean Welsh echoes this idea in The Conversation, where he writes that killer robots would be "completely focused on the strictures of International Humanitarian Law and thus, in theory, preferable even to human war fighters who may panic, seek revenge, or just plain [mess] stuff up."

Ackerman suggests doing away with the misconception that technology is either "inherently good or bad" and focusing instead on how it is used. He suggests coming up with a way to make "autonomous armed robots ethical."

"Any technology can be used for evil," Ackerman wrote. "Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil: we'd need a much bigger petition for that."

Heather Roff, a contributor to the open letter and a professor at the University of Denver's Josef Korbel School of International Studies, wouldn't disagree with him.

"The United States military doesn't want to give up smart bombs," Roff told Tech Insider. "I, frankly, probably wouldn't want them to give that up. Those are very discriminate weapons. However, we do want to limit different weapons going forward [that] have no meaningful human control."

And this recent public outcry likely won't be enough to stop an international war machine that is already building semi-autonomous weapons that identify and aim at targets by themselves. Many, such as the Australian navy's anti-missile and close-in weapons systems, attract no scrutiny or objections.

"Why? Because they're employed far out at sea and only in cases where an object is approaching in a hostile fashion," defense researcher Jai Gaillot wrote for The Conversation. "That is, they're employed only in environments and contexts whereby the risk of killing an innocent civilian is virtually nil, much less than in regular combat."

And, as Ackerman writes, it may be impossible to stop the tank now that it's rolling, and "barriers keeping people from developing this kind of systems are just too low."


Intelligent robots don't need to be conscious to turn against us


Last week Elon Musk, Stephen Hawking, and more than 16,000 researchers signed an open letter warning against the dangers of autonomous weapons.

One of the top signatories is Stuart Russell, a computer scientist who studies artificial intelligence (AI) and founded the Center for Intelligent Systems at the University of California, Berkeley. Russell is also the co-author of "Artificial Intelligence: A Modern Approach," a textbook about AI used in more than 100 countries.

In the past few months, Russell has urged scientists to consider the possible dangers AI might pose, starting with another open letter he wrote in January 2015. That dispatch called on researchers to only develop AI they can ensure is safe and beneficial.

Russell spoke to Tech Insider about AI-powered surveillance systems, what the technological "singularity" actually means, and how AI could amplify human intelligence. He also blew our minds a little on the concept of consciousness.

Below is that conversation edited for length, style, and clarity.

TECH INSIDER: You chose a career in AI over one in physics. Why?

STUART RUSSELL: AI was very much a new field. You could break new ground quite quickly, whereas a lot of the physicists I talked to were not very optimistic either about their field or establishing their own career. There was a joke going around then: "How do you meet a PhD physicist? You hail a taxi in New York."

TI: That's funny.

SR: It's slightly different now. Some PhD physicists write software or work for hedge funds, but physics still has a problem with having very smart people but not enough opportunities.

TI: What's your favorite sci-fi depiction of AI?

SR: The one I would say is realistic, in the not-too-distant future, and also deliberately not sensationalistic or scary, is the computer in "Star Trek" onboard the Enterprise. It just acts as a repository of knowledge and can do calculations and projections, essentially as a completely faithful servant. So it's a very non-controversial kind of computer and it's almost in the background. I think that's sort of the way it should be.

In terms of giving you the willies, I think "Ex Machina" is pretty good.

TI: If the Enterprise computer is realistic, what sci-fi depiction would you say is the least realistic?

SR: There's a lot of them. But if everyone was nice and obedient, there wouldn't be much of a plot.

In a lot of movies there is an element of realism, yet the machine somehow spontaneously becomes conscious – and either evil or somehow intrinsically in opposition to human beings. Because of this, a lot of people might assume 1) that's what could actually happen and 2) they have reason to be concerned about the long-term future of AI.

I think both of those things are not true, except sort of by accident. It's unlikely that machines would spontaneously decide they didn’t like people, or that they had goals in opposition to those of human beings.

But in "Ex Machina" that's what happens. It's unclear how the intelligence of the robot is constructed, but the few hints that they drop suggest it's a pretty random trial-and-error process. Kind of pre-loading the robot brain with all the information of human behavior on the web and stuff like that. To me that's setting yourself up for disaster: not knowing what you're doing and not having a plan and trying stuff willy nilly.

In reality, we don't build machines that way. We build them with precisely defined goals. But say you have a very precisely defined goal and you build a machine that's superhuman in its capabilities for achieving goals. If it turns out that the subsequent behavior of the robot in achieving that goal was not what you want, you have a real problem.

The robot is not going to want to be switched off because you've given it a goal to achieve and being switched off is a way of failing — so it will do its best not to be switched off. That's a story that isn't made clear in most movies, but I think it is a real issue.

TI: What’s the most mind-blowing thing you’ve learned during your career?

SR: Seeing the Big Dog videos was really remarkable. Big Dog is a four-legged robot built by Boston Dynamics that, in terms of its physical capabilities, is incredibly lifelike. It's able to walk up and down steep hills and snow drifts and to recover its balance when it's pushed over on an icy pond and so on. It's just an amazing piece of technology.

Leg locomotion was, for decades, thought to be an incredibly difficult problem. There has been very, very painstakingly slow progress there, and robots that essentially lumbered along at one step every 15 seconds and occasionally fell over. Then, all of the sudden, you had this huge quantum leap in leg locomotion capabilities with Big Dog.

Another amazing thing is the capability of the human brain and the human mind. The more we learn about AI and about how the brain works, the more amazing the brain seems. Just the sheer amount of computation it does is truly incredible, especially for a couple of pounds of meat.

A lot of people say that sometime around 2030, machines will be more powerful than the human brain in terms of the raw number of computations they can do per second. But that seems completely irrelevant. We don't know how the brain is organized, or how it does what it does.

TI: What's a common piece of AI people use every day that they might take for granted?

SR: Google or other search engines. Those are examples of AI, and relatively simple AI, but they're still AI. That plus an awful lot of hardware to make it work fast enough.

TI: Do you think if people thought about search engines as AI, they'd think differently about offering up information about their lives?

SR: Most of the AI goes into figuring out which are the important pages you want, and to some extent what your query means and what you're likely to be after, based on your previous behavior and other information it collects about you.

It's not really trying to build a complete picture of you as a person, as yet. But there are lots of other companies that are doing this. They're really trying to collect as much information as they can about every single person on the planet because they think it's going to be valuable, and it probably already is.

Here's a question: If you're being watched by a surveillance camera, does it make a difference to you whether a human is watching the recording? What if there's an AI system that can actually understand everything you're doing, and, if you're doing something you're not supposed to (or something that might be of interest to the owner of the camera), would describe what was going on in English and report it to a human being? Would that feel different from having a human watch directly?

The last time I checked, the Canadian supreme court said it is different: If there isn't a human watching through a camera, then your privacy is not being violated. I expect that people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what it's seeing.

TI: What's the most impressive real-world use of AI technology you've ever seen?

SR: One would be DeepMind's DQN system. It essentially just wakes up, sees the screen of a video game, and works out how to play the video game to a superhuman level. It can do that for about 30 different Atari titles. And that's both impressive and scary, in the sense that it's as if a human baby were born and, by the evening of its first day, were already beating adult human beings at video games.

In terms of a practical application, though, I would say object recognition.

TI: How do you mean?

SR: AI's ability to recognize visual categories and images is now pretty close to what human beings can manage, and probably better than a lot of people's, actually. AI can have more knowledge of detailed categories, like animals and so on.

There have been a series of competitions aimed at improving standard computer vision algorithms, particularly their ability to recognize categories of objects in images. It might be a cauliflower or a German shepherd. Or a glass of water or a rose, any type of object.

The most recent large-scale competition, called ImageNet, has around a thousand categories. And I think there are more than a million training images for those categories — more than a thousand images for each category. A machine is given those training images, and for each of the training images it's told what the category of objects is.

Let's say it's told a German shepherd is in an image, and then the test is that it's given a whole bunch of images it's never seen before and is asked to identify the category. If you guessed randomly, you'd have a 1-in-1,000 chance of getting it right. Using a technology called deep learning, the best systems today are correct about 95% of the time. Ten years ago, the best computer vision systems got about 5% right.
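(For concreteness: top-1 accuracy on a benchmark like this is just the fraction of test images whose single predicted label matches the true label. The short sketch below uses synthetic labels, not real ImageNet data, and simply illustrates the measurement and why random guessing over 1,000 classes lands near 0.1%.)

```python
import random

def top1_accuracy(predicted, actual):
    """Fraction of examples where the single predicted label is exactly right."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Random guessing over 1,000 classes hovers around 0.001 (0.1%), the baseline
# Russell mentions; these labels are synthetic, not real ImageNet annotations.
random.seed(0)
classes = list(range(1000))
truth = [random.choice(classes) for _ in range(100_000)]
guesses = [random.choice(classes) for _ in range(100_000)]
print(f"random baseline: {top1_accuracy(guesses, truth):.4f}")  # roughly 0.0010
print("reported deep-learning systems: ~0.95; humans after days of practice: ~0.96")
```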

There's a grad student at Stanford who tried to do this task himself, not with a machine. After he looked at the test images, he realized he didn't know that much about different breeds of dogs. In a lot of the categories, there were about 100 different breeds of dog, because the competition wanted to test an ability to make fine distinctions among different kinds of objects.

The student didn't do well on the test, at all. So he spent several days going back through all the training images and learned all of these different breeds of dogs. After days and days and days of work, he got his performance up to just above the performance of the machine. He was around 96% accurate. Most of his friends who also tried gave up. They just couldn't put in the time and effort required to be as good as the machine.

TI: You mentioned deep learning. Is that based on how the human brain works?

SR: It's a technique that's loosely based on some aspects of the brain. A "deep" network is a large collection of small, simple computing elements that are trainable.

You could say most progress in AI has come from gaining a deeper mathematical understanding of tasks. For example, chess programs don't play chess the way humans play chess. We don't really know how humans play chess, but one of the things we do is spot some opportunity on the chess board, such as a move to capture the opponent's queen.

Chess programs don't play that way at all. They don't spot any opportunities on the board, they have no goal. They just consider all possible moves, and they pick which one is best. It's a mathematical approximation to optimal play in chess — and it works extremely well.

So, for decision-making tasks and perception tasks, once you define the task mathematically, you can come up with techniques that solve it extremely well. Those techniques don't have to be how humans do it. Sometimes it helps to get some inspiration from the brain, but it's inspiration — it's not a copy of how the neural systems are wired up or how they work in detail.
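(As a toy illustration of the "consider all the moves, pick the best one" search Russell describes, here is a minimax routine for the simple game of Nim, where players remove one to three stones and whoever takes the last stone wins. Real chess engines add alpha-beta pruning and carefully tuned evaluation functions, so this is only a sketch of the underlying idea.)

```python
# Minimal minimax sketch for Nim: remove 1-3 stones; taking the last stone wins.
def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    """Score the position for the maximizing player: +1 win, -1 loss."""
    if stones == 0:
        # The previous player took the last stone, so the side to move has lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing) for take in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Consider every legal move and pick the one with the best resulting score.
    return max(legal_moves(stones), key=lambda take: minimax(stones - take, False))

if __name__ == "__main__":
    print(best_move(10))  # 2: leaving 8 stones is a losing position for the opponent
```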

TI: What are the biggest obstacles to developing AI capable of sentient reasoning?

SR: What do you mean by sentient, do you mean that it's conscious?

TI: Yes, consciousness.

SR: The biggest obstacle is we have absolutely no idea how the brain produces consciousness. It's not even clear that, if we did accidentally produce a sentient machine, we would know it.

I used to say that if you gave me a trillion dollars to build a sentient or conscious machine I would give it back. I could not honestly say I knew how it works. When I read philosophy or neuroscience papers about consciousness, I don't get the sense we're any closer to understanding it than we were 50 years ago.

TI: Because we don't really know how the brain works?

SR: It's not just that. We don't know how the brain works, in the sense that we don't know how it produces intelligence. But that's a different question from how it produces consciousness.

There is no scientific theory that could lead us from a detailed map of every single neuron in someone's brain to telling us how that physical system would generate a conscious experience. We don't even have the beginnings of a theory whose conclusion would be "such a system is conscious."

TI: I suppose the singularity is not even an issue right now then.

SR: The singularity has nothing to do with consciousness, either.

It's really important to understand the difference between sentience and consciousness, which matter to human beings, and intelligence. But when people talk about the singularity, when people talk about superintelligent AI, they're not talking about sentience or consciousness. They're talking about superhuman ability to make high-quality decisions.

Say I'm a chess player and I'm playing against a computer, and it's wiping the board with me every single time. I can assure you it's not conscious, but it doesn't matter: It's still beating me. I'm still losing every time. Now extrapolate from a chess board to the world, which in some sense is a bigger chess board. If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine; they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.

TI: What is the most common misconception of AI?

SR: That what AI people are working towards is a conscious machine, and that until you have a conscious machine, there's nothing to worry about. It's really a red herring.

To my knowledge nobody — no one who is publishing papers in the main field of AI — is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress. No one has a clue how to build a conscious machine, at all. We have less of a clue about how to do that than we have about how to build a faster-than-light spaceship.

TI: What about a machine that's convincingly human, one that can pass the Turing Test?

SR: That can happen without being conscious at all. Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby. There are people who do work on passing the Turing Test in various competitions, but I wouldn't describe that as mainstream AI research.

The Turing Test wasn't designed as the goal of AI. It was designed as a thought experiment to explain to people who were very skeptical, at the time, that the possibility of intelligent machines did not depend on achieving consciousness — that you could have a machine you'd have to agree was behaving intelligently because it was behaving indistinguishably from a human being. So that thought experiment was there to make an argument about the importance of behavior in judging intelligence as opposed to the importance of, for example, consciousness. Or just being human, which is not something machines have a good chance of being able to do.

And so I think the media often gets it wrong. They assume that everyone in AI is trying to pass the Turing Test, and nobody is. They assume that that's the definition of AI, and that wasn't even what it was for. 

TI: What are most AI scientists actually working toward, then?

SR: They're working towards systems that are better at perceiving, understanding language, operating in the physical world, like robots. Reasoning, learning, decision-making. Those are the goals of the field.

TI: Not making a Terminator.

SR: It's certainly true that a lot of funding for AI comes from the defense department, and the defense department seems to be very interested in greater and greater levels of autonomy in AI, inside weapons systems. That's one of the reasons why I've been more active about that question.

TI: What's the most profound change that intelligent AI could bring to our lives, and how might that happen?

SR: We could have self-driving cars — that seems to be a foregone conclusion. They have many, many advantages, and not just the fact that you can check your email while you're being driven to work.

I also think we'll see systems that are able to process and synthesize large amounts of knowledge. Right now, you're able to use a search engine, like Google or Bing or whatever. But those engines don't understand anything about the pages that they give you; they essentially index the pages based on the words they contain, intersect that with the words in your query, and use some tricks to figure out which pages are more important than others. But they don't understand anything.
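(For concreteness, the index-and-intersect behavior Russell describes looks roughly like this toy in-memory inverted index. Real engines layer ranking signals such as link analysis and personalization on top, and the documents below are invented for illustration.)

```python
# Toy inverted index: map words to documents, then intersect on the query words.
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query word (no understanding involved)."""
    word_sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

docs = {
    1: "autonomous weapons and the UN open letter",
    2: "Watson helps CVS patients manage chronic disease",
    3: "an open letter on AI safety",
}
index = build_index(docs)
print(search(index, "open letter"))  # {1, 3}: keyword match, not comprehension
```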

If you had a system that could read all the pages and understand the context, instead of just throwing back 26 million pages to answer your query, it could actually answer the question. You could ask a real question and get an answer as if you were talking to a person who read all those millions and billions of pages, understood them, and synthesized all that information.

So if you think that search engines right now are worth roughly a trillion dollars in market capitalization, systems with those kinds of capabilities might be worth 10 times as much. Just as, 20 years ago, we didn't really know how important search engines would be for us today, it's very hard to predict what kind of uses we'd make of assistants that could read and understand all the information the human race has ever generated. It could be really transformational.

Basically, the way I think about it is everything we have of value as human beings — as a civilization — is the result of our intelligence. What AI could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward. It might be curing disease, it might be eliminating poverty. Certainly it should include preventing environmental catastrophe.

If AI could be instrumental to all those things, then I would feel it was worthwhile.


A new robot helper could make daily chores astronomically more fun


The robots are coming, and that's not necessarily a bad thing. In fact, a robot could be the best roommate you've ever had.

New research has imbued a robot arm with a special skill: the ability to offer a helping hand doing the dishes.

The researchers used body tracking technology to teach robots how to pay attention to and react to a human's body movements when working together around the house.

This has been a big problem for scientists trying to develop robots that will collaborate with humans — they have trouble adapting to our personal style. "We want robots to follow our lead, or at least plan their actions with an awareness of ours," Bilge Mutlu of the University of Wisconsin-Madison told MIT Technology Review.

Mutlu and collaborators from the University of Wisconsin-Madison and the University of Washington published the study in the Robotics: Science and Systems online proceedings.

If you're unloading a dishwasher with another person, you will speed up or slow down how quickly you hand off each dish according to your partner's readiness. Robots helping with such a task should learn that same awareness, so as to avoid smacking you with a plate while your back is turned.

To teach the robot, researchers recorded a pair of volunteers unloading dishes in different ways. A Microsoft Kinect documented the movements of the volunteers' body joints. The program then "watched" this interaction, using the thousands of joint readings from the Kinect to train an algorithm.

The team then analyzed the movements of eight teams and found that the "giver" handing the dishes off to the "receiver" monitored and adapted to the receiver's pace, pausing or slowing down until their partner was ready.

They trained a Kinova Mico robot arm with their program and put it to the test in the same scenario. The robot's algorithm was able to predict when the "receiver" was ready for a dish about 90% of the time, according to MIT Technology Review, by tracking the partner's movements in real time and adjusting its speed accordingly.
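The study's actual model isn't spelled out here, but the general shape of the pipeline, learning to predict readiness from tracked joint features, can be sketched with generic supervised learning. This assumes NumPy and scikit-learn are available, and the features and labels below are synthetic stand-ins for real Kinect joint readings.

```python
# Toy readiness-prediction pipeline: synthetic "joint" features, generic classifier.
# This is not the paper's model; it only illustrates the overall approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake Kinect-style features, e.g. wrist height, torso rotation, hand speed.
n = 200
X = rng.normal(size=(n, 3))
# Pretend "ready" roughly means an outstretched, slow-moving hand.
y = ((X[:, 0] > 0) & (X[:, 2] < 0.5)).astype(int)

model = LogisticRegression().fit(X[:150], y[:150])   # train on the first 150 samples
accuracy = model.score(X[150:], y[150:])             # evaluate on held-out samples
print(f"held-out readiness prediction accuracy: {accuracy:.2f}")
```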

While the idea of an intuitive, learning robot sounds extremely useful, the researchers note that this success only applies to this one task of unloading dishes. In the future, as reported by MIT Technology Review, they hope to apply it to other more useful chores such as unloading groceries, handing off tools, or guiding someone through a physical therapy session.

