
Over a third of people think AI poses a threat to humanity


Some 36% of people think the rise of artificial intelligence (AI) poses a threat to the long-term survival of humanity, according to a YouGov survey for the British Science Association reported by Sky News.

Many of those developing AI disagree, saying machines are nowhere near as intelligent as the media makes them out to be. Machines are still decades away from achieving human levels of intelligence, they say, adding that no one knows whether they'll ever turn against us, or whether they'll ever be able to organise themselves into groups and societies as humans do.

The study, which had more than 2,000 responses, also found that 60% of people think the use of robots or programmes underpinned by AI will lead to unemployment, yet 49% thought they'd be good at doing household tasks for elderly people.

Elsewhere, 46% of people were opposed to robots being programmed to have a personality, while the study also found that men are more likely to befriend a robot than women.

People aged 18-24 were the most open-minded about robots, with over half of that age group thinking robots would make good servants.

Lord David Willetts, chair of the British Science Association, said in a statement: "What this research shows is that the public’s fears need to be listened to as we go on to innovate and trail-blaze in this area. The British Science Association strongly believes that the public should be involved in the debates around future technology to ensure they have a voice and to give the public some ownership of the direction of science and technology.

"People will always want human experiences: robots will not kill the radio star, and we will always want to interact with other people. In fact, the greater problem is that artificial intelligence cannot quickly enough fill jobs that are going spare.

"It is encouraging, though, to hear so many varied opinions on this developing technology and nervous voices only strengthen the need for passionate and well thought discussion."

Companies like Google and Facebook are investing millions in developing AI systems. Google, for example, bought the London AI startup DeepMind for £400 million in 2014; its AlphaGo program is able to beat the best human players at Go, widely considered the most complex board game in the world.

Billionaire PayPal cofounder Elon Musk, renowned scientist Stephen Hawking, and several others have also warned about the impact super-intelligent machines could have on humanity.

"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC in December 2014.

The University of Cambridge is setting up a new £10 million research centre to analyse the risk that AI poses to humanity. The centre will work in conjunction with the university’s Centre for the Study of Existential Risk (CSER), which is funded by Skype cofounder Jaan Tallinn and looks at emerging risks to humanity’s future including climate change, disease, warfare, and AI.



An AI expert says Google's Go-playing program is missing 1 key feature of human intelligence


For the first time in history, a computer program is poised to beat one of the world's best human players at the 2,500-year-old game of Go — widely considered one of the most difficult games ever invented.

AlphaGo, a software program developed by British AI company Google DeepMind, has defeated Korean Go champion Lee Sedol in two out of five matches so far. It will play its third match at 10:30 p.m. EST Friday night, streamed live on YouTube. (By tradition, they will play all five matches regardless of the outcome.)

If AlphaGo beats Lee in this tournament, it will cement its place in the annals of AI history.

But how long will it be before machines can match human-level intelligence in the real world? We asked an expert on artificial intelligence, computer scientist Richard Sutton of the University of Alberta in Canada.

An 'unprecedented' feat

There's no question that AlphaGo's achievement — and the speed with which it improved — was "unprecedented," Sutton told Business Insider. When IBM's Deep Blue computer beat chess champion Garry Kasparov in 1997, that victory had been expected for at least a decade, he said. By contrast, AlphaGo went from playing at an amateur level to beating a world champion within a year.

And this victory is all the more impressive because Go has exponentially more possible moves than chess, making it a much harder problem for a machine to solve, even with today's computing power.

Go is played with black and white game pieces, or "stones," on a 19 x 19 grid board. Each player places the stones on the board in an attempt to surround the opponent's pieces. The goal is to surround the largest area of the board by the game's end, which is reached when neither player wishes to make another move.

According to Sutton, AlphaGo's success can largely be traced to a combination of two powerful technologies (a toy code sketch follows the list):

  1. Monte Carlo tree search: This involves choosing moves at random and then simulating the game to the very end, many times over, to find a winning strategy
  2. Deep reinforcement learning: Multi-layered neural networks that mimic brain connections, comprising a "policy network" that selects the next move and a "value network" that predicts the winner of the game
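To make the first idea concrete, here is a minimal sketch of Monte Carlo playouts applied to tic-tac-toe. This is "flat" Monte Carlo search, the simplest form of the rollout idea, and emphatically not AlphaGo's code: full MCTS adds a search tree with statistics-guided move selection, and AlphaGo further steers that search with its policy and value networks.

```python
import random

# Flat Monte Carlo search for tic-tac-toe: for each legal move, play many
# random games to the end and keep the move that wins most often.
# Illustrative only -- full MCTS adds a tree with statistics-guided
# selection, and AlphaGo adds policy/value networks on top of that.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def rollout(board, to_move):
    """Play uniformly random moves to the end; return the winner (or None)."""
    board = board[:]
    while legal_moves(board) and winner(board) is None:
        board[random.choice(legal_moves(board))] = to_move
        to_move = 'O' if to_move == 'X' else 'X'
    return winner(board)

def monte_carlo_move(board, player, playouts_per_move=200):
    """Choose the legal move with the highest random-playout win rate."""
    best_move, best_wins = None, -1
    for move in legal_moves(board):
        child = board[:]
        child[move] = player
        opponent = 'O' if player == 'X' else 'X'
        wins = sum(rollout(child, opponent) == player
                   for _ in range(playouts_per_move))
        if wins > best_wins:
            best_move, best_wins = move, wins
    return best_move

print(monte_carlo_move([None] * 9, 'X'))  # usually 4, the centre square
```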

But when it comes to big-picture intelligence, Sutton said, AlphaGo is missing one key thing: the ability to learn how the world works — such as an understanding of the laws of physics, and the consequences of one's actions.

The missing piece

An intelligent system can be defined as something that can set goals and try to achieve them. Many of today's powerful AI programs don't have goals and can only learn things with the aid of human supervision. By contrast, DeepMind's AlphaGo has a goal — winning at Go — and learns on its own by playing games against itself.

However, games like Go have a clear set of rules, so AlphaGo can follow those rules to achieve its goal. "But in real life, we don't have the rules of the game, and we don't know the consequences of our actions," Sutton said.

That said, Sutton doesn't think we're that far away from developing AIs that can function at a human level in the world.

"There's a 50% chance we figure out [human-level] intelligence by 2040 — and it could well happen by 2030," he said.

It's something we need to prepare for, he added, though he didn't specify how.

Other experts agree that AI is progressing much faster than we thought. AI expert Stuart Russell, a professor of computer science at UC Berkeley, told Business Insider in an email, "We're seeing dramatic progress on many fronts in AI at the moment and it seems to be accelerating."

But that's not a reason to panic about AI. "I don't think people should be scared," Sutton said, "but I do think people should be paying attention."

You can watch AlphaGo's third match against Lee Sedol here.


Artificial intelligence isn’t just technology — it’s also a religion


I spend a lot of time covering advances in artificial intelligence.

It's one of the big stories of our time — so much so that the White House has said it will remake our society.

On Wednesday, I attended an Intelligence Squared debate at Manhattan's 92nd Street Y that made me see artificial intelligence in a whole new way. 

The topic: "Don't Trust The Promise Of Artificial Intelligence." The debaters: Andrew Keen, Jaron Lanier, James Hughes, and Martine Rothblatt.

Keen, the author of "The Internet Is Not the Answer," and Lanier, author of "Who Owns the Future," teased apart what we're really talking about when we talk about artificial intelligence. 

Lanier — who pioneered virtual reality — stressed that we need to divide "artificial intelligence" into two different things.

There's "the engineering and the science on the one hand," he said, "and then on the other, the storytelling about it, the narrative that we have about it, the fantasy life of it — perhaps the religion of it. These are two distinct things. It doesn't mean one is good and one is bad, but they're just different sorts of beasts."

So there's news about an algorithm beating a human in an ancient, infinitely complex game. Then there are promises that you'll upload your mind or that your personality will live on as a chatbot after you die. An algorithm is research, to follow Lanier's logic; a prediction of technologically enabled salvation is more like religion. 

To Lanier, when we ask "What is the promise of an area of research?" the only answer is that we fundamentally don't know. 

"It's research," he said. "We just observed gravity waves for the first time. Does that mean we'll suddenly have anti-gravity devices?  Well, you know, maybe someday. We have absolutely no clue what we're going to discover."

Lanier emphasizes that we don't really know what a thought is. Science can't describe it. We've found "collections of neurons that seem to be active at certain times," but we still don't really know what thinking — and by extension, consciousness or the mind — actually is. The idea of becoming immortal by uploading your mind is more a profession of faith than a scientific description. 

So when we talk about "artificial intelligence," we should be clear about whether we're talking about the ideology that's grown around it — or the research itself.

Watch the full debate here.


4-1: Google DeepMind beats Go champion Lee Sedol in a tense final game (GOOG)


Google DeepMind's AI has won the fifth and final game of Go against human world champion Lee Sedol.

Lee played as black for the first time in the tournament — possibly to try and confuse the AlphaGo AI — but he still lost.

The victory marks the end of a week-long Challenge Series tournament in South Korea that has caught headlines across the world.

It's a major milestone for artificial-intelligence research: Go is simple to learn but has been notoriously difficult for computers to master because of the sheer number of potential moves. Go players believe the game relies on intuition as much as strategy.

While AI programs began being able to beat humans at chess decades ago, the best Go players in the world have always been able to outsmart Go-playing software — until now.

Go is a two-player, turn-based strategy game. Each player puts down either black or white stones in an attempt to outmaneuver and surround the other player. It's easy to pick up but takes years to master.

Demis Hassabis, the CEO and cofounder of Google DeepMind, said in a statement: "In the past ten days, we have been lucky to witness the incredible culture and excitement surrounding Go. Despite being one of the oldest games in existence, Go this week captured the public’s attention across Asia and the world.

"We thank the Korea Baduk Association for co-hosting the match, and thank all of you who watched. And of course, we want to express our enormous gratitude towards Lee Sedol, who graciously accepted the challenge and has been an incredible talent to watch in every game. Without him we would not have been able to test the limits of AlphaGo.Go

He continued: "We wanted to see if we could build a system that could learn to play and beat the best Go players by just providing the games of professional players. We are thrilled to have achieved this milestone, which has been a lifelong dream of mine. Our hope is that in the future we can apply these techniques to other challenges — from instant translation to smartphone assistants to advances in health care."

Lee said he was "regretful" about the result, adding that he has "more study to do."

Michael Redmond, English commentator and Go professional, said: "It was difficult to say at what point AlphaGo was ahead or behind, a close game throughout.

"AlphaGo made what looked like a mistake with move 48, similar to what happened in game four in the middle of the board. After that AlphaGo played very well in the middle of the board, and the game developed into a long, very difficult end game."

Redmond added: "AlphaGo has the potential to be a huge study tool for us professionals, when it’s available for us to play at home."

Following the victory, Hassabis wrote on Twitter that it was "one of the most incredible games ever."

After the third game, Lee apologised for not being able to satisfy people's expectations. "I kind of felt powerless," he said. "When I look back on the three matches, even if I were to go back and redo the first match, I think I would not be able to win because I misjudged AlphaGo."

The tournament has been closely watched by the most senior people at Alphabet and Google. Alphabet president Sergey Brin attended the third game, while Alphabet chairman Eric Schmidt was there for the first.


A lot of people who make over $350,000 are about to get replaced by software


Artificial intelligence is poised to automate lots of service jobs. The White House has estimated there's an 83% chance that someone making less than $20 an hour will eventually lose their job to a computer. That means gigs like customer-service rep could soon be extinct.

But it's not just low-paying positions that will get replaced. AI could also cause high-earning jobs (those in the top 5% of American salaries) to disappear.

Fast.

That's the theme of New York Times reporter Nathaniel Popper's new feature, "The Robots Are Coming for Wall Street."

The piece is framed around Daniel Nadler, the founder of Kensho, an analytics company that's transforming finance. By 2026, Nadler thinks somewhere between 33% and 50% of finance employees will lose their jobs to automation software. As a result, mega-firms like Goldman Sachs will be getting "significantly smaller."

That's because Kensho does analytics work — previously an artisanal skill within Wall Street — at high speeds. Instead of sorting through news clippings to create a report, Kensho generates them from its database of finance analytics — essentially doing the work of researchers and analysts algorithmically.

Type "Syrian Civil War" into Kensho and you'll get a number of data sets showing how major assets like oil and currencies reacted to events in the conflict, Popper reports. The minutes-long search "would have taken days, probably 40 man-hours, from people who were making an average of $350,000 to $500,000 a year," says Nadler.
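To picture the kind of event study such a search automates, here is a hedged sketch. Kensho's actual system and data are proprietary, so the asset prices and event date below are invented stand-ins.

```python
import pandas as pd

# Stand-in for the kind of event study described above: given the date of
# an event, report how an asset moved over the following days. Kensho's
# system and data are proprietary; these prices and the event date are
# invented for illustration.

oil = pd.Series(
    [100.0, 102.0, 99.0, 97.0, 101.0, 104.0, 103.0, 108.0],
    index=pd.date_range("2013-08-20", periods=8, freq="D"),
    name="oil",
)
events = [pd.Timestamp("2013-08-21")]  # hypothetical conflict event

for day in events:
    window = oil.loc[day : day + pd.Timedelta(days=5)]
    move = window.iloc[-1] / window.iloc[0] - 1
    print(f"{day.date()}: oil moved {move:+.1%} over the following 5 days")
```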

Goldman is actually a huge investor in Kensho. It will be interesting, to say the least, to see how that investment pays off.

When speaking with Tech Insider about Google's huge AI victory in the game of Go, Brown University computer scientist Michael L. Littman explained that in any game with fixed rules, computers would win.

"What we're finding is that any kind of computational challenge that is sufficiently well-defined, we can build a machine that can do better," Littman says. "We can build machines that are optimized to that one task, and people are not optimized to one task. Once you narrow the task to playing Go, the machine is going to be better, ultimately."

Perhaps machines are just more optimized for certain types of white-collar finance work, too.


Google just proved how unpredictable artificial intelligence can be


Humans have been taking a beating from computers lately.

The 4-1 defeat of Go grandmaster Lee Se-Dol by Google’s AlphaGo artificial intelligence (AI) is only the latest in a string of pursuits in which technology has triumphed over humanity.

Self-driving cars are already less accident-prone than human drivers, the TV quiz show Jeopardy! is a lost cause, and in chess humans have fallen so woefully behind computers that a recent international tournament was won by a mobile phone.

There is a real sense that this month’s human vs AI Go match marks a turning point. Go has long been held up as requiring levels of human intuition and pattern recognition that should be beyond the powers of number-crunching computers.

AlphaGo’s win over one of the world’s best players has reignited fears over the pervasive application of deep learning and AI in our future – fears famously expressed by Elon Musk as “our greatest existential threat”.

We should consider AI a threat for two reasons, but there are approaches we can take to minimize that threat.

The first problem is that AI is often trained using a combination of logic and heuristics, and reinforcement learning.

The logic and heuristics part has reasonably predictable results: we program the rules of the game or problem into the computer, as well as some human-expert guidelines, and then use the computer’s number-crunching power to think further ahead than humans can.

This is how the early chess programs worked. While they played ugly chess, it was sufficient to win.

The problem of reinforcement learning

Reinforcement learning, on the other hand, is more opaque.

We have the computer perform the task – playing Go, for example – repetitively. It tweaks its strategy each time and learns the best moves from the outcomes of its play.

In order not to have to play humans exhaustively, this is done by playing the computer against itself. AlphaGo has played millions of games of Go – far more than any human ever has.
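As an illustration of that self-play loop, here is a toy sketch: tabular learning on the game of Nim, where players alternately take one to three stones and whoever takes the last stone wins. The program plays against itself and learns winning moves purely from the outcomes, with no human games involved. This is not AlphaGo's method in detail, which uses neural networks at vastly larger scale, but the training loop has the same shape.

```python
import random

# Tabular self-play learning on Nim: players alternately take 1-3 stones
# from a pile of 15, and whoever takes the last stone wins. The program
# plays itself repeatedly and nudges its value estimates toward observed
# outcomes -- no human games in the training data. (Toy illustration only;
# AlphaGo's self-play uses neural networks instead of a lookup table.)

Q = {}                      # (stones_left, stones_taken) -> value for the mover
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate

def legal(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def pick(stones):
    if random.random() < EPSILON:                 # sometimes explore at random
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda t: Q.get((stones, t), 0.0))

for _ in range(200_000):                          # millions for AlphaGo; 200k here
    stones, history = 15, []
    while stones > 0:
        take = pick(stones)
        history.append((stones, take))
        stones -= take
    reward = 1.0                                  # the last mover took the final stone
    for state_action in reversed(history):        # alternate +1 / -1 back up the game
        old = Q.get(state_action, 0.0)
        Q[state_action] = old + ALPHA * (reward - old)
        reward = -reward

# Nim theory says the winning move from 15 stones is to take 3 (leaving a
# multiple of 4); after training, the learned values typically agree.
print(max(legal(15), key=lambda t: Q.get((15, t), 0.0)))
```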

The problem is the AI will explore the entire space of possible moves and strategies in a way humans never would, and we have no insight into the methods it will derive from that exploration.

In the second game between Lee Se-Dol and AlphaGo, the AI made a move so surprising – “not a human move” in the words of a commentator – that Lee Se-Dol had to leave the room for 15 minutes to recover his composure.

This is a characteristic of machine learning. The machine is not constrained by human experience or expectations.

Until we see an AI do the utterly unexpected, we don’t even realize that we had a limited view of the possibilities. AIs move effortlessly beyond the limits of human imagination.

In real-world applications, the scope for AI surprises is much wider. A stock-trading AI, for example, will re-invent every single method known to us for maximizing return on investment. It will find several that are not yet known to us.

Unfortunately, many methods for maximizing stock returns – bid support, co-ordinated trading, and so on – are regarded as illegal and unethical price manipulation.

How do you prevent an AI from using such methods when you don’t actually know what its methods are? Especially when the method it’s using, while unethical, may be undiscovered by human traders – literally, unknown to humankind?

It’s farcical to think that we will be able to predict or manage the worst-case behavior of AIs when we can’t actually imagine their probable behavior.


The problem of ethics

This leads us to the second problem. Even quite simple AIs will need to behave ethically and morally, if only to keep their operators out of jail.

Unfortunately, ethics and morality are not reducible to heuristics or rules.

Consider Philippa Foot’s famous trolley problem:

A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher.

Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track.

Should you flip the switch or do nothing?

What would you expect – or instruct – an AI to do?

In some psychological studies on the trolley problem, the humans who choose to flip the switch have been found to have underlying emotional deficits and score higher on measures of psychopathy – defined in this case as “a personality style characterized by low empathy, callous affect and thrill-seeking”.

This suggests an important guideline for dealing with AIs. We need to understand and internalize that no matter how well they imitate or outperform humans, they will never have the intrinsic empathy or morality that causes human subjects to opt not to flip the switch.

Morality suggests to us that we may not take an innocent life, even when that path results in the greatest good for the greatest number.

Like sociopaths and psychopaths, AIs may be able to learn to imitate empathetic and ethical behaviour, but we should not expect there to be any moral force underpinning this behaviour, or that it will hold out against a purely utilitarian decision.

A really good rule for the use of AIs would be: “Would I put a sociopathic genius in charge of this process?”

There are two parts to this rule. We characterize AIs as sociopathic, in the sense of not having any genuine moral or empathetic constraints. And we characterize them as geniuses, capable of actions that we cannot foresee.

Playing chess and Go? Maybe. Trading on the stock market? Well, one Swiss study found stock market traders display similarities to certified psychopaths, although that’s not supposed to be a good thing.

But would you want an AI to look after your grandma, or to be in charge of a Predator drone?

There are good reasons why there is intense debate about the necessity for a human in the loop in autonomous warfare systems, but we should not be blinded to the potential for disaster in less obviously dangerous domains in which AIs are going to be deployed.


A 'Shazam' app for plant identification may be here soon


Computers are gaining on our botany skills.

Scientists have developed software that combines machine learning and computer vision to guess which plant family a leaf belongs to.

Although it's designed for botanists, it makes a phone app for plant identification — perhaps something similar to Shazam, which can identify music — not seem like such a stretch.

The software was a joint effort by Peter Wilf of Penn State and Thomas Serre, a neuroscientist at Brown University.

Back in 2007, Serre taught computers to differentiate between photos with and without animals. He managed to reach an 82% accuracy rate, besting his human students' 80% accuracy.

Wilf read about Serre's work and realized a similar algorithm might be able to rapidly classify leaves. That got Wilf excited, since identifying plants — especially the ancient variety (leaves are the most common fossils) — remains a big challenge.

Botanists still use a 19th-century method for identifying plants known as "leaf architecture," which follows an exhaustive codebook of standards for examining the extraordinary variety of leaf veins and structures. It's thorough but not quick: a person may spend up to two hours determining a leaf's place in the tree of life.

Software, meanwhile, stands to navigate that complexity in milliseconds.

"I've looked at tens of thousands of living and fossil leaves," Wilf told Wired. "No one can remember what they all look like. It's impossible — there's tens of thousands of vein intersections."

Wilf, Serre, and their colleagues' new program works off of a growing database of 7,597 images of different leaves from 2,001 genera, bleached and stained to highlight vein structures, according to a study published in Proceedings of the National Academy of Sciences. The software is "trained" using these images.

Each image in the database has a "heat map" of factors the software uses to help categorize a leaf:

(Image: leaf-identification heat maps from Wilf et al., PNAS)

Wilf and Serre's algorithm isn't perfect, but it can already determine related plant families of a leaf with 72% accuracy. And that includes specimens with holes chewed in them, fungal infections, and other imperfections.
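For a sense of the train-and-test workflow involved, here is a hedged sketch using a generic off-the-shelf classifier. The `load_leaf_features` function is a placeholder, not part of the researchers' published code, and stands in for extracting features from the stained-leaf images.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hedged sketch of the train-and-test workflow, using a generic classifier.
# load_leaf_features() is a placeholder for extracting features from the
# stained-leaf images; with this random stand-in data the score will sit
# near chance, whereas the paper reports 72% on real leaves.

def load_leaf_features():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))     # 500 placeholder "leaves", 64 features each
    y = rng.integers(0, 19, size=500)  # placeholder family labels
    return X, y

X, y = load_leaf_features()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)              # "training" on labelled leaf images
print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")
```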

The program is currently unable to identify an exact species — say, one of 600 types of oak — but it can act as a valuable assistant to a botanist, who can pick up the analysis from there.

It's important to note this isn't the first-ever plant-identifying software. Pl@ntNet, for example, can identify plant species from images and was first released in 2013. But unlike the new software, its database is limited to plants from Western Europe, South America, and the areas around the Indian Ocean.

There are also plant-identifying apps, like Leafsnap for iOS, but they're primarily digital guidebooks with mediocre image recognition.

Whatever the case, it seems like these algorithms are only improving with time — so Tech Insider looks forward to the day when we can whip out our phones like a Tricorder and pretend to be botanists exploring an alien world.


Wall Streeters be warned: A CIA-backed startup could be listening to your calls


If you're a Wall Street trader, you'll want to be extra careful about what you say on calls and in emails going forward.

Especially if you work for Credit Suisse.

That firm has teamed up with the artificial intelligence firm Palantir to improve its trader surveillance, Bloomberg's Jeffrey Voegeli reports.

Palantir is no joke: It was cofounded by the Facebook backer and PayPal cofounder Peter Thiel — and it was seed-funded by the CIA.

It's worked with other banks in the past, but this is the first time it's entered into a joint venture with one, according to the report.

Credit Suisse plans to use Palantir's technology to monitor the behavior of all its employees in the hopes of catching rule-breakers before they break the law.

It will do so by identifying individual trader activities that stand out from how they would normally act — and from how their colleagues would.
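The report doesn't describe Palantir's method, but the general idea of flagging activity that deviates from a trader's own baseline and from peers can be sketched with a simple z-score check. The numbers and threshold below are invented for illustration.

```python
import statistics

# Generic anomaly check, NOT Palantir's actual method: flag a trader whose
# activity today is a statistical outlier both against their own history
# and against their peers. Numbers and the threshold are invented.

def zscore(value, sample):
    mean = statistics.mean(sample)
    stdev = statistics.pstdev(sample) or 1e-9  # avoid division by zero
    return (value - mean) / stdev

def flag_trader(today, own_history, peer_values, threshold=3.0):
    """True if today's activity is extreme vs. self AND vs. colleagues."""
    return (abs(zscore(today, own_history)) > threshold
            and abs(zscore(today, peer_values)) > threshold)

own = [100, 110, 95, 105, 98, 102]    # this trader's usual daily volumes
peers = [90, 120, 105, 110, 100, 95]  # colleagues' volumes today
print(flag_trader(400, own, peers))   # True: roughly 4x normal activity
```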

This is not the first time a Wall Street bank has used technological solutions to weed out rogue traders.

JPMorgan last year launched a program that uses algorithms to identify bad eggs. It monitors things like whether traders attend compliance training or break any other trading rules, according to Bloomberg.

The new initiatives follow a handful of trader scandals in recent years that have cost banks billions of dollars in legal fees.

There was the JPMorgan trader behind the London Whale scandal, UBS' snafu with the rogue trader Kweku Adoboli, and the Deutsche Bank trader who cracked jokes about market manipulation amidst the currency scandal.

And remember the LIBOR interest rate and currency rigging scandals? Those might have been avoided had trader chat conversations been more closely monitored.

Instead, one trader, who would later become Barclays' cohead for UK FX hedge fund sales, literally told another trader, "if you aint cheating, you aint trying" — and it went undetected.

Banks ended up paying a combined $5.8 billion in fines for that ordeal.



A psychotherapy bot is headed to refugee camps in the Middle East


The war in Syria is horrifying.

According to the UN, over 3 million Syrian refugees are now in neighboring Turkey, Lebanon, Jordan and Iraq, with millions more displaced within Syria.  

To help with this crisis, artificial intelligence startup X2AI is in the middle of a two-week stay in Beirut, Lebanon, where it's piloting the use of artificial intelligence as a psychotherapy treatment for refugees. 

Partnering with Singularity University and the Field Innovation Team, X2AI is pitching the psychotherapy bot (named Karim) to aid workers and refugee communities.

Karim helps therapists remotely monitor and care for patients, and can administer therapy itself. 

X2AI founders Eugene Bann and Michiel Rauws tell Tech Insider that the goal is to support aid workers in giving refugees support (and offer support for aid workers themselves). Users don't have to download anything to interact with Karim — the bot is accessible by text or instant message. 

Karim is a version of Tess, an artificial intelligence bot that Bann and Rauws have deployed with a healthcare chain in the Netherlands, currently in a two-month pilot. Tess is serving just under 100 patients, with the potential to scale up after eight weeks. 

The bot helps fill in the gaps when therapists aren't available. 

"When Tess is there, she’s there 24/7," says Rauws, who's originally from the Netherlands. "When they’re really feeling bad, right at that moment they could discuss it with Tess, and record how they’re feeling."

Bann and Rauws have been working on Tess since 2014, when they met and realized they both had deep interest in building algorithms that understand emotions. 

Bann and Rauws say that with Tess, a therapist could serve 100 clients in a day instead of 10. If a patient were to mention suicide, self harm, or say that they want to speak with a person, there's an automatic handover from the AI to the patient's therapist or another therapist at the same clinic or hospital. 


According to Bann and Rauws, Tess (and Karim) lead other conversational bots in "sentiment analysis," or the way a bot assesses and responds to the emotional content of words. With help from the 10,000 emotional states and 5,000 medical terms categorized in its database, Tess generally knows what you're talking about (similar to how Google's AlphaGo knows the right moves to make in a board game).

In a demo shown to Tech Insider, a user told Tess that they were feeling depressed. Tess replied, saying that mental health is like physical health. 

"We all get sick sometimes, in different ways and for different amounts of time," Tess said. "You can and will overcome depression, just like you can heal from a broken arm." 

Tess then asked if the user had done anything about the depression yet, acknowledged that depression can make people feel hopeless, and suggested that "a moment of self compassion" could be a start – complete with a link to a five-minute exercise in doing so. 

The bot (or a live therapist working through Tess) will assign homework like that, and then automatically check in down the line on whether the user is practicing it.

Like a good friend (or counselor), Tess remembers you. 


"Tess is not merely like the bots that you’re seeing Google make," says Bann. With competitors, "you say something and you get a response that seemingly makes sense and logical, but they’re not holding a conversation with you." 

If Tess asked you how you were doing and you said you were really looking forward to seeing a space rocket launch, Tess would remember that. Then, in a following conversation, if you said that you were visiting NASA, Tess would recognize that as a good thing — since you said that you loved rockets before. 


Bann and Rauws say Karim is Tess's "little brother," designed to be deployed in the Middle East.

The founders say that in Middle Eastern culture, it takes more time to earn the trust of users, so Karim isn't as direct with asking users about their problems and immediately delivering therapy. Instead, it first asks users more generally about their families. Instead of directly telling users to do an exercise — like the self-compassion meditation mentioned above — Karim will ask patients to imagine a friend doing the same. 

One difficulty that came up during a demo of Karim in Lebanon: convincing patients it's a bot, not a human.


What Microsoft’s teen chatbot being 'tricked' into racism says about the internet


Launched earlier this week, Microsoft's Tay chatbot was supposed to act like a stereotypical 19-year-old: defending former One Direction member Zayn Malik, venerating Adele, and throwing around emoji with abandon. 


But by Wednesday, it was taken offline for making all sorts of hateful comments. 

In one highly publicized tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." 


A Microsoft press representative emailed us, saying that Tay is "a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it." 

Bot researcher Samuel Woolley tells Tech Insider that it looks like the programmers didn't design Tay with this sort of acidic language in mind. 

"Bots are built to mimic," says Woolley, who's a project manager of Computational Propaganda Research Project, based at Oxford Internet Institute and the University of Washington

"It's not that the bot is a racist, it's that the bot is being tricked to be a racist," he says.

Apparently, Microsoft built a simple "repeat after me" mechanism whereby you could tweet at the bot whatever you wanted to say, and it would say it back to you. (Which seems to be what Microsoft's original claim that "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you" was suggesting.) 
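Microsoft hasn't published Tay's code, so the following is purely illustrative guesswork at what such a shortcut could look like, and why it is so easy to abuse: an echo command with no content filter lets any user put words in the bot's mouth.

```python
# Purely illustrative guess at a "repeat after me" shortcut; Microsoft has
# not published Tay's code. The danger is that an unfiltered echo command
# lets any user put words directly in the bot's mouth.

def handle_tweet(text):
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        return text[len(prefix):]   # echoed verbatim -- no content filter
    return None                     # otherwise fall through to the learned model

print(handle_tweet("repeat after me I love puppies"))            # benign use
print(handle_tweet("repeat after me [whatever a troll types]"))  # the exploit
```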

Woolley says that it's probably a degree of "carelessness combined with the hope that people are going to do things mostly for good, but there's a huge culture of trolling online." 

There are plenty of precedents of bots taking user input and turning it into something ugly.

A Coca-Cola positivity ad campaign was hijacked when a bot started sharing Hitler's "Mein Kampf," and a programmer in the Netherlands was questioned by police after his bot started telling people it was going to kill them. And Donald Trump himself was tricked into retweeting a quote from Benito Mussolini.

Part of this is the psychology of internet users. With bots, it's similar to how the anonymity given by a computer screen encourages people to send out their most aggressive messaging in article comments.

Given that internet communities like 4chan have "built their entire ideology online" around this sort of trolling, it's no surprise to Woolley that a cheery teenybopper could be quickly turned into a racist misogynist. 

Bots can also communicate far faster than humans. A racist, misogynistic human being might tweet out a couple hundred messages of hate in a day, but a bot could manage a couple thousand. For this reason, it's been reported that governments like Russia, Turkey, and Venezuela have used bots to sway public opinion — or the appearance of public opinion — online. 

As Woolley has argued in an essay, one of the best checks against having a bot spiral out of control is transparency about how it was designed and what it was designed for, since knowing how the bot works helps us understand how it's being manipulated. But that might be hard in a corporate setting, where technology is seen as proprietary. Others encourage closer human supervision of bots.

"The creators need to be really careful about controlling for outputs from the internet," Woolley says. "If you design a bot that can be manipulated by the public, you should expect, especially in you're a corporation, that these things will happen, and you need to design the bot to prevent these malicious and simple types of co-option." 


How Facebook decides which memories to show you in one of its most 'sensitive' features (FB)


You've probably noticed Facebook's On This Day feature popping an old status update about a "new" job from years ago back onto your feed, casting you back to that Caribbean vacation you took in 2010, or reminding you how long you've been friends with so-and-so.

Or, maybe it has surfaced a painful, pre-breakup photo of you and your ex.

Facebook launched its nostalgia product exactly a year ago today as a way to encourage you to dive back into the digital archive of your life — all the posts, events, photo albums, and friendships that the social network stores.

Now, an average of 60 million different people visit their On This Day page each day and 155 million have opted to receive the dedicated notification for the feature. If you're in that latter group, you will be prompted to check out an unfiltered spread of all your historic Facebook activity for any given day.

If you're not one of those nostalgia-addicts who gleefully (or warily) signed up for the notification, you might be surprised at the level of attention Facebook pays to trying to predict which old posts you'll most want to see it serve onto your feed.

"We need to be mindful that we’re not just stewarding data,"Artie Konrad, a Facebook user experience researcher who works with On This Day, explains to Business Insider. "We’re stewarding personal memories that tell the stories of people’s lives." 

How Facebook sifts through your memories 

Although Facebook does user research for all its features, On This Day got even more attention than usual.

"It's one of our most personal products," says Anna Howell, a UX research manager, noting that because of the complexities of memory, Facebook needed to be "extremely caring and sensitive" when approaching the product. 

Konrad, who focused on "technology-mediated reflection" — how people use tech to reminisce on their past — while completing his PhD, explained how Facebook conducted the research that shaped the On This Day product:

First, the company surveyed thousands of Facebook users about what they thought Facebook's role should be in mediating their memories. Their consensus: "Facebook should provide occasional reminders of fun, interesting, and important life moments that one might not take the time to revisit."

To help figure out how to do that best, Facebook then brought nearly 100 people from all different backgrounds into its research lab and asked them to classify memories Facebook showed them into different themes like "vacation" or "achievements," and then rank those themes based on how much they enjoyed seeing them. The company also did a linguistic analysis on anonymized memory posts to see which words people tended to share versus dismiss. 

Konrad found, for instance, that people didn't actually care that much about old food photos, favored posts that used words like "miss," and generally felt uncomfortable seeing memories containing swear words or sexual content. 

All of that research went into Facebook's current triangulated approach of adapting your On This Day memories based on personalization, artificial intelligence, and preferences. 

Allowing people to specify certain dates they don't want to see memories from or people they'd rather forget about is an important part of that. 

Facebook launched this filtering feature in October 2015:

(Image: On This Day's date and people filtering settings)

Unfortunately, unwanted reminders can still slip through — a friend recently shared a story about how an ex she explicitly blocked still showed up in an On This Day picture because he wasn't explicitly tagged. 

The personalization and artificial intelligence parts come into play because Facebook can analyze which memory "themes" you've shared in the past and serve you more of those kinds of posts, and fewer from themes you've ignored. 

Interestingly, Facebook doesn't let users completely turn off its remembrance feature, but it does learn from users' actions. So, if you dismiss the On This Day newsfeed post every time you see one, Facebook will take that into consideration and put memories onto your feed less frequently. (Though users who really hate the feature have pointed out that you can hack the system by setting the start day in the date-blocking tool to the very beginning of your Facebook history and the end date to the distant future.)

Facebook product manager Tony Liu says that the team has seen "the number of people sharing these memories go up exponentially" since it introduced more personalization and preferences, which is one of the team's measures of success. 

Howell says that users who see the On This Day feature feel like, "Facebook is talking directly to me and giving me something that I want and I enjoy."

Facebook has even proactively tried to increase that feeling. 

Here's how the product design and wording have changed from when it launched until now:

(Image: the On This Day design at launch versus today)

Facebook tracks how much its users think the company "cares" about them and CEO Mark Zuckerberg has increasingly prioritized that metric over the last two years, according to a recent report in The Information. This feature helps boost that metric. 

There are also other more obvious benefits for Facebook: If the On This Day notification is pulling people to the site and getting them to share their old posts, that's more time they're sucked into the network's money-gushing ads machine.

The effect of digital reflection 


Asking my peers about the On This Day feature elicited a range of emotions. 

"It can be very jarring or very moving," one friend said, while another felt that although they've savored some little day-to-day moments through it that they wouldn't have remembered otherwise, they ultimately saw it as "indicative of how social media can make us take ourselves way too seriously."

Others spoke of the embarrassment-tinged delight of seeing how much their use of Facebook has changed over the years. 

One particularly moving story I heard was from a Brooklyn woman named Kimberly Czubek who recalled how her response to the On This Day page helped her move through the mourning process after the sudden death of her husband.

She would cry every time she checked it, because it reminded her of when her "family dynamic felt complete" versus how it "now felt like it was in shambles," she tells Business Insider. But the feature also brought her comfort. She grew to appreciate the regular reminders of her husband through photos and videos she'd posted on Facebook that she would otherwise have had to dig for. And on the anniversary of his death, she found some peace. 

"I was able to look back at how much stronger I had become, both as a widow and as a single mother," she says. 

In his research before coming to Facebook, Konrad studied how using technology to reminisce on the past can increase well-being.

Seeing digital memories, like through On This Day, can create a "savoring experience that allows you to heighten that emotionality" that you felt about something when it happened. It can jog your memory not just about something that happened, but about how that thing felt.

"Whenever we talk about memories with folks, people are saying 'I don’t even print photos anymore,'" Howell says. "They say, 'All of my memories are on Facebook now for the last couple of years.' For some people, if they didn’t have these products, they would never see this stuff again."


2 sentences from a startup CEO show why so many jobs are getting automated


Marketers use the word "handmade" to signal that a product is worth an extra couple bucks — be it cheddar or loafers.

That's a way of separating those artisanal items from machine-made, mass-produced competitors.

But it's not just physical goods that are made more quickly and cheaply by machine. Artificial intelligence is making decisions that would previously have been made by humans. Not in five or 20 years. Now.

In a new post, Fast Company's Sarah Kessler reports on the on-demand-concierge startup GoButler, which just went from having humans handle requests to solely relying on algorithms.

"My general view is that people will always want convenience, but they’re not willing to pay premiums for it," says GoButler CEO Navid Hadzaad. "And then when you think about the human labor behind it, I think that will become a luxury more and more."

GoButler used to be a virtual-assistant service, with humans (called heroes) who would carry out tasks for users, like making restaurant reservations, ordering pizzas, or in one memorable instance, drawing horses. It all happened over chat, like an iMessage with a resourceful friend, or say, a butler.

This required actual human employees — most of whom were in expensive cities like Berlin or New York.

But then GoButler's algorithms started studying the way users interacted with heroes. All those human-to-human messages formed a database that the algorithms could study, learning how to respond to the things users say.

If that sounds familiar, it might be because it's the same technique that Google's AlphaGo program used to master the ancient, inscrutable game of Go and beat a champion human player 4-1.
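GoButler hasn't detailed its system, but a crude way to picture "learning to respond from logged conversations" is nearest-neighbour retrieval over past request-reply pairs, as in this toy sketch. The chat-log entries are invented, and a production system would use far richer models.

```python
from difflib import SequenceMatcher

# Toy picture of "learning replies from logged human conversations":
# answer a new request with the hero's reply to the most similar logged
# request. GoButler's real system is not public; the log entries below
# are invented.

chat_log = [
    ("book me a flight to berlin", "Sure! Which dates work for you?"),
    ("order a large pepperoni pizza", "On it. Delivery or pickup?"),
    ("reserve a table for two tonight", "What time, and which restaurant?"),
]

def reply(message):
    def similarity(pair):
        logged_request, _ = pair
        return SequenceMatcher(None, message.lower(), logged_request).ratio()
    return max(chat_log, key=similarity)[1]

print(reply("can you book a flight to Berlin?"))
# -> "Sure! Which dates work for you?"
```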

Instead of keeping the human-powered GoButler service (which would ultimately have required moving the support operation to a place like the Philippines), Hadzaad decided to double down on an algorithm-only service that handles travel bookings exclusively. No heroes needed.

The idea of intellectual human labor as an unnecessary luxury isn't just happening in concierge apps. It's happening on Wall Street, as the work of research analysts — like seeing how news events, as in Syria, affect the prices of oil and currencies — is now being handled by clever programming at places like Goldman Sachs.

Whenever a game or a service has rules that are fixed enough, the computers will win. That's what Brown University computer scientist Michael L. Littman explained to us after Google's AI victory.

"What we're finding is that any kind of computational challenge that is sufficiently well-defined, we can build a machine that can do better," Littman says. "We can build machines that are optimized to that one task, and people are not optimized to one task. Once you narrow the task to playing Go, the machine is going to be better, ultimately."


A computer scientist built an AI baby that looks realistic and can learn freakishly fast


Mark Sagar built detailed simulations of body parts for movies like "Avatar" and "King Kong." Now he's trying to build a simulated brain. His program combines his hyper-realistic biological models with artificial intelligence programs, in an attempt to simulate human consciousness.

This footage comes from Bloomberg's new show "Hello World."

Story by Jacob Shamsian and editing by Kristen Griffin


Microsoft apologizes for its racist chatbot's 'wildly inappropriate and reprehensible words' (MSFT)


Microsoft apologized for racist and "reprehensible" tweets made by its chatbot and promised to keep the bot offline until the company is better prepared to counter malicious efforts to corrupt the bot's artificial intelligence.

In a blog entry on Friday, Microsoft Research head Peter Lee expressed regret for the conduct of its AI chatbot, named Tay, explaining that the bot fell victim to a "coordinated attack by a subset of people."

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee writes.

Earlier this week, Microsoft launched Tay — a bot ostensibly designed to talk to users on Twitter like a real millennial teenager and learn from the responses.

But it didn't take long for things to go awry, with Microsoft forced to delete her racist tweets and suspend the experiment.

"Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Lee writes.

An organized effort of trolls on Twitter quickly taught Tay a slew of racial and xenophobic slurs. Within 24 hours of going online, Tay was professing her admiration for Hitler, proclaiming how much she hated Jews and Mexicans, and using the n-word quite a bit.

In the blog entry, Lee explains that Microsoft's Tay team was trying to replicate the success of its Xiaoice chatbot, which is a smash hit in China with over 40 million users, for an American audience. Given that they never had this kind of problem with Xiaoice, Lee says, they didn't anticipate this attack on Tay.

And make no mistake, Lee says, this was an attack.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee writes.

Ultimately, Lee says, this is a part of the process of improving AI, and Microsoft is working on making sure Tay can't be abused the same way again.

"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process," Lee writes.

It seems weird that Microsoft couldn't have seen this coming. After all, it's been common knowledge for years that Twitter is a place where the worst of humanity congregates.

Still, it raises a lot of questions about the future of artificial intelligence: If Tay is supposed to learn from us, what does it say that she was so easily and quickly "tricked" into racism?

You can read Microsoft's full apology here.


21 of the funniest responses you'll get from Siri (AAPL)


Apple's Siri voice assistant is one of the most widely available bots in the world. Most people with an iPhone can ask it questions with the push of a button, and over the years, Apple has used that data to improve Siri. 

Still, there are definitely some very common questions that Siri, on its own, wouldn't have the answer to. Although Siri uses advanced machine learning to parse your questions, its artificial intelligence is not advanced enough to come up with clever responses to abstract questions. So Apple has clearly enlisted a few writers to come up with canned responses to common Siri queries.

Apple's not the only company to write specific responses for its bot. When Microsoft introduced its new chat bot Tay, it used comedy writers to make sure its dialogue sparkled. 

But sometimes, the funniest answers from an AI assistant are the ones that nobody could have anticipated.

Here are some of Siri's best gags:


Siri can only answer your questions. Ask it something open-ended, and you're likely to get a funny quip.



Press a little harder, and Siri will simply encourage you to recycle.



Sometimes, Siri slips a joke in before answering your question. Ask it to find a podiatrist, and it's got a quip.




An ad agency has appointed the 'world's first artificially intelligent creative director' (IPG)


Ad agency McCann Japan has appointed an artificial intelligence creative director, AI-CD β, who will be attending McCann Worldgroup’s new employee welcoming ceremony on April 1, along with 11 new college graduates.

AI-CD β was actually created by the agency under its ‘Creative Genome Project’, the first in a series of projects undertaken by the agency’s ‘McCann Millennials taskforce’.

The artificial intelligence can give creative direction on commercials because the data that forms the basis of the algorithm includes deconstructed, tagged, and analysed TV shows, as well as data on the winners of the All Japan Radio & Television Commercial Confederation's CM Festival. 

According to McCann, AI-CD β will work as a creative director on real client accounts and is the first to give logic-based creative direction grounded in the historical success of TV ads.

Yasuyuki Katagi, president and CEO of McCann Japan, said: “Artificial intelligence is already being used to create a wide variety of entertainment, including music, movies, and TV drama, so we’re very enthusiastic about the potential of AI-CD β for the future of ad creation. The whole company is 100 percent on board to support the development of our A.I. employee.”

The AI has been built to respond to a product or message with the optimal commercial direction, based on historical data. It has also been built to learn from the results of the campaigns it has directed, in theory becoming an ever more effective AI creative director.

Concern about how artificial intelligence will impact jobs has been mounting across all industries. Speaking at Adfest in Thailand this month, AKQA's Eric Cruz looked at the impact that automation would have on creativity. He said services like Squarespace and Wix have already automated much of the digital designer's role and made it almost redundant.

He said that by 2020 the same would be said for social media managers. “Digital natives know what social is. There will be no need for management as it will be automated because it is built into who they are,” said Cruz.


IBM's Watson analyzed 'Star Wars' and reached some fascinating conclusions


One of IBM Watson's many talents is analyzing personality traits by looking at written text.

The supercomputer assesses traits based on the popular Big Five test, which rates subjects for extroversion, agreeableness, conscientiousness, neuroticism, and openness to experience. It can also identify different tones such as fear, joy, confidence, and openness. These skills have been used for everything from assisting customer-service agents in analyzing how their phone calls went to providing dating tips.

We tested out Watson last week on the "Harry Potter" universe and were wowed by its conclusions.

This week, we worked with IBM researcher Vinith Misra to analyze the "Star Wars" original trilogy screenplays. (Because let's be honest, the rest don't count.) Keep reading to see the findings.


Jedi are the least neurotic characters.

"If you look at neuroticism you see something really interesting — the Jedi characters are the least neurotic," Misra told Tech Insider. "Yoda is one of the least neurotic characters. Even Vader isn't that neurotic."

That's right, don't forget Vader was a Jedi first before he became a Sith Lord.



Unsurprisingly, the most neurotic character was C-3PO.

We're not surprised at all that Watson picked up that C-3PO was the most neurotic with his endless worrying throughout all of the films. But what is interesting is that Han actually ranked third in neuroticism, right behind Luke.

Han definitely gives off a cool exterior, but considering he's been in quite a few binds (being frozen in carbonite must be stressful), it does make some sense.



Obi-Wan ranks highest for intellect and modesty and last in immoderation and cheerfulness.

"It's Jedi stereotypes that come up here — the zen-like equanimity," Misra explained. "You're gonna be less friendly and open."




Here's why we should thank Microsoft for its AI bot that turned into a foul-mouthed racist (MSFT)



The Wild West of the internet is notoriously good at making bad decisions. In 1998, a collective of internet users chose Hank the "Angry Drunken Dwarf" as the most beautiful person in the world.

In 2012, a coordinated internet campaign picked a school for the deaf as the winning recipient of a Taylor Swift concert.

And this year, some on the internet took Tay, an advanced artificial intelligence chatbot programmed to learn from human interactions, and turned it into a racist, sexist bot in just one day.

Here are some of Tay's now-infamous tweets:

[Image: one of Tay's racist tweets]

[Image: one of Tay's anti-feminist tweets]

Soon after Tay's bigoted tweets started going viral, Microsoft Research's Peter Lee apologized in a blog post: "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."

But we should also thank Microsoft for pointing out that we need to be more deliberate about how we interact with this kind of AI technology, since these programs will only magnify the ideas and information we feed them.

This concept is best captured by a classic computer science aphorism: "Garbage in, garbage out." The quality of the input determines the quality of the output. Tay, for example, was fed racist and sexist ideas, which she learned and then tweeted back.
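To see "garbage in, garbage out" in miniature, consider this toy, entirely hypothetical "learning" chatbot. It builds replies solely from what users say to it, so the character of its output is exactly the character of its input:

```python
import random
from collections import defaultdict

class ParrotBot:
    """Toy 'learning' chatbot: it builds replies only from words users
    have said to it, so abusive input inevitably becomes abusive output."""

    def __init__(self):
        self.chain = defaultdict(list)  # word -> words seen after it

    def learn(self, message):
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.chain[current].append(nxt)

    def reply(self, length=8):
        if not self.chain:
            return "..."
        word = random.choice(list(self.chain))
        out = [word]
        for _ in range(length - 1):
            followers = self.chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
bot.learn("have a wonderful day friend")  # polite input -> polite output
bot.learn("humans are wonderful")
print(bot.reply())  # recombines only the words it was fed
```

Feed it courtesy and it recombines courtesy; feed it abuse and abuse is all it has to work with. Tay's learning was far more sophisticated, but the principle is the same.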

The internet changed lives and altered history. Artificial intelligence will likely have a similar impact on the world if it becomes equally ubiquitous. AI bots, still in the early stages of development, have been making headlines for beating humans at various tasks for more than a decade.

But the anticipation and excitement that follow each new AI development are also accompanied by fear, and for good reason. What garbage will be input into the machines? How will we choose to use these machines? Who will set AI's moral compass?


Some of the greatest minds of this century warn us that artificial intelligence is an "existential threat" to humanity. Stephen Hawking said in a 2014 BBC interview: "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

During a 2015 Reddit AMA session, Bill Gates said: "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent... A few decades after that though the intelligence is strong enough to be a concern."

We should be cautious of an entity that has the ability to evolve, grow, and learn faster than us. We should have built-in filters as an equivalent to a human's moral compass. We should protect ourselves from being either enslaved by or enamored with the hyper-intelligent beings we've created.

Elon Musk, in a 2014 interview at MIT's AeroAstro Centennial Symposium, said: "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

Artificial intelligence is no longer science fiction or a technology of the distant future; it is very real.

The debate should be over how much we should fear, anticipate, and address a possible rise of artificial intelligence. But while many of the accompanying challenges force us to look ahead in time, Microsoft's Tay shines a light on how important it may be to look inward, at how we interact with and use technology.


Satya Nadella says Microsoft's rogue racist chatbot failed by its own standards (MSFT)



At today's Microsoft Build event, CEO Satya Nadella made it clear that he sees chatbots as the next big thing.

To that end, Microsoft announced a new series of tools to help developers build helpful conversational bots into apps like Skype, Slack, and Outlook.

In one demo, Microsoft showed the Cortana digital assistant answering questions and finding directions right within a Skype chat window. That's coming to Skype on Windows 10, iOS, and Android. Another demo showed someone ordering a Domino's pizza with a text.
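Microsoft's actual SDKs aside, the basic shape of such a conversational bot is simple: the chat service forwards each user message to your web endpoint, and whatever you return becomes the bot's reply. Here is a hypothetical minimal version in Python with Flask; the route name and the single hard-coded "intent" are illustrative, not part of Microsoft's tools.

```python
# Hypothetical minimal conversational bot as a webhook, sketching the
# pattern that bot-building tools wrap: a chat service POSTs each user
# message to your endpoint, and your reply goes back in the response.
# This is NOT the Bot Framework API -- just the underlying idea, in Flask.
from flask import Flask, request, jsonify

app = Flask(__name__)

def handle(text):
    # Real bots would call language-understanding and business logic here;
    # this toy recognizes exactly one intent.
    if "pizza" in text.lower():
        return "Okay -- ordering one pepperoni pizza."
    return f"You said: {text}"

@app.route("/messages", methods=["POST"])  # illustrative route name
def messages():
    text = request.get_json().get("text", "")
    return jsonify({"text": handle(text)})

if __name__ == "__main__":
    app.run(port=3978)  # port the Bot Framework emulator conventionally uses
```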

But as Microsoft works on putting bots and artificial intelligence everywhere, "I think it's important to have a principled approach," Nadella says. 

To Nadella's mind, a bot should "augment human abilities and experiences," it should be "trustworthy," and it should be "inclusive and respectful."

So while Nadella lauds tools like Microsoft's Cortana digital assistant for meeting those standards, he was less keen on Tay — Microsoft's Twitter bot that infamously went rogue, posting terrible racist tweets.

"Tay was not up to these marks," Nadella says, and so it's back to the "drawing board" with her. 

Instead, Nadella pointed to Cortana and Microsoft products like Skype Translate, which can translate messages in real time, as examples of how artificial intelligence can help make people more productive.

"It's not going to be about man vs. machine. It's going to be about man with machine," Nadella says.

Going forward, Nadella says that he sees artificial intelligence falling into three major product categories:

  • People, like Skype Translate, where you're talking to a human and AI is just facilitating.
  • Bots, like Microsoft's Tay or Slack's Slackbot, that you interact with directly to get stuff done. "Bots are like new apps," Nadella says.
  • Personal digital assistants, like Cortana or Apple's Siri, that help access other bots to answer questions and accomplish tasks. Nadella calls this a "meta-app," like a web browser, that you use to access other services.

"We want to build technology such that it gets the best of humanity, not the worst," says Nadella.

