
THE AI DISRUPTION BUNDLE: The guide to understanding how artificial intelligence is impacting the world (AMZN, AAPL, GOOGL)


This is a preview of a research report bundle from Business Insider Intelligence, Business Insider's premium research service. To learn more about Business Insider Intelligence, click here.

Artificial intelligence (AI) isn't a part of the future of technology. AI is the future of technology.

Elon Musk and Mark Zuckerberg have even publicly debated whether or not that will turn out to be a good thing.


Voice assistants like Apple's Siri and Amazon's Alexa have become more and more prominent in our lives, and that will only increase as they learn more skills.

These voice assistants are set to explode as more devices powered by AI enter the market. Most of the major technology players have some sort of smart home hub, usually in the form of a smart speaker. These speakers, like the Amazon Echo or Apple HomePod, are capable of communicating with a majority of WiFi-enabled devices throughout the home.

While AI is having an enormous impact on individuals and the smart home, perhaps its largest impact can be felt in e-commerce. In an increasingly cluttered market, personalization is one of the key differentiators retailers can use to stand out to consumers. In fact, retailers that have implemented personalization strategies see sales gains of 6-10%, at a rate two to three times faster than other retailers, according to a report by Boston Consulting Group.

Retailers can accomplish this by using machine learning to sift through customer data and put relevant information in front of each consumer as soon as they hit the page.

With hundreds of hours of research condensed into three in-depth reports, Business Insider Intelligence is here to help get you caught up on what you need to know about how AI is disrupting your business or your life.

Below you can find more details on the three reports that make up the AI Disruption Bundle, including proprietary insights from the 16,000-member BI Insiders Panel:


AI in Banking and Payments

Artificial intelligence (AI) is one of the most commonly referenced terms by financial institutions (FIs) and payments firms when describing their vision for the future of financial services.

AI can be applied in almost every area of financial services, but the combination of its potential and complexity has made AI a buzzword, and led to its inclusion in many descriptions of new software, solutions, and systems.

This report cuts through the hype to offer an overview of different types of AI, and where they have potential applications within banking and payments. It also emphasizes which applications are most mature, provides recommendations of how FIs should approach using the technology, and offers examples of where FIs and payments firms are already leveraging AI. The report draws on executive interviews Business Insider Intelligence conducted with leading financial services providers, such as Bank of America, Capital One, and Mastercard, as well as top AI vendors like Feedzai, Expert System, and Kasisto.


AI in Supply Chain and Logistics

Major logistics providers have long relied on analytics and research teams to make sense of the data they generate from their operations.

AI’s ability to streamline so many supply chain and logistics functions is already delivering a competitive advantage for early adopters by cutting shipping times and costs. A cross-industry study on AI adoption conducted in early 2017 by McKinsey found that early adopters with a proactive AI strategy in the transportation and logistics sector enjoyed profit margins greater than 5%. Meanwhile, respondents in the sector that had not adopted AI were in the red.

However, these crucial benefits have yet to drive widespread adoption. Only 21% of the transportation and logistics firms in McKinsey’s survey had moved beyond the initial testing phase to deploy AI solutions at scale or in a core part of their business. The challenges to AI adoption in the field of supply chain and logistics are numerous and require major capital investments and organizational changes to overcome.

This report explores the vast impact that AI techniques like machine learning will have on the supply chain and logistics space. We detail the myriad applications for these computational techniques in the industry, and the adoption of those different applications. We also share some examples of companies that have demonstrated success with AI in their supply chain and logistics operations. Lastly, we break down the many factors that are holding organizations back from implementing AI projects and gaining the full benefits of this disruptive technology.

AI in E-Commerce Report


One of retailers' top priorities is to figure out how to gain an edge over Amazon. To do this, many retailers are attempting to differentiate themselves by creating highly curated experiences that combine the personal feel of in-store shopping with the convenience of online portals.

These personalized online experiences are powered by artificial intelligence (AI). This is the technology that enables e-commerce websites to recommend products uniquely suited to shoppers, and enables people to search for products using conversational language, or just images, as though they were interacting with a person.

Using AI to personalize the customer journey could be a huge value-add to retailers. Retailers that have implemented personalization strategies see sales gains of 6-10%, a rate two to three times faster than other retailers, according to a report by Boston Consulting Group (BCG). AI could also boost profitability rates by 59% in the wholesale and retail industries by 2035, according to Accenture.

This report illustrates the various applications of AI in retail and uses case studies to show how this technology has benefited retailers. It assesses the challenges that retailers may face as they implement AI, specifically focusing on technical and organizational challenges. Finally, the report weighs the pros and cons of strategies retailers can take to successfully execute AI technologies in their organizations.

Subscribe to an All-Access pass to Business Insider Intelligence and gain immediate access to:

This report and more than 250 other expertly researched reports
Access to all future reports and daily newsletters
Forecasts of new and emerging technologies in your industry
And more!
Learn More

Purchase & download the full report from our research store

 



The head of healthcare and life sciences at Intel reveals why doctors are reluctant to use new technology like AI, and what's needed to fix that


  • Healthcare has had a mixed response to using artificial intelligence as part of daily life inside hospitals.
  • Jennifer Esposito, general manager of chipmaker Intel's health and life sciences group, told Business Insider that the reason many doctors are reluctant to adopt AI is because they don't trust it when it comes to making big decisions. 
  • But AI can help out in other ways, especially on the administrative side that takes up a lot of doctors' time. "I believe things like AI aren't about replacing physicians; it's about augmenting them," Esposito said.

 

Artificial intelligence is slowly but surely making its way into every aspect of our lives. 

But one place that's had mixed reactions to the idea of placing major decisions in the hands of machines is the doctor's office. 

Over the summer, Intel conducted a survey in which it asked doctors why they weren't using AI.

The biggest reason: A lack of trust. Doctors were reluctant to rely on technology that could introduce a fatal error or harm patients.

Jennifer Esposito, the general manager of chipmaker Intel's health and life sciences group, told Business Insider that building up that trust comes down to better communication about what AI can and can't do, as well as highlighting some of the ways it can be used. For example, AI could be applied to scan patients' prescriptions to make sure drugs they've been prescribed won't cause problems if used together, or to take away some of the administrative tasks doctors have to do in addition to seeing patients.

"I believe things like AI aren't about replacing physicians; it's about augmenting them," Esposito said. With the help of AI built into a health system's records, for example, doctors could see only the most complicated patients in person.

"That also allows you to think, you don't have to necessarily worry that I've gotta see the patient to know for sure," Esposito said. "Now you can make decisions about which patients really do need to come into the office versus not." 

The market for AI in healthcare is expected to grow to $6.6 billion by 2021, with all sorts of companies from startups to healthcare giants like UnitedHealth Group coming up with different applications for AI to enhance the way we practice medicine. 


The startup behind a tool designed to save you a doctor's visit has partnered with Bill and Melinda Gates


  • A symptom-checking tool called Ada Health is launching a new partnership with the Bill and Melinda Gates Foundation.
  • On Wednesday, the startup will begin working with the Gates Foundation to study how the tool could support healthcare workers in rural parts of the world.
  • Ada Health is already one of the most popular medical apps in over 130 countries.

Getting to the doctor when you're not feeling well is no easy task no matter where you live. But in many parts of the world, there are bigger problems than high costs and long wait times.

For roughly half the globe's population, basic healthcare is a luxury that's too expensive to get. So Ada Health, a tool that lets you type in your symptoms to learn what's causing them, is launching a new initiative with the Bill and Melinda Gates Foundation to extend the reach of its services.

The Ada app is designed to tell you what's causing your symptoms with more accurate results than you'd get from a Google search. Users open the app, enter their age and gender, and type in a symptom like pain or a cough. Then an AI-powered bot asks several questions, like what makes the symptom worse, and tells you the most likely culprit.

Starting today, Ada is working with the Bill and Melinda Gates Foundation to study how the platform can be used to support healthcare workers in rural parts of several countries in East and Sub-Saharan Africa, Southeast Asia, South America, and India.

The project is part of Ada's new Global Health Initiative, a series of projects focused on improving access to primary care in underserved populations across the world. The effort will involve work with local governments, NGOs and other partners as well. 

"The reason we’re doing this is the same reason why we started Ada in the first place: it’s about giving people better access to quality healthcare," Daniel Nathrath, CEO and co-founder of Ada Health, told Business Insider. "While it’s a noble goal to pursue it in the US or Germany, it’s even more important in countries where so many people don’t have access to a doctor."

Currently, the app is available in roughly 130 countries including Germany (where it started), the US, and Canada. Already, roughly a third of Ada's customers hail from countries outside of Germany, according to the company.

To Google or not to Google

To Google or not to Google — that's often the question when it comes to an ailment like a cough or stomach pain.

But researching your symptoms online can send you down a rabbit hole that leads you to think you have a life-threatening condition. A trip to the doctor, on the other hand, can be time-consuming and expensive.

Nathrath and his co-founder, Claire Novorol, created Ada Health to give people a third option.

Unlike the results that come from sites like WebMD, Ada's results are based on a growing database of hundreds of thousands of people that match your age and gender. The idea is that by homing in on a population sample you fit into, Ada can give more accurate results.

Say you're a 31-year-old woman experiencing stomach pain, for example. Once you type in your symptoms and answer Ada's questions, it might tell you that most of the other 31-year-old women in the database who reported your symptoms were diagnosed with Irritable Bowel Syndrome. Then Ada may advise visiting a healthcare provider. Or if the likely cause of your symptoms is not a serious issue, Ada may suggest that you simply rest.
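Ada has not published the details of its matching logic. As a rough sketch of the idea described above (filter past cases by demographics and symptoms, then tally the diagnoses), the toy example below uses an invented age window, data fields, and diagnoses purely for illustration:

```python
from collections import Counter

def likely_causes(cases, age, sex, symptoms, top_n=3):
    """Tally diagnoses among past cases matching the user's demographics and symptoms."""
    matches = [
        c for c in cases
        if abs(c["age"] - age) <= 5          # invented age window
        and c["sex"] == sex
        and symptoms <= c["symptoms"]        # every reported symptom appears in the case
    ]
    return Counter(c["diagnosis"] for c in matches).most_common(top_n)

# Invented toy data, not Ada's.
cases = [
    {"age": 31, "sex": "F", "symptoms": {"stomach pain", "bloating"}, "diagnosis": "IBS"},
    {"age": 33, "sex": "F", "symptoms": {"stomach pain"},             "diagnosis": "IBS"},
    {"age": 30, "sex": "F", "symptoms": {"stomach pain", "fever"},    "diagnosis": "gastroenteritis"},
]
print(likely_causes(cases, age=31, sex="F", symptoms={"stomach pain"}))
# [('IBS', 2), ('gastroenteritis', 1)]
```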


Putting Ada into the hands of healthcare workers

As part of the new partnership with the Gates Foundation, Ada researchers will look at the data the app gathers in several rural, low-income parts of the world to better understand patients' needs and learn how to improve healthcare delivery to these regions. 

In the future, Nathrath said he hopes such insights could be used to do things like help stop a deadly outbreak.

Hila Azadzoy, Ada's managing director of the Global Health Initiative, told Business Insider that her team is now working to equip Ada with more relevant data on tropical diseases like Chagas and dengue. They're also analyzing what kinds of physical diagnostic tests they could give people — along with Ada — to confirm some of its assessments.

"Most healthcare workers work door-to-door and can track patient symptoms," Azadzoy said. "The vision we have is we can put Ada into their hands and even connect Ada with diagnostics tests so that — at the home of the patient — they can pull it out and say, 'OK this is confirmed,'" she said.

Are symptom checkers the next big thing in primary care?

Since it was founded in Berlin in 2011, Ada has raised $69.3 million with the help of several big-name backers including William Tunstall-Pedoe, the AI entrepreneur behind Amazon's Alexa, and Google’s chief business officer Philipp Schindler. The company says Ada has already been used by 5 million people in the US and Europe, where it is one of the highest ranked medical apps.

Ada is not the only tool that lets users input and track their symptoms. Another so-called "symptom checker" is primary-care app K Health, which launched in 2016.

If these services can get the science and AI right, they offer a long list of potential benefits, including reducing healthcare costs, saving time for patients and doctors, slashing unnecessary worry — and even, one day perhaps, helping to prevent an outbreak like Ebola.

But more data is needed on the effectiveness of these services. The last comprehensive assessment of symptom checkers was published by Harvard Medical School researchers in 2015, before Ada or K Health existed. Since then, at least half a dozen other services have emerged as well.

Until better data becomes available on these apps, they can at least offer users an educated assessment about what's causing a symptom like a sore throat. And in rural areas where people don't have access to a healthcare provider, that could be a huge source of support.

"The first step towards getting the right treatment is understanding what’s ailing you," Nathrath said.


LinkedIn is using AI to make recruiting diverse candidates a no-brainer (LKND)


  • LinkedIn is announcing new artificial intelligence features to help recruiters hire more diverse candidates.
  • The feature, which is now rolling out in the U.S., will ensure that the top search results will have a more representative gender breakdown.
  • LinkedIn will also track gender throughout the hiring process to help companies understand the talent landscape and compare themselves to their industry peers.

LinkedIn is betting that AI can help companies overcome the human biases that hinder diversity.

The professional social network is rolling out new features on Wednesday to help companies notice and hire diverse job candidates so that they don't miss out on potential talent. The new artificial intelligence features will be incorporated into LinkedIn's Talent Insights product, aimed at recruiters, and will focus on gender diversity.

The announcement by LinkedIn comes shortly after news emerged that Amazon had to shut down a special AI hiring tool, specifically because the technology discriminated against women, Reuters reported. LinkedIn's new AI tool, which the company briefed Business Insider on before the news about Amazon emerged, appears designed to filter out the biases in data that can taint AI technology. 

LinkedIn will track what happens in the hiring process with regard to gender, showing companies reports and insights about how their job postings and InMail messages are performing on that front. In addition, LinkedIn will re-rank the top search results in LinkedIn Recruiter to be more representative.

“Say I search for an accountant, and there are 100,000 accountants in the city I’m looking at,” said John Jersin, VP of Product Management for LinkedIn Talent Solutions. “If the gender breakdown is 40-60, then what Representative Results will do is that no matter what happens in our AI, the top few pages will have that same 40-60 gender breakdown.”
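LinkedIn has not detailed how Representative Results works internally, but the behavior Jersin describes, re-ordering ranked search results so each page mirrors a target gender breakdown, can be sketched roughly as below. The function name, the greedy interleaving rule, and the toy data are assumptions for illustration, not LinkedIn's implementation:

```python
from collections import defaultdict

def representative_rerank(ranked, target):
    """Re-order `ranked` (best match first) so the running gender mix tracks `target`.

    ranked: list of (candidate_id, gender) tuples, already sorted by relevance.
    target: desired shares, e.g. {"women": 0.4, "men": 0.6}.
    """
    queues = defaultdict(list)                  # per-group queues, relevance order kept
    for candidate in ranked:
        queues[candidate[1]].append(candidate)

    result, counts = [], defaultdict(int)
    while len(result) < len(ranked):
        # Pick the non-empty group currently furthest below its target share.
        def deficit(group):
            return target.get(group, 0) * (len(result) + 1) - counts[group]
        group = max((g for g in queues if queues[g]), key=deficit)
        result.append(queues[group].pop(0))
        counts[group] += 1
    return result

# Toy example with a 40/60 target breakdown.
ranked = [("a", "men"), ("b", "men"), ("c", "women"), ("d", "men"), ("e", "women")]
print(representative_rerank(ranked, {"women": 0.4, "men": 0.6}))
# [('a', 'men'), ('c', 'women'), ('b', 'men'), ('e', 'women'), ('d', 'men')]
```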

Currently, progress on diversity in many companies, especially in the tech industry, is slow. But studies have shown that diverse teams see higher profit, have better focus and are more innovative, leading companies to pay more attention to diverse hiring.

Human error in the hiring process can inhibit diversity on teams, and as companies shift towards using artificial intelligence in hiring, algorithmic bias can as well. To address this, LinkedIn is launching new features within its new Talent Insights product. The diversity insights will be available in the U.S. on Wednesday and will roll out globally in the near future.

"You don’t even realize you’re forming an opinion in a certain way"

Unconscious bias can play a role in the hiring process. Recruiters and teams may gravitate to people who are more similar to them, and they may base their decisions on stereotypes about people’s skills. For example, if a team is mostly made up of men, they may be less likely to hire a woman onto their team.


“Unconscious bias occurs in a split second,” Jersin said. “When you look at someone for the first time, you don’t even realize you’re forming an opinion in a certain way. As we’re shifting hiring decisions to be made by artificial intelligence or at least partially made by it, we lay out those decisions in the data and the code.”

With LinkedIn reporting data on gender in the hiring process, companies can see the gender breakdown at each step of the process. They can also gain better insight into the talent landscape as a whole, such as how skill sets change over time and where talent pools can be found. Companies can also compare their own gender breakdown to their peers in the industry and identify how to tap into a more representative pool.

The company said it had no current plans to extend the AI features to focus on other forms of workplace diversity besides gender. 

On Monday, LinkedIn announced its intent to acquire Glint, an employee surveying startup, and the company hopes Glint's insights will also improve how customers attract, develop, and retain talent.

“We’re developing a new level of artificial intelligence that’s improving efficiency in our product,” Jersin said. “We’re ensuring that it’s working in a fair representative way. We want to make sure that we’re taking steps to help our customers with diversity.”


Andy Rubin, the father of Android, is reportedly working on a new smartphone that can text for you


  • Andy Rubin, the father of Android, is reportedly working on a new phone for his startup, Essential, according to Bloomberg. 
  • The phone would be able to perform tasks without any instruction from the user. Bloomberg reports that it would be able to send texts, respond to emails, or schedule appointments on its own. 
  • Essential currently sells one other smartphone, the PH-1. While it has been positively reviewed, sales have lagged. 
  • Reports emerged in May that Rubin was considering selling the company altogether. 

The man who created Android, Andy Rubin, is reportedly working on a futuristic new project at his consumer tech startup, Essential.

Rubin's company is working on a new smartphone that would perform tasks without any instruction from the user, Bloomberg reports. Using artificial intelligence, the phone would be able to send texts, respond to emails, and schedule appointments — all on its own.

Bloomberg reports that the device wouldn't look like a typical smartphone, but instead just have a small screen. Consumers would use voice commands to interact with the device and work with the AI technology. Essential is trying to have a prototype of the phone finished by the end of the year, according to Bloomberg.

In an interview with Bloomberg last year, Rubin seemed to hint at the idea of an AI-powered phone. 

“If I can get to the point where your phone is a virtual version of you," Rubin said, "you can be off enjoying your life, having that dinner, without touching your phone, and you can trust your phone to do things on your behalf."

Rubin's vision isn't far off from the technology in "Her," a 2013 movie where a man falls for the artificially intelligent operating system that performs tasks on his phone. 

her movie joaquin phoenix

This latest technology venture could be Rubin's attempt to keep the struggling Essential afloat, especially since reports emerged in May that the Android creator was thinking about selling his company.

Rubin unveiled his Essential PH-1 smartphone last year to positive reviews for its sleek design, but Bloomberg reported in May that the startup had sold only around 150,000 phones and was forced to slash the price by $200. Essential has paused work on a second generation of the phone, as well as on a smart home speaker, so this AI-powered phone would seemingly be Rubin's third attempt to launch a successful device since his startup was founded in 2015.

Up until he left Google in 2014, Rubin helped launch and run Android, which is now the most-used mobile operating system. It was revealed in late 2017 that an internal complaint from Rubin's time at Google alleged he had an "inappropriate" non-consensual relationship with a coworker.


Society is at a "crossroads" when it comes to artificial intelligence — and a technology executive explains why we need to be careful


  • In this op-ed, SAP president of Americas and Asia Pacific Japan Global Customer Operations Jennifer Morgan argues that while artificial intelligence holds plenty of promise for businesses, it may also create numerous economic, political and social challenges. 
  • AI technology needs to be governed by clear ethics rules, Morgan says. 

The AI revolution has the promise to unlock boundless potential for businesses: from better products and services, to faster innovation and unimaginable leaps in productivity.

But, like all great technological advancements, AI also has the potential to create numerous economic, political and social challenges, depending upon how it is used and implemented. Because of that, the use of AI technology needs to be governed by clear rules of ethics — defined at the outset of this new era, instead of later on, when abuses or ill-considered practices could be far more difficult to control.  

This is not the first time society has been at a crossroads where we face new technological powers that can serve great and worthy purposes or be abused to support some very bad ones. Yet one thing is clear and remains in our power: artificial intelligence will never substitute for human wisdom or moral responsibility.

Among technology companies, few are closer to the center of this ethical challenge than SAP. Our various systems play some part in about 77% of the world's transactions, and our applications affect the lives of billions of people daily. Add to that more than 400,000 corporate customers worldwide, and it's clear we are in a position of influence. We intend to use that influence by laying out careful standards and encouraging other companies to do the same.

Not long ago, we announced our guiding principles for the uses of AI. We became the first European technology company to create an external, transparent AI Ethics Advisory Panel consisting of thoughtful men and women from many fields. The panel will stay alert to possible misuses of AI in every area — from labor management to data protection. It will listen to concerns and work on ways to avert problems.

Moreover, in all practices employing AI technology, we firmly adhere to our company’s Human Rights Commitment Statement, as well as to the UN Guiding Principles on Business and Human Rights.

Both of those documents pay special attention, as they should, to the potential impact all new technologies could have on jobs. And it’s a safe assumption that AI will follow the familiar pattern of creating new types of work at the expense of traditional ones. It’s all the more important, therefore, that companies not be passive as these changes unfold, relentlessly adopting technologies without regard for their impact on workers or society.

Because AI is just emerging, these guiding principles are only a starting point. As large and resourceful as we are, we know we don't have all the answers. In publishing these principles, we invite the best ideas of everyone on our team. That's why an internal Steering Committee, made up of SAP employees from development, strategy, human resources, and other departments, will work with the External Advisory Panel to refine the guidelines and ensure they keep pace with the dramatic changes to come.

We also know that the entire community of AI technology providers must work together to uphold ethical standards, and vague standards will not be enough. Clear, bright lines of ethics, far from hindering AI and machine learning, will be essential to their success. As with every new power we gain from technology, what matters most is that technology serves humanity, and not the other way around.

Jennifer Morgan is a member of SAP's executive board and the president of Americas and Asia Pacific Japan Global Customer Operations.


These charts show how pumped up HR departments are about AI — even if many of them are still relying on paper documents


  • The vast majority of human resources departments at larger companies plan to significantly increase their tech spending in the next two years, consulting company Bain found in a new survey.
  • Much of that investment will go to artificial intelligence technologies.
  • Some HR departments are already using AI and related technologies for things such as workforce planning and performance management, Bain found.
  • But many HR departments are still relying on older processes, including paper forms, and most have had trouble getting the most out of the technology they're already using.

Many corporate human resources departments are such technological backwaters that they still rely on Excel spreadsheets or even paper documents for many of their tasks or services.

But the vast majority of HR departments expect to make a quantum leap in their IT systems in just the next two years, with many of them embracing artificial intelligence to help with their functions, according to a new study from consulting firm Bain.

"HR departments are rapidly adopting new technologies," Michael Heric, a partner with Bain's Performance Improvement practice, said in the report.

It warned, though, that "the appetite of HR leaders for more digital tools may outpace their ability to absorb the tools."

For its survey, Bain polled human resource executives and managers at 500 large companies in the US, Germany, and the United Kingdom. The companies, which each have more than $500 million in annual revenue, included both publicly traded and privately owned organizations and represented a broad range of industries, from manufacturing to retail to healthcare.

Some HR departments still rely on paper forms

Many of the HR departments are still relying on older processes for many of their services. Depending on the service, somewhere between 23% and 31% of such departments still rely on manual techniques, such as entering data into a basic spreadsheet or relying on paper forms, the study found.

For example, 31% of HR departments surveyed still rely on manual processes for career management of employees. Some 27% rely on such techniques to manage compensation and benefits. And a full quarter of HR departments even use manual processes to handle their payrolls.

HR professionals expect to dramatically reduce their reliance on such outdated techniques and processes within the next two years, according to Bain's study. By then, just 7% expect to be using manual processes for career management or compensation and benefits. And just 2% expect to be using such techniques for their payroll.

Many departments plan to increase their tech spending significantly in the next two years to replace or upgrade older systems and processes. Some 57% of the HR leaders surveyed expect to increase their department's IT budget by between 1% and 10% a year during that period, and a full quarter expect their annual budgets to go up by more than 10%.


Much of that investment will go toward artificial intelligence and related technologies, according to Bain's study.

The HR departments at most of the companies surveyed have already bought into AI. Some 54% said they're using artificial intelligence in at least one of their functions, most commonly in workforce planning and performance management.


Some companies surveyed are already seeing success from using the technology. Unilever, for example, is using AI to help with screening job candidates; the technology has helped cut the average time it takes to hire new people by 75%, according to Bain.

"Artificial intelligence in all its forms ... has already demonstrated promising results," Heric wrote.

Many plan to invest in AI

Other companies that haven't yet invested in AI expect to do so soon. Some 24% of the HR leaders surveyed said that while they aren't using AI yet, they expect to be using it in at least one of their processes within two years. By then, majorities of those surveyed expect to be using the technology in workforce planning, performance management, compensation and benefits, and learning and development.

But the rapid adoption of AI and other digital technologies, such as cloud services, could cause problems.

As Reuters reported this week, Amazon had to shut down an AI recruiting tool it had built because the company found the technology had absorbed human biases and was discriminating against women candidates for engineering jobs.

And, as Bain noted, many HR departments have already had trouble integrating new high-tech processes into their operations. Some three-fourths of respondents said their tech systems have not reached "optimal performance." Meanwhile, many expressed frustration with having to use too many digital tools; having to work with too many different data sources, which frequently weren't connected; and having to figure out how to use confusing interfaces.

"HR executives may be overconfident in how quickly they can make the shift [to AI and other new technologies], given how rocky the road has been so far in most HR departments," Heric wrote.


This smart tech can tell instantly whether or not a US Army soldier is ready for war


A soldier stands in a virtual reality lab wearing a headset to measure his cognitive responses under stress

  • The Army is modernizing to ensure that it is ready to fight wars in an age of competition with adversarial powers like China and Russia.
  • Training is changing as the Army pursues dynamic live, virtual, and mixed-reality training that offers data analysis supported by artificial intelligence and other smart systems.
  • AI and machine learning are very important, Maj. Gen. Maria Gervais, the director of the Army's Synthetic Training Environment team, told reporters Wednesday: "Being able to take the data from your training to be analyzed for trend analysis and predictive analysis is going to be a game changer."

The Army is changing the way it prepares for war, and one of the ways the service is doing this is by turning to augmented reality and artificial intelligence for advanced training, putting combat readiness not only in the hands of experienced officers but also smart machines.

Let's say there's a four-man team preparing to clear a building in a training exercise. As the first man busts through the door, a biometric feedback sensor indicates that his adrenaline spiked off the charts while muzzle and eye tracking sensors showed the soldier looking one way while his gun pointed another. When the third man enters, a motion sensor indicates that he froze momentarily.

And all this data is being run through machine learning systems for trend and predictive analysis, producing a readiness score for essential tasks.
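The Army has not said how such a readiness score would actually be computed. Purely as a sketch, with the sensor features, weights, and 0-to-1 scale invented for illustration, a per-task score might combine normalized sensor readings like this:

```python
# Toy illustration only: the feature names, weights, and scale are invented,
# not the Army's actual scoring method.

def readiness_score(sample, weights):
    """Weighted average of normalized sensor readings (0 = worst, 1 = best)."""
    return sum(weights[k] * sample[k] for k in weights) / sum(weights.values())

soldier = {
    "adrenaline_control": 0.35,    # how well heart rate/adrenaline stayed in band
    "muzzle_eye_alignment": 0.60,  # fraction of time gaze and muzzle tracked together
    "hesitation": 0.80,            # 1.0 means the motion sensor detected no freeze
}
weights = {"adrenaline_control": 0.4, "muzzle_eye_alignment": 0.4, "hesitation": 0.2}

print(f"Room-clearing readiness: {readiness_score(soldier, weights):.2f}")  # ~0.54
```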

Imagine soldiers training to fight augmented reality adversaries in virtual battle spaces, showdowns that, like Mortal Kombat, can take place in cities around the world.

"We have these abilities, and I have seen it from our industry partners. Instantaneous feedback," Maj. Gen. Maria Gervais, director of the Synthetic Training Environment team, told Business Insider Wednesday at the Association of the United States Army conference in Washington, DC. She revealed that while the Army is not there yet, the service is quickly moving in that direction.

Soldier lethality is one of the priorities of the newly established Army Futures Command, a four-star command focused on rapid research and development for future weapons and warfighting capabilities, as well as enhanced training options.

"There are systems that we're looking at that can allow the soldiers to train as they will fight, train where they will fight and train against who they will fight while back in the home-station training environment," Sgt. Maj. Jason Wilson, a representative for the Pentagon's Close Combat Lethality Task Force, told journalists at a combat lethality series presentation last month.

One option for the Army is next-level synthetic training environments, where troops can train individually or in groups in fixed or mobile live, virtual, or mixed-reality battle spaces of all sizes.

This is a big deal given the inadequacies of some of the existing training platforms.

The current training systems are limited in their capabilities. For example, the technology for the existing virtual trainers does not allow the Army to bring in all of the enablers, such as logistics, medical, engineering, and transportation teams.

"I can only bring air, ground platforms, and a few other capabilities," Gervais explained. "We need to train combined arms to prepare for large-scale combat against a peer or near-peer threat," such as China or Russia.

Terrain is also a huge challenge. "We are trying to get to one-world training," the general said. "Terrain is our Achilles heel. We are trying to get after that quickly."

User assessment testing for re-configurable virtual trainers began earlier this year. Within the next two years, the Army wants AI-driven trend and predictive analysis based on biometric and sensor data collected during training exercises. "Right now, we are only as good as someone's experience and their eye and what they catch or what we see in video," Gervais told Business Insider. "We want to be able to assess training, and we have some of that capability right now, but not to the degree we need."

"If you ask any soldier if he is combat ready, he will undoubtedly say, 'Yes, yes,'" Amul Asthana, a spokesman for Zen Technologies and a retired Indian army brigadier general, told BI while introducing his company's simulated training capabilities. "I can say I do not have high blood pressure, but without testing it, it is impossible to know for certain."

The aim of the new Synthetic Training Environment program is to ensure that the US Army knows American troops are ready for battle, especially when the next conflict could be one against a top adversary.



Grimes and Elon Musk seem to have reconnected — here's what you need to know about the Canadian singer and producer who is spending time with Tesla's CEO (TSLA)


At the Met Gala in early May, a surprising new couple showed up on the red carpet: billionaire tech CEO Elon Musk and Canadian musician and producer Grimes.

While Musk has long been known to date successful and high-profile women, the two made a seemingly unlikely pairing. Shortly before they walked the red carpet together, Page Six announced their relationship and explained how they met — over Twitter, thanks to a shared sense of humor and a fascination with artificial intelligence.

Since they made their relationship public in May, the couple has continued to make headlines: Grimes for publicly defending Musk and speaking out about Tesla, and Musk for tweeting that he wants to take Tesla private, sparking an SEC investigation.

But shortly after Musk's run-in with the SEC, Grimes and Musk unfollowed each other on social media, igniting rumors that the pair had broken up. 

Now, it appears that the couple is spending time together again: they were spotted with Musk's five sons at a pumpkin patch in Los Angeles last weekend. 

For those who may still be wondering who Grimes is and how she and Musk ended up together, here's what you need to know about the Canadian singer and producer.


Grimes, whose real name is Claire Boucher, grew up in Vancouver, British Columbia. She attended a school that specialized in creative arts but didn't focus on music until she started attending McGill University in Montreal.

Source: The Guardian, Fader



A friend persuaded Grimes to sing backing vocals for his band, and she found it incredibly easy to hit all the right notes. She had another friend show her how to use GarageBand and started recording music.

Source: The Guardian



In 2010, Grimes released a cassette-only album called "Geidi Primes." She released her second album, "Halfaxa," later that year and subsequently went on tour with the Swedish singer Lykke Li. Eventually, she dropped out of McGill to focus on music.

Source: The Guardian, Fader




Google CEO Sundar Pichai says employee protests against the company's work with US military had little impact on management: 'We don't run the company by referendum' (GOOG, GOOGL)


  • At a tech gathering in San Francisco on Monday, Google CEO Sundar Pichai remarked on an internal protest that rattled the company earlier this year.
  • The protest stemmed from opposition over the company's work on a military project.
  • Pichai downplayed the influence those demonstrations had on management decisions.
  • He also said Google will work with the US armed forces in the future and deeply respects what they do to protect the country.

Thousands of Google employees participated in an internal protest against the company's participation in a high-tech military project earlier this year, but the unprecedented revolt at the company had little influence on management's decision-making, according to CEO Sundar Pichai.

At a gathering to celebrate Wired magazine's 25th anniversary, Pichai was asked whether Google's employees had anything to do with the company's announcement last week that it will not compete for a much sought-after $10 billion cloud-computing contract offered by the Pentagon.

"Throughout Google's history we've given our employees a lot of voice and say in it, but we don't run the company by holding referendums," Pichai said. "It's an important input. We take it seriously. But even on this particular issue it's not just what the employees said. It's also about the debate within the AI community."

In March, when word leaked that Google had quietly contributed to Project Maven, a Pentagon effort to use artificial intelligence to analyze drone video footage, more than 4,000 Google employees signed a petition demanding the company stop the work. Some employees leaked documents to journalists and about a dozen resigned.

In June, Google's leadership appeared to respond to the protest by releasing a set of AI principles designed to govern the company's ethical use of the technology. They included a promise never to build AI weapons. Back then, it sure seemed like the opposition within Google to Project Maven had forced the company's hand.

But on stage at Wired25, Pichai said that Google plans to work with the US Department of Defense in the future, perhaps in such areas as cyber-security or transportation planning. He said Google very much supports the US armed forces.

"We deeply respect what they do to protect our country," he said.

'Important for us to explore search' in China

He made it clear, however, that Google will not work on autonomous weaponry or anything that violates the company's AI principles.

Pichai was also asked about Google's possible plan to once again offer a search engine in China. This summer, The Intercept broke the news that Google had built a search engine that would censor information. Google search pulled out of China in 2010, claiming it could no longer comply with the government's demands that the company filter information.

To return to China, Google would again have to filter information that authorities find objectionable. Some groups argue that censoring information is a violation of human rights.

Pichai said that offering search in China, home to 20 percent of the world's population, is important to the company. As for a censored search engine, Pichai said the company wanted to see what a Google search engine that complied with Chinese law would look like. He gave no timetable for a return to China.


Stephen Hawking feared intelligent machines could destroy humans with weapons 'we cannot even understand'


  • Stephen Hawking, who died earlier this year, wrote a collection of essays that were released on Tuesday. 
  • The book, Brief Answers to the Big Questions, includes a chapter on the potential dangers of artificial intelligence. 
  • Hawking wrote that superhuman intelligence could manipulate financial markets, human leaders, and more without our control.
  • People should invest more in researching the potential effects of artificial intelligence in order to prevent losing control of machines, Hawking said.

Machines with superhuman intelligence have the potential to subdue humans with weapons that "we cannot even understand," Stephen Hawking wrote in a posthumous collection of essays released Tuesday.

The book, Brief Answers to the Big Questions, comes seven months after the world-famous scientist's death. It features commentary on a variety of topics, including black holes and time travel, though some of the most dire predictions relate to artificial intelligence.

If computers keep doubling in both speed and memory capacity every 1.5 years, Hawking wrote, they will likely become more intelligent than people in the next 100 years. Such an intelligence "explosion" would require us to make sure that computers do not begin harming people, he said.
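Taken at face value, that doubling rate compounds dramatically. A quick back-of-the-envelope calculation (simple arithmetic on the rate quoted above, not a figure from the book) shows why:

```python
# Doubling every 1.5 years, sustained for 100 years.
doublings = 100 / 1.5
growth_factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {growth_factor:.1e}x more speed and memory")
# 66.7 doublings -> roughly 1.2e+20x more speed and memory
```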

"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever," Hawking wrote.

Hawking noted that integrating artificial intelligence with neuroscience, statistics, and other fields has yielded many successful inventions — including speech recognition, autonomous vehicles, and machine translation. One day, even diseases and poverty could be eradicated with the help of artificial intelligence, he said.

While technology could benefit humans a great deal, Hawking wrote, researchers need to focus on making sure we avoid the risks that come with it. In the near future, artificial intelligence could increase economic equality and prosperity through job automation. But one day, the same systems could take over our financial markets, manipulate our leaders, and control our weapons, Hawking said.

"Success in creating AI would be the biggest event in human history," he wrote. "Unfortunately, it might also be our last, unless we learn how to avoid the risks."

Researchers have not focused enough on artificial intelligence-related issues, Hawking said, though some technology leaders are stepping in to change that. Hawking cited Bill Gates, Elon Musk, and Steve Wozniak as examples of people who share his concerns, adding that awareness of potential risks is growing in the tech community. 

People should not turn away from exploring artificial intelligence, Hawking wrote. Human intelligence, after all, is the product of natural selection in which generations of people adapted to new circumstances, he said.

"We must not fear change," Hawking wrote. "We need to make it work to our advantage."

When humans invented fire, people struggled with controlling it until they created the fire extinguisher, Hawking wrote. This time around, we cannot afford to make mistakes and respond to them later, he said.

"With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time," Hawking wrote. "It may be the only chance we will get."


MIT is giving you control of a real person on Halloween in a dystopian game that sounds like an episode of 'Black Mirror'


  • MIT Media Lab is hosting a mass online social experiment on Halloween at 11 p.m. EDT.
  • Called "BeeMe," the goal of the "dystopian game" is to let participants control an actor and defeat an evil artificial intelligence program.
  • Internet users will program the actor by crowdsourcing commands and then voting on them.
  • BeeMe's creators say they want the project to stoke conversations about privacy, ethics, entertainment, and social interactions.

This Halloween, the creepiest event to attend might be a mass online social experiment hosted by researchers at the Massachusetts Institute of Technology.

MIT is famous for churning out some of the world's top engineers, programmers, and scientists. But the university's Media Laboratory is increasingly known for launching experimental projects in October that are designed to make us squirm.

In 2016, researchers at the MIT Media Lab created the artificial-intelligence program Nightmare Machine, which converted normal photos into macabre images. (The results were predictably creepy.) Then in 2017, a researcher made AI software called "Shelley" that learned how to write its own horror stories. (These were also creepy.)

This year, members of MIT Media Lab are taking their desire to freak us out to the next level with a project called "BeeMe."

BeeMe is described in a press release as a "massive immersive social game" that aims to "shed a new light on human potential in the new digital era." But it also sounds like a choose-your-own-adventure episode of the show "Black Mirror."

"Halloween night at 11 p.m. ET, an actor will give up their free will and let internet users control their every action," Niccolò Pescetelli, who studies collective intelligence at MIT Media Lab, told Business Insider in an email about BeeMe.

Pescetelli added: "The event will follow the story of an evil AI by the name of Zookd, who has accidentally been released online. Internet users will have to coordinate at scale and collectively help the actor (also a character in the story) to defeat Zookd. If they fail, the consequences could be disastrous."

How MIT will let you control a person


The project's slogan is: "See what I see. Hear what I hear. Control my actions. Take my will. Be me."

The full scope of gameplay is not yet public. However, Pescetelli, BeeMe's social media accounts, and promotional materials reveal a few key details.

The person being controlled will be a trained actor, not anyone randomly selected. Who that actor will be and where they will be located won't be disclosed, Pescetelli said. He said he expects the game to last about two hours, but added "it will be the audience who ultimately decides" how long the game will go on.

There will be limits to what crowd-generated commands can make the actor do.

"Anything that violates the law or puts the actor, their privacy, or their image in danger is strictly forbidden," Pescetelli said. "Anything else is allowed. We are very curious about what [is] going to happen."


Participants will control the actor through a web browser, in two ways.

One is by writing in and submitting custom commands, such as "make coffee," "open the door," "run away," and so on. The second way is by voting up or down on those commands, similar to the system used by Reddit. Once a command is voted to the top, the actor will presumably do that very thing.

This is the origin of the word "bee" in the project's name: Internet users will have to act collectively as a "hive" to progress through the game.
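MIT has not published BeeMe's backend, but the submit-and-vote mechanic described above (crowdsourced commands, Reddit-style voting, and execution of whatever rises to the top) could be sketched in a few lines. The class and method names below are invented for illustration:

```python
from collections import Counter

class CommandQueue:
    """Minimal sketch of the submit/vote/execute loop described above."""

    def __init__(self):
        self.votes = Counter()

    def submit(self, command):
        self.votes.setdefault(command, 0)      # new commands start with zero votes

    def vote(self, command, up=True):
        self.votes[command] += 1 if up else -1

    def next_action(self):
        """Return the top-voted command for the actor, then reset for the next round."""
        if not self.votes:
            return None
        command, _ = self.votes.most_common(1)[0]
        self.votes.clear()
        return command

q = CommandQueue()
q.submit("make coffee")
q.submit("open the door")
q.vote("open the door")
q.vote("open the door")
q.vote("make coffee", up=False)
print(q.next_action())  # -> "open the door"
```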

BeeMe's Twitter account shared an eerie teaser video of the game on October 15.

"Many people have played an augmented reality game, but BeeMe is reality augmented," Pescetelli said in a press release. "In BeeMe an agent gives up their free will to save humanity — or perhaps to know whether humanity can be saved at all. This brave individual will agree to let the Internet pilot their every action."

The whole event will be broadcast live at beeme.online.

"In theory there is no limit to the number of users that the platform can support, but we will know for sure only on Halloween," Pescetelli said.

Why the researchers created BeeMe


The BeeMe project is made by eight people, will cost less than $10,000, and quietly went public in May 2018, when it joined Twitter as @beeme_mit. The tweets posted by the account capture some of its thinking and evolution.

One tweet quotes philosopher Marshall McLuhan, who famously wrote in 1964 that "the medium is the message" — meaning that any new way to communicate influences what we say, how we say it, and ultimately what we think. McLuhan, who lived until 1980, is described by his estate as "the father of communications and media studies and prophet of the information age."

The account also references other visionaries, including analytical psychologist Carl Jung, social scientist Émile Durkheim, and biologist Charles Darwin.

"[In] the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed," BeeMe tweeted in August, quoting a famous saying of Darwin's (and likely as a tip on how to win the game).

Another tweet highlights a shocking act of performance art called "Come Caress Me," created in 2010 by Amir Mobed. In the installation, Mobed stands before a huge target with a metal bucket on his head, and volunteers are led into the room to shoot him with a pellet gun. (Many do, not seeming to understand the ammunition is real.)

These and other BeeMe posts seem to reflect what the experiment strives to be on Halloween: Something that is on its surface fun, but reveals some hidden truths about ourselves and our digital society.

In a release sent to Business Insider, the project described itself this way: "BeeMe is a dystopian game that promises to alter the face of digital interactions, by breaking the Internet's fourth wall and bringing it back to reality. BeeMe wants to reopen a serious — yet playful — conversation about privacy, ethics, entertainment, and social interactions."

Whatever the game ends up teaching those who play or watch it, we'll find out on Halloween if humanity can pull together to save itself — or fail in dramatic disarray.

This story has been updated with new information.


This Irish CEO explains how a barista in a hipster coffee shop inspired Intercom, a $1.3 billion startup


  • Eoghan McCabe is the CEO and cofounder of Intercom, one of the fastest growing startups in Silicon Valley.
  • Intercom offers services that help companies communicate with their customers.
  • McCabe and his partners formed Intercom after being inspired by the personalized service they got at a Dublin coffee shop.
  • Intercom is now offering chatbots that can help businesses grow.

Eoghan McCabe, CEO and co-founder of Intercom, first came up with the idea for his business in a coffee shop.

In Dublin, where McCabe is from, he and his coworkers would frequent a hipster coffee shop called 3fe and chat with the owner, Colin Harmon. There weren’t many other coffee shops with that vibe in the city, and McCabe appreciated how Harmon connected with his customers. The personalized service Harmon offered ended up inspiring McCabe and his partners.

"We got to meet and appreciate the guy, feel the passion for his craft," McCabe said. "We built a relationship with him and paid more for his overpriced coffee."

He continued: "When we looked at every internet business, they didn’t get to connect with us the way Colin connected with us."

McCabe started thinking about how internet businesses aren't great at interacting with customers. Typically they send customers emails from "donotreply" addresses and then route them through not always helpful "help" desks when they need more support.

"All these products were really impersonal," McCabe said. "With Colin, if you went into his store and asked, 'We have a question about Colombian roast,' he’d say, 'What is it?'"

McCabe and co-founders Des Traynor, Ciaran Lee, and David Barrett soon got to work on a messaging service for companies that became the foundation of Intercom.

Intercom has become a billion-dollar company and is growing fast

Fast forward seven years to today, and the company has become a $1.3 billion business headquartered in San Francisco with offices in four other cities. It's one of the fastest growing startups in Silicon Valley. In March, it raised $125 million in new funds. And just this month, it launched its new product, the Answer Bot.

When Intercom first began, it focused on creating an instant messaging system to connect companies' customers with their sales, marketing, and support employees. Its next step is to use machine learning and artificial intelligence to create bots that can automate every stage of the customer lifecycle.

"The first chapter was about getting people back into the mix," McCabe said. "This next chapter is to facilitate automation, bots, to achieve the same vision and mission."

That may seem contradictory. After all, Intercom's original goal was to make businesses more personal, and bots are literally not persons at all.

But McCabe says the apparent contradiction goes away if you think about the meaning of "personal."

It's "all about treating the customer as an individual," he said. "It’s about respecting their time and their dignity. It's all about getting them to their ideal outcome.

"We started to realize these bots and automation technology could do all that — sometimes better than humans."

Intercom's bots are designed to help out human workers

Intercom's bots aren't pretending to be humans; they're open about being bots. They're also not intended to completely replace human workers. Instead, they're meant to assist them and allow companies to help more customers than they could before.

If a company only has human workers to handle customer service questions, customers can end up having to wait long periods for someone to handle their queries. Intercom's clients can use Answer Bot to handle some of those questions instead.

The service uses machine learning, relying on previous conversations between employees and customers to figure out how to answer new queries. Intercom clients can even build their own chatbots using Custom Bot, a product the company released in August.
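
The article doesn't spell out how Answer Bot works internally, but the general approach it describes, matching a new customer question against ones a support team has already answered, can be sketched in a few lines of Python. The question bank, answers, similarity threshold, and answer function below are hypothetical illustrations, not Intercom's implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical bank of previously answered customer questions.
past_questions = [
    "How do I reset my password?",
    "Can I change the email on my account?",
    "Do you ship to Canada?",
]
past_answers = [
    "Use the 'Forgot password' link on the sign-in page.",
    "Go to Settings > Account to update your email address.",
    "Yes, we ship to Canada; delivery takes 5-7 business days.",
]

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(past_questions)

def answer(new_question, threshold=0.3):
    """Return a stored answer if a past question is similar enough;
    otherwise signal that a human should take over."""
    scores = cosine_similarity(vectorizer.transform([new_question]), question_matrix)[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return past_answers[best] + " Let us know if you need anything else."
    return None  # no confident match: hand off to the human support team

print(answer("I forgot my password, how can I reset it?"))
```

A real deployment would use much richer language models and far more history; the threshold simply decides when the bot is confident enough to reply on its own.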

While their bots are handling routine questions, companies can route more complicated ones to their human workers.

"Answer Bot has a deep bank of the questions people ask about that business," McCabe said. "The next time people ask that question, it doesn't get sent to your support team to answer that question for the 2000th damn time. Answer Bot can answer that and say, 'let us know if you need anything else.'"

McCabe predicts that bots will soon become commonplace. Intercom has over 30,000 paying customers, including Sotheby's, Atlassian, Shopify, and Expensify, and its service facilitates 500 million conversations a month. To improve its bot technology, the company is doubling down on research and development and expanding its product development team.

McCabe and his team learned from past mistakes

But leading a company hasn't always been easy, McCabe said. Intercom isn’t his first startup — he had previously founded two other ones with the company's other cofounders. Those experiences helped, since he and his partners made a lot of mistakes along the way in their prior ventures.

"A lot of the dumbest mistakes we got out of the way," he said.

McCabe and his cofounders learned that to be successful, they needed to figure out how to have their companies do what they're best at while continuing to innovate. Having learned that lesson, he’s ready to face the next chapter in automation. And he thinks his company is primed for a major transition in the industry.

"In the next couple of years, every single business that has invested in trying to accelerate their growth will have simple bots working alongside humans," he said. That will allow them "to have higher quality and faster response to allow humans to do what humans do best."

SEE ALSO: $20 billion Atlassian explains why it's blowing up its oldest product to evolve with today’s software teams


NOW WATCH: Here's what caffeine does to your body and brain

The head of tech at one of the world’s largest consulting firms says the way businesses are piling into AI is different than anything they’ve ever seen


Paul Daugherty, chief technology and innovation officer at consulting firm Accenture, as seen in Business Insider's San Francisco offices on Monday, October 15, 2018.

  • Artificial intelligence is the most important technology trend today, said Paul Daugherty, the chief technology and innovation officer at giant consulting firm Accenture.
  • Companies are adopting it more rapidly and broadly than any previous technology trend, he said.
  • It's being used to drive efficiency and to better tailor products for customers.
  • But many companies are unprepared for it or have unrealistic expectations, he said.

Artificial intelligence, according to Paul Daugherty, is overhyped, many of the expectations for it are unrealistic, and most companies and their workers are unprepared for it.

At the same time, he says, it's the biggest and most important trend in technology today, will likely remain so for the next 10 to 20 years, and will profoundly change businesses around the world.

"We call it the alpha trend," Daugherty told Business Insider in an interview this week. He continued: "I don't want to be accused of hyping it more, but it is a big deal in terms of its impact."

Daugherty is in a position to know. He's the chief technology and innovation officer at Accenture, the giant consulting firm that counts more than three fourths of the Fortune Global 500 as its customers. As such, Daugherty leads the firm's effort to help clients identify, embrace, and integrate critical new technologies.

Every year, he and his team put together a list of the top technology trends in business. At the top of the list right now — and likely for many years to come — is AI, he said.

AI is being adopted by companies in every kind of business — that's different than other tech

AI is remarkable for lots of reasons, but among them is how it's being adopted and by whom, Daugherty said. With previous trends, such as e-commerce or mobile apps or the cloud, the technology tended to be adopted quickly by only a handful of companies or a smattering of industries or in only a few countries around the world, he said. The companies on the cutting edge of the mobile phone trend tended to be banking and financial services firms, for example, while retailers tended to be the first ones to adopt e-commerce.

What's changed with AI is just how rapidly and broadly companies and industries globally are adopting it and related technologies, such as machine learning and automation, Daugherty said. Accenture has never seen interest among its clients or business grow this quickly with any other technology trend, he said. And instead of the interest being concentrated in a particular industry, it spans everything from retail to utilities, he said.

"What we're seeing with AI is very different. It's very broad, immediate adoption," he said. He continued: "It's the fastest growing technology trend we've ever seen."

But there are still some unrealistic expectations

Utility companies are using machine learning and AI to try to become more efficient and get the most out of their production and distribution facilities, he said. Banks are using such technologies to flag suspect transactions more easily and accurately.

Online clothing seller Stitch Fix is using AI to try to better understand its customers' fashion preferences and to better predict what clothes they'll want next, he noted. Meanwhile, Carnival Cruise Lines has put in place a system to track the activities customers take part in and the stops they visit to better tailor its offerings.

"It's remarkably broad in terms of the adoption and going global very quickly," Daugherty said. He continued: "You see companies looking at how to better optimize their assets and create new revenue streams."

To be sure, there are likely to be hitches and hiccups in the race to embrace AI. Many companies have unrealistic expectations of what the technology will be able to do for them, he said. And many of them are unprepared for the technology.

In a recent Accenture survey of executives at some 1,500 organizations, 65% of those polled said their workforces weren't yet ready to work with AI and related technologies. But only 3% said their companies were investing in training their employees to use them.

"It's an area that most companies are behind on," Daugherty said, continuing, it's "a striking disconnect."

Now read:

SEE ALSO: The founder of a beloved productivity app thinks Hollywood has a better blueprint for innovation than Silicon Valley — and he's taking his cues from Netflix to fix it


NOW WATCH: An environmental group is testing giant floating pipes to clean up oceans

Most companies using AI say their No.1 fear is hackers hijacking the technology, according to a new survey that found attacks are already happening


  • Most companies that are early users of artificial intelligence have one big concern about the technology: cybersecurity.
  • That's not an idle concern; many say their AI systems have already been breached, according to a new study from Deloitte.
  • Researchers have shown that machine-learning systems can be manipulated and can potentially leak valuable data, analysts from Deloitte said in a new report.

Artificial intelligence holds lots of promise among corporations, but early adopters of the technology also see some big dangers.

Bad AI could cause their companies to make strategic mistakes or engage in unethical practices, lead to legal liabilities, and even cause deaths. But the biggest risk early adopters see from the technology has to do with security, according to a new study from consulting firm Deloitte.

That concern is "probably well placed," analysts Jeff Loucks, Tom Davenport, and David Schatsky said in the report. Many of the companies testing or implementing AI technologies have already experienced a security breach related to them.

"While any new technology has certain vulnerabilities, the cyber-related liabilities surfacing for certain AI technologies seem particularly vexing," the Deloitte analysts said in the report.

For its study, the firm surveyed some 1,100 executives at US companies from 10 different industries that are already testing or using AI in at least one of their business functions.

Many see security as a huge risk

As part of the study, Deloitte asked the executives to rank what they saw as the biggest risks related to the technology. Cybersecurity was ranked the top risk far more often than any other, and more than half of respondents placed it among their top three.

Deloitte chart on top risks ranked by early adopters of artificial intelligence (AI)

The executives weren't just being nervous Nellies. Many had already experienced the security dangers of AI firsthand. Some 30% of those surveyed said their companies had experienced a security breach related to their AI efforts in the last two years.

The Deloitte study didn't discuss the types of breaches that the companies experienced or what the consequences of them were. But it did note that researchers have discovered various ways that hackers could compromise or manipulate AI systems to nefarious ends.

Machine learning systems can be manipulated

For example, researchers have shown that machine learning systems can be manipulated into reaching incorrect conclusions when they're intentionally fed bad data, the Deloitte analysts said. Hackers could potentially fool a face-detection system by feeding it enough photos of an imposter to get it to recognize that person as the authorized user.

Such systems could also potentially expose companies' intellectual property, the analysts said. If malicious actors were able to generate numerous interactions with a company's machine learning system, they could use the results to reverse engineer the data on which the system was built, they said.
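
The report doesn't describe specific incidents, but the reverse-engineering risk the analysts mention can be illustrated with a toy model-extraction loop: an attacker who can only query a black-box classifier collects its answers and trains a surrogate copy on them. The victim model, query distribution, and surrogate below are synthetic stand-ins invented for this sketch, not details from the Deloitte study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# A synthetic "victim" model standing in for a company's black-box service.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
victim = LogisticRegression().fit(X, y)

# The attacker never sees X or y, only the predictions returned for
# inputs of their choosing.
rng = np.random.default_rng(0)
queries = rng.uniform(X.min(), X.max(), size=(2000, 4))
stolen_labels = victim.predict(queries)

# A surrogate trained purely on those query/response pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, stolen_labels)

# Measure how closely the copy mimics the original on fresh inputs.
test = rng.uniform(X.min(), X.max(), size=(500, 4))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"Surrogate agrees with the victim on {agreement:.0%} of unseen inputs")
```

The more queries an attacker can make, the closer the copy tends to get, which is why rate limiting and monitoring access to production models are commonly cited defenses.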

Because of such dangers, many of those surveyed said they had delayed, halted, or decided not to even begin some planned AI efforts, according to the study.

Deloitte chart on how early AI adopters are responding to cybersecurity concerns from 10/18 report

Now read:

SEE ALSO: Amazon's Alexa is getting smarter about sports — it can tell you the odds of the next NFL game and give you an update on your favorite teams


NOW WATCH: Apple might introduce three new iPhones this year — here’s what we know


The founder of an AI startup that just raised $30M explains why starting a company is like playing 'Super Mario'


  • People.ai raised $30 million in funding from Andreessen Horowitz and other VC firms.
  • It's the second startup that CEO Oleg Rogynskyy has founded.
  • The process of building a successful startup gets more familiar the more you do it, just like a video game, Rogynskyy reckons.

Starting a company is like playing a Super Mario game.

At least that’s what Oleg Rogynskyy, founder and CEO of artificial intelligence startup People.ai, believes.

“The more times you go through the initial levels, the more comfortable you get,” he tells Business Insider. 

Rogynskyy is progressing through an exciting level with People.ai, which he started in February 2016. Before that, he founded Semantria, a startup that analyzes the emotions behind social media posts and which was acquired by Lexalytics. 

In the last two years, People.ai has grown to just over 100 employees and acquired over 50 large enterprise customers. And on Tuesday, the artificial intelligence software startup announced a $30 million Series B funding round led by Andreessen Horowitz. Rogynskyy said fundraising took only four days, but also "a couple months of homework."


Peter Levine, a general partner at Andreessen Horowitz and one of the first investors in software development platform GitHub, also joined the People.ai board.

“He’s intimately familiar with the problem we’re solving,” Rogynskyy said of Levine. “He was probably the ultimate guy to have this conversation with. We went to the same college together, we had a lot in common.”

The inspiration behind People.ai came from Rogynskyy’s experience as a salesperson when he started his career. Much of his time involved the tedious work of maintaining and updating the Salesforce customer relationship management platform.

“Our COO literally grounded me for a week and had me update Salesforce and spend the week cleaning Salesforce,” Rogynskyy said.

Taking AI to the next level

That needed to change, Rogynskyy thought. People.ai uses machine learning software to gain insights from the behavioral data of employees, improve sales and marketing, and automate time-intensive tasks, like the Salesforce updates Rogynskyy once had to do by hand. He estimates that data entry alone can take up to one day a week.

From previous experience, Rogynskyy says, the most valuable lesson he learned was focus: concentrating on just one type of customer. So People.ai narrowed its focus to enterprise customers.

He's also gotten the knack for handling other challenges and potential pitfalls of founding a company, like creating an efficient hiring process and building an executive team.

“You need to start directing and aligning your team to execute the work,” Rogynskyy said. “The challenge with having the time is correlated with how mature your team is to execute the work under your guidance.”

Going forward, People.ai hopes to go public three years from now. As in a Super Mario game, Rogynskyy is focused on jumping to the next level of artificial intelligence. Within five to 10 years, he believes, artificial intelligence will allow people to free up two days of their week to focus on creative projects and spend more time with their families.

“Machine learning AI is the software of the future,” Rogynskyy said.  “I don’t think we’ll be writing code for the next few years afterwards … We believe there’s a lot of automation that can improve the quality of life.”

SEE ALSO: Oracle's Larry Ellison says Amazon’s database is like a semi-autonomous car: ‘You get in, you start driving, you die’


NOW WATCH: This camper concept fits in the back of a van

A 22-year-old Johns Hopkins dropout is pioneering a new way to treat drug addiction using your phone, and health VCs are lining up to invest


  • 22-year-old Shrenik Jain dropped out of a triple major at Johns Hopkins to create Marigold Health, a startup that offers a new kind of addiction treatment.
  • Jain's startup recently got backing from Silicon Valley health venture fund Rock Health, along with Rough Draft Ventures powered by General Catalyst, a VC that's funded successful startups like Jet.
  • Other notable backers include the National Institutes of Health, the US government's chief medical research agency.
  • Marigold offers patients access to text-based peer support groups that are monitored by a social worker and informed by advanced data analytics software.

After working as a Baltimore EMT for two years and regularly reviving people who'd overdosed on opioids, Shrenik Jain decided there had to be a better way to help people with addiction.

So a few years ago, he dropped out of Johns Hopkins and went to work on a startup designed to help prevent people from overdosing in the first place.

Today, that startup is called Marigold Health, and it recently received an undisclosed amount of funding from influential Silicon Valley health venture fund Rock Health. Other backers include Rough Draft Ventures of General Catalyst, the Cambridge-based VC that's funded successful startups like Jet, Snap Inc., and Kayak, as well as Johns Hopkins' tech venture arm. The National Institutes of Health also offered Marigold a grant to validate the platform.

Marigold provides people with access to group therapy — somewhat similar to what people with addiction may engage in face-to-face in the rooms of 12-step programs like Alcoholics Anonymous or Narcotics Anonymous. But unlike those resources, Marigold's group texts can be paid for by insurers. And thanks to a finely-tuned set of data analytics tools, Marigold also keeps track of patients in the system to ensure they're progressing; if they're headed in the other direction, someone from Marigold reaches out to help.

No other system for helping patients with addiction in this way currently exists. For patients with depression or anxiety, a handful of tools let individuals text a therapist (or an AI-powered chatbot) for support between in-person sessions; some of those tools allow people to keep track of their symptoms, but many do not. But when it comes to drug addiction, which can coincide with depression but is a separate clinical issue, those kinds of tools are scant.

"There's nothing else like this today," Jain told Business Insider.

To be clear, neither Jain nor his cofounder Ravi Shah aim for Marigold to replace any current addiction treatment method. Instead, Marigold is intended to complement a patient's existing treatments, which could include attending NA meetings, taking medications like naltrexone or buprenorphine, or having regular sessions with a therapist in person.

"We're not saying that peers are going to replace clinicians," Jain said. "We're saying that peers can do something valuable and distinct from clinicians in the care continuum. And they can do so cost-effectively."

Peer groups: an imperfect lifeline for patients with addiction

In 2016 alone, 62,000 Americans died from a drug overdose, and recent data suggest that our current treatments for addiction are barely making a dent in the problem.

Part of the problem is that addiction is a chronic condition that can last anywhere from several years to a lifetime, but most current healthcare models treat it as a short-term illness. Health insurance coverage for in-patient treatment can be limited; during this time, patients are advised to detox and attend 12-step meetings. If and when patients relapse, there's little recourse for help. Patients frequently end up back in the hospital, where they can rack up large medical bills.

Peer groups can help. The authors of a 2016 study published in the journal Substance Abuse and Rehabilitation, for example, found higher rates of abstinence, more satisfaction with treatment, and significant reductions in relapse rates among people who participated in peer groups for substance abuse compared with people who did not.

But peer groups aren't perfect — many lack the necessary structure to keep discussions on track and the oversight needed to ensure patient safety and security. Jain has some first-hand experience with this, having worked at a nonprofit called Thread, where he served in an advisory role with a group of underserved Baltimore high schoolers who'd experienced trauma and anxiety.

"When you put patients in peer groups they engage really well, but the problem is it's really hard to have oversight," Jain said.

On Reddit, for example, people will invade peer groups created for individuals with depression who are being monitored on suicide watch. At 12-step programs, drug dealers show up to take advantage of vulnerable attendees, and sexual harassment is widespread (there's even a name for it: "13th stepping"). Plus, there's little that peers can do when one of their members suddenly goes missing.

"It's not like high blood pressure or any other kind of disease where you can look at labs," Jain said. "There's no way to passively track a patient in the community short of tracking them down."

What Marigold offers that other text-based interventions don't

Marigold offers a potential solution.

Its algorithm, which Shah designed, uses artificial intelligence and natural language processing to monitor group chats; a certified social worker regularly scans the group and looks at the data that the algorithm provides. All of this happens under the oversight of Geetha Jayaram, an associate professor of psychiatry at Johns Hopkins. The Marigold platform is HIPAA-compliant, so patient privacy is secured. 

"Coming from an engineering perspective, we were like, let's use natural language processing — not to build a bot — but to actually look at the sentiment in the messages and make it so a health care provider doesn't have to read every single message manually," Jain said.

Jain claims this will allow providers to see anywhere from seven to 10 times as many patients as they would without the tool. That's all thanks to Satya Bommaraju, Marigold's chief data scientist and a fellow former Johns Hopkins student who put his plans for a PhD on hold to work at the startup.

Jain and Shah plan to sell Marigold directly to health care providers and health plans. Jain says those buyers will be incentivized to cover the treatment because it will save them money in the long term on hospital readmissions, ER visits, and other medical bills that crop up when patients with addiction relapse.

Bill Evans, the CEO and managing director of Rock Health, one of Marigold's backers, agreed.

"For providers right now there's an opportunity to get reimbursement for something they want to provide but they don't have the tool," Evans told Business Insider.

Marigold could be one of those tools, he said.

DON'T MISS: DNA tests that cost as much as $750 claim to tell you which antidepressant is best for you, but scientists say they're not worth the money

SEE ALSO: Most rehabs don't offer a science-backed treatment for drug addiction. A new initiative aims to change that.


NOW WATCH: Why babies can't drink water

Military work is a lightning rod in Silicon Valley, but Microsoft will sell the Pentagon all the AI it needs (MSFT)


  • Microsoft announced Friday that it plans to sell artificial intelligence technologies to the military.
  • The military is looking to use more artificial intelligence in its defense systems, as the Chinese government has set goals of surpassing the U.S. military.
  • In Silicon Valley, the question of whether tech companies should take on projects for the military and federal law enforcement has sparked controversy among employees.

On Friday, Microsoft said it plans to sell artificial intelligence and other advanced technologies to the military and intelligence agencies to strengthen national defense, The New York Times reported.

Microsoft's decision, which the Times said was announced at a small town-hall-style company meeting on Thursday, contrasts sharply with that of its rival Google, which has said it will not sell the government technology that could be used in weapons.

"Microsoft was born in the United States, is headquartered in the United States, and has grown up with all the benefits that have long come from being in this country," Microsoft General Counsel Brad Smith was quoted in the report as saying. 

The debate about military AI among US tech companies comes as the Pentagon is in a race with the Chinese government to develop next-generation security technologies.

Employees within tech companies have protested against their companies' involvement in military and federal law enforcement work. For example, thousands of employees signed a petition, and some even resigned, after revelations that Google had sold artificial intelligence technology to the Pentagon to analyze drone footage.

Others, such as Oracle founder Larry Ellison and Amazon CEO Jeff Bezos, have shown their support for the U.S. military. In a recent interview, Ellison said of Google, "I think U.S. tech companies who say we will not support the U.S. Military, we will not work on any technology that helps our military, but yet goes into China and facilitates the Chinese government surveilling their people is pretty shocking."

Likewise, Amazon is seen as the front-runner to win a major cloud computing contract with the Pentagon. Meanwhile, Google recently dropped out of the bidding for that contract, saying the work would conflict with its corporate values. As for Microsoft, it's also seen as a strong contender.

SEE ALSO: Google CEO Sundar Pichai bowed to Trump during the company's earnings call — here's why that should concern you


NOW WATCH: Scorpion venom is the most expensive liquid in the world — here's why it costs $39 million per gallon

The 3rd most powerful supercomputer in the world was turned on at a classified government lab in California — Here’s what the 7,000 square foot ‘mini city’ of processing power is like up close


  • The third most powerful supercomputer in the world has been completed and unveiled: the Sierra.
  • This supercomputer, which can do 125 quadrillion calculations in a second, will be used to create simulations that test how safe and reliable the nuclear weapons in the government's stockpile are.
  • The Sierra is currently being used for scientific work, such as predicting the effects of cancer and mapping traumatic brain injury.

Covering 7,000 square feet, with 240 computing racks and 4,320 nodes, the machine housed at a classified government lab looks like a futuristic mini city of black boxes with flashing blue and green lights.

This buzzing machine, called the Sierra supercomputer, is the third most powerful computer in the world. It was unveiled Friday at its home, the Lawrence Livermore National Laboratory (LLNL) in California, after four years in the making.

At its peak, Sierra can do 125 quadrillion calculations in a second. Its simulations are 100,000 times more realistic than anything a normal desktop computer can produce. The only two supercomputers that are more powerful are China's Sunway TaihuLight in second place and IBM's Summit in first.

"It would take 10 years to do the calculations this machine can do in one second," said Ian Buck, vice president and general manager of accelerated computing at NVIDIA.

Powering such a massive electronic brain takes about 11 to 12 megawatts, roughly the equivalent of what's needed to power 12,000 homes, which Sierra's creators call a relatively efficient level of energy consumption.
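
A quick back-of-the-envelope check, using only the figures quoted above (125 quadrillion calculations per second, roughly 12 megawatts, and 12,000 homes):

```python
peak_ops_per_second = 125e15   # 125 quadrillion calculations per second
power_watts = 12e6             # roughly 12 megawatts at the high end
homes_powered = 12_000

print(peak_ops_per_second / power_watts / 1e9)  # ~10.4 billion calculations per second per watt
print(power_watts / homes_powered)              # ~1,000 watts, i.e. about 1 kW per home
```

A kilowatt per home is roughly in line with an average US household's continuous electricity draw, so the 12,000-home comparison checks out.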

Right now, Sierra is being used in partnership with medical labs to help develop cancer treatments and study traumatic brain injury before it switches to classified work.

Going nuclear soon

Many of the 4,000 nuclear weapons in the government's stockpile are aging. Once Sierra switches to classified production in early 2019, it will focus on top-secret government work, using simulations to test the safety and reliability of these weapons without detonating them and endangering people.

Besides assessing nuclear weapons, this supercomputer can create simulations to predict the effects of cancer, earthquakes and more. In other words, it can answer questions in 3D.

IMG_20181026_111310

The lab and the Department of Energy worked with IBM, NVIDIA and Mellanox on this project. Talks for Sierra began in 2012, and in 2014 the project took off. Now, it's six to ten times more powerful than its predecessor, Sequoia.

What makes Sierra notably different is NVLink, the high-speed interconnect that links its processing units and gives them faster access to memory.

"What's most fascinating is the scale of what it can do and the nature of the system that opens itself to the next generation workload," said Akhtar Ali, VP of technical computing software at IBM. "Now these systems will do the kind of breakthrough science that's pervasive right now.

The lab also installed another new supercomputer called Lassen, which will focus on unclassified work like speeding cancer drug discovery, research in traumatic brain injury, and studying earthquakes and the climate.

Sierra's not the last supercomputer the lab will build. It's already planning the next one, "El Capitan," which will be able to do more than a quintillion calculations per second, making it about 10 times more powerful than the colossal Sierra.

The lab expects to flip the switch on El Capitan sometime in the 2021 to 2023 time frame. 

In case you're wondering, the supercomputers are all named after natural landmarks in California. 

And no, Lawrence Livermore National Laboratory spokesperson Jeremy Thomas says, there are no plans to use the Sierra supercomputer for bitcoin mining.

"While it would probably be great at it, mining bitcoin is definitely not part of our mission" Thomas says. 

IMG_20181026_110546

SEE ALSO: The 20 best smartphones in the world


NOW WATCH: Why most people refuse to sell their lottery tickets for twice what they paid

Many companies are stumbling as they rush to adopt artificial intelligence — here's what's tripping them up


A man watches a data server at the booth of IBM during preparations for the CeBIT trade fair in Hanover, March 9, 2014.

  • Companies that are rushing to embrace artificial intelligence technologies are running into big problems with their data.
  • Some companies don't have enough data, others have it in disparate places, and still others don't have it in a usable format.
  • Because of such challenges, some early adopters have abandoned AI projects.

If there's one big thing that might thwart companies' headlong rush to adopt artificial intelligence for their businesses, it's data.

AI generally requires lots of data. But it needs to be the right kind of data, in very particular kinds of formats. And it often needs it to be "clean," including only the kind of information it needs and none of what it doesn't.

All of that adds up to a big problem for many businesses.

"The biggest challenge most organizations face when they start thinking about AI is their data," said Paul Daugherty, the chief technology and innovation officer of consulting firm Accenture, in an interview earlier this month. He continued: "Often we're seeing that that's the big area that companies need to invest in."

Corporations large and small and across multiple industries are enthusiastic about AI and related technologies such as machine learning. Many are already adopting it to do things such as improving their customer service, flagging suspect transactions, and monitoring employees' performance. Accenture considers AI the "alpha trend," the most important trend in technology not only today but for the next 10 to 20 years.

But for companies to really reap the benefits — to be able to detect trends, identify anomalies, and make predictions about future behavior — they're going to have to come to terms with their data.

And unfortunately, many companies aren't in good shape when it comes to data. In a recent survey by consulting firm Deloitte, a plurality of executives at companies that are early adopters of AI ranked "data issues" as the biggest challenge they faced in rolling out the technology. Some 16% said it was the toughest problem they confronted with AI, and 39% said it ranked in the top three.

Companies are facing multiple problems when it comes to data

Some companies don't have the data they need. Others have databases or data stores that aren't in good shape to be tapped by AI. Still others are dealing with issues related to trying to keep their data secure or maintain users' privacy as they prepare for it to be used by AI systems.

"Getting the data required for an AI project, preparing it for analysis, protecting privacy, and ensuring security can be time-consuming and costly for companies," Deloitte analysts Jeff Loucks, Tom Davenport, and David Schatsky said in the report. "Adding to the challenge is that data — at least some of it — is often needed before it is even possible to conduct a proof of concept."

Deloitte report on AI early adopters — chart on struggles faced in adopting artificial intelligence

One particular problem companies are facing on the data front is that it's often housed in different departments and disparate databases, noted the Deloitte analysts. Customer service data may be in one place, for example, while financial records may be elsewhere. The trouble for companies is that their AI systems will often need to tap into multiple data stores.

"AI creates a need for data integration that a company may have managed to avoid until now," Loucks, Davenport, and Schatsky said in their report. "This can be especially challenging in a company that has grown by acquisition and maintains multiple, unintegrated systems of diverse vintages."

Indeed, some companies have run into such big problems trying to get the data they needed for an AI effort that they've ended up abandoning or postponing the project, the Deloitte analysts said.

That's why it's crucial that companies assess the state of their data before embarking on AI projects, said Daugherty. It helps them set realistic expectations, he said.

"The big expectations factor for companies is really understanding the data — what shape the data's in to drive the right AI results," he said.

Now read:

SEE ALSO: The best way to avoid killer robots and other dystopian uses for AI is to focus on all the good it can do for us, says tech guru Phil Libin


NOW WATCH: Christopher Wylie says he was pushed into traffic and assaulted after exposing Facebook's bombshell data scandal
