Channel: Artificial Intelligence

This founder's startup developed a super-efficient chip to help self-driving cars 'see' the world around them. Here's the pitch deck it used to raise $25 million to get the chip in production.


Recogni Chief Business Officer Ashwini Choudhary

  • Recogni has designed an artificial-intelligence chip for self-driving cars that it says is hundreds of times more efficient than comparable processors.
  • The company is relying on a data-compression technique that allows it to store image data and digital models of objects on the chip itself, rather than in memory modules, allowing the chip to both run faster and use less power than those of rivals.
  • Recogni plans to start shipping production-ready samples in the second half of next year and just raised $25 million in a Series A round to gear up for commercialization of its technology.
  • It used the pitch deck below to raise that round.

If autonomous vehicles are ever going to replace human-driven cars and trucks, they're going to need to recognize everything from a traffic light to a traffic cone in less than the blink of an eye.

But those cars are also going to need to process all that visual information efficiently, without consuming massive amounts of power — especially as increasing numbers of vehicles run on limited-capacity electric batteries.

A company that develops a chip that can do both — process lots of visual data both rapidly and using little power — could have a big opportunity in front of it. The folks behind Recogni, a San Jose-based startup, think they've done what no other company has to date.

"We saw this as a clear opportunity for us to solve the problem," Ashwini Choudhary, the startup's cofounder and chief business officer, told Business Insider in a recent interview.

Recogni has designed a chip that it says can perform 1,000 trillion operations per second, or 1,000 TOPS (tera operations per second), an industry-standard measurement, while requiring only 5 watts of power. By contrast, Tesla made a lot of noise earlier this year when it unveiled its own self-driving-car chip, which it says will offer a total of 72 TOPS (with two 36-TOPS artificial-intelligence processors) while consuming 72 watts in total, including its main and graphics processors.

Read this: Tesla claims it has made the 'best chip in the world' for self-driving cars at its autonomy day event

Other AI chips being designed for self-driving cars offer similar performance per watt, Choudhary said.

"We are at least 200 times more efficient" than those chips, he said.

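Taken at face value, the figures above put Recogni's claimed efficiency roughly 200 times ahead of the Tesla chip described here, which is where that number comes from. A quick back-of-the-envelope check:

```python
# Performance-per-watt comparison, using only the figures cited above.
recogni_tops, recogni_watts = 1_000, 5   # Recogni's claimed throughput and power draw
tesla_tops, tesla_watts = 72, 72         # Tesla's self-driving computer, as described above

recogni_tops_per_watt = recogni_tops / recogni_watts   # 200 TOPS/W
tesla_tops_per_watt = tesla_tops / tesla_watts         # 1 TOPS/W

print(recogni_tops_per_watt / tesla_tops_per_watt)     # 200.0, the "at least 200 times" claim
```
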
Recogni is focusing on compression

Recogni has been able to make its chip both fast and efficient essentially by focusing on compressing both the data coming out of the digital cameras used to detect objects around the car and the models the chip uses to identify those objects.

The reason why other AI chips for self-driving cars consume so much power is that they store the image data and the models they use to identify objects on a separate memory module, Choudhary said. Shuttling data back and forth between the chip and memory storage expends a lot of energy, he said.

By contrast, Recogni's system is able to shrink the image-sensor data and the object models enough that they can be stored on its AI chip, rather than on a separate memory module, he said. Recogni's system compresses the image data so much that it requires 16 to 20 times less space than other systems do, he said.
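
Recogni hasn't disclosed how its compression works, but the scale of a 16-to-20x reduction is easy to picture. The sketch below uses simple 2-bit quantization of 32-bit values purely as an illustrative stand-in (an assumption, not Recogni's method) to show how a ratio like that can make a model small enough to keep in on-chip memory:

```python
import numpy as np

# Illustration only: shrinking 32-bit values to 2-bit codes gives a 16x reduction,
# the low end of the 16-20x range cited above. This is a generic stand-in technique,
# not Recogni's disclosed method.
values = np.random.randn(1_000_000).astype(np.float32)   # hypothetical weights or pixel data

levels = 4                                   # 2 bits -> 4 representable levels
lo, hi = values.min(), values.max()
codes = np.round((values - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

original_bytes = values.nbytes               # 4 bytes per value
packed_bytes = codes.size * 2 // 8           # size if the 2-bit codes are bit-packed
print(original_bytes / packed_bytes)         # 16.0
```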

"We said that we cannot really do" what other chipmakers were doing, he said. "We said, 'This is not the way,'" he continued, "'We need to change the rules.'"

It thinks its system can make lidar unnecessary

The startup plans to sell its chips as part of a module that will include a bevy of image sensors — a monochrome one, an infrared one, and three color ones. Its system will also be able to extract and support data that comes from a radar or lidar system, Choudhary said.

But the company believes that its system will be able to provide autonomous vehicles with information about the distance of objects just from its image sensors, rather than having to rely on pricey lidar systems for such data. Other companies, including Tesla, are also working on self-driving car systems that don't require lidar, because the laser-based depth sensors can cost tens of thousands of dollars, far too expensive for a car targeted at a mass market.

"We're trying to solve that problem," he said.

Recogni plans to start producing production-ready samples of its chips in the second half of next year, Choudhary said. It already has simulations of them running on Google's cloud.

The company now has the chance to make its vision a reality. It recently closed a $25 million Series A funding round that was led by GreatPoint Ventures. The company plans to use the new funds to bulk up its staffing, particularly in engineering, Choudhary said. Recogni has about 17 employees now; it expects to have around 60 by the end of the year, split between its Bay Area and Munich offices, he said.

In order for autonomous vehicles to make sense of the world around them, they're going to need to be able to process lots of image data quickly and efficiently.

"That's where we come in," Choudhary said.

Here's the pitch deck Recogni used to raise its $25 million funding round:

SEE ALSO: This VC firm managing $500 million in assets tries to invest in as few companies as possible. And it only wants startups with management teams looking for help.

The 12 books Elon Musk says shaped his worldview and led him to business and personal success


  • No matter how busy executives are, they always seem to find time to read.
  • Elon Musk follows this pattern. Instead of relying on self-help books or just nonfiction pieces, he reads across different genres. 
  • We explored Musk's previous interviews and social media to compile a list of 12 books that he's recommended.

Elon Musk, the CEO of SpaceX, Tesla, and other larger-than-life tech companies, somehow also seems to find time to read.

Musk has said that reading a variety of books — from epic works of fantasy like the "Lord of the Rings" trilogy to complex how-to books on building rockets — is crucial to his success.

We looked through Musk's past interviews and social media history to come up with a list of 12 books the billionaire entrepreneur thinks everyone should read.

Take a look below.

SEE ALSO: Ray Dalio says anyone who wants to understand today's world should read a 32-year-old book about empires

"The Lord of the Rings" by J.R.R. Tolkien

Musk had a nickname when he was a shrimpy, smart-mouthed kid growing up in South Africa: Muskrat.

The New Yorker reported in 2009 that "in his loneliness, he read a lot of fantasy and science fiction."

Those books — notably "The Lord of the Rings" by J.R.R. Tolkien — shaped Musk's vision of his future self.

"The heroes of the books I read ... always felt a duty to save the world," he told The New Yorker.

For those who've already read the books and seen the movies but are still hurting for more Middle Earth, Amazon is working on a "Lord of the Rings" TV series.




"The Hitchhiker's Guide to the Galaxy" by Douglas Adams

In this comedic sci-fi book, a supercomputer finds the "answer" to a meaningful life: the number 42.

To Musk, who read this as a young teenager in South Africa, the book was instrumental to his thinking. He was so enamored with it, in fact, that when he launched his Tesla Roadster into space in February, he put the words "Don't Panic!" — which graced the cover of some early editions of the book — on the car's center screen.

When asked in a 2015 interview about his favorite spaceship from science fiction, he said, "I'd have to say that would be the one in 'The Hitchhiker's Guide to the Galaxy' that's powered by the improbability drive."




"Benjamin Franklin: An American Life" by Walter Isaacson

Musk has repeatedly described Benjamin Franklin, one of the US's founding fathers and an accomplished inventor, as one of his heroes.

Franklin was one of the first to prove that lightning is electricity in his famous kite experiment, which led to the invention of the lightning rod. He's also credited with inventing bifocals: glasses with two distinct optical lenses.

In this biography of Franklin, "you can see how he was an entrepreneur," Musk said in an interview with Foundation, a platform for nonprofits working on climate-change issues. "He was an entrepreneur. He started from nothing. He was just a runaway kid."

Musk added: "Franklin's pretty awesome."




"Structures: Or Why Things Don't Fall Down" by J.E. Gordon

When Musk started SpaceX, he was coming from a coding background. But he took it upon himself to learn the fundamentals of rocket science.

One of the books that helped him was "Structures: Or Why Things Don't Fall Down," a popular take on structural engineering by J.E. Gordon, a British materials scientist.

"It is really, really good if you want a primer on structural design," Musk said in an interview with KCRW, a southern California radio station.

Because of his interest in rocket mechanics, Musk got intimately involved with the planning and design of SpaceX's Falcon Heavy rocket. He has served as the chief designer at SpaceX as well as CEO.

"The reason I ended up being the chief engineer or chief designer was not because I wanted to — it's because I couldn't hire anyone; nobody good would join," Musk said during a talk in 2017 about how he plans to colonize Mars.




"Ignition: An Informal History of Liquid Rocket Propellants" by John D. Clark

In Musk's quest to learn and master complicated subjects, "Ignition" was crucial in helping him get a handle on rockets, he's said.

John D. Clark was an American chemist who was active in the development of rocket fuels in the 1960s and 70s. The book is an account of the growth of the field and an explanation of how the science works.

Musk took the book's lessons to heart when he was working on SpaceX's Falcon Heavy rocket system, which burns cryogenically cooled RP-1, a highly refined kerosene similar to jet fuel, with liquid oxygen to launch the rocket.

While the book is hard to find, it's available online here.



"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom

Musk has repeatedly warned against the dangers of unchecked artificial intelligence.

"We need to be super careful with AI," he tweeted in 2014, adding that it's "potentially more dangerous than nukes."

In a documentary about artificial intelligence called "Do You Trust This Computer?" Musk said AI could be used to create an "immortal dictator from which we could never escape."

He added: "We are rapidly heading towards digital superintelligence that far exceeds any human. I think it's very obvious."

To find out why these risks are so scary, Musk says it's worth reading Nick Bostrom's "Superintelligence," which dares to ask what would happen if machine intelligence surpassed human intelligence.




"Our Final Invention" by James Barrat

"Our Final Invention" gives still more warnings about the dangers of artificial intelligence. Musk called the book a "worthy read" in a 2014 tweet.

Barrat takes a close look at the potential future of AI, weighing its advantages and disadvantages.

Barrat says on his website that the book is at least partly "about AI's catastrophic downside, one you'll never hear about from Google, Apple, IBM, and DARPA."

Musk agrees.

"AI doesn't have to be evil to destroy humanity — if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings," he said in a documentary about artificial intelligence.




The "Foundation" series by Isaac Asimov

In addition to the "Lord of the Rings" books, Isaac Asimov's "Foundation" series made up part of Musk's early interest in science fiction and fantasy.

The books center on the fall of the fictional Galactic Empire, which consists of millions of planets settled by humans across the Milky Way.

The stories may have had a huge influence on Musk's career trajectory. Here's what he said about the series in a 2013 interview with the Guardian:

"The lessons of history would suggest that civilizations move in cycles. You can track that back quite far — the Babylonians, the Sumerians, followed by the Egyptians, the Romans, China.

"We're obviously in a very upward cycle right now, and hopefully that remains the case. But it may not. There could be some series of events that cause that technology level to decline.

"Given that this is the first time in 4.5 billion years where it's been possible for humanity to extend life beyond Earth, it seems like we'd be wise to act while the window was open and not count on the fact it will be open a long time."




"The Moon Is a Harsh Mistress" by Robert Heinlein

This award-winning science-fiction novel, published in 1966, paints a picture of a dystopia not too far in the future. It's exactly the kind of vivid fantasy world that would satisfy an active imagination like Musk's.

In the book, several people have been exiled from Earth to the moon, where they have created a libertarian society.

In the year 2076, a group of rebels — including a supercomputer named Mike and a one-armed computer technician — leads the lunar colony's revolution against its Earth-bound rulers.

In an interview at an MIT symposium in 2014, Musk said the book was Heinlein's best work.




"Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark

If you're sensing a theme among the books on this list, it's that Musk is really into exploring the future of artificial intelligence.

In "Life 3.0," the MIT professor Max Tegmark writes about how to keep artificial intelligence beneficial for human life and ensure that technological progress remains aligned with humanity's goals for the future.

It's one of the few books Musk recommends that deal with the possibility of AI as a force for good rather than evil.




"Merchants of Doubt" by Naomi Oreskes and Erik M. Conway

"Merchants of Doubt" — now also a documentary — was written by two historians of science.

They make the case that scientists with political and industry connections have obscured the facts surrounding a series of public-health issues, including tobacco, pesticide use, and holes in the ozone layer.

Musk recommended the book at a conference in 2013 and later pointed to the book's key takeaway in a tweet, saying that the same forces that denied that smoking caused cancer were denying the danger of climate change.




"Einstein: His Life and Universe" by Walter Isaacson

Musk is a big fan of Walter Isaacson's biographies.

In a 2012 interview, Musk recommended Isaacson's biography of Albert Einstein, a man who left a profound mark on science and human history.

The book is based on Einstein's personal letters and explores how he went from a young, frustrated patent officer to a Nobel Prize winner.

It's a story that likely inspired Musk.




Everything you need to know about PyTorch, the world's fastest-growing AI project that started at Facebook and powers research at Tesla, Uber, and Genentech (FB, UBER)


  • PyTorch, an artificial intelligence project started by Facebook engineers, has become the second fastest-growing open source project in the world, according to GitHub — and the fastest-growing AI project overall.
  • Within Facebook, PyTorch is used for text translations, accessibility features for the blind, and even for fighting hate speech.
  • PyTorch is now used at other companies like Microsoft, Toyota, Tesla, Uber, and Genentech.
  • It's been used for drug discovery, identifying cancer cells, making self-driving cars safer, building video games, powering apps, and more.
  • PyTorch is especially popular in the research community and used at top engineering schools like Stanford, Berkeley, and CalTech.

Many of the features that we take for granted on Facebook — language translation in Messenger, for example — were made possible with a powerful artificial intelligence project called PyTorch.

PyTorch, which was first built by Facebook engineers, helps power many of the social network's services at scale. At Facebook, PyTorch is part of essentially every AI feature. PyTorch trains the translation systems, powering 6 billion translations a day. It's used for making social recommendations and making suggestions in Messenger. It can provide category suggestions for listings in Facebook Marketplace. It's used for detecting hate speech and images that violate Facebook's policies.

PyTorch even helps Facebook provide accessibility features for the blind or visually impaired. On Facebook and Instagram, users can tap an image, and the app will describe the image to them, using phrases automatically generated by a PyTorch-powered system. 

"What PyTorch allows us to do is experiment very quickly," Srinivas Narayanan, head of Facebook AI Applied Research, told Business Insider. "It's showing incredible promise. What we are seeing, using these new modeling techniques, we are able to take some of the problems and experiment and deploy them into production in a very short time."

Since its release in 2016, PyTorch has spread at incredible speed. That's largely because it's available as open source, meaning that it's free for anyone to download, modify, or use as they please. PyTorch is just one of Facebook's many open source projects, which also include popular projects like React and Move. But PyTorch is the fastest-growing.

In fact, PyTorch has become the second-fastest-growing open source project in the world, according to GitHub, the ubiquitous Microsoft-owned code-hosting site. That also makes it the fastest-growing AI project overall. And according to RISELab, research papers that mention PyTorch grew a massive 194% year-over-year, comparing the first half of 2019 with the first half of 2018.

The rise of PyTorch reflects the increased demand for AI technology. Over the last year, postings for AI jobs rose 29.1%, according to a report on job site Indeed. 

PyTorch is certainly popular in real, production software — Microsoft, Toyota, Uber, Tesla, and Genentech are among the companies using PyTorch to power some of their AI efforts. However, it's found a special appeal among researchers and academics, with educators at UC Berkeley, Stanford, CalTech, and even online platform Udacity using it as their preferred platform for teaching AI concepts. 

How it all began

Soumith Chintala, a Facebook AI software engineer and co-creator of PyTorch, says that about every five or six years, he sees the focus of AI research shift.

For example, the early 2010s were about deep learning — a method used by the Google-created TensorFlow, another very popular project. Starting around 2015 or so, he noticed that the focus had shifted to neural networks, or computing systems that are inspired by, and work similarly to, biological neural networks in the brain.

"Something we started noticing is people started trying crazier and crazier neural networks," Chintala said. "They came up with ideas that were harder and harder to do."

Chintala says that the team at Facebook felt that AI research was changing course. So, these engineers decided to build a project that makes use of this idea, mostly targeted at researchers. 

"There was a need for something like PyTorch right when it got released," Chintala said. "We built the right product at the right time. If we built the same product a couple of years early and a couple of years later, it would not have the same kind of success and growth."

Chintala recalls that when he first started working in AI, there were few tools to help him out. When he started at Facebook, he used Torch, another open source AI project, but found it limiting. 

"If you wanted to build AI ideas, do research, or try neural networks, it was a struggle not because you couldn't express your idea but because there's no tooling to express your ideas in a reasonable way," Chintala said. 

Eventually, he saw a need for a new version of Torch based on Python — one of the most popular programming languages in the world, especially lauded for how friendly it is to novice programmers. This matters because today, Python is frequently used by data scientists, researchers, and developers for AI programming and for working with training datasets. Hence, PyTorch was born.

Now, PyTorch is lauded for how easy it makes it for developers to experiment with AI research ideas in areas like natural language processing, the field of computer science that studies how to help computers understand human language, and computer vision, which studies how to help computers "see."
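
For readers who haven't seen it, here is a minimal sketch of what working in PyTorch looks like; the tiny network and random data are made up for illustration, but the API calls are standard PyTorch:

```python
import torch
from torch import nn

# A tiny network and one training step, written as ordinary Python.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 10)        # made-up batch of 64 examples
labels = torch.randint(0, 2, (64,)) # made-up binary labels

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()                     # gradients are computed dynamically (eager mode)
optimizer.step()
print(loss.item())
```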

Within Facebook, researchers are experimenting with PyTorch to fight hate speech through so-called classifiers, the way that AI systems sort and categorize data. 

"There's a lot of text content on Facebook," Joe Spisak, Facebook AI Product Manager for PyTorch, told Business Insider. "We want to build classifiers that help us understand the intent. We have classifiers that identify if certain posts are hate speech."

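Facebook hasn't published those production classifiers, but the general shape of a PyTorch text classifier is simple to sketch. The model, vocabulary size, and token ids below are hypothetical stand-ins, not Facebook's system:

```python
import torch
from torch import nn

class ToyTextClassifier(nn.Module):
    """A bag-of-embeddings classifier; a hypothetical stand-in, not Facebook's model."""
    def __init__(self, vocab_size, embed_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = ToyTextClassifier(vocab_size=10_000)
token_ids = torch.tensor([15, 388, 97, 4021])  # token ids from a hypothetical tokenizer
offsets = torch.tensor([0])                    # a single post starting at index 0
scores = torch.softmax(model(token_ids, offsets), dim=1)
print(scores)                                  # e.g. [[p_benign, p_violating]]
```
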
How other companies are using PyTorch

While Google's TensorFlow is often used in production, researchers and data scientists tend to gravitate towards PyTorch.

"If you're a researcher, if you come from a research background, PyTorch is better," Ali Ghodsi, cofounder and CEO of Databricks, told Business Insider. "It's more flexible. You can write custom code more easily. In the case of PyTorch, Facebook has evangelized it a lot and they pushed for it a lot. Researchers want something that's more flexible than TensorFlow."

Read more: Everything you need to know about TensorFlow, Google's own home-made AI software that's now helping NASA discover planets and beating champions at Go

Still, PyTorch is finding practical use. Genentech uses PyTorch for drug discovery and development projects. For example, it uses PyTorch to sift through millions of chemical structures to develop drugs, and it helps build models of how patients will react to treatment. It's even being used in the development of cancer vaccines.

And at Uber, research teams use PyTorch to investigate problems, like how long it takes for a rider to make a second trip after their first trip on the app. It's worth noting, however, that Uber uses both PyTorch and TensorFlow in conjunction to power its AI software.

"PyTorch was open sourced pretty recently and we started using that pretty much since then," Alex Sergeev, a staff software engineer at Uber, told Business Insider. "We work with the community and also contribute back...For us, we are actually very excited about competition that these frameworks have with each other. This actually drives a lot of innovation."

PyTorch in education

Chintala says that PyTorch is the most-cited AI framework at academic conferences, and it's been used for identifying breast cancer, making self-driving cars safer, building video games, and more.

And in late 2018, Facebook released a PyTorch training course on Udacity, citing the increased need for AI skills. 

"The predominant message that CEOs hear today is anything repetitive, the machines can learn in deep learning. It has people sitting in the office doing repetitive work," Sebastian Thrun, Udacity co-founder and president, told Business Insider. "They will have absolutely no difficulty finding jobs. It's a really hot field."

Roland Gavrilescu, an engineering student at University College London, started his AI career with this course, for which he received a scholarship. In that course, he learned about neural networks, computer vision, and using open source AI tools like PyTorch.

This year, he landed a summer internship working in robotics.

"I have always been interested in getting started with AI and computer vision," Gavrilescu told Business Insider. "The announcement of the scholarship arrived just at the right moment for me. I thought this would be the easiest and most enjoyable way of breaking into the field. I thought gaining these skills would allow me to prepare the best I can."

A 'growing community'

PyTorch wouldn't be possible without its community of users and contributors – the major driving force behind its blockbuster growth. As an open source project, every line of code added to PyTorch since it was released by Facebook into the wild has come from a contributor in the community. 

Over time, this has allowed PyTorch to train models on larger and larger datasets, letting users build AI applications at a bigger scale. And recently, PyTorch launched features that make it easier for users to reproduce research results and see how other people are using PyTorch. The engineers behind PyTorch expect the project to keep growing.

"We are reaping the benefits of faster research prototypes," Chintala said. "We can be more productive. We can solve problems that are hard to solve."

Spisak says that Facebook dedicates its resources to building smaller features for PyTorch, while pulling in ideas from the community. He added that the team is planning to add new features, such as dictionary support to help recognize even more words. 

"We have a growing community that is on the cutting edge of things," Spisak said. "Just focusing on the user and users' needs kind of has this network effect of driving the community forward. A lot of the thought leaders in the space love PyTorch, and they use it."

SEE ALSO: Protesters blocked Palantir's cafeteria to pressure the $20 billion big data company to drop its contracts with ICE


MicrophoneGate: The world's biggest tech companies were caught sending sensitive audio from customers to human contractors. Here's where they stand now. (AMZN, AAPL, GOOGL, MSFT)


  • Over the past several months, we've seen a slew of reports about how audio recordings captured by voice assistants like Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana were sent off to human contractors for further evaluation.
  • In the case of Siri, for instance, contractors "regularly" heard recordings of people having sex, business deals, and private doctor-patient discussions, according to a July report.
  • Many of these companies have since suspended or halted these manual-review practices entirely, but it's important to know where things stand if you own any gadgets with microphones in them.

If you own a device with a microphone in it, chances are that audio snippets were recorded — with or without your knowledge — and sent off to other human beings for examination.

This year, we've seen a handful of reports all saying the same thing: The biggest tech companies in the world still need humans to evaluate the accuracy of their AI assistants, like Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana, which still have issues recognizing speech. The way it works is those humans — often contractors, not full-time employees of these tech conglomerates — are responsible for quality control. They grade responses from voice assistants to see if they were actually helpful.

Read more: Apple contractors working on Siri 'regularly' hear recordings of sex, drug deals, and private medical information, a new report says

These tech companies often go to extreme lengths to ensure privacy and confidentiality, but contractors who wish to remain anonymous have said that it's not hard to identify who's talking when audio recordings often include names and addresses.

Since some of these reports came out, many of these tech companies have decided to either suspend their voice-analysis practices or halt them entirely.

Here's what we know about each of the tech companies, and how they currently handle your audio.

SEE ALSO: 5 reasons you should buy the new iPhone 11, which is likely arriving in September

Apple recently suspended its practice of sending Siri audio to contractors, saying it would not restart the program until it's been thoroughly reviewed.

This happened after a report from The Guardian shed light on how Apple's contractors regularly heard extremely sensitive information from Siri users, and an anonymous contractor told the paper that there were "no specific procedures to deal with sensitive recordings," claiming they were vulnerable to misuse.

Apple suspended this program less than a week after The Guardian's report, and it will reportedly allow users to opt out of Siri quality assurance in a future software update. According to the Irish Examiner, Apple contractors were listening to over 1,000 recordings every shift.



Google says it has suspended its practice of having contractors evaluate audio recorded by Google Assistant.

Previously, Google said only a "fraction" of audio recordings for Assistant were chosen for manual review, and Google told Business Insider that no recordings were associated with any personal identifying information. Now, though, Google offers a way to manage your voice-request history and opt out of human review.



Amazon reportedly has thousands of employees and contractors around the world manually reviewing and transcribing clips from Alexa users, but it now allows people to opt out of human review.

According to Bloomberg, Amazon workers were known to share audio snippets with each other — usually to help figure out a word, but sometimes to commiserate when they hear something distressing in a recording.



Microsoft had contractors listen to audio recorded by people's Xbox game consoles, as well as some Skype calls, but the company said it stopped recording voice content "a number of months ago" and that it has "no plans to re-start those reviews."

Reports from Vice and Motherboard shed light on how many Microsoft contractors often heard children's voices, and lots of accidental activations where people thought their Xbox assistants could control whatever was happening in the game.

Microsoft contractors were also surprised to hear voice recordings from Skype that contained intimate conversations with loved ones and the like.



Facebook recently confirmed that it was collecting audio from some voice chats on Messenger, but said it will no longer do this.

According to Bloomberg, Facebook's contractors were told to transcribe audio without knowing how it was obtained in the first place. The company said they "paused" this practice in late July.



An inside look at how research firm Morningstar is racing machine learning-powered model cars to help solve complex data problems


  • Research firm Morningstar is using Amazon Web Services' DeepRacer, a machine learning-powered model race car, as a training tool. That comes as Wall Street firms have been talking more about the public cloud and artificial intelligence.
  • James Rhodes, Morningstar's chief technology officer, told Business Insider that by using the cars, employees have in turn developed new ways to help analysts extract data from tables in financial statements. 
  • Chicago-based Morningstar created a company-wide league, and 450 employees have signed up. 
  • Morningstar now has permanent race tracks at four of its locations, and plans to send its company champion to compete at the AWS re:Invent conference in December.

Pop into Morningstar's Chicago headquarters and you might see something unexpected: A group of employees cheering on what appears to be a remote-controlled toy car racing around a track. 

It's not all fun and games, though. Morningstar, one of the world's biggest investment research companies, is turning to machine learning-guided cars to learn about ways to better pull and analyze data. 

Amazon Web Services has been using the DeepRacer cars to introduce clients of its public cloud services to machine-learning technology. Wall Street firms, meanwhile, are talking more about wading into the public cloud and uses for artificial intelligence.

James Rhodes, Morningstar's chief technology officer, said he was introduced to the AWS DeepRacer when employees asked if they could spend their training stipends on the cars.

The model race cars use so-called reinforcement learning, an advanced form of machine learning that relies on trial-and-error instead of pre-set training data. Rhodes said that by training with the cars, his team has in turn developed ways to help analysts automatically extract information from tables in financial statements. 
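
To see what trial-and-error learning means in practice, here is a bare-bones Q-learning sketch on a made-up one-dimensional track. It is only an illustration of the reinforcement-learning idea, not the DeepRacer's actual training pipeline, which runs in AWS's simulator:

```python
import random

# Toy Q-learning on a five-position "track": learn to drive right to the finish line.
# Purely an illustration of trial-and-error learning, not the DeepRacer training stack.
n_states = 5                       # positions 0..4; position 4 is the finish line
actions = [-1, +1]                 # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount factor, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally, otherwise take the action with the best current estimate.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01   # reward only for finishing
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After enough episodes, the learned policy is "always step right."
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)})
```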

"You get one, and it's like, 'Ok, this might be a personal interest of this person,'" Rhodes told Business Insider. "But when you start getting four, five, six, you start looking into it a little bit more." 

"I realized that it was a really effective tool that we could leverage to give our staff certain skills that we're looking for," he said. 

Read more: AI has the potential to radically transform financial services. But first banks need to get their data in order.

Rhodes said the cars offer experience in three important areas: employees get exposure to machine-learning techniques; they get to test out programming languages they might not see in their day-to-day jobs, such as Python, which is commonly used in data science; and they get hands-on experience with the public cloud.

"It's like the trifecta of training," Rhodes said. "I get to expose people to newer techniques. I get to expose people to programming languages I want, and I get to expose people to the infrastructure we're moving towards. And I get to do it in a fun way."

'All in' with tracks, racing leagues

AWS started a racing league in late 2018, which culminates in a championship at re:Invent, the cloud provider's conference in December. More than 150 customers are set to participate, with 10% from the financial services industry, an AWS spokesperson said. 

Morningstar "went all in" on the AWS DeepRacer, Rhodes said. It created an internal racing league for the cars, with a promise that the overall champion would get flown to Las Vegas to compete in re:Invent.

Despite offering no monetary incentives to participate, 450 employees signed up, forming almost 100 teams in groups of three to six that spanned 10 countries.

A majority came from Morningstar's tech team — about 35% have signed up — but people across the business are involved. The research team has also shown heavy interest, Rhodes said.

"It's the new way of people interfacing with data," said Rhodes of the general interest in machine learning. "The old way would be everybody dumped everything in Excel."

"Now, I'm seeing much more of a trend in non-technical groups of people leveraging things like Jupyter notebooks and Python," he added.

Users can test and train AWS DeepRacers in a virtual environment, but Morningstar has pulled out all the stops with physical setups.

Four offices — Chicago, London, Mumbai and Shenzhen — have dedicated spaces for permanent physical tracks, which measure 26' by 17'. Morningstar's other six locations have temporary tracks.

"Those are some very interesting initial conversations with the facilities teams," Rhodes said about discussing office space needs for the race tracks.

See more: Wall Street is finally willing to go to Amazon's, Google's, or Microsoft's cloud, but nobody can agree on the best way to do it: 'If you pick a favorite and you're wrong, you're fired'

Employees quickly learned that there was a risk of stray cars shooting down hallways as people fine-tuned their models. The world record for completing a single lap around the 59-foot course is 7.44 seconds, which works out to 5.4 mph, or about 100 mph when scaled up to a full-size car.
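
Those figures are easy to sanity-check; the only number below that isn't in the article is the 1/18 scale, which is how AWS describes the DeepRacer:

```python
# Sanity-checking the lap-record figures quoted above.
track_feet, lap_seconds = 59, 7.44
feet_per_second = track_feet / lap_seconds      # ~7.9 ft/s
mph = feet_per_second * 3600 / 5280             # ~5.4 mph, matching the article

scale = 18                                      # AWS sells the DeepRacer as a 1/18-scale car
print(round(mph, 1), round(mph * scale))        # 5.4 mph, ~97 mph at full scale
```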

Portable barriers were added to prevent accidents, Rhodes said. 

There are team logos and names, including "The Reinforcer," a nod towards reinforcement learning, "Need for Speed" and even "Can't Drive."

In Chicago, the company is also hosting another league that includes three other large financial firms. Rhodes declined to name who else was involved, but said the league will start in the coming weeks, and the plan is to create similar ones at other Morningstar locations. 

The internal league kicks off the first week of October, and winners in all 10 Morningstar locations will be crowned. In November, the company will invite the 10 regional winners to Chicago for the Morningstar championship. 

"It's very competitive. There's a fair amount of trash talking," Rhodes said.

"But in a friendly way," he quickly added.


'It's the art of the possible': How Walmart and Target are harnessing AI to rocket past the competition


  • Companies are investing heavily in artificial intelligence, and the adoption of the advanced technology promises to further divide successful firms from those struggling to adapt to the new digital world.
  • Second-quarter earnings at Target and Walmart, which are leading the field in AI adoption, exceeded expectations, providing early evidence to investors and analysts that investments in this field can help cut costs.
  • While some firms regularly tout the new tech offerings, the vast majority of companies are staying quiet on AI.

Legacy retailers like Target and Walmart are stepping up their artificial-intelligence efforts to get the products customers want into their hands more easily, cheaply, and quickly.

The better-than-expected earnings of some firms — along with the uneven performance of others — demonstrate both the potential AI has to transform the retail industry and the treacherous road ahead to get there.

Walmart, for instance, is rolling out new technology in thousands of its stores, with the goal of eliminating the "mundane" tasks done by store associates so they can spend more time with customers. 

"Pretty much everything that we focus on is just making things that you know and do today a lot easier," John Crecelius, Walmart's senior vice president of central operations, told Business Insider. "What makes this exciting and fun is the ecosystem you create. It's the art of the possible when you have several pieces of technology in the same store gathering data and interacting with each other."

Walmart and Target provide parallel examples of AI implementations. 

Investment in AI-based startups specific to the retail industry grew to $1.8 billion between 2012 and 2018, the marketing-intelligence firm CB Insights found. Global spending on AI is expected to reach $7.3 billion by 2022, based on estimates from Juniper Research. 

Walmart has implemented an automated process to unload shipments from trucks and shorten the time it takes for new products to reach the floor. The technology will be used in 1,800 stores this year, according to Crecelius. The company is also using robots from its partner Bossa Nova to scrub floors and restock shelves, and cameras to monitor self-checkouts to curb theft.

That emphasis on operational efficiency helped underpin solid second-quarter earnings, including a better-than-expected 2.8% growth in same-store sales, according to a UBS report provided to Business Insider.

Walmart doesn't view any single offering as a key cost-cutting mechanism. Instead, they're "individual, small changes that add up to an ecosystem for our stores," Crecelius said.

The firm is also testing a number of new applications in a store in Levittown, New York, including using cameras, sensors, and other hardware to inform when store employees need to restock certain items on the shelves. 

Meanwhile, Target reported second-quarter earnings this month that far exceeded expectations. Among other initiatives, the retailer is using new technology to help dictate employee tasks and improve shipping, according to John Mulligan, its chief operating officer.

Target has also explored using blockchain to better manage its supply-chain operations, initially experimenting with a database to certify its paper providers, according to a blog post earlier this year. 

The Minneapolis-based firm is also investing in additional automation in the backroom of its stores to "help them become more productive," Mulligan recently told investors.

But while AI might be a popular topic, executives are shying away from touting the tech they're adopting.

Executives are typically coy on the actual initiatives under development, but trends around AI deployment have developed. Implementations generally focus on improving online operations, streamlining in-store inventory through better supply-chain management, analyzing vast amounts of consumer behavior to better match the customer with preferred products, and even using robots to clean the floors of their brick-and-mortar locations.

And while AI is a popular buzzword for some retailers, many companies have refrained from mentioning it publicly. Among more than 50 publicly traded firms, just nine retailers discussed strategies for using AI in their operations on earnings calls through the start of 2018, according to a recent analysis of 1,600 calls by CB Insights.

Executives remain hesitant to move too quickly to implement new AI-based applications.

Target and Walmart are approaching the adoption of AI cautiously, limiting investments to tech that can help reduce costs for the company and improve the shopping experience. Such returns are difficult to predict on a large scale, indicating that customers could be waiting a while before their preferred locations roll out the advanced offerings.

And while investors saw promising results from the two retailers, other companies have yet to show any material impact from their own AI-related initiatives. 

Last year, Nordstrom acquired two firms that would allow store associates to better communicate with shoppers when they are not physically in stores, in the hope of ultimately creating a more seamless shopping experience. Like other retailers, the firm is also experimenting with AI-based visual-search tools that allow customers to find the items they want more quickly.

But the department store's results remain lackluster. On Wednesday, Nordstrom reported uneven second-quarter earnings. Analysts didn't pinpoint AI as a factor, instead questioning the extent of price discounts implemented in the three-month period to counter slowing sales. 

The outcome, however, is evidence that simply ramping up technology within stores does not produce instant financial returns. 

"The decision to pursue AI does not imply immediate impact. It's more than just [getting] the right technology," David Simchi-Levi, a professor at the Massachusetts Institute of Technology and a consultant for several top retailers, told Business Insider. "You need the right processes, you need the right transformation, and you need the people with the right skills — and that's probably the main bottleneck."

Some firms have been successful without pivoting to AI, but experts say that's not the case for the majority of companies

Adding pressure to legacy retailers is the rise of upstart companies that are entirely AI-based. San Francisco-based Stitch Fix offers personal-styling services online, using the technology to determine which clothes a user might like before shipping them. The firm then analyzes returns to craft a better profile of the customer. It now has over 3 million customers.

For the bulk of retailers, experts say it's no longer a question of whether to put resources toward AI-based applications. Still, implementing AI is not guaranteed to lead to major savings. Companies like T.J. Maxx continue to excel despite largely forgoing major investments in new technology. 

"You start learning and investing in this, or you will be behind the competition that surely is going to invest in this type of technology," Simchi-Levi added.

SEE ALSO: AI is going to change your career. IBM is showing how that can be a good thing.


Elon Musk says the difference between human intellect and AI is comparable to the difference between chimpanzees and humans


  • Elon Musk is the CEO of three companies: Tesla, Neuralink, and SpaceX. The second of those three, Neuralink, is focused on human-computer interfaces for artificial intelligence in people.
  • There's a good reason for that: Elon Musk believes that AI will be "much smarter than the smartest human," and that puts human beings at a tremendous disadvantage.
  • In a conversation with Alibaba CEO Jack Ma at the World AI Conference in Shanghai, China, Musk explained the evolutionary step that AI represents: "Can a chimpanzee really understand humans? Not really. We just seem like strange aliens. They mostly just care about other chimpanzees. And this will be how it is, more or less."

What comes to mind when you hear the words "artificial intelligence"? 

Perhaps you think of Apple's digital assistant, Siri, or Rosie the Robot from "The Jetsons"? Perhaps you think of Haley Joel Osment as a robot boy in the 2001 film, "AI"?

Artificial Intelligence (movie) — Haley Joel Osment

Elon Musk thinks you're looking at it all wrong.

"I think generally people underestimate the capability of AI — they sort of think it's a smart human," Musk said at a talk with Alibaba CEO Jack Ma at the World AI Conference in Shanghai, China, this week. "But it's going to be much more than that. It will be much smarter than the smartest human."

For context, Musk compared the difference between AI and humans to the difference between humans and chimpanzees. 

Jane Goodall with chimpanzee

"Can a chimpanzee really understand humans? Not really," he said. "We just seem like strange aliens. They mostly just care about other chimpanzees. And this will be how it is, more or less."

Moreover, he couched that context in optimism: "In fact, if the difference is only that small, that would be amazing — probably it's much, much greater." It's this stark difference in intellectual capacity between AI and human beings that has Musk worried for the future of our species.

"What do you do with a situation like that? I'm not sure. I hope they're nice," he said. 

That's why, he said, he founded his company Neuralink. "If you can't beat 'em, join 'em — that's what Neuralink is about. Can we go along for the ride with AI?"

Check out the full video of the talk with Musk and Ma right here.

SEE ALSO: Video appears to show thieves stealing a locked Tesla in 30 seconds by tricking its computer into thinking they had the key


Elon Musk says humans communicate so slowly with computers that it will sound like whale speech to future AI (TSLA)


  • Elon Musk is the CEO of three companies: Tesla, Neuralink, and SpaceX. The second of those three, Neuralink, is focused on creating human-computer interfaces to connect artificial intelligence with the human mind and body.
  • Musk is focused on human-computer interfaces because he's worried about the human race getting left behind as AI gets better and better.
  • One major problem Musk pointed to in a talk on AI this week in Shanghai: Humans communicate data far, far more slowly than computers. "Human speech to a computer will sound like very slow tonal wheezing, kind of like whale sounds," he said.

There are major language barriers between human beings and computers.

For one, humans communicate with language and text and images — all inputs that are inherently slower at communicating information than straight-up data.

But also, crucially, computers perceive time differently than humans. That's because of their ability to process data at a far higher speed than humans can.


To a computer, "a millisecond is an eternity, but to us it's nothing," Elon Musk, the CEO of Tesla, SpaceX, and Neuralink, said in a wide-ranging conversation about AI in Shanghai this week.

"Human speech to a computer will sound like very slow tonal wheezing, kind of like whale sounds. Because what's our bandwidth — a few hundred bits per second, basically, maybe a few kilobits per second if you're going to be generous?"

In other words, Musk is saying that human forms of communication — speech, gestures, etc. — are built for communicating with other humans. When those inputs are applied to "speaking" to a computer, they become woefully inadequate.

Computers, however, can communicate data far, far more quickly — "at a terabyte level," Musk said.
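
Putting the two figures side by side, and assuming "a terabyte level" means one terabyte per second (Musk didn't give a precise unit), the gap looks like this:

```python
# Comparing the bandwidth figures Musk cites. The terabyte-per-second reading
# of "a terabyte level" is an assumption for illustration.
human_bits_per_second = 2_000      # "a few kilobits per second if you're going to be generous"
computer_bits_per_second = 8e12    # one terabyte per second = 8 trillion bits per second

print(computer_bits_per_second / human_bits_per_second)   # 4e9: billions of times faster
```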


And that communication difference could be a major issue for future AI-human relations, to the point where it would be similar to a tree trying to communicate with a human.

"The computer will just get impatient, if nothing else," Musk said. "It will be like talking to a tree — that's humans."

Check out the full video of the talk with Musk and Alibaba CEO Jack Ma »

SEE ALSO: Elon Musk says the difference between human intellect and AI is comparable to the difference between chimpanzees and humans



The Pentagon admitted it will lose to China on AI if it doesn't make some big changes


US Marines Afghanistan Night

  • The Pentagon admitted Friday that it faces certain disadvantages in the strategic competition with China to develop AI-enabled technologies and capabilities.
  • China's AI programs benefit greatly from its military's integration with the private sector, something the US military has been struggling to cultivate.
  • "If we do not find a way to strengthen the bonds between the United States government and industry and academia, then I would say we do have the real risk of not moving as fast as China when it comes to" artificial intelligence, Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, explained to reporters Friday.

Major powers are rushing to strengthen their militaries through artificial intelligence, but the US is hamstrung by certain challenges that rivals like China may not face, giving them an advantage in this strategic competition.

Artificial intelligence and machine learning are enabling cutting-edge technological capabilities that have any number of possibilities, both in the civilian and military space. AI can mean complex data analysis and accelerated decision-making — a big advantage that could potentially be the decisive difference in a high-end fight.

For China, one of its most significant advantages — outside of its disregard for privacy concerns and civil liberties that allow it to gather data and develop capabilities faster — is the fusion of military aims with civilian commercial industry. In contrast, leading US tech companies like Google are not working with the US military on AI.

"If we do not find a way to strengthen the bonds between the United States government and industry and academia, then I would say we do have the real risk of not moving as fast as China when it comes to" artificial intelligence, Lt. Gen. Jack Shanahan said, responding to Insider's queries at a Pentagon press briefing Friday.

Shanahan, the director of the Pentagon's Joint Artificial Intelligence Center, said that China's civil-military integration "does give them a leg up," adding that the Department of Defense will "have to work hard on strengthening the relationships we have with commercial industry."

China's pursuit of artificial intelligence, while imperfect, is a national strategy that enjoys military, government, academic, and industry support. "The idea of that civil-military integration does give strength in terms of their ability to take commercial and make it military as fast as they can," Shanahan explained.


The Pentagon has been dealt several serious blows by commercial industry partners. For instance, Google recently decided it is no longer interested in working with the US military on artificial intelligence projects. "I asked somebody who spends time in China working on AI could there be a Google/Project Maven scenario," Shanahan said Friday. "He laughed and said, 'Not for very long.'"

Chairman of the Joint Chiefs of Staff Gen. Joseph Dunford sharply criticized Google earlier this year, accusing the company of aiding the Chinese military.

Shanahan acknowledged that the relationships between the military and industry and academia that helped fuel the rise of Silicon Valley have "splintered" due to various reasons, including a number of incidents that have shaken public trust in the government. "That is a limitation for us," he admitted.

"China's strategy of military-civil fusion does present a competitive challenge that should be taken seriously," Elsa Kania, a Center for New American Security expert on Chinese military innovation, wrote recently.

"Looking forward, US policy should concentrate on recognizing and redoubling our own initiatives to promote public-private partnership in critical technologies, while sustaining and increasing investments in American research and innovation."

The US is not without its own advantages.

US Soldiers Iraq

One important advantage for the US, as it looks not only at what AI is but also at what it could make possible for the military, is its warfighting experience, something China doesn't really have.

Shanahan told reporters at the Pentagon that China has an "advantage over the US in speed of adoption and data," but explained that not all data is created equal. "Just the fact that they have data does not tell me they have an inherent strength in fielding this in their military organizations," he said.

China can pull tons of data from society, but that, Shanahan explained, is a very "different kind of data than full-motion video from Afghanistan and Iraq," which can be carefully analyzed and used to develop AI capabilities for the battlefield.

The Department of Defense is looking closely at using AI for things like predictive maintenance, event detection, network mapping, and so on, but the next big project is maneuvering and fire.

Shanahan said "2020 will be a breakout year for the department when it comes to fielding AI-enabled technologies," but what exactly that big breakout will look like remains to be seen.


U-2 spy planes have lurked all over the world for 64 years — here's how the Dragon Lady keeps an eye on the battlefield


U-2 Dragon Lady California Sierra Nevada

  • The U-2 spy plane has been operating all over the world for more than 60 years.
  • The Dragon Lady's mission has remained the same over that time, but how it does it has changed considerably, and the Air Force is always looking for ways to gather more information and distribute it faster.

The 64th anniversary of the U-2 spy plane's historic, and accidental, first flight came in early August.

While much about the Dragon Lady has changed in the past six decades — most of the 30 or so in use now were built in the 1980s, and they no longer do overflights of hostile territory, as in the 1960 flight in which Francis Gary Powers was shot down over the Soviet Union — the U-2 is still at the front of the military's intelligence, surveillance, and reconnaissance mission, lurking off coastlines and above battlefields.

The U-2 is probably best known for what pilots call "the optical bar camera," Maj. Travis "Lefty" Patterson, a U-2 pilot, said at an Air Force event in New York City in May.

"It's effectively a giant wet film camera," about the size of a projector screen, that fits in the belly of the aircraft and carries 10,500 feet of film, Patterson said during a panel discussion about the U-2 and its mission.

The camera has improved greatly since the 1950s. "What we can do with that, for instance, in about eight hours, we can take off and we can map the entire state of California," Patterson said. "The fidelity is such that if somebody is holding a newspaper out ... you can probably read the headlines."

Air Force U-2 Dragon Lady

The aircraft's size and power allow it to carry a lot of hardware, earning it the nickname "Mr. Potato Head."

"We can take the nose off, and we can put a giant radar on the nose, and you could actually image ... out to the horizon, which, if you think about it, from 70,000 feet, is about 300 miles," Patterson said. "So if you're looking 360 degrees, you can see 600 miles in any direction."

Another option is "like a big digital camera," Patterson said. "It's got a lens about the size of a pizza platter, and it has multiple spectral capabilities, which means it's imaging across different pieces of the light spectrum at any given time, so you can actually pull specific data that these intel analysts need to actually identify what is this material made out of."

"We also carry what's called signals payloads, so we can listen to different radars, different communications," Patterson said. "We have a number of antennas all across the aircraft [with which] we're able to just pick up what other people are doing."

"Some of these sensors can see hundreds and hundreds of miles, so even if we're not overflying, you can get a real deep look at what you actually want to see," Maj. Matt "Top" Nauman, also a U-2 pilot, said at the event.

'Just a sensor'

U2 U-2 Dragon Lady pilot crew

The U-2 is "just a sensor in a broader grid that the United States has all over the world ... feeding data to these professionals," Patterson said.

Whether it's radar imagery or signals intercepts, "We bring all that on board the aircraft, and we pipe it over a data link to a satellite and then down to the ground somewhere else in the world where we have a team of almost 300 intel analysts," Patterson said.

"So while we're sitting by ourselves over a weird part of the world doing that ISR mission, all the information we're collecting is going back down to multiple teams around the globe," he added. "They're ... distilling it, turning it into usable reports for the decision makers, and [getting] that information disseminated."

Capt. Joseph Siler, the chief of intelligence training with the 492nd Special Operations Support Squadron, was tasked with leading those efforts.

"I loved talking to the [U-2] pilots, and ... having that pilot [who] is actually understanding the context of where they're at and is able to dynamically change direction and help us, it just brings something to the fight," especially when sudden changes require a new plan, Siler said at the same event, during a panel discussion about the mental and physical strain of Air Force operations.

U2 U-2 Dragon Lady pilot

"I got more of the quick-time, actionable intelligence" from U-2s, Siler said. "It's all going into this common picture, but that's where they fit into it."

That doesn't mean the U-2 can't play a role in the action on the ground as it unfolds.

"We have multiple radios on board," Patterson said. "So let's say you're flying a mission over a desert somewhere and we have troops on the ground that are in contact. We'll be talking directly to them sometimes, providing imagery."

That imagery isn't going straight from the U-2 to the troops, but "they can tell me what they need to listen to, where they need to look, and we'll move the sensors to that spot, snap an image, kick it back over whatever data links we need to get it to the intel professionals," he said. "They will do their rapid analysis and send that, again, to the forward edge, where those folks can take a look at it."

"You can see troop movements. You can see things like that," Patterson said. "We've spent a lot of time looking for [improvised explosive devices] and providing [that information] real-time to convoys and things like that. I've done that personally."

'Constant, constant stress'

U-2 U2 Dragon Lady pilot crew

Patterson analogized the relay of information to a game of telephone.

It's on "the airmen that are receiving that to be able to make that decipherable and useful," Siler said of intelligence gathered by U-2s. "When I was in there, in that environment, receiving all that information and how that work[ed], it's just such a weird place. It's different from traditional conflict."

The waves of incoming information are a source of "constant, constant stress," added Siler, who has spoken about his recovery from post-traumatic stress disorder.

"I'm getting information from the U-2. I'm getting information from satellites. I'm getting information from an MQ-9, and I have an Army task force that's about to go in, and there's people's lives that are going to be tested," Siler said.

"What the intelligence community does is we look at all the information we can get, from whatever sensor it is, we pipe that together, and then we say, 'All right, based upon what the U-2 is saying and what the Global Hawk is saying and what the satellites are saying, we believe this is the best route, this is the best time.'"

Final decisions about when and where to go are made by operators. But, Siler said, "you can imagine the sense of responsibility that these young airmen, 19, 20 years old, feel as they make those calls, and we say, 'is that the bad guy or is that his 16-year-old son?'"

'Algorithmic warfare'

Air Force U-2 U2 spy plane landing chase car

The reason the U-2 funnels that intelligence back to crew members on the ground is that "it's so much data that we just simply can't process all of it on board," Patterson said.

A U-2 pilot can key on an interesting signal picked up by a sensor, sending imagery to intelligence analysts on the ground. Those analysts can decide to look into it, routing a satellite to take a look or sending a drone to get photos and video.

The process can run the other way as well. A tip from social media can lead an analyst on the ground to send in a U-2 to gather photos and other imagery. If necessary, assets like a drone or an F-16 with video capability can be sent in for a closer look.

"As you start networking [these assets], using these algorithms and using these processing capabilities, if I hear a signal here, and somebody hears the same signal but they're over here, you can instantly refine that" if the assets are in sync, Patterson said. "We're able to map down some pretty interesting stuff pretty quick."

U2 Spy

But the goal is to do it quicker, and the Air Force has been looking at artificial intelligence and machine learning to sort through all the data gathered by U-2s and other aircraft and sensors and make sense of it.

Integrating that into the broader intelligence, surveillance, and reconnaissance mission is still in its "infancy," Nauman said.

"We know the capability's there. We know the commercial sector is really doing a lot of development on that. They're ahead on that frankly," Nauman said. "We're trying to figure out, A) how to catch up and be as good, and then Part B is what do we do with that, how do we make ourselves more effective with that."

"Processing is getting really good, really fast, so there are a number of efforts to actually take a lot ... of the stuff that we collect, running it through an algorithm at ... what we call the forward edge — like right on board the aircraft — [and] disseminate that information to the fight real-time, without having to reach back, and those some of the projects that we're working right now," Patterson said, describing what senior leaders have called "algorithmic warfare."

"It's easier to put racks and racks of servers and [graphics processing units] on the ground, obviously, to do the processing, but how do we take a piece of that and move that to the air?" Nauman said. "I think that's going to be kind of the follow-on step."

SEE ALSO: Here's what it takes to pilot the U-2 spy plane as it soars 13 miles above the earth

NOW WATCH: The U-2 spy plane is so hard to fly pilots have to perform a 'controlled crash' just to land it

Accenture's head of artificial intelligence shares the 4-step plan every company should consider before investing in AI

Athina Kanioura headshot

  • Artificial intelligence can result in cost savings and increased market share, and it can free up employees to tackle more complex tasks — but investing in the tech isn't necessarily the right choice for all companies.
  • Executives can follow four steps from Accenture Applied Intelligence's Athina Kanioura to determine whether to invest in and how best to implement AI to automate processes like customer-service interactions.
  • At Accenture, Kanioura works with clients to roll out preprogrammed AI platforms that have helped companies like San Francisco's Golden State Warriors and Colombia's Avianca Airlines.
  • Click here for more BI Prime stories.

Artificial intelligence is one of the buzziest terms in corporate America, but organizations don't need to pursue a tech upgrade as a solution for every problem, Athina Kanioura said.

Instead, Accenture's chief analytics officer says that companies — and the executives that run them — should meditate on the internal and external pressures that are forcing them to consider their investment.

Together, these pushes and pulls form a firm's "AI narrative," said Kanioura, whose 3,000-data-scientist-strong Applied Intelligence arm at Accenture has consulted for organizations from the Golden State Warriors to Carnival Cruises. A financial institution facing trouble growing its user base, for example, may opt to automate initial customer interactions to cut costs.

To vet your own AI narrative, you might want to consider the below factors:

Weigh the tech investments you've made so far and what return you are hoping to get by implementing applications that rely on AI.

For Kanioura, the first step in considering investing in AI is determining the appropriate return you are seeking and whether it will complement other tech priorities, like cloud computing or data analytics.

"Any investment in AI is incremental to existing investments," Kanioura said. "Every company should assess what investments they have made so far … and then see what is needed on top of that to drive a specific mindset change within the enterprise." 

One limitation she urges firms to consider is the large up-front cost, particularly since profits will likely not come until after the first year, given most of that time is used to recoup the initial investment. Kanioura also advises companies to decide whether being an early adopter of the technology is critical.

While there can be value in capturing the "first-mover" label — namely differentiating yourself from competitors — there are also pitfalls. A hasty switch to AI can disrupt global operations that may be centralized in one location but are used in many different markets, Kanioura said. Global corporations also tend to have very long business processes and are not as agile as smaller rivals, which can make implementation across the organization all at once more difficult.

Instead, she advises companies to identify areas of the business that are considered "no-regret" moves, meaning there is room for experimentation without fear of any significant losses. One consumer brand that Kanioura worked with, for example, wanted to use AI to increase its market share in a specific category to 15%. It was fine with losing a portion of those sales initially in hopes of eventually reaching that goal.

Pick a section of the business to deploy AI that can provide easily testable results.

It's best to select areas that are more likely to produce tangible outcomes, Kanioura said.

Consider customer service: Companies like AT&T and LG Electronics use so-called conversational AI to address user inquiries through automated responses. Human agents can be looped in to the conversation when necessary, allowing employees to focus on more complex tasks. The hope is that the use of the technology will reduce response times and ultimately cut costs.
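
A minimal sketch of how that human-in-the-loop handoff is commonly wired up: route a message to a canned answer when an intent classifier is confident, and escalate to a person otherwise. The classifier, threshold, and responses below are invented for illustration and do not reflect AT&T's or LG's actual systems.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # below this, a human agent takes over

@dataclass
class IntentPrediction:
    intent: str        # e.g. "reset_password"
    confidence: float  # 0.0 - 1.0

CANNED_RESPONSES = {
    "reset_password": "You can reset your password at example.com/reset.",
    "billing_question": "Your latest bill is available in the app under Billing.",
}

def classify(message):
    """Stand-in for a real intent model; here just a keyword lookup."""
    if "password" in message.lower():
        return IntentPrediction("reset_password", 0.92)
    if "bill" in message.lower():
        return IntentPrediction("billing_question", 0.81)
    return IntentPrediction("unknown", 0.20)

def route(message):
    prediction = classify(message)
    if prediction.confidence < CONFIDENCE_THRESHOLD or prediction.intent not in CANNED_RESPONSES:
        return "Connecting you with a human agent..."   # loop a person in
    return CANNED_RESPONSES[prediction.intent]

print(route("I forgot my password"))          # automated answer
print(route("My router is making noises"))    # escalated to a human
```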

"You can get a lot of value by reducing costs because now you deflect clients to the online channels and use that to fuel your growth," she said.

Another early-use case, according to Kanioura, is using AI to better target prospective customers, a tactic she argues produces clients that are more likely to purchase a specific product.

The technology, for example, gives companies the ability to tailor discounts to specific clients by quickly analyzing mounds of behavioral data, including what type of purchases they make and at what time of day. In one example that Accenture was not involved in, Best Western used information from an ad campaign that asked users to respond to specific questions to provide more catered travel recommendations.
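
A toy sketch of that kind of behavioral targeting, assuming a simple purchase log: pick repeat morning coffee buyers as candidates for a targeted discount. The scoring rule and data are invented purely to illustrate the idea, not drawn from Accenture's or Best Western's work.

```python
from collections import Counter

# (customer_id, category, hour_of_day) purchase events.
purchases = [
    ("c1", "coffee", 8), ("c1", "coffee", 9), ("c1", "coffee", 8),
    ("c2", "coffee", 20), ("c3", "snacks", 13), ("c3", "coffee", 8),
]

def morning_coffee_targets(events, min_purchases=2):
    """Pick customers who repeatedly buy coffee in the morning (before 11 a.m.)
    as candidates for a targeted morning-coffee discount."""
    counts = Counter(cust for cust, category, hour in events
                     if category == "coffee" and hour < 11)
    return [cust for cust, n in counts.items() if n >= min_purchases]

print(morning_coffee_targets(purchases))  # ['c1']
```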

Roll out the technology in a staged approach and rely on both external and internal experts. 

Automating operations like customer-service centers will have a profound day-to-day impact. For one, it can displace the daily job requirements of many workers. To manage that, Kanioura said companies need to hire experts who understand those implications and can guide corporations through periods of major change, as well as make sure they have in-house employees who are knowledgeable about the specific technology being implemented.

"That protects the company from massive losses in technology investments. It also protects the business stakeholders from making any decision that could be fatal," she told Business Insider.

Monitor results on a monthly basis, but aim to recoup the investments after one year.

Monthly costs are likely to change once artificial-intelligence-backed services are introduced. More tailored advertising, for example, could gradually reduce the overall marketing spend by eliminating those users that are unlikely to purchase a given product.

"If you don't start getting results within the year," Kanioura said, "then there's a problem."

But the annual evaluation doesn't have to focus solely on direct financial improvements. Reducing how many people are focused on customer-service issues, for example, can allow employees to pursue initiatives that in the long run result in revenue and profit boosts.

To Kanioura, the return on investment for artificial intelligence is twofold: achieving cost reductions through automation and creating new growth opportunities by freeing employees up to focus on higher-value initiatives. 

Those benefits have the potential to grow significantly as AI matures and newer applications are able to replace more tasks done by human workers. It remains to be seen, however, how companies will respond to that from a workforce perspective. But it's likely Accenture — which is one of the world's largest firms consulting on the technology — will play a role in figuring that out.

SEE ALSO: AI is going to change your career. IBM is showing how that can be a good thing.

NOW WATCH: Here's what airlines legally owe you if you're bumped off a flight

Healthcare companies staffing up on top tech talent should use Obama's leadership structure as a guide, says first-ever White House CTO

Aneesh Chopra

  • Healthcare firms that are staffing up on top technology talent have important decisions to make on how to structure the various roles within the C-Suite. 
  • The industry should mirror the Obama administration's leadership structure, which split key tasks between the chief technology and chief information officers, argues Aneesh Chopra, the first-ever White House CTO. 
  • Organizations should also avoid choosing the "title of the month" and let the ultimate goals of the role dictate the title instead, says Laura Merling, an independent consultant who helps organizations with their digital transformations.
  • Click here for more BI Prime stories.

Former President Barack Obama's administration should be a model for healthcare firms weighing how to divide digital responsibilities in the C-suite, according to his former chief technology officer.

The industry is increasingly tapping experts from Silicon Valley and other fields to help lead its tech transformations. Those initiatives range from launching new online systems for patients to book appointments and moving data storage to the cloud, to more advanced applications that can alert hospitals to potential health issues before someone sets foot inside a facility.

Figuring out how those new executives will fit into the existing corporate structure can be difficult, but it's vital to the success of the tech overhaul.

Aneesh Chopra — the country's first chief technology officer, appointed in 2009 by Obama — argues that companies should split responsibilities between the CTO and the chief information officer like the White House did at the time. The tech head, he says, should be focused on how organizations can use data to improve operations and the CIO should manage the security of the information.

"I saw first hand the need to decouple infrastructure from applications in use," he told Business Insider. "My focus was on the use of technology, data, and innovation to solve problems. We had a separate CIO whose job was to make sure our networks were secure, were uptime and efficient. These are two new muscles."

At the White House, Chopra held the top technology spot during the implementation of Obamacare, efforts to open up more of the federal government's data for use by the public sector, and a $30 billion initiative to digitize medical records. He is currently the president of CareJourney, a company that aims to use machine learning to better match patients to the correct course of treatment. 

Read more: Meet the 30 young leaders who are transforming the future of healthcare and disrupting a $3.5 trillion industry

Analyzing and protecting data are 'fundamentally different' and require two executives to manage

For chief information officers, the most important part of their job is protecting the company's data, says Chopra, who called it a "firing offense" if there is an incident. There's good reason for that: since 2009, there have been more than 2,546 data breaches among healthcare firms, according to federal statistics.

But the role also requires ensuring the platforms are running, known as uptime. Perhaps most importantly, the individual must be able to accomplish those two tasks in a cost-effective manner. Unlike other departments, the IT sector is almost always considered a drain on company resources because no revenue is generated and the technology is often very expensive.

"The best CIOs have found a way to accomplish the uptimes goals and the security goals all within a budget that the institution can afford. That's your primary job," Chopra said.

The job of chief technology officer instead should focus on bringing all that data together and analyzing it to determine, for example, how best to treat a patient or whether an individual is prone to regularly miss appointments.

The role of the CTO is "more about putting the data to use, not about securing it or uptime or efficiency," says Chopra.

Sometimes the demands of the job may require the creation of an entirely different role, like in the case of Mt. Sinai Hospital, a company Chopra recommends firms look to for guidance.

The hospital chain recently tapped Andrew Kasarskis as its first chief data officer. Kasarskis previously served as the vice chairman of the genetics department at the provider's Icahn School of Medicine. In the new role, he'll work to harmonize the clinical, financial, and administrative data to help Mt. Sinai then analyze the information to improve patient care.

It's a different tack from other healthcare firms like Providence St. Joseph Health, which hired B.J. Moore from Microsoft to serve as its chief information officer.

"We're seeing more out of industry people with tech experience coming into healthcare," Chopra said. "But the more fundamental story is 'what's the job to be done and how does one organize the talent pool to get that job done to the maximum level.'"

Define the goal of the position first and 'don't choose the title of the month because it sounds good'

The first step in establishing roles and titles is figuring out the ultimate goal of the position, according to Laura Merling, a senior advisor at McKinsey & Company who helps organizations with their digital transformations.

Chief information officers, for example, are not typically attuned to managing financial reports and shouldn't be in a role that requires new revenue generation or cost-cutting.

"It's all about what you are trying to achieve and, and then that'll determine the role," Merling said in a recent interview. "Don't choose the title of the month because it sounds good."

Companies should also keep in mind under what leader they put top talent. When Merling was at Ford Motor Co., for example, she served as vice president of connected vehicle commercialization within the IT department, a structure she says "failed miserably" because the sector is not traditionally one that drives profits and instead is known to require large amounts of funding.

"Having a revenue number tied to the CIO role is really horrible because it's a cost center," said Merling, referencing the strong likelihood that the business unit will not create new income. 

As more healthcare firms hire top tech talent, they are likely to take different approaches to the job titles and how to divide up roles. But while the outcome will vary by company, every organization needs to ensure their data is protected and utilized efficiently for the digital transformation to be successful. 

SEE ALSO: Accenture's head of artificial intelligence shares the 4-step plan every company should consider before investing in AI

NOW WATCH: Taylor Swift is dropping a new album. Here's how the world's highest-paid celebrity makes and spends her $360 million.

A fake interview with Vladimir Putin demonstrates how convincing deepfakes could be created in real-time in just a matter of years

vladimir putin deepfake

A recent tech conference held at MIT had an unexpected special guest make an appearance: Russian President Vladimir Putin.

Of course, it wasn't actually Putin who appeared on-screen at the EmTech Conference, hosted earlier this week at the embattled, Jeffrey Epstein-linked MIT Media Lab. The Putin figure on-stage is, pretty obviously, a deepfake: an artificial intelligence-manipulated video that can make someone appear to say or do something they haven't actually said or done. Deepfakes have been used to show a main "Game of Thrones" character seemingly apologize for the show's disappointing final season, and to show Facebook CEO Mark Zuckerberg appearing to admit to controlling "billions of people's stolen data."

Read more: From porn to 'Game of Thrones': How deepfakes and realistic-looking fake videos hit it big

The Putin lookalike on-screen is glitchy and has a full head of hair (Putin is balding), and the person appearing on-stage with him doesn't really try to hide the fact that he's really just interviewing himself.

However, the point of the Putin deepfake wasn't necessarily to trick people into believing the Russian president was on stage. The developer behind the Putin deepfake, Hao Li, told the MIT Technology Review that the Putin cameo was meant to offer a glimpse into the current state of deepfake technology, which he's noticed is "developing more rapidly than I thought."

Li predicted that "perfect and virtually undetectable" deepfakes are only "a few years" away.

"Our guess that in two to three years, [deepfake technology] is going to be perfect," Li told the MIT Technology Review. "There will be no way to tell if it's real or not, so we have to take a different approach."

As Putin's glitchy appearance shows, deepfake technology has yet to perfect real-time, believable deepfakes. However, the tech is advancing quickly: One example is the Chinese deepfake app Zao, which lets people superimpose their faces onto those of celebrities in really convincing face-swaps.

The advancement of AI technology has made deepfakes more believable, and it's now even more difficult to distinguish real videos from doctored ones. These concerns have led Facebook to pledge $10 million for research on detecting and combating deepfakes.

Additionally, federal lawmakers have caught on to the potential dangers of deepfakes and even held a hearing in June about "the national security threats posed by AI-enabled fake content." AI experts have also raised concerns that deepfakes could play a role in the 2020 presidential election.

SEE ALSO: People are roasting Apple for trying to make 'slofies' happen

NOW WATCH: 5 things wrong with Apple's lightning cable

A deepfake pioneer says 'perfectly real' manipulated videos are just 6 months away

vladimir putin deepfake

  • Deepfake artist Hao Li, who created a Putin deepfake for an MIT conference this week, told CNBC on Friday that "perfectly real" manipulated videos are just six to 12 months away. 
  • Li had previously said that he expected  "virtually undetectable" deepfakes to be "a few years" away.
  • When asked for clarification on his timeline, Li told CNBC that recent developments, including the emergence of the wildly popular Chinese app Zao, had led him to "recalibrate" his timeline. 
  • Visit Business Insider's homepage for more stories.

A deepfake pioneer said in an interview with CNBC on Friday that "perfectly real" digitally manipulated videos are just six to 12 months away from being accessible to everyday people. 

"It's still very easy you can tell from the naked eye most of the deepfakes," Hao Li, an associate professor of computer science at the University of Southern California, said on CNBC's Power Lunch. "But there also are examples that are really, really convincing."

He continued: "Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions."

Li created a deepfake of Russian president Vladimir Putin, which was showcased at an MIT tech conference this week. Li said that the video was intended to show the current state of deepfake technology, which is developing more rapidly than he expected. He told the MIT Technology Review at that time that "perfect and virtually undetectable" deepfakes were "a few years" away.

When CNBC asked for clarification on his timeline in an email after his interview this week, Li said that recent developments, including the emergence of the wildly popular Chinese app Zao, had led him to "recalibrate" his timeline.  

Read more: Viral Chinese deepfake app Zao lets people superimpose their faces onto celebrities like Leonardo DiCaprio and it is terrifyingly convincing

"In some ways, we already know how to do it," he said in an email to CNBC. "[It's] only a matter of training with more data and implementing it."

The advancements in artificial intelligence are enabling deepfakes to become more believable, and it's now more difficult to decipher real videos from doctored ones. This has raised alarm bells about spreading misinformation, especially as we head into the 2020 presidential election.

SEE ALSO: There's a terrifying trend on the internet that could be used to ruin your reputation, and no one knows how to stop it

NOW WATCH: Steve Jobs left Apple to start a new computer company. His $12-million failure saved Apple.

The managing partner of Andreessen Horowitz explains why his firm is investing in a budding technology that 'will be applied in almost every area'

Scott Kupor

  • Scott Kupor is the managing partner of Andreessen Horowitz, the venture-capital turned financial-services firm that was an early investor in tech successes including Facebook, Slack, and Airbnb.
  • At the recent CNBC Institutional Investor Delivering Alpha Conference, he explained why he disagreed with the notion that the venture-capital space is "frothy," and discussed one area of tech that the firm is bullish about.
  • Click here for more BI Prime stories.

At CNBC's recent Delivering Alpha conference, the opening question to a panel of tech investors was predictably about valuation.

The question could not have been timelier given that WeWork, the shared-workspace provider, had just postponed its initial public offering after investors closely scrutinized its business model. Uber and Lyft were also on investors' minds after taking significant haircuts on the public market in less than six months of trading.

It seemed like a great time for any cautious tech investor to speak up about the market's supposed frothiness.

But Scott Kupor, the managing partner of Andreessen Horowitz, the venture-capital turned financial-services firm, begged to differ.

He is not in the camp that's screaming about a new tech bubble on the verge of bursting. In fact, he's of the view that tech valuations have improved over the last couple of years.  

"If you look at where valuations are today relative to probably three or four years ago, we used to have a much bigger disconnect between where private and public companies are," Kupor said. 

He added: "Over the last four years, public market valuations have definitely gone up. But you see much greater parity at least between private and public market values."

An early investor in companies like Facebook and Slack, Andreessen Horowitz is still hunting for budding success stories. 

One fertile ground is artificial intelligence. Kupor reckons that AI will be almost as core to technological infrastructure as databases over the next 10 years. 

"The way we view AI is it's a very broad, foundational technology," he said. "It will not be a category of its own. I think we will see AI applied in almost every area."

Read more: Automated trading has already upended markets. Now AI could shake up stock picking and investment advice.

More narrowly, he is interested in internet-of-things companies that use sensors to gather useful data from the physical environment. He cited the private company Samsara as an example.

Samsara uses sensors to collect data that can improve safety and operations in a variety of industries from trucking to oil and gas. For example, Samsara's products can help delivery drivers identify the points along their routes that consistently cause delays, and redirect them to more efficient circuits.
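
A toy sketch of that kind of analysis, assuming sensor logs of travel times per route segment: flag the segments whose observed times consistently exceed expectations. Segment names, timings, and the tolerance factor are invented and are not Samsara's actual product logic.

```python
from collections import defaultdict
from statistics import mean

# (segment_id, minutes) readings collected from vehicle sensors over many trips.
segment_logs = [
    ("depot_to_hwy", 12), ("depot_to_hwy", 14), ("depot_to_hwy", 13),
    ("hwy_exit_7",    9), ("hwy_exit_7",   22), ("hwy_exit_7",   25),
    ("main_street",   6), ("main_street",   7), ("main_street",   6),
]

EXPECTED_MINUTES = {"depot_to_hwy": 12, "hwy_exit_7": 10, "main_street": 6}

def delay_hotspots(logs, expected, tolerance=1.25):
    """Flag segments whose average observed time exceeds the expected time
    by more than the given tolerance factor."""
    observed = defaultdict(list)
    for segment, minutes in logs:
        observed[segment].append(minutes)
    return [seg for seg, times in observed.items()
            if mean(times) > expected[seg] * tolerance]

print(delay_hotspots(segment_logs, EXPECTED_MINUTES))  # ['hwy_exit_7']
```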

Earlier in September, Samsara announced that it raised $300 million in fresh funding from existing investors including Andreessen Horowitz and two new investors. The round valued Samsara at $6.3 billion, according to a company statement.

The statement added that over the past year, Samsara has grown its revenue at a 200% rate, more than doubled its customer base, and expanded into 10 new countries.

Public-market investors who want to participate in the AI boom could consider the Global X Internet of Things exchange-traded fund.

SEE ALSO: GOLDMAN SACHS: These 20 beaten-down stocks are perfectly primed for a huge comeback — and you should buy them for cheap right now

NOW WATCH: Animated map shows where American accents came from


There are no laws regulating the use of AI in the hiring process, and it's setting back how companies recruit. Here are the people trying to change that.

Frida Polli

  • Advocates, including companies that make AI-based hiring platforms, were pushing California lawmakers to pass a resolution they say would have encouraged companies to further use technology to eliminate bias in recruitment. 
  • The failure to enact it, however, highlights a growing concern among executives who worry that use of the tech could expose their company to legal liability.
  • The CEO of one firm that offers an AI-based hiring platform says using the technology would not only help reduce bias but also spur economic benefits for firms by better matching prospective employees to open roles.
  • Click here for more BI Prime stories.

California recently failed to take a key step toward trying to govern the use of artificial intelligence in hiring. While the effort didn't garner many headlines, it signals the looming battle facing governments and businesses as the technology becomes more widespread.  

The first-of-its-kind measure would not have codified anything into law. Instead, it simply would have directed lawmakers in the state to craft a set of standards to oversee AI-based recruitment tools. Supporters, including companies that offer these types of platforms, hope to keep pushing the resolution once California resumes its legislative session next year.

While initial attempts have been made at legislating on AI-related issues, the lack of any actual laws on the subject threatens to delay more widespread adoption of high-powered recruitment tech that backers, like Pymetrics CEO and cofounder Frida Polli, say will lead to more diversity in hiring.

"Under the current legal system, one could use technology to reduce bias, show year-over-year diversity gains, and yet still be sued for discrimination," Polli, whose company offers an AI-based recruitment platform, told Business Insider. 

Companies still skeptical of using AI in hiring

Increasingly, companies are automating aspects of hiring like the initial review of resumes. Many organizations, however, remain hesitant. In a survey of mid-sized, privately-held firms, Deloitte found that 55 percent of respondents were concerned that using AI in the recruitment process could be in violation of regulatory or legal requirements.  

Supporters say successful passage of the California resolution would have begun to ease those worries and laid the groundwork for future efforts to ensure the technology operates appropriately. It was also viewed as the first step in what backers hope will eventually become a national movement.

And for good reason. Alongside New York, California is often looked to as a first mover on measures to ensure equality in the workplace. Adoption of measures in those two states often leads to a cascade effect across other cities and states in the US.

If resolutions or laws relating to AI-based recruitment tools are enacted, Polli argues the result would not only be a boon to diversity in the workforce, but also produce an economic benefit for firms. 

"Once you remove that bias altogether and you just look at the numbers, you're able to have people included in the [hiring] process that previously would have been excluded," she said. 

The Pymetrics platform blocks out information like where a candidate went to school or their gender. Instead, it asks individuals to play a series of games that ultimately measure over 70 mental and emotional traits, including memory and risk-taking. The goal is to encourage more diversity in hiring. 
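
A minimal sketch of that "blinding" step, assuming a hypothetical candidate record; Pymetrics' real pipeline is considerably more sophisticated than simply stripping fields, so treat this only as an illustration of the idea.

```python
# Fields that can introduce bias and are withheld from the screening model.
BLOCKED_FIELDS = {"name", "gender", "school", "age", "photo_url"}

def blind_candidate(record):
    """Return a copy of the candidate record with potentially biasing
    fields removed, keeping only trait measurements and job-relevant data."""
    return {key: value for key, value in record.items() if key not in BLOCKED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "school": "State University",
    "memory_score": 0.82,       # example trait measured through games
    "risk_tolerance": 0.64,
}

print(blind_candidate(candidate))
# {'memory_score': 0.82, 'risk_tolerance': 0.64}
```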

From 1990 to 2017, white applicants with nearly identical resumes were 36 percent more likely to advance in the hiring process than black applicants and 24 percent more likely than Latino candidates, according to a meta-analysis — or study of studies — in the prestigious Proceedings of the National Academy of Sciences.

Other studies have shown that every open position in a corporation receives an average of at least 250 applications. Even with the large number of resumes, the ultimate hire "fails 50 percent of the time," according to Polli. For companies, employee turnover on average costs 33 percent of that individual's yearly salary.  

"If there was any other business unit where that was the outcome, someone would get fired," she said. 

'We always lead with the business problem' 

When they first started marketing the tool, Polli and her team didn't view it as a diversity program and instead touted it as an assessment tool that would better match prospective employees to the jobs best suited for them. It wasn't until they began meeting with companies that they realized the platform came with the added benefit of reducing bias.

"We always lead with the business problem that we are trying to solve," Polli said. 

But the company soon came upon a problem: businesses were worried they could open themselves up to legal liability by using the tool. According to Polli, executives would tell them: "Our lawyers are saying there is a fear because we don't want to get sued or we don't even want to open up the case where someone would sue us."

The concern, legal experts say, is that the algorithms used by the AI-based platforms could also have inherent biases that influence decision-making. Google, for example, faced criticism when a program to pinpoint hate speech online began flagging online comments that included African-American vernacular.   

California supporters plan to plow ahead after loss 

Polli's team knew when creating the platform they were entering a highly regulated space and worked to start finding ways to alleviate any legal concerns. They turned to California initially given its history of strong workplace protections. 

The state, for example, passed a landmark law that would prohibit companies from labeling some employees as contractors, a practice that critics say was used as an end-run to avoid providing benefits like healthcare.

Read more:Uber and Lyft just took a major blow in California, and now they're gearing up for war

Despite the failure of the resolution on AI in hiring, its key sponsor doesn't plan to give up the push and is hoping to even introduce formal legislation in future sessions — which would carry the weight of law if enacted. California's legislative period ended earlier this month and restarts in January 2020.

"Even now, I'm getting really good input on what people expect me to look at as we move forward," California Assemblymember Reggie Jones-Sawyer, a Democrat, told Business Insider earlier this month. 

Why job concerns could stymie progress

One concern among his fellow lawmakers, however, is that writing the resolution into law could lead organizations to eliminate corporate recruiters in favor of the AI-powered technology platforms. That's a common worry among skeptics of the technology: that it could lead to massive job losses.

Jones-Sawyer argues that is not the case. "We still need human input in hiring. This is just a tool to help human resource officers be able to get the best qualified people," he said.

Polli and her team could face a similar challenge in New York City, where they've begun preliminary meetings with activist groups. The hope is that successful passage in California and New York will lead to swift adoption of similar measures in other cities and states across the U.S. 

While the effort from Polli and others is still in the relatively nascent stages, it signals the broader legislative and legal battle around artificial intelligence that will likely pick up steam as the technology matures.

SEE ALSO: If we closed the gender gap by 2025, the global economy could see a $28 trillion windfall

SEE ALSO: Women are now seen as just as competent as men, but less ambitious — and it's a good and bad thing

NOW WATCH: Taylor Swift is dropping a new album. Here's how the world's highest-paid celebrity makes and spends her $360 million.

A CTO who jumped from the grocery business to high fashion reveals how tech leaders can push a data-driven agenda to achieve big changes in any industry

Arpan Nanavati

  • In his new role as chief technology officer of Moda Operandi, Arpan Nanavati is hoping to use consumer data to solve one of the industry's core problems: understanding fashion taste.
  • The role requires Nanavati to figure out how to better personalize recommendations for customers and deliver products more quickly, both of which are underscored by advanced technology like machine learning.
  • Nanavati pushes a culture of "extreme ownership" under which employees feel empowered to make decisions without explicit approval from management. 
  • Click here for more BI Prime stories.

At face value, it may seem that groceries and luxury fashion have nothing in common. But for Arpan Nanavati, the opportunity to take the digital skills he honed at Walmart and try to disrupt the fashion industry at Moda Operandi was a no-brainer.

While they are ultimately peddling different products, the job of executives in both sectors is anticipating and managing customer expectations. Increasingly, those two tasks are achieved through technology like machine learning and artificial intelligence. So when Nanavati came on as chief technology officer at Moda, he knew a key step to solving a pressing problem in the fashion industry was harnessing consumer information.

"Tech is driving that strategy to optimize and monetize the data," he told Business Insider. The core problems, "which have not been solved before, is how we can leverage data and tech to understand fashion taste."

Nanavati previously served as the director of engineering at Walmart Labs, where he managed the company's online grocery platform. Now at Moda, he plans to use the customer-centric mindset he honed at the retail behemoth's technology arm to create personalized shopping experiences. He'll focus on tackling two key challenges in his new role: how to better match customers with fashion recommendations and deliver that product in the quickest way. Underscoring both of these areas is technology, which makes the role of chief technology officer even more critical.

Moda allows customers to purchase clothes shown in runway shows directly from the designers, a shift from the historic paradigm in which major retailers like Barneys would decide which items to put on the store floor.

In his first interview since joining, Nanavati shared what he believes are the top tasks of chief technology officers and why a culture of "extreme ownership" is critical to achieving them.

Hire the right people and localize decision-making

Technology is becoming an increasingly central part of the organization as industries like financial services and retail pursue digital overhauls. But making it a priority comes with its own challenges, particularly in industries that have traditionally viewed IT as an outpost for solving issues like computer problems.

"Tech-driven strategy is very common when it's a tech company," Nanavati said. That's different from consumer-focused companies like retailers, where tech goes from being an "institutional arm to being a strategy arm." 

To help navigate that shift, Nanavati tries to instill a culture of "extreme ownership" in his team. That means thinking about problems and solutions from the lens of the business owner and, in Moda's case, taking an approach that many software engineers and other tech workers may not be used to.

"In our case, it is truly cross-functional because you have to be able to understand the fashion designers, your merchandising team, and not just have an engineering mindset," he said. "You have to think like an owner, and you have to be able to connect the dots from the consumer perspective and the business-owner perspective."

Marrying engineering prowess with customer insights

An increasing barrier to digital-transformation efforts is teaching employees with traditional tech backgrounds to use what is often anecdotal evidence — like a desire among customers for quicker shipping — from other business units to develop solutions.

And the impediment can spell doom for digital efforts. A recent survey of companies that pursued major tech initiatives found that just 14% of the respondents said their attempts resulted in sustained performance improvement. Culture remains a key reason for the struggle.

To try to overcome those issues, Moda localizes decision-making. The company forms what it calls "squads," or small teams that include representatives from different parts of the business, including sales and IT. Those groups physically sit together in the office and are empowered to make decisions and act on them without explicit buy-in from the brass.

Companies will take different approaches to changing the culture to one focused on big data and advanced tech. Ultimately, however, executives need to find ways to marry the expertise of the engineering team with the customer insight that is often siloed in other parts of the business to craft solutions that actually improve the customer experience.

SEE ALSO: Rise of the CTO: What social media startup Tsu's latest hire says about a changing power shift in the C-suite

NOW WATCH: Ray Dalio shares what he's learned from his succession plan at the world's largest hedge fund

The head of IBM's Watson walks us through the exact model tech leaders can use to build excitement around any AI project

Rob Thomas

  • Chief technology, data, and innovation officers are often managing enterprise-wide digital upgrade efforts.
  • To succeed, the leaders need tech chiefs within respective business units — like logistics or human resources — to implement the overall digital strategy, argues IBM's Rob Thomas, a model he refers to as "hub-and-spoke." 
  • But overcoming cultural barriers within organizations remains a challenge in adopting artificial intelligence and other advanced tech. One way executives can spur excitement around the platforms is by running and participating in internal AI contests.
  • Click here for more BI Prime stories.

Technology leaders are increasingly spearheading efforts at top companies to push a new digital-first agenda. But often, they can't do it alone and need counterparts within specific business units to help advance the strategy.

Despite efforts to adopt artificial intelligence, machine learning, and other data-heavy initiatives, many projects still fail. One key reason is the difficulty in changing the company culture. Often, sectors like the IT department and sales are not used to coordinating with each other. But breaking down those silos is key to advancing applications that actually serve the customer or the organization and lead to cost-cutting or new revenue generation.

One way to ensure projects advance is to appoint leaders within each respective business unit to help support the chief technology, data, or innovation officers, argues IBM's Rob Thomas, a system he refers to as the "hub-and-spoke" model because the structure resembles one in which a central point is connected to several secondary points.

"You need somebody that has a seat at the table at the top that's saying it's important to the company," he told Business Insider. Organizations also "need somebody in those business units that owns this day-to-day, but is accountable back to the company strategy."

It's the model Thomas uses at IBM, where he is the general manager of data and Watson artificial intelligence — a role that gives him authority over investments, sales and marketing, and product development relating to the company's signature AI product. 

Direction at the top, execution at the bottom

Under Thomas' model, the leader at the top — which could be a chief technology, data, or innovation officer — works with the CEO to figure out what areas of the business are best suited for a tech overhaul, whether that be supply chain, talent acquisition, or another sector.

Once the strategy is established, the tech leads within each unit are responsible for experimenting with applications based upon the data they have available to meet that goal. It's a way to solve the problem of top-down mandates.

A purely top-down system doesn't work, Thomas argues, since the C-suite is often unfamiliar with exactly what information each unit has stored. Data is needed to power the AI-based applications, and without intimate knowledge of what is available, it is difficult to determine which ideas can actually be executed.

Technology chiefs have to empower unit heads to "go figure out which use cases you want to target under the umbrella of the strategy and the standards that were set," he said.

Changing the culture for AI: Without support it's 'just a science project'

While investments in tech upgrades sometimes total in the tens of billions of dollars, many projects still fail as a result of resistance within the workforce. That's why it's so imperative for executives to take steps to change the internal culture.

One way to get staff more engaged is to host an "AI challenge," or a contest where employees can submit their best ideas and judges choose a winner. It's a tactic Thomas employs and one he argues every CEO should pursue.

"The fact that I reviewed submissions, the fact that I was a judge as people were presenting, all those things indicate a level of importance," he said. "It shows you care."

Read more: Large CPG companies are under tremendous pressure to keep up with the pace of innovation. M&M's and Snickers maker Mars is investing in 2 accelerator programs to stay ahead.

Another is to pair chief data officers or other tech leaders within business units with other AI advocates on the operational side of the organization. In a manufacturing firm, for example, that could be the supply chain lead. At a financial services firm, it could be the retail banking manager. That champion can help push projects with coworkers who may not have the technical chops to build the product, but have a vested interest in the outcome — like reducing costs or driving profits.

"AI projects happen when the business team comes together with the technology team," said Thomas. I've never seen "a pure technical project that's led to a meaningful business outcome because there's no context to it. It's just a science project."

Organizations should also celebrate the small victories to empower employees to try ideas out themselves. Many companies, argues Thomas, don't even realize the successes they have. "Success begets more risk-taking," he said. "It starts to snowball in a positive way."

Given the large investments behind many digital transformation efforts, it's imperative companies take into consideration the internal reporting structures and the roles that will lead the initiative. Doing so can help mitigate potential problems like cultural resistance. 

SEE ALSO: Accenture's head of artificial intelligence shares the 4-step plan every company should consider before investing in AI

NOW WATCH: Stewart Butterfield, co-founder of Slack and Flickr, says 2 beliefs have brought him the greatest success in life

Atlassian's former engineering head explains why he left his post to join the AI startup Algorithmia

Ken Toole, vice president of platform engineering at Algorithmia

After over a year of working as Atlassian's engineering head, Ken Toole wanted the opportunity to build something from scratch.

Atlassian was actually a first step for him. Prior to Atlassian, Toole had spent over 13 years at Adobe and also six years at Microsoft. At those companies, he enjoyed building organizations and teams from the ground up. He wanted to continue doing that, but at a smaller scale, so he took a job at Atlassian and relocated to Australia. 

But after over a year there, he hoped to move back to Seattle with his family and work somewhere smaller. 

"This is really a continuation of that trajectory," Toole said. "I knew that was the direction I wanted to go. What I wanted was not quite there. I had undershot in how early stage in development I wanted to be at. I'm excited to be at a company that's right at that stage."

On Thursday, the startup Algorithmia announced that Toole joined as its vice president of platform engineering. Algorithmia creates artificial intelligence products and algorithms for companies to use for data science applications.

"What I really like is that it's a low-ego but high-energy kind of environment," Toole said. "There's a lot of clarity around what the company is trying to accomplish and a huge focus on making our customers successful. Those are things that come through immediately."

A 'different level of inertia'

Toole liked that Algorithmia is building artificial intelligence products for its customers, including the United Nations.

"Algorithmia's vision of bringing machine learning to the masses and making it an everyday reality is compelling," Toole said.

Read more: Here's how tech companies like Atlassian, Microsoft, and Red Hat are revamping their interview process for developers today

He says that with his past experiences at Atlassian, Adobe, and Microsoft, he managed engineering teams that delivered software to "very demanding" customers. He plans to bring this to Algorithmia as well.

"I was very much involved in the early process of how do we build the solutions necessary at a fairly large scale," Toole said. "From a cloud engineering perspective, I had seen many of the challenges and difficulties."

Toole says that a challenge will be to "always keep an eye on the customer," while balancing it with growing Algorithmia's team and hiring more engineers.

Got a tip? Contact this reporter via email at rmchan@businessinsider.com, Telegram at @rosaliechan, or Twitter DM at @rosaliechan17. (PR pitches by email only, please.) Other types of secure messaging available upon request. You can also contact Business Insider securely via SecureDrop.

NOW WATCH: What El Chapo is really like, according to the wife of one of his closest henchmen

Walmart has 1,500 data scientists and is hiring more amid a push to adopt artificial intelligence. The retailer's chief data officer recently shared the 3 questions that guide all its AI projects.

Walmart

  • Walmart is leading retailers in adopting artificial intelligence and machine learning, but the world's largest retailer still runs into cultural issues that undermine the push to implement the advanced technology. 
  • The company currently has roughly 1,500 data scientists, according to chief data officer Bill Groves, and is hiring more, including a role to develop voice-activated shopping applications. 
  • Walmart has three core questions that guide all of its AI and machine-learning projects. If the answer to any of them is "no" then the initiative is shelved immediately. 
  • Click here for more BI Prime content.

Walmart is a leader in the push to adopt artificial intelligence and machine learning. Still, the world's largest retailer runs into many of the problems other organizations experience when pursuing the advanced technology.

The company currently employs roughly 1,500 data scientists throughout the enterprise, according to chief data officer Bill Groves, who directly oversees a smaller staff of 100 tech workers. Groves also said the company employs around 50,000 software engineers, though a Walmart spokesperson later said the number is closer to 10,000.

Those employees help support the more than 100,000 machine-learning or AI-based projects the organization currently has in production. Among the applications that Walmart is currently rolling out are AI-powered cameras to monitor for theft.

"I do more work in the AI and [machine learning] space then I have ever done in my life," Groves said at the MLOps NYC conference two weeks ago.

"We're involved in robotics, we're involved in micro-personalization, we're involved in probably the biggest supply chain in the world," he added.

And it's continuing to build out that staff. Among the positions Walmart is hiring for is a data scientist to help develop voice-activated shopping applications. The company already uses the technology in grocery pickup and deliveries. Overall, Walmart has 67 data science openings, 43 software development vacancies, and 90 available data analytics jobs, according to its careers website.

But the success rate for artificial intelligence or machine-learning projects is still just 75 percent, Groves said at the event. One way Walmart is aiming to address that is through its core three tenets that guide all the high-tech initiatives. 

"If the answer is 'no' to any of these three, we'll typically put a stop to the project immediately so that way we aren't spending money that we shouldn't spend," he said.

1) Why are you doing it?

One of the first questions that Walmart employees have to answer when deciding whether to pursue an AI-based project is: will someone pay for it? That can include the company itself, or a vendor that may purchase the application from Walmart.   

Groves forbids what he refers to as "cool projects," or those that might be fascinating to pursue but produce no tangible benefit for the company. 

"If nobody will pay for, then why am I doing it?," he said. "The business has to see the value, the business has to want it." 

While that's a relatively simple question to answer, Groves says it "doesn't happen often enough." One key reason is the lack of communication across teams. 

2) Can you explain it?

A problem that companies routinely run into is how to break down the organizational barriers between departments. Doing so encourages more collaboration between technology teams and those on the operational side of the business, such as the teams managing the supply chain.

Key to making that work, however, is that software engineers and data scientists can explain proposed applications to other business units. "If I cannot explain to an executive what I'm doing, then why am I doing it?" Groves said.

Often, those in departments like human resources think about problems in practical terms, such as how to make it easier to onboard new employees, and may not be as knowledgeable about the underlying technology. That poses a challenge for data scientists and engineers who are accustomed to outlining projects from a technical standpoint.

At Walmart, Groves knows an AI-driven project is "going to fail" if his team of data scientists and engineers discusses it and "the business isn't even really part of the conversation," he said.

Read more: Walmart's artificial intelligence-powered 'store of the future' might sound like hype, but AI has big potential for retailers big and small

3) Can you implement it?

While the tech teams are in charge of creating and managing the AI-based applications, like using cameras and sensors to help determine when shelves need to be restocked, the business side must be able to implement them to ultimately drive down costs or improve profits.

"It's a massive challenge just due to the size and scale we have," Groves said. "Money is being thrown out the window."

That's a key reason why communication between the teams is essential. A major impediment, however, is also cost, particularly given the immense number of projects Walmart already has in production. So initiatives must have a plan in place to go from development to production affordably, and they must have buy-in from all parties involved.

"The data scientists talk to the business, they came up with an idea, they didn't include the business or the technology team with the implementation," said Groves. "They come back, they have a model that stands no chance of ever making it into production with the systems you have. Definitely not cost affordable."

While the three questions are relatively basic, Walmart's approach to AI shows just how critical cross-team collaboration is for advanced tech initiatives to succeed, and how quickly they can fail if there isn't broad support internally.
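
To make that gate concrete, here is a minimal, hypothetical sketch in Python, not Walmart's actual tooling, of how a team could encode the three questions as a go/no-go check before committing money to a project; the class, field, and project names are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProjectProposal:
    """Hypothetical record of an AI/ML project pitch (illustrative only)."""
    name: str
    has_paying_sponsor: bool       # Q1: will someone, the business or a vendor, pay for it?
    explainable_to_business: bool  # Q2: can the team explain it to a non-technical executive?
    has_implementation_plan: bool  # Q3: is there an affordable path from model to production?

def should_proceed(proposal: ProjectProposal) -> bool:
    """Shelve the project (return False) if the answer to any of the three questions is 'no'."""
    return all([
        proposal.has_paying_sponsor,
        proposal.explainable_to_business,
        proposal.has_implementation_plan,
    ])

if __name__ == "__main__":
    pitch = ProjectProposal(
        name="voice-activated reordering assistant",
        has_paying_sponsor=True,
        explainable_to_business=True,
        has_implementation_plan=False,  # no costed path to production yet
    )
    print(f"{pitch.name}: {'go' if should_proceed(pitch) else 'shelve'}")

In practice the check is organizational rather than automated, but spelling it out this way makes plain that a single "no" stops the project.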

SEE ALSO: Walmart has cracked the code for merging AI rollouts with employee feedback to produce buzzy (and cost-saving) new tech
