Channel: Artificial Intelligence

The CTO of one of the biggest consulting firms says CEOs and directors are beating a path to his door to get up to speed on the latest tech trends


Bill Briggs, the global chief technology officer of Deloitte

  • Deloitte's technology consultants used to meet primarily with clients' chief technology and chief information officers.
  • But lately, they've been meeting much more frequently with CEOs and other top-level leaders who don't work in corporations' technology departments, Bill Briggs, Deloitte's global chief technology officer, told Business Insider.
  • The top leaders have come to understand how important cutting-edge technologies, such as artificial intelligence, are to the future of their businesses.
  • CEOs and board members also want to be up to date on the latest tech trends and how digital innovations are transforming the business landscape so they can ask informed questions and make considered decisions, he said.

It used to be that when Bill Briggs was giving an overview of the latest technologies making their way into corporate America or the technological trends that are likely to affect enterprises in coming years, his audience was mostly chief information and chief technology officers.

These days, though, Briggs, who is the chief technology officer of auditing and consulting giant Deloitte, is spending as much time talking about the digital transformation of business with clients' CEOs and board members as he is with their technology leaders.

"The biggest story for me is that technology has been elevated into the heart of business strategy," Briggs said in a recent interview with Business Insider in advance of his firm's release of its latest tech-trends report. Corporate directors and top executives, he continued, are "recognizing how critical this is."

Deloitte consultants have been meeting with CEOs and top business leaders for decades. Occasionally they'd want to meet with Briggs and his team of technology consultants too. But in the last two or so years, Briggs and his team have been meeting with CEOs and board members with increasing frequency, he said.

The CEOs and directors want to understand new technologies

The meetings typically aren't just one-hour briefs or surface overviews of the landscape, he said. Instead, the nontechnology executives and leaders want to roll up their sleeves and explore the topics Briggs covers in depth, he said.

Corporate leaders outside the information-technology department have started to understand that new technologies, such as artificial intelligence, blockchain, 5G wireless technology, the internet of things, and virtual and augmented reality, are crucial to their futures, Briggs said. They've also started to see the need to be as familiar with such innovations as their technology teams, he said.

Read this: 5G wireless service is coming, but don't expect it to super-charge your smartphone's internet anytime soon

Directors and CEOs want to be able to ask informed questions of their company's consultants and vendors about the products and services they're recommending and selling, Briggs said. They also want to be able to educate themselves on what's possible and real so they can push their organization to embrace change, he said.

Top corporate leaders are saying, "'Even though we need more out of our technology organizations that help lead us, we also, collectively, have the responsibility to understand deeply what's possible and what it's going to take for us to get there,'" Briggs said.

Deloitte hosted directors at CES

Deloitte's technology team isn't meeting with CEOs and directors just in corporate boardrooms and executive suites. Earlier this month, Deloitte hosted about 40 corporate directors at CES, the annual technology convention in Las Vegas. Most hadn't ever attended the show before, and Deloitte helped show them some of its highlights, focusing in particular on the convergence of consumer electronics and enterprise technologies, Briggs said.

"We did an outreach and said, 'We'll help you navigate through the chaos, but we'll also help translate [it] into what it means to be a board director and what it might mean to the entities that you represent,'" he said.

Briggs said he got something out of the event as well — a renewed sense of wonder about the giant trade show and all that's on display there.

"It's fun to see CES through first-timers' eyes again," he said.

SEE ALSO: Most companies using AI say their No.1 fear is hackers hijacking the technology, according to a new survey that found attacks are already happening



Salesforce CEO Marc Benioff calls artificial intelligence a 'new human right' (CRM)


Marc Benioff

  • Artificial intelligence is a "new human right" that everyone should have access to, Salesforce CEO Marc Benioff said at the World Economic Forum on Wednesday.
  • Otherwise, Benioff said, the world will see another tech divide; those with access to AI will be smarter, healthier, and richer, while those without access will be weaker and poorer.
  • Debate has already emerged on the ethical uses of AI, and Salesforce recently hired a chief ethical and humane use officer to help guide its efforts.
  • Benioff also spoke about how privacy breaches and the misuse of data have led to a growing distrust of tech companies.

The next tech divide could be between those who have access to artificial intelligence and those who do not, Salesforce CEO Marc Benioff warned on Wednesday.

Artificial intelligence is becoming a "new human right," and everybody will need access to it, Benioff said in a speech at the World Economic Forum in Davos, Switzerland.

"Today, only a few countries and only a few companies have the very best artificial intelligence in the world," Benioff said. "Those who have the artificial intelligence will be smarter, will be healthier, will be richer, and of course, you’ve seen their warfare will be significantly more advanced."

By contrast, those who do not have access to AI will be "weaker and poorer, less educated and sicker," he said.

"We must ask ourselves, is this the kind of world we want to live in?" Benioff said. "This can be seen right where I live in San Francisco, where we truly have a crisis of inequality."

Salesforce hired a new executive to guide its AI efforts

Debate has already emerged about the ethics of AI and the possibility that it can be used for harm, for example, in warfare. To tackle these questions and to guide its use of the technology, Salesforce in December hired its first chief ethical and humane use officer.

Read this: Salesforce is hiring its first chief ethical and humane use officer to make sure its artificial intelligence isn't used for evil

"AI is technology like none of us have ever seen, and none of us can truly say where it’s going," Benioff said. "But we do know this: Technology is never good or bad. It’s what we do with the technology that matters."

AI isn't the only divide the tech industry is grappling with. Companies in the industry are facing a crisis of trust in the wake of privacy breaches and the mishandling of data, Benioff said.

Benioff has become increasingly vocal about economic and other inequalities. Last fall, he was a major proponent of a measure in San Francisco that would tax large companies to help the city deal with its growing homelessness problem.

Some other tech figures followed his lead in making pledges to benefit the homeless, while other CEOs disagreed with the approach.

SEE ALSO: Here's why Walmart is betting on Microsoft's AI to challenge Amazon in online and physical retail


Amazon’s cloud is slowly addressing one of its biggest criticisms as it extends another olive branch to open source developers (AMZN)


Andy Jassy, CEO of Amazon Web Services, or AWS, the retail giant's cloud-computing business.

  • Amazon Web Services announced a new open source project on Thursday called Neo-AI, which helps developers bring artificial intelligence to hardware like security systems or maybe even self-driving cars. 
  • Amazon has historically had a reputation for using a lot of open source software, without giving much back. But Neo-AI could be a sign that it's ready to be a bigger part of the open source community.
  • Neo-AI is Amazon Web Services' second-ever open source project, after Firecracker, which it launched last November.

Amazon Web Services has a certain reputation for taking a lot from the open source software community, without giving much back.

Now, the cloud giant is taking another baby step away from that image by making some of its own artificial intelligence code available for anyone to use for free. It's only the second open source project out of Amazon Web Services, which is widely considered the number-one player in the cloud computing market.

Last November, Amazon Web Services announced a machine learning feature called SageMaker Neo that allows users to train and run artificial intelligence programs on Amazon's cloud. Now, AWS is making much of the SageMaker Neo code available as open source under the name Neo-AI.

This new project will help developers program hardware platforms — like home security systems, or perhaps even self-driving cars — to run machine-learning models built with frameworks like TensorFlow, the mega-popular AI technology that originated at Google. Since Neo-AI is available as open source, anyone can use, download, or modify the code for free.

An olive branch from Amazon

Beyond the technology itself, Neo-AI could be an important effort from Amazon Web Services to mend fences with the open source world.

In recent months, Amazon Web Services has come under fire for taking open source code and reselling it to customers as a paid service. Doing so is completely legal — open source software, by its nature, can be used for any purpose, even commercial use. But Amazon's reputation for not contributing back to open source projects has worked against it, as it's perceived as happy to profit from the software, but not to contribute to making it.

Indeed, some studies have suggested that Amazon contributes very little code to open source projects compared to fellow tech giants like Microsoft, Google, Red Hat and IBM.

Neo-AI isn't the first wave in Amazon's open source charm offensive. Firecracker, announced in November, was taken by developers as a sign that AWS was finally ready to contribute significant projects to the open source world. With Neo-AI, Amazon is releasing even more of its internally-developed technology as open source.

Read more: As tensions with smaller software companies run high, Amazon is extending an olive branch with a new open-source project

Neo-AI helps make hardware smarter

Normally, developers may need to spend weeks or months manually adjusting the program so that it works on whatever hardware device they're using — different types of gadgets have different levels of computing power and even battery life, making for a lot of variables that need to be fine-tuned. 

Not only that, but the software on the device might also be a mismatch with the software the developer is using. Neo-AI eliminates these compatibility issues by converting these programs into a common format, and it also makes these programs run more efficiently on the hardware.

Neo-AI supports hardware platforms from Intel, NVIDIA, and ARM, and in the future will support Xilinx, Cadence, and Qualcomm.
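To make that compile-once, deploy-to-many-targets idea concrete, here is a minimal Python sketch. The function names are hypothetical stand-ins rather than Neo-AI's actual API; the point is the shape of the workflow: convert a framework-specific model into a common intermediate format once, then optimize that format for each supported hardware target.

```python
# Illustrative sketch of the compile-once, deploy-anywhere idea behind Neo-AI.
# The function names below are hypothetical stand-ins, not Neo-AI's real API.

def convert_to_common_format(model_path: str) -> dict:
    """Pretend to parse a framework-specific model (e.g. TensorFlow) into a
    framework-neutral intermediate representation (IR)."""
    return {"ir": f"IR({model_path})"}

def optimize_for_target(ir: dict, target: str) -> dict:
    """Pretend to tune the IR for one hardware target's compute and memory budget."""
    return {"artifact": f"{ir['ir']}-compiled-for-{target}"}

if __name__ == "__main__":
    ir = convert_to_common_format("cat_detector.pb")            # one common format...
    for target in ["intel-x86", "nvidia-gpu", "arm-cortex-a"]:  # ...many hardware targets
        artifact = optimize_for_target(ir, target)
        print(artifact["artifact"])
```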


The Pentagon hired the world's best poker bot to work for the US army



  • The Pentagon has hired an AI bot designed to win poker games.
  • Libratus, which became well-known after winning $677,000 against a group of poker experts, has been hired for two years at $10 million.
  • The bot managed to defeat four experts in "no limit Texas Hold'em".

Around two years ago, an AI bot became a sensation after defeating several expert poker players in no-limit Texas Hold'em, bagging $677,000 in play money from the poker champions by calculating how they might respond to its decisions.

Two researchers from Carnegie Mellon University built Libratus using a technology called computational game theory.

For an AI, poker is more difficult to learn than chess, for example. This is due to it not being a "perfect information game," in the sense that while players can see their own cards, they can't see those of their rivals.

Poker is a game of skill in which factors like deception come into play, so the software had to come up with betting strategies and even demonstrate the ability to bluff.

The US army has now hired the robot

One of the developers, Tuomas Sandholm, founded a startup called Strategy Robot to adapt the technology to government needs.

Remarkably, the robot has now been hired by the Pentagon for $10 million.


According to a Wired report, Libratus will now spend the next two years working "in support of" a Pentagon agency called the Defense Innovation Unit.

Some are concerned about the military's interest in AI

With China recruiting children to build bots for the military, the US army is far from the only military group interested in AI; Russia is also exploring the potential military applications of AI, with President Putin saying whoever leads in AI "will become the ruler of the world."

Read more:Robots aren't coming, they're here — these 21 jobs are what humans could be doing in 10 years instead

The military's growing interest in AI has previously been troubling for many of those working on developing the technology, with a number of Google's AI researchers having joined thousands of employees in protest against the company's work on Project Maven, a program set up to apply commercially available AI techniques to US military missions.

Sandholm, however, believes that the concerns about military interest in AI are misplaced, saying: "I think AI's going to make the world a much safer place."

SEE ALSO: AI will match human intelligence by about 2062


Novartis is betting that AI is the 'next great tool' for finding new, cutting-edge medicines. Here's how the $220 billion drug giant is using it. (NVS)



  • Artificial intelligence could improve the expensive, slow process of developing drugs.
  • Novartis is investing in AI, which executive Jay Bradner calls the "next great tool" for making new medicines. But the technology isn't quite there yet, Bradner said.
  • Bradner, who leads Novartis's drug research and development, spoke with Business Insider at the World Economic Forum in Davos, Switzerland last week.
  • Here's how the $220 billion Swiss drug giant is already using AI, and where the technology could be most promising.

DAVOS, Switzerland — Developing new medications is a notoriously expensive, slow, and failure-ridden process.

Using artificial intelligence could change all that.

AI is the "next great tool" in drug development, Novartis executive Jay Bradner told Business Insider, and the $220 billion Swiss drug giant is "extremely organized" around deploying it. As president of the Novartis Institutes for BioMedical Research, or NIBR, Bradner leads the company's drug research and discovery.

But, speaking at the World Economic Forum in Davos, Switzerland, last week, Bradner also acknowledged that the technology isn't quite there yet, noting that "examples of AI-based drugs, at present, are few and far between."

"Amazon knows just what ad to put at the bottom of the page when I order my squash racket. But the organic chemist in the fume hood at NIBR doesn't have access to a massive repository of insight when choosing the next site on the molecule to fluorinate," Bradner said, referring to a common process intended to influence a drug's properties.

Using data science to hunt for new treatments

Data science is already an entrenched part of Novartis's drug-development work. NIBR has invested in the space, employing about 400 data scientists alongside 6,000 drug hunters.

It's so key to the process that a drug-hunting team typically consists of a chemist, biologist, clinician, and data scientist, Bradner said.

Among pharma companies, Novartis is the most explicit about its shift to being a data company, Peter Lee, the corporate vice president of Microsoft Healthcare, told Business Insider in another conversation at Davos.

But all drugmakers are now thinking that way, Lee said, and trying to use data to improve everything from drug development and clinical trials to pricing and selling medicines.

Novartis is placing bets especially on the promise of AI. It employs a dozen or so in-house machine-learning experts, Bradner said, but also works strategically with external experts in that area, "of which there are many, many more."

Read more: $225 billion drug giant Novartis is taking a fresh approach to cancer treatment, and it could help prepare it for a 'doomsday' scenario

A twist on a strategy once used for Viagra

The company has identified about 12 places where AI could make drug development faster and better, according to Bradner. AI could help researchers go from a lack of clarity on a disease to having a target for medicines, he said, and transition from the starting point of a computer simulation to a promising chemical compound.

Right now, though, finding new uses for existing drugs is where AI can be most helpful, Bradner said. That's called "drug repositioning."

The strategy has long been used by pharmaceutical companies. It even created the profitable erectile-dysfunction medication Viagra, which was originally developed for chest pain.

Technology, though, has allowed for more firepower. AI can process large amounts of data from clinical trials to see if, say, a drug for heart health could also benefit patients with a rheumatic disease — "drug repositioning on steroids," as Bradner put it.
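The general flavor of that kind of analysis can be shown with a toy signature-matching sketch. The drug and disease "profiles" below are invented numbers, and real repositioning pipelines draw on far richer clinical and molecular data, but the idea of ranking diseases by how closely they match a drug's effect profile is the same.

```python
# Toy illustration of signature-based drug repositioning: compare a drug's effect
# profile against disease profiles and flag the closest matches as candidates.
# All numbers here are invented for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical effect profiles over the same set of biological markers.
drug_profile = np.array([0.9, -0.2, 0.4, 0.0, -0.7])
disease_profiles = {
    "heart_failure":        np.array([0.8, -0.1, 0.5, 0.1, -0.6]),
    "rheumatoid_arthritis": np.array([0.7, -0.3, 0.3, -0.1, -0.5]),
    "type_2_diabetes":      np.array([-0.4, 0.9, -0.2, 0.6, 0.1]),
}

for disease, profile in sorted(
    disease_profiles.items(),
    key=lambda item: cosine_similarity(drug_profile, item[1]),
    reverse=True,
):
    print(f"{disease}: similarity {cosine_similarity(drug_profile, profile):.2f}")
```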

Read more: One of the biggest drugmakers in the world thinks it has 26 billion-dollar drugs in the pipeline — here's what they aim to treat

Novartis has used this approach both to open and close doors while developing drugs, he said.

Most exciting to Bradner, though, is an even more specific prospect: Using AI to crunch huge amounts of data, including DNA, data from images, and measurements in clinics, to make connections between potential new drugs and subpopulations of patients with a disease.

Knowing exactly which kinds of patients a drug could work best in could make the costly process of drug development much more productive, and perhaps faster, helping treatments reach patients sooner while giving companies like Novartis a leg up on their competitors.


The head of healthcare at Microsoft lays out the 3 ways AI will actually transform healthcare



  • Peter Lee, the corporate vice president of Microsoft Healthcare, has a pretty straightforward thesis when it comes to tech's involvement in healthcare.
  • "For me, it's fundamentally a question of what right do we have as a company like Microsoft to be participating in healthcare," Lee said.
  • With that in mind, he sees three areas in healthcare where artificial intelligence can benefit the work being done.

DAVOS, Switzerland — For Peter Lee, the bar for big tech players like Microsoft to be in the healthcare business has to be really high.

"For me, it's fundamentally a question of what right do we have as a company like Microsoft to be participating in healthcare," Lee, the corporate vice president of Microsoft Healthcare, told Business Insider's executive editor Matt Turner at the World Economic Forum's annual meeting in Davos, Switzerland.

Microsoft for its part is using its cloud-computing service, Microsoft Azure, as a place where healthcare companies can store all the health data they're gathering, ranging from X-rays and other types of medical images to individuals' genetic profiles.

Read more: Microsoft's head of healthcare thought it was a 'career-ending move' when Satya Nadella offered him the job. Here's why he says he's now completely sucked in.

Microsoft could also use its technology to crunch through that data and use it to find new solutions with artificial intelligence. When asked what the opportunity was to use AI in healthcare, Lee replied, "It's sort of like saying, what's so great about sunshine?"

In the near term, however, Lee said there are three areas in particular where AI could come in handy.

Translating early signs of disease

The first way is through using machine learning to sift through information about proteins and the immune system. To that end, Microsoft has partnered with Seattle-based Adaptive Biotechnologies.

The partnership is centered on mapping out the immune system, with the hope of finding new ways to diagnose and treat diseases. To do that, Adaptive is producing more than 1 trillion data points that the company and Microsoft can use to train machine-learning algorithms.

Ultimately, the hope is to use that data to inform early diagnostic tests for cancer or autoimmune conditions like multiple sclerosis or celiac disease, as well as infectious disease. Lee said the partnership is particularly focused on Lyme disease.

Read more: How Microsoft's top scientists have built a big business in hacking healthcare — and helped a lot of people along the way

To sift through that information, Microsoft is applying the same machine-learning techniques it uses to translate languages.

"It just tickles the computer scientists that we're using the same code that we use for Skype Translator," Lee said.

Aiding the doctor-patient experience

A nearer-term use of AI in healthcare, Lee said, is improving the relationship between doctors and patients.

Right now, doctors have a lot of information they need to write down during a patient visit. Companies are building Alexa-like assistants to help fill out paperwork on behalf of doctors.

That's one way AI could cut down on the workload: using its language and observational tools to automate the process and catch errors doctors might make when filling out information for insurance claims.

Lee said that's going to be a competitive area of interest, with Microsoft's competitors like Amazon or IBM investing heavily in it.

'Nuts and bolts'

On the less exciting end of the spectrum, AI could be used effectively to help with what seems like the basics, such as aggregating health data and presenting it in a way that preserves patient privacy. Right now, that's not as simple as it might seem.

"It's just not obvious how to take a gigantic amount of, let's say, clinical notes out of the EMR system and ensure that data about you and your identity aren't revealed inadvertently," Lee said.


JPMorgan is in the middle of a 'massive process' of cleaning up thousands of databases, and it's hoping to unleash AI once it's finished


Daniel Pinto

  • JPMorgan co-president Daniel Pinto recently spoke to Business Insider about the complexity of making the massive amounts of data the bank collects from customers usable.
  • Artificial intelligence can only be unleashed once data is made usable in a clean and consistent way, Pinto said. 

Wall Street's dream for artificial intelligence is running into the hard facts of what's needed to bring it to life. 

At JPMorgan, the largest US bank, there are thousands of databases that still need to be cleaned and made usable before AI or machine learning techniques can be fully unleashed, according to co-President Daniel Pinto, who spoke with Business Insider on the sidelines of the World Economic Forum in Davos, Switzerland.

Chart that workload across dozens of large banks, not to mention investment firms, and the scale of the work ahead for the industry is a staggering reminder that the robot revolution is still years away. 


For years, JPMorgan built databases for particular purposes in one system only to build another for a different purpose in a second system, according to Pinto. For sophisticated AI techniques to be most effective, that data needs to be assigned the same name and migrated into the same system, or at least housed in interconnected systems. That project is just one of many being covered by JPMorgan's $11 billion in annual technology spend. 

"We are in a massive process of making that data usable, in a very clean, consistent way," Pinto said. "We have plenty of data across multiple systems that was developed over time, so often the same thing is called X here and Y there. It takes time, money and effort to really clean up all of that."

JPMorgan isn't alone. For decades, banks were at the forefront of data collection, hoovering up information about stock and bond trades, credit-card transactions and mortgage loans. But for most of that time, firms were content to take in the data and store it, with few spending much time thinking about how it might be retrieved or compared to other datasets nestled in other parts of the firm. 

Read more: There's a subtle shift underway at JPMorgan, and it shows how Amazon's influenced the Wall Street giant

According to a July 2016 McKinsey article, about 50% of the time spent by employees in finance and insurance is used for collecting and processing data. That and the large amounts of data involved make the industry one of the areas most ripe for disruption, according to the consultancy.

At Credit Suisse, the bank has focused on ensuring that any data that gets fed into AI tools is of the highest possible quality, according to the Swiss firm's Chief Technology Officer Laura Barrowman. For an AI-based tool to be efficient, the data it analyzes needs to be complete and accurate. While that might seem like a basic request, Barrowman said it's a critical one and not easily achievable in a company the size of Credit Suisse.

"Making sure that your basics are right is a fundamental for everything," said Barrowman, also speaking on the sidelines of the Davos event.

Read more: Credit Suisse's CTO says that AI could create huge opportunities on Wall Street and that banks haven't even scratched the surface

If a company is able to create consistencies across the data sets that sit within its organization, the opportunities for what AI can do at Wall Street firms are huge, she said. And they will undoubtedly provide banks revenue opportunities they might not have realized previously.

As banks wade into that new realm, they'll have to be careful to protect customer data. While many clients like the analytics banks are providing around their own data, customers are incredibly uneasy about having their own data shared with others, according to Pinto. The topic is fraught with drama in the wake of consumer-data breaches at tech giants like Facebook.

“You need to be very careful to protect client privacy," Pinto said. "A lot of clients don’t want their data used elsewhere, even in aggregate.”


Artificial intelligence startup Databricks is now worth $2.75 billion after raising $250 million from Andreessen Horowitz and Microsoft


Ali Ghodsi, the CEO and co-founder of Databricks

  • Artificial intelligence startup Databricks announced Tuesday that it has raised $250 million in a round led by Andreessen Horowitz.
  • Microsoft and other firms also participated in the round, and the startup is now worth $2.75 billion.
  • Databricks' close ties with academia, and UC Berkeley in particular, have helped it gain traction in the AI space.

Companies are eagerly lining up to use artificial intelligence, but the problem is, they don't know how to use it.

That's the problem the buzzy artificial intelligence startup Databricks aims to solve. The phrase "artificial intelligence" tends to bring robots and autonomous cars to mind, but Databricks helps companies with less flashy problems, like analyzing massive data sets and building a database for genetic diseases and drug discovery.

"Many companies are excited to AI, but they're struggling. It's because the problems they're trying to solve is different from sexy A.I.," Ali Ghodsi, CEO and co-founder of Databricks, told Business Insider. "We're the only company that focuses on how can you do the boring things and A.I. together. We don't see any vendors out there that try to do that."

And the demand is growing. Databricks has generated over $100 million in annual recurring revenue, and its subscription revenue tripled in the last quarter of 2018. And on Tuesday, Databricks announced it has raised $250 million in a round led by Andreessen Horowitz. Microsoft and other firms also participated. The startup is now worth $2.75 billion.

Read more: A $1 billion data-crunching startup that was initially rejected by investors aims to do for drug companies what it did for Netflix

In total, Databricks has raised $498.5 million. With the funding, Databricks plans to expand its quickly growing presence in Asia, Europe and the Middle East. It wants to expand into the health, fintech, media, and entertainment sectors. Meeting this demand has been Databricks' biggest challenge, Ghodsi says.

Nasdaq's chief technology officer told Business Insider this week that Databricks could be suited for an IPO in the near future.

The UC Berkeley connection

Databricks has a close relationship with academia. Ghodsi is an adjunct assistant professor at UC Berkeley, and the technology behind Databricks started with academic research. Many graduate students work closely with Databricks.

Databricks is also known for its early project, Apache Spark, which started at UC Berkeley. Although Spark is still a key ingredient at the company, it's now only a small part of what Databricks does. Databricks has shifted its focus to machine learning, and it has seen over 100,000 downloads of its new open-source machine-learning project, MLflow.
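For readers unfamiliar with MLflow, the heart of the project is experiment tracking. A minimal example looks roughly like the sketch below, assuming the mlflow package is installed; the parameter and metric values are placeholders rather than anything from Databricks.

```python
# Minimal MLflow tracking sketch: record one run's configuration and result.
# The values logged here are placeholders for illustration.
import mlflow

with mlflow.start_run(run_name="example-run"):
    mlflow.log_param("learning_rate", 0.01)          # a hyperparameter for this run
    mlflow.log_metric("validation_accuracy", 0.87)   # a result to compare across runs
```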

"As an academic in a university, you always want your research to have an impact. You publish the paper, write the software, but most of the time, people don't pick up the software and it doesn't have the impact you want," Ghodsi said. "It wasn't until we started Databricks that we get closer to that."

Although Ghodsi has to work extra hours as both a CEO and a professor, he says Databricks has benefitted from exposure to both worlds.

"The A.I. space is evolving so fast," Ghodsi said. "You have to stay on top of that innovation cycle. It would be advantageous if more companies were more closely collaborating with universities."

Databricks' secret sauce

Ghodsi says that Databricks has been able to grow so fast because there aren't many other companies doing the same thing. Databricks has a platform that supports both machine learning algorithms and data. Meanwhile, he says, most vendors focus on only one or the other, leaving customers to do the manual work of stitching the two together.

"Frankly speaking, the market is yearning for AI solutions for these enterprises," Ghodsi said. "There isn't much out there to help them. These companies really want to move the needle. There's incredible demand. We've been trying to ramp up our efforts to get our product to as many customers as possible."

Databricks started with all its web services on Amazon, but now more customers are using multiple clouds or a hybrid cloud. Just last year, it announced a partnership with Microsoft to launch Azure Databricks, and now Microsoft has become Databricks' newest investor.

As a cloud, artificial intelligence and open source company, Ghodsi recalls that in Databricks' early years, people were skeptical on all three fronts. However, the market has changed dramatically in the last two to three years.

"All three of those are now central to everyone's strategy," Ghodsi said. "We're just really lucky to be at the center of those three trends."

SEE ALSO: Microsoft has a chance at beating Amazon for the 'most important cloud deal ever,' and it could change the balance of power in the cloud wars




I've been living with the Google Home Max and Apple's HomePod side by side for almost 6 months, and they both have one major problem (GOOG, GOOGL, AAPL)



  • I've been testing the Google Home Max and Apple's HomePod side by side for nearly six months.
  • Both devices have their pros and cons, but they share a larger issue: They both listen in and speak out of turn constantly. 
  • At this point, I like both devices equally, but if you're concerned about your privacy, don't buy either one. 

For going on six months, I've been conducting an experiment at my apartment: I have Google's Home Max and Apple's HomePod living side by side in an attempt to find out which one is better. 

The Google Home Max was announced in September 2017. I've been using it for a while, and I've documented my love for the device time and again.

The HomePod — Apple's answer to the Home Max, the Echo Plus, and other high-end smart speakers on the market — was announced earlier, in June 2017, but got a delayed start and didn't make it to market until last February.  

The two speakers cost about the same ($350 to $400), bear a similar look and feel (rounded, relatively blob-like, available in two colors), and do mostly the same things (play your music, answer questions, control your smart home). I was curious what they'd be like side by side, if one would far outpace the other in terms of usefulness and effectiveness.

I expected a clear winner, or at least some very steep competition. 

But what I found over the course of several months was something a bit more disturbing. 

SEE ALSO: Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they're working on

Before we get to the more serious issues, I want to talk about the surface-level pros and cons. Let's start with the Google Home Max, and all the things I love and hate about it.

With the Google Home Max, there are plenty of positives I could say. Despite a steep initial price ($400!) I've found that it's held up after about a year. I still love it just as much as the day I set it up, so anyone who shells out for it likely won't regret it, even 12 months later. 

The best things about the device are the sound quality (excellent), the intelligence of the Google Assistant (high), and the general ease of use. It's rare that the Home Max doesn't work how I want it to. More recently, I got the hang of multi-room audio using the Max and my Home Minis, and it's been game-changing. 

However, I do have some reservations about the Google Home Max.

I don't like that using the device forced me to switch over from Apple Music, which I loved, to Spotify, which I do not love. (Apple Music doesn't work with Google Assistant, so I would have missed out on the main selling point of the Home Max if I hadn't switched.) 

The other issue is that the Max doesn't easily connect as a soundbar for my TV. Sure, there are workarounds, but it's not a seamless experience. I know the device isn't quite meant for that, but if you're investing $400, it would be nice if it could do more. 



Now, for the HomePod. There are plenty of positives and negatives about that device, too.

Let's start with the good.

I'm surprised to say that after several months of living with the HomePod, I love the look and feel of it. Its shape is unassuming, even a little comforting, like a marshmallow. I also appreciate that when you say, "Hey Siri," the top lights up with a swirling rainbow orb. Design-wise, the HomePod is an A+ for me. 

Plus, the HomePod is a great device for Apple TV users. I paired my HomePod with my Apple TV, and it totally changed my TV viewing experience. Now, instead of my garbage TV speakers, I listen to all my movies and TV through the HomePod. Not only do I no longer need to crank the volume just to hear anything, watching movies and shows on my TV is simply more enjoyable. 

But there are a few downsides.

The HomePod sounds solid. I hesitate to say excellent, but I think that may have something to do with the placement: My HomePod lives on my TV stand near a wall, so the 360-degree sound is sort of lost on me. The Google Home Max almost always sounds better, since the sound is coming at me straight-on. It's not the HomePod's fault, exactly, but I can't imagine everyone who wants a HomePod will have the right setup to thoroughly take advantage of the design. 

There's also one more glaring downside: Siri just isn't that smart. If I have a random question, want to play a specific song or album, or basically do anything besides alter the volume, I do not ask Siri to do it. That's fine — I wouldn't have bought the HomePod for its smarts, anyway — but it's a bit frustrating that in 2019, Apple's smart assistant still hasn't improved much. 



Now for the real issue: Both the Google Home Max and the HomePod are starting to really freak me out.

I have been a proponent of smart speakers for a while now. I've never been overly concerned about them listening to me, or spying on me, or recording my conversations and sending them to people in my contacts. Those fears are valid, but they're not my fears. 

Lately, though, I've been growing more concerned. 

Both the HomePod and the Google Home Max have been randomly speaking — or lighting up and listening in without being prompted — several times a week. Both devices will listen in when anything is said that remotely resembles their wake words, or they'll just jump in on conversations, uninvited.

A prime example: A few weeks ago, my boyfriend and I were sitting on the couch watching a new episode of "True Detective." In one scene, the main characters are poking around the yard of a murder suspect while he isn't home. It was a tense — and more importantly, quiet — scene, if not a particularly scary one. 

All of a sudden, without warning, Siri quietly said, "Hi." 

It took my boyfriend and me a full beat to realize it didn't come from the show itself. Siri had just made her presence known for no reason at all.

This isn't unusual. Siri will speak, unprompted, all the time. One time, she literally spoke in tongues while my sister and I were chatting across the room. Other times, Siri will suddenly say things like, "I'm listening," which I know is standard when Siri is activated and then you don't say anything, but is not exactly reassuring to hear, especially when I didn't prompt her in the first place.

If you think Google Assistant is any better, though, you'd be wrong. The Google Home Max listens in all the time. It doesn't wait for a pesky wake word — in fact, it appears to have totally emancipated itself, and it now listens in whenever it feels like it, at strange and sometimes inopportune moments. 

For example, in that same episode of "True Detective" a few weeks back, Mahershala Ali's character says something truly heinous that I cannot repeat here (if you've seen season 3, episode 2, you may know what I'm talking about). For some reason, Google Assistant chose that particular moment to tune in. I can promise you, what he said definitely didn't resemble "Hey Google." 

Since then, we've been deeply concerned about what sort of information Google has tied to my account, and I'm a little worried about what my search results may start looking like.

This happens all the time. About 50% of the time someone says "Hey" or "OK" in my apartment, the Google Home Max starts listening. While it's supposed to wait for the full phrase — "Hey Google" or "OK Google" — it often ignores that mandate in favor of listening in on what we're saying. The device doesn't talk out of turn as often as the HomePod does, but the Google Home Max has a major eavesdropping problem.




President Trump orders all hands on deck to keep the US ahead of China in the AI arms race



  • President Donald Trump called on the US to prioritize advancements in artificial intelligence in a new executive order. 
  • Trump, however, did not allocate any funds for the effort, but told aides to budget the costs required to maintain a lead in AI.

After months of pushing China to retreat from its strategy to dominate the technologies of the future, President Trump today ordered US agencies to prioritize keeping the US ahead in the development and deployment of artificial intelligence.

He did not allocate specific sums of money — and it will be expensive to match Chinese spending — but told aides to tally up what it will cost to maintain the lead, and to budget it.

Trump's executive order comes amid tense brinkmanship between the US and China, driven by a trade war declared by the US.

  • The order brings new focus to the core of US unhappiness: Beijing's strategic plan "Made in China 2025" and its goal of capturing the commanding heights in AI, quantum computing, biotechnology and more.
  • The bottom line: This may be an attempt by Trump to signal deeper resolve ahead of coming new talks with Chinese leader Xi Jinping, possibly in March.

Simply signaling an all-hands push by the White House on AI is valuable, says Michael Allen, of Beacon Global Strategies and a former member of President George W. Bush’s National Security Council.

  • "This has a galvanizing effect and elevates AI as a critical national priority," Allen tells Axios.
  • "I read [the order] as a demand for the federal agencies to give the White House specifics for what steps they are going to do to make AI a priority and what resources they need to make those steps a reality," says Gregory C. Allen, an adjunct senior fellow at the Center for a New American Security. "Overall, this [order] is great news."

The billion-dollar question is how the government's new priorities will be funded.

  • Trump set aside no new money in his executive order. When Axios asked how the initiative will be funded, a senior administration official said that money is the purview of Congress.
  • While true that Congress is in charge of appropriating funds, the White House can move existing money around, says William Carter, a technology policy expert at the Center for Strategic and International Studies.
  • "If they can find $5 billion for a border wall, they should be able to find a few billion for the foundation of our future economic growth," says Carter.

What the plan does do, however, is tee up civilian agencies to make AI investments and encourage them to do so.

So far, US funding for AI has been anemic.

  • An analysis from Bloomberg Government found that the Pentagon's R&D spending on AI has increased from $1.4 billion to about $1.9 billion between 2017 and 2019. DARPA, the Pentagon's research arm, has separately pledged $2 billion in AI funding over the next five years.
  • It's hard to put a number on the entire federal government's AI spend, says Chris Cornillie, a Bloomberg Government analyst, because "most civilian agencies don't mention AI in their 2019 budget requests." (The new executive order would keep better track of civilian agencies' AI funding.)

These numbers pale in comparison to estimates of Chinese spending on AI. Exact numbers are hard to come by, but just two Chinese cities — Shanghai and Tianjin — have committed to spending about $15 billion each.

One element of funding is building and maintaining talent superiority, and education is a pillar of Trump's executive order.

  • A key issue is whether threats to slow down immigration and make it more difficult for foreign students to attend US schools will detract from US competitiveness, says Elsa Kania, an adjunct senior fellow at CNAS.


Trump's executive order on artificial intelligence is a drop in the bucket compared to the $150 billion China's spending on AI



  • Artificial intelligence is advancing rapidly, and the two countries at the forefront of AI are the United States and China.
  • The US government invests only $1.1 billion in non-classified AI technology, far lower than the $150 billion being committed by China over the next decade.
  • To address these concerns, President Donald Trump announced an executive order for a national strategy called the "American AI Initiative."
  • What is unclear in the president's new initiative, however, is how much AI funding is being provided and how it is being implemented.

Artificial intelligence is advancing rapidly. It is powering autonomous vehicles and being applied in areas from health care and finance to retail sales and national defense.

As noted in a 2018 Brookings Institution report, "AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking."

Yet most of the current AI impetus in the United States comes from the private sector. America has many of the most innovative technology firms in the world and our talent pool is quite strong. Our system of higher education is the envy of the world and thousands of foreign students come to the United States every year to learn science, math, and engineering.

But those achievements should not lull Americans into complacency or a false sense of security. There is considerable concern that the US federal government is not doing enough to support AI research and deployment.

The US is outgunned when it comes to AI spending

As an illustration, OpenAI co-founder Greg Brockman has testified before Congress that our national government invests only $1.1 billion in non-classified AI technology. That is far lower than the $150 billion being committed by China over the next decade.

It also is worth noting that in addition to this Chinese investment and as a result of the 19th Party Congress, President Xi Jinping has called for China to surpass the US technologically by 2030. That kind of strategic vision, continuity of leadership, and nearly unlimited resources do not augur well for us.

Read more: President Trump orders all hands on deck to keep the US ahead of China in the AI arms race

These are not just idle promises on the part of the Chinese. Scientists in that country have made remarkable progress over the past decade in AI research. According to a 2018 Tsinghua University report, "China leads the world in AI papers and highly cited AI papers." It also has "more AI patents than [the] US and Japan."

The traditional view that China lags behind American ingenuity needs to be updated in light of recent scientific advances.

The 'American AI Initiative' offers vague solutions to a complex problem

To address these concerns, President Donald Trump yesterday announced a new executive order for a national strategy called the "American AI Initiative." Michael Kratsios, the deputy assistant to the president for technology policy at the White House, has explained that America needs a national AI strategy.

In a Wired op-ed, he argues "under the American AI Initiative, federal agencies will increase access to their resources to drive AI research by identifying high-priority federal data and models, improving public access to and the quality of federal AI data, and allocating high-performance and cloud computing resources to AI-related applications and R&D."

Read more: The Pentagon hired the world's best poker bot to work for the US army

The initiative outlines several welcome steps. It seeks to increase access to federal data, provide financial support for R&D, enhance digital infrastructures, and improve workforce development. Those are all noble goals where the United States needs to do better. For example, having better access to government data would strengthen the training of AI algorithms and help software overcome the inherent biases of incomplete or misleading information.

And having faster broadband, more ubiquitous mobile networks, and faster computers is vital for AI deployment. New advances in autonomous vehicles, remote surgery, streaming videos, and national security require improvements in computing capacity. It will be impossible to take advantage of the full capabilities of AI without this type of progress.

A related competition also is taking place in regard to 5G networks. As noted recently by Brookings Fellow Nicol Turner-Lee, this is the high-speed mobile communications technology that will enable enhanced communications and advanced technology solutions. China has invested billions there and this is just one more manifestation of the technology competition that the US needs to take seriously.

What is unclear in the president's new initiative, however, is how much AI funding is being provided and how it will be implemented. New announcements, such as the initiative formed to combat the opioid crisis, have sometimes been introduced with great fanfare but shown little impact. Without additional funding for research, workforce development, and infrastructure, the new initiative will likely fall flat.

On the implementation front, there also are question marks regarding the executive order. It is one thing to call for inter-agency cooperation and coordination, and another to develop effective mechanisms that do that. Agencies need to coordinate, but they have incentives to pursue their own vision, not that of the White House or Office of Management and Budget.

The Department of Defense is ahead of the game

One promising sign is that the US Department of Defense has already announced its implementation plan. In a press release put out this week, it said, "The impact of artificial intelligence will extend across the entire department, spanning from operations and training to recruiting and health care. The speed and agility with which we will deliver AI capabilities to the warfighter has the potential to change the character of warfare. We must accelerate the adoption of AI-enabled capabilities to strengthen our military, improve effectiveness and efficiency, and enhance the security of our nation."

But the DoD is far ahead of many of the US domestic departments. It has set up a new AI Center, hired AI experts, and set aside money for AI deployment. Along with similar actions from federal intelligence-gathering agencies, these steps go beyond what has happened in the non-defense part of government.

In those places, there has been little action, limited funds targeted on bringing AI into the government, and few people employed with AI skills.

Moving forward, it will be important to bring the domestic agencies into the AI era. In the private sector, AI is improving efficiency, boosting productivity, and bringing creative digital solutions into organizations. We need to do the same thing across all of government so the public sector can show the same type of progress seen outside of government.

Making progress on vision, funding, and infrastructure is important for national competitiveness and national security. Adversaries are deploying autonomous weapons systems, AI-based intelligence-gathering and assessment, and remote imagery based on satellite and drone data.

Having systems that are faster, smarter, and more coordinated is necessary in order to protect the country. Until we see the details for the comprehensive implementation of the national AI initiative, it will be hard to assess the long-term impact and effectiveness of the president's AI executive order and whether it can redress the lengthening gap between the US and China on AI, 5G, and other emerging technologies.


3 things we learned from Facebook's AI chief about the future of artificial intelligence



  • Giving machines "common sense" to learn about the world through data will be a big area of research in artificial intelligence over the next decade, says Facebook's chief AI scientist Yann LeCun.
  • Such an achievement could enable machines to make more complex decisions with more context, which could help firms like Facebook detect hate speech more accurately.
  • But to improve the performance of artificial intelligence, hardware needs to evolve. 

In recent years, many of the world's biggest tech companies — from Google to Facebook and Microsoft — have been fixated on artificial intelligence and how it can be incorporated into nearly all of their products. Google, for example, rebranded its Google Research division as Google AI ahead of its developer conference this year, during which AI was featured front and center. Mark Zuckerberg also explained how Facebook is using AI in an attempt to crack down on hate speech on its platform during its F8 conference in May.

The AI market is also booming as companies continue to invest in cognitive software capabilities. The International Data Corporation indicates global spending on AI systems is expected to hit $77.6 billion in 2022, more than tripling the $24 billion forecast for 2018. 

But the industry still has a long way to go, and much of its progress could depend on whether academics and industry players will succeed in finding a way to empower computer algorithms with human-like learning capabilities. Systems powered by artificial intelligence, whether you’re referring to the algorithms Facebook uses to detect inappropriate content or the virtual assistants made by Google or Amazon that power the smart speakers in your home, still can’t infer context like humans can. Such an advancement could be critical for Facebook as it ramps up its efforts to detect online bullying and identify content related to terrorism on its platforms.  

“There are cases that are very obvious, and AI can be used to filter those out or at least flag for moderators to decide,” Yann LeCun, chief AI scientist for Facebook AI Research, said in a recent interview with Business Insider. “But there are a large number of cases where something is hate speech but there’s no easy way to detect this unless you have broader context ... For that, the current AI tech is just not there yet.”

A key element in advancing the field of artificial intelligence, particularly when it comes to deep learning, will be ensuring that there’s hardware capable of supporting it. That’s the big topic LeCun is addressing at the International Solid-State Circuits Conference on Monday, where he’s discussing a new research paper outlining key trends that will be important for chip vendors and researchers to consider over the next five to 10 years. “Whatever it is that they build will influence the progress of AI over the next decade,” he said.

Ahead of the conference, LeCun spoke with Business Insider about where the field of artificial intelligence is headed, what it could mean for the devices we use in everyday life, the state of AI today, and the biggest challenges that lie ahead. Below are key takeaways from our conversation.

Machines have to get much better at power consumption in order for AI to improve.

Imagine a vacuum that’s not only smart enough to map your living room so that it doesn’t clean the same spot twice, but is also capable of detecting obstacles before bumping into them. Or a smart lawnmower that can intelligently avoid flowerbeds and branches as it trims your lawn. For gadgets like these to work and become prevalent — in addition to technologies that companies like Facebook and Google parent Alphabet are investing in, like augmented reality and self-driving cars — LeCun says more power-efficient hardware is needed. Such an advancement isn’t just necessary for technologies like these to thrive, but also for improving the way companies like Facebook identify the content of photos and videos in real time. Understanding what’s happening in a video, transcribing that activity into text, and then translating that text into another language so that people around the world can understand it in real time requires “enormous" amounts of computing power, LeCun says.

We’ll continue seeing AI advancements in smartphones in the near term before improvements appear elsewhere. 

In the next three years, LeCun believes most smartphones will have AI built directly into the hardware through a dedicated processor, which would make features like real-time speech translation more prevalent on phones. This likely isn’t a surprise to those who have been paying close attention to the smartphone industry in recent years, as companies such as Apple, Google, and Huawei have been incorporating AI more closely into their mobile devices, which LeCun says will enable “all kinds of new applications.”

Giving machines “common sense” will be a big focus for AI research in the next decade.

While humans often learn about the world through general observation, computers are typically trained to perform specific tasks. If you want to design an algorithm that can detect cats in photos, for example, you’d have to teach it what a cat looks like by exposing it to a large trove of data, such as thousands of photos labeled as containing cats. But the holy grail for pushing AI forward over the next decade, according to LeCun, lies in perfecting a technique known as self-supervised learning: enabling machines to learn in a general way about how the world works from data itself, rather than just learning how to solve one particular problem, like identifying cats.

“If we actually train [algorithms] to do this, there is going to be significant progress in the ability of machines to capture context and make decisions that are more complex,” said LeCun, who added that the technique currently works reliably only for text, not for videos and images. Such a breakthrough could be what companies like Facebook need to improve content moderation on their platforms, though there’s no telling when it will come, LeCun says: “This is not something that’s going to happen tomorrow.”
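To make the contrast concrete, here is a minimal, hypothetical sketch of a self-supervised objective, assuming PyTorch and a toy character-level corpus; none of it comes from LeCun or Facebook. Instead of learning from human-provided labels such as “cat,” the model is trained to predict a character that has been hidden from it, so the data itself supplies the supervision.

# Hypothetical sketch of self-supervised learning: predict a masked character
# from its neighbors. The "labels" come from the data itself, not from humans.
# Assumes PyTorch; the corpus and model sizes are toy values for illustration.
import torch
import torch.nn as nn

corpus = "machines can learn about the world by predicting missing pieces of it " * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

window = 5          # characters of context on each side of the hidden character
vocab = len(chars)

class MaskedCharModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim * 2 * window, vocab_size)

    def forward(self, context):             # context: (batch, 2*window)
        e = self.embed(context).flatten(1)  # concatenate the context embeddings
        return self.head(e)                 # logits over the hidden character

def sample_batch(batch_size=64):
    # Each example is (surrounding characters, the character that was hidden).
    centers = torch.randint(window, len(data) - window, (batch_size,)).tolist()
    ctx = torch.stack([torch.cat([data[c - window:c], data[c + 1:c + 1 + window]])
                       for c in centers])
    target = data[torch.tensor(centers)]
    return ctx, target

model = MaskedCharModel(vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    ctx, target = sample_batch()
    loss = loss_fn(model(ctx), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.3f}")

A supervised cat classifier would need a human-labeled photo for every example it learns from; here the corpus itself provides an effectively unlimited training signal, which is the property LeCun is pointing to.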


This robot that can instantly find Waldo might be my favorite use of artificial intelligence yet


The "There's Waldo" robot

  • Everyone loves a good "Where's Waldo?" puzzle — unless you can't actually find Waldo, in which case it starts to get frustrating.
  • Don't feel too bad: One company actually built a robot that can find Waldo faster than most humans probably could.
  • The robot, called "There's Waldo," uses computer vision and machine learning to spot Waldo — it's been able to find the hidden character in as little as 4.5 seconds.
  • Here's how "There's Waldo" works:

Creative agency redpepper built a camera-mounted robotic arm and connected it to Google's machine learning service AutoML, which analyzes the faces on any given page to find Waldo.



If "There's Waldo" can find a face in the puzzle with 95% confidence or higher, it will move its mechanical arm to point at any and all Waldos on the page with its creepy rubber hand.



The robot arm, which is controlled by a tiny Raspberry Pi computer, was programmed in Python to extend and take a photo of the "Where's Waldo?" puzzle.
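As a rough illustration of how those pieces fit together, here is a hypothetical Python sketch of the control loop. It is not redpepper's actual code: detect_faces() and point_at() are invented placeholders for the AutoML Vision call and the Raspberry Pi servo control, and they return canned results only so the sketch runs, but the 95% confidence gate mirrors the behavior described above.

# Hypothetical sketch of the "There's Waldo" control loop; not redpepper's code.
# detect_faces() and point_at() are invented stand-ins for the AutoML Vision
# request and the Raspberry Pi servo commands.
from dataclasses import dataclass
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.95  # only point when the model is at least 95% sure

@dataclass
class Face:
    x: float           # normalized page coordinates of the face center
    y: float
    confidence: float  # model's confidence that this face is Waldo

def detect_faces(photo_path: str) -> List[Face]:
    """Placeholder for the AutoML Vision call; returns canned results for illustration."""
    return [Face(x=0.62, y=0.31, confidence=0.97), Face(x=0.10, y=0.80, confidence=0.40)]

def point_at(position: Tuple[float, float]) -> None:
    """Placeholder for the servo control; just reports where the arm would point."""
    print(f"pointing the rubber hand at page position {position}")

def find_waldo(photo_path: str) -> int:
    """Point at every face the classifier is confident is Waldo and return the count."""
    matches = [f for f in detect_faces(photo_path)
               if f.confidence >= CONFIDENCE_THRESHOLD]
    for face in matches:
        point_at((face.x, face.y))
    return len(matches)

print(find_waldo("waldo_page.jpg"), "likely Waldo(s) found")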




Microsoft releases new apps that make augmented reality way more helpful to businesses (MSFT)


Microsoft HoloLens

  • Microsoft announced new Dynamics 365 artificial intelligence and mixed reality products on Thursday, and they will be released in April.
  • Microsoft's new mixed-reality applications help frontline workers collaborate remotely and help buyers visualize the products they want to buy in different colors or with different enhancements.
  • Microsoft's new artificial intelligence products help with automation, reducing fraud, and providing customer insights.

Microsoft is betting that its big push into mixed reality can make a difference in businesses like manufacturing, automotive, sales, energy, and more.

Mixed reality is similar to virtual reality, in that it allows you to view 3D computer-generated images. The difference is that users can use a device, like Microsoft's own HoloLens headset, or even a smartphone camera, to see and interact with these digital objects in the real world. 

On Thursday, Microsoft announced new AI and mixed reality applications to be released in April as part of Dynamics 365, its subscription-based line of business applications. 

Alysa Taylor, corporate vice president of Microsoft Business Applications and Global Industry, says that Microsoft created these applications because customers in those specific industries asked for them.

"It really is about giving customers the set of applications they need to transform," Taylor told Business Insider. "It's really about enabling organizations take all that high value data that help them move their business forward."

The new apps

One of Microsoft's new mixed reality applications is called Remote Assist, which allows technicians to use their smartphone or PC to remotely dial in to a HoloLens headset and see what the wearer sees. The viewer can overlay arrows or even scribbles to call the remote worker's attention to something. 

The other new tool is Product Visualize, which allows sellers, especially those in manufacturing, healthcare, and automotive, to showcase and customize their products. For example, a car salesperson can show a customer a car and display different features and colors in real time, helping them visualize their purchase.

Dynamics 365 Remote Assist

Read more: Here's why Walmart is betting on Microsoft's AI to challenge Amazon in online and physical retail 

"These are really designed to help sales, remote workers, and people doing retail or space planning to bring that digital environment in with how their organization operates," Taylor said.

Microsoft itself has already started using its mixed reality applications to design its flagship stores' floor plans and analyze customers' foot traffic.

"We wanted to bring in mixed reality to have new experiences in a hands-free environment but also bring the marriage of that physical data with the application data," Taylor said. "That's fundamentally why we got into the space."

Companies are already using Microsoft's mixed reality technology in their business. Toyota has started using the tech to figure out where to safely lay out equipment on the manufacturing floor, and to create augmented reality training programs. Chevron has also been using the app to survey oil rigs and pinpoint problems to communicate back to its headquarters. This can reduce costs and safety risks.

Dynamics 365 Product Visualize

Microsoft's new AI applications are part of a line of Dynamics 365 AI applications first announced in the fall, which will help customers automate customer service, detect fraud, and improve other aspects of their business. Microsoft has been investing in its AI technology and making acquisitions to supplement it, such as its planned acquisition of XOXCO.


The AI tech behind scary-real celebrity 'deepfakes' is being used to create completely fictitious faces, cats, and Airbnb listings


A deepfake blending Steve Buscemi's face with Jennifer Lawrence's

  • A new crop of websites shows the disturbing potential of deepfake technology.
  • The sites present pictures of faces, cats, and buildings that are completely fake but look incredibly real.
  • One of the sites' creators says even people without computer-programming experience can use freely available tools to create fake pictures in a couple of hours.
  • The Uber engineer behind another one of the sites says he made the site to "raise public awareness" about the new AI technology.

Deepfake technology has caused a stir with the eerily realistic but completely fake depictions it can produce of celebrities, such as Scarlett Johansson appearing in porn videos and former President Barack Obama calling Trump a "dips---."

Now, a crop of websites have emerged that highlight just how pervasive and consequential the technology is likely to become. 

ThisPersonDoesNotExist.com serves up a rotating gallery of pictures of different faces — but each face is completely fake and computer-generated.

The site can create these AI-based faces using something called a generative adversarial network (a GAN). As The Next Web explains, these GANs pit two algorithms against each other — a generator and a judge. The generator creates fake depictions of something and attempts to fool the judging algorithm into believing it's legit. Each image the GAN spits out is a case where the generator succeeded in fooling the judge.

However, ThisPersonDoesNotExist.com uses a specific algorithm called StyleGAN, developed by the chipmaker Nvidia. The code was first published in a research paper and is publicly available on GitHub. (Nvidia declined to comment because the paper is currently under peer review, during which it can't talk about it with the media "according to submission rules.")
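For a sense of what pitting two algorithms against each other looks like in practice, here is a minimal, self-contained GAN training loop written in PyTorch. It is nothing like StyleGAN in scale (it only learns to generate numbers drawn from a one-dimensional bell curve), but the generator-versus-judge structure is the same idea.

# Minimal GAN sketch in PyTorch: a generator learns to mimic samples from a
# Gaussian distribution while a "judge" (discriminator) learns to tell real
# samples from generated ones. Toy-scale illustration only, not StyleGAN.
import torch
import torch.nn as nn

real_mean, real_std = 4.0, 1.25   # the "real data" distribution to imitate
noise_dim, batch = 8, 128

generator = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, 1))
judge = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(judge.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # 1) Train the judge: real samples should score 1, generated samples 0.
    real = torch.randn(batch, 1) * real_std + real_mean
    fake = generator(torch.randn(batch, noise_dim)).detach()
    d_loss = (bce(judge(real), torch.ones(batch, 1))
              + bce(judge(fake), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the judge score its output as real.
    fake = generator(torch.randn(batch, noise_dim))
    g_loss = bce(judge(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        sample = generator(torch.randn(1000, noise_dim))
        print(f"step {step}: generated mean {sample.mean().item():.2f}, "
              f"std {sample.std().item():.2f}")

Every fake that fools the judge forces the judge to get stricter, and a stricter judge forces the generator to improve; StyleGAN runs the same loop, at vastly larger scale, over photographs of faces.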

Several other developers have used StyleGAN to build similar sites showing fake cats, fake anime characters, and even fake Airbnb listings.

A developer behind one of these websites explained that he took on the project in order to demonstrate an important point about AI and neural networks: This technology can be used to easily fool people into believing fake and doctored images. Experts have raised concerns that these sophisticated tools could be weaponized for furthering fake news and hoaxes.

"This means that just about anyone with a couple hours to kill could create something just as compelling as I did," Chris Schmidt writes on his website, ThisAirbnbDoesNotExist.com. "[AI is] now sufficiently advanced that they can often fool folks, especially if they’re not looking very hard."

Check out all the different ways the technology is being used to create fake pictures that raise troubling questions about our perception of reality: 


ThisPersonDoesNotExist.com

How it works: Every time you refresh the website, the StyleGAN creates a new AI-generated face. The generator uses a dataset of faces from Flickr. (A rough sketch of that refresh mechanic follows this entry.)

Created by: Philip Wang, former Uber software engineer. He shared the website in a public Facebook group about artificial intelligence and deep learning.

"I have decided to dig into my own pockets and raise some public awareness for this technology," Wang wrote in his post in the Facebook group.

Link: ThisPersonDoesNotExist.com
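Here is that refresh mechanic as a hedged Flask sketch. It is not Wang's code: generate_face() is an invented stand-in that returns random noise where the real site would sample a pretrained StyleGAN generator, but the serve-a-fresh-image-per-request structure is the part being illustrated.

# Hypothetical sketch of a "new fake face on every refresh" site; not Wang's code.
# generate_face() is an invented stand-in for sampling a pretrained StyleGAN model.
import io
import numpy as np
from flask import Flask, send_file
from PIL import Image

app = Flask(__name__)

def generate_face() -> Image.Image:
    """Placeholder: the real site would run a pretrained StyleGAN generator here."""
    pixels = (np.random.rand(256, 256, 3) * 255).astype("uint8")  # noise stand-in
    return Image.fromarray(pixels)

@app.route("/")
def fresh_face():
    buf = io.BytesIO()
    generate_face().save(buf, format="PNG")
    buf.seek(0)
    # A new image is sampled on every request, so each refresh shows a new "person".
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run()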

 



TheseCatsDoNotExist.com (or ThisCatDoesNotExist.com)

How it works: Nvidia's code on GitHub includes a pretrained StyleGAN model and a dataset for applying the technique to cats.

Created by: Two different developers; two websites have since emerged.

One version, ThisCatDoesNotExist.com, was created by Wang, the same person behind ThisPersonDoesNotExist.com. Like the original website, this site shows one AI-generated cat per page refresh.

The other version, TheseCatsDoNotExist.com, was created by Australian developer Nathan Glover. He posted the link to his site on Twitter, and wrote he had generated over 30,000 fake cats. The website shows rows of these cats at the same time, but they change each time the page is refreshed.

Links: ThisCatDoesNotExist.com, TheseCatsDoNotExist.com



ThisAirbnbDoesNotExist.com

How it works: Each time the page is refreshed, the website shows a new fake Airbnb listing, complete with AI-generated room pictures, a host name and face, and a listing description. "They are all fevered dreams of computers," the website says.

Created by: Christopher Schmidt, an engineer working on open source code at Google. In the "About" section on his Airbnb listing's site, Schmidt writes that he was able to produce StyleGAN content without any "real experience with neural networks" or his own "fancy computing resources."

"This means that just about anyone with a couple hours to kill could create something just as compelling as I did," Schmidt writes. "While there are parts of the experience that are weak, overall, I think that it works: the listings are often dubious, but typically plausible enough that they would survive a quick glance."

Link: ThisAirbnbDoesNotExist.com




IBM's Watson Anywhere lets customers run AI on any cloud they want, and it's a sign that IBM is pulling back from plugging its own cloud (IBM)


IBM CEO Ginni Rometty

  • On Feb. 12, IBM announced Watson Anywhere, which allows customers to run IBM's flagship artificial intelligence service on any cloud they want or even on their own equipment.
  • Analysts believe this is an effective strategy and shows that IBM is focusing on its customers who want to use multiple clouds or a hybrid cloud, rather than pushing its own cloud.
  • But analysts are also skeptical about whether Watson has significant advantages over the artificial intelligence services provided by bigger clouds, like Amazon Web Services or Microsoft Azure.

Instead of wearing itself out trying to keep up with the top three leaders in the cloud race, IBM is taking a more flexible approach, analysts say.

On Feb. 12, IBM announced Watson Anywhere, which allows people to use the company's Watson artificial intelligence on any cloud they want, whether it's a public cloud, a private cloud, or a hybrid cloud that combines public cloud services with a company's own data centers.

Watson Anywhere is optimized for IBM's cloud, but the fact that it can also run on any other cloud is a sign that IBM is looking to capitalize on important market trends, such as customers who want to use multiple clouds, says Sid Nag, research director at Gartner.

"IBM is saying, we're not going to compete with the usual suspects," Nag told Business Insider. "We're doing one better where we're going to take our technologies and overlay that not just on IBM cloud, but also the other cloud providers like Amazon, Google, Azure and others. That's their strategy creating more velocity around IBM cloud ecosystem."

Since analysts say it's unlikely that IBM's cloud will reach the scale of Amazon Web Services, Microsoft or Google anytime soon, they see it as an effective strategy that shows IBM is responding to what customers need.

A "very good step"

John Roy, lead analyst at UBS, says allowing customers to use Watson wherever they want is a "wise decision." Otherwise, if IBM kept requiring people to use Watson on IBM's cloud, it would not be sustainable.

"Watson Anywhere is certainly a very good step," Roy told Business Insider. "I think making it available in whatever platform the end user wants to use it on is a very good strategy...You want your core software products used in as many places as possible."

An AI service that works only on IBM's cloud would be a sensible strategy if IBM's cloud had reached the scale of Amazon's or Microsoft's. Currently, AWS and Microsoft Azure offer AI services that work exclusively on their own clouds.

Dave Bartoletti, vice president and principal analyst at Forrester, says Watson Anywhere is somewhat similar to Google's approach of making its AI services, like TensorFlow, available to run anywhere.

"The IBM public cloud has never reached the scale of AWS or Azure, so IBM can’t afford to limit the potential of Watson to its own cloud," Bartoletti told Business Insider. "IBM’s betting that Watson can compete with native public cloud AI services well enough to generate revenue, and that it doesn’t make sense anymore to tie Watson to IBM public cloud."

On the downside, Nag questions whether customers will choose to use Watson, instead of artificial intelligence services that are already provided by the cloud they're using, such as Amazon Rekognition.

The question clients may have, Nag says, is "Why would I use IBM's AI functionality over the major functionalities around AI that my provider already has?"

Read more: IBM dazzled investors with its first annual growth in 7 years, but some doubters aren't buying the comeback story

"IBM's Watson functionalities works on multiple clouds, so that's definitely an advantage, but it's going to be a decision making process on behalf of the buyer," Nag said.

One thing Watson has going for it is its ability to understand natural language, but some analysts are skeptical about whether that translates into a competitive advantage.

"Watson Anywhere isn't a competitive advantage yet, even though it holds some promise in the future," Clement Thibault, senior analyst at Investing.com, told Business Insider. "I believe this isn't enough to tip the scales in IBM's favor when it comes to cloud providers at the moment."

A hybrid cloud strategy

The fact that Watson can run on a hybrid cloud is an advantage, Roy says. Right now, many companies still have to keep some workloads in their in-house data centers due to regulations, and the only top-three cloud provider with a generally available hybrid cloud service is Microsoft. This means hybrid cloud customers can use Watson instead of AWS's or Google's AI services.

As for its own cloud, IBM will focus its energies on its upcoming acquisition of Red Hat to enhance its hybrid cloud.

In the near term, analysts see IBM focusing on software and consulting services that help customers manage different clouds, rather than pushing its own cloud forward. And trust is on IBM's side — customers still see IBM as a strong player in the enterprise.

"IBM is saying, 'We're going to meet the customer where they are, give them choices and gain more revenue to the service rather than build a public cloud,'" Nag said. "That's their strategy."


I've owned an Amazon Echo for over three years now — here are my 19 favorite features (AMZN)


The Amazon Echo

Amazon's family of Echo speakers is among the most popular gifts right now, so many people are activating their Echo units for the first time.

Here's what you need to know: These speakers, which can respond to "Alexa," "Amazon," or even "Computer" (for the "Star Trek" fans out there), are extremely quick to respond and understand your commands far better than any other device I've used.

Thanks to its excellent audio system, with seven microphones for listening and a 360º omni-directional audio grille for speaking, Amazon Echo works exceedingly well wherever I am in my home. I can hear it — and it can hear me — almost perfectly.

Amazon Echo has completely transformed the way I live in my apartment. There's just so much you can do with Echo. Take a look.


"Alexa, what time is it?"

Honestly, the best use cases for Amazon Echo are the simplest ones. With the Echo, I don't need to hunt for my phone just to check the time — I can ask from anywhere in the house and get the answer immediately. It's a small thing, but it totally makes a difference when you're rushing in the morning.



"Alexa, how's the weather outside today?"

Again, it's a simple task, but it's way quicker and better than pulling out your phone and opening your favorite weather app. Amazon Echo will not only tell you the current temperature, but also the expected high and low temperatures throughout the day, and other conditions such as clouds and rain.



"Alexa, set a timer for 10 minutes."

Amazon Echo is the perfect cooking or baking companion because it's totally hands-free. When the timer's up, a radar-like ping will sound until you say "Alexa, stop."




Satya Nadella has steered Microsoft into the cloud computing 'catbird's seat' and it could make the company untouchable (MSFT, AMZN)


Microsoft CEO Satya Nadella speaks to guests at an Economic Club of Chicago dinner on October 3, 2018 in Chicago, Illinois.

  • Wedbush analyst Dan Ives added Microsoft's stock to the firm's 'best ideas' list on Monday.
  • Ives cited the growth of the company's cloud services business — and the potential of much more to come.
  • Microsoft's network of development partners and large customer base puts it in a prime position to benefit as the cloud market evolves, he said.
  • Ives thinks Microsoft also now has a 50% chance of winning the US government's Joint Enterprise Defense Infrastructure (JEDI) contract; securing that contract could transform its cloud business and the broader market, he said.

Microsoft's cloud story has become so compelling that Dan Ives is convinced it's turned the company stock into a best buy.

Ives, a financial analyst who covers the company for Wedbush, added Microsoft's shares to the firm's best ideas list, arguing that growth in the company's cloud business will push its market capitalization up past $1 trillion this year. The cloud market is shifting in Microsoft's direction, and company CEO Satya Nadella has positioned the software giant to benefit from that evolution, he said.

"Microsoft remains in an enviable position heading into the next 12 to 18 months," Ives said in a research note on Monday. The company, he continued, "is only in the early innings of a transformational cloud story poised to play out over the coming years."

Read this:Microsoft's cloud transformation has it on track to be the next $1 trillion company

Under Nadella, Microsoft has turned the cloud market into a two-horse race with early leader Amazon, Ives said. With enterprises quickly moving their computing workloads to the cloud and to hybrid architectures that merge cloud computing resources with their own servers, the software giant is in position to see its cloud revenues surge.

Some 30% of enterprises' application loads are now in the cloud, Ives estimated. By the end of this year, 38% should be in the cloud or in hybrid architectures, and that portion will grow to 55% by 2022, he said.

Microsoft is in a prime position in the cloud market

Microsoft, though, could benefit disproportionately from that shift. Cloud customers are increasingly demanding the ability to tap into artificial intelligence and machine learning services, Ives said. Those are areas in which Microsoft has invested heavily and where it appears to have a competitive advantage over Amazon Web Services, the ecommerce giant's cloud business, he said.

"Nadella & Co are in the catbirds seat to get more of these complex workloads," Ives wrote.

What's more, Microsoft has built up a network of some 70,000 partners that are developing applications for its cloud service and customizing the service for end customers, Ives said. That's more partners than Amazon, Google, and Salesforce have combined, he said. That network, plus Microsoft's collection of resellers, also puts it in a good position to win new customers and increase its share of the cloud market, he said.

AWS and Microsoft "remain the clear leaders ... in converting enterprise customers in the shift to cloud, with our data points indicating that partners are playing a more vital role driving this decision going forward, a dynamic that disproportionately benefits [Microsoft] especially among the all-important [small and medium-sized businesses]," Ives said.

The company's cloud has other things going in its favor, he said. Microsoft already has a "massive" customer base, to which it can market its cloud services, he said. Both business and consumer customers are already converting over to the cloud version of its Office 365 productivity software from the older licensed download versions. And the company still has plenty of opportunity to upsell its customers on its newer cloud and online offerings, most notably LinkedIn, said Ives, who reiterated his outperform rating and $140 price target on Microsoft's shares in the report.

"This combination of dynamics should enable Nadella to further transform [Microsoft] into a cloud behemoth over the coming years," he said.

Microsoft has a 50% chance to win the JEDI contract

And there's one more thing Ives is bullish about in terms of Microsoft's cloud effort — the company now stands an even 50-50 chance of winning an important and lucrative new defense contract, he said. As recently as a year ago, Amazon's chances of winning the Joint Enterprise Defense Infrastructure (JEDI) contract — worth $10 billion over the next decade — stood at about 80%, he said. Both companies have advocates and detractors in Washington, D.C., but Microsoft has made up ground on Amazon in the competition over the last six months, Ives said, citing unnamed sources in the nation's capital.

If Microsoft gets the contract, it would be huge for the company's cloud business, for the larger cloud market — and for the software giant's stock, he said.

"It would be a transformative win that would propel the company's cloud ambitions throughout the government and enterprise circles," Ives said, adding that it would have "a major ripple impact for the coming years currently not built into [Wall] Street estimates."

In late afternoon trading, Microsoft's stock was up 69 cents, or about 1%, to $111.66.


China's Huawei has big ambitions to weaken the US grip on AI leadership



  • Ren Zhengfei, the reclusive founder and CEO of China’s embattled tech giant, Huawei, is defiant about American efforts to impede his company with lawsuits and restrictions.
  • "There is no way the US can crush us," Ren said in a rare recent interview with international media.
  • "The world cannot leave us because we are more advanced," he said.
  • Huawei is also a rising player in the next-generation 5G wireless networking market, as well as the world’s second-largest smartphone maker behind Samsung (and ahead of Apple).

Ren Zhengfei, the reclusive founder and CEO of China’s embattled tech giant, Huawei, is defiant about American efforts to impede his company with lawsuits and restrictions.

“There is no way the US can crush us,” Ren said in a rare recent interview with international media. “The world cannot leave us because we are more advanced.”

It might sound like bluff and bluster, but these words carry a measure of truth. Huawei’s technology road map, especially in the field of artificial intelligence, points to a company that is progressing more rapidly—and on more technology fronts—than any other business in the world.

Apart from its AI aspirations, Huawei is an ascendant player in the next-generation 5G wireless networking market, as well as the world’s second-largest smartphone maker behind Samsung (and ahead of Apple).

“The [Chinese] government and private sector approach is to build companies that compete across the full tech stack,” says Samm Sacks, who specializes in cybersecurity and China at New America, a Washington think tank. “That’s what Huawei is doing.”

But it’s Huawei’s AI strategy that will give it truly unparalleled reach across the whole of the tech landscape. It will also raise a host of new security issues. The company’s technological ubiquity, and the fact that Chinese companies are ultimately answerable to their government, are big reasons why the US views Huawei as an unprecedented national security threat.

In an exclusive interview with MIT Technology Review, Xu Wenwei, director of the Huawei board and the company’s chief strategy and marketing officer, touted the scope of its AI plans. He also defended the company’s record on security.

Read more: Don't buy a foldable smartphone in 2019

And he promised that Huawei would seek to engage with the rest of the world to address emerging risks and threats posed by AI.

Xu (who uses the Western name William Xu) said that Huawei plans to increase its investments in AI and integrate it throughout the company to “build a full-stack AI portfolio.” Since Huawei is a private firm, it’s tricky to quantify its technology investments. But officials from the company said last year that it planned to more than double annual R&D spending to between $15 billion and $20 billion.

This could catapult the company to somewhere between second and fifth place in worldwide R&D spending. According to its website, some 80,000 employees, or 45% of Huawei’s workforce, are involved in R&D.

Huawei’s vision stretches from AI chips for data centers and mobile devices to deep-learning software and cloud services that offer an alternative to those from Amazon, Microsoft, or Google. The company is researching key technical challenges, including making machine-learning models more data and energy efficient and easier to update, Xu said.

But Huawei is struggling to convince the Western world that it can be trusted. The company faces accusations of intellectual-property theft, espionage, and fraud, and its deputy chairwoman and CFO (and Ren’s daughter), Meng Wanzhou, is currently under house arrest in Canada, awaiting possible extradition to the US.

America and several other countries have banned the sale of Huawei’s devices or are considering restrictions, citing concerns that Huawei’s 5G equipment could potentially be exploited by the Chinese government to attack systems or slurp up sensitive data.

Xu defended the company’s reputation: “Huawei's record on security is clean.”

But AI adds another dimension to such worries. Machine-learning services are a new source of risk, since they can be exploited by hackers, and the data used to train such services may contain private information. The use of AI algorithms also makes systems more complex and opaque, which means security auditing is more challenging.

Read more: The $6.5 billion acquisition that everyone hated a year ago was the only thing everyone loved about Salesforce's latest quarter

As part of an effort to reassure doubters, Xu promised that Huawei would release a code of AI principles in April. This will amount to a promise that the company will seek to protect user data and ensure security. Xu also said Huawei wants to collaborate with its international competitors, which would include the likes of Google and Amazon, to ensure that the technology is developed responsibly.

It is, however, unclear whether Huawei might allow its AI services to be audited by a third party, as it has done with its hardware.

“Many companies across the industry, including Huawei, are developing AI principles,” Xu told MIT Technology Review. “For now, we know at least three things for certain: technology should be secure and transparent; user privacy and rights should be protected; and AI should facilitate the development of social equality and welfare.”

A Boston Dynamics robot

Stacking up

As Huawei advances in AI and progresses toward its aim of becoming a “full stack” company, however, it may increasingly seem too powerful for many in the West.

Already, it boasts a dizzying array of offerings. Last year, Huawei launched an AI chip for its smartphones, called Ascend, that is comparable to a chip found in the latest iPhones, and tailor-made for running machine-learning code that powers tasks like face and voice recognition. The technology for the chip came from a startup called Cambricon, which was spun out of the Chinese Academy of Sciences, but Huawei recently said it would design future generations in-house.

Huawei also sells a range of AI-optimized chips for desktops, servers, and data centers. The chips lag behind those offered by Nvidia and Qualcomm (both US companies) in terms of sophistication, but no other business can boast such a range of AI hardware.

Read more: Your iPhone keeps a detailed list of every location you frequent — here's how to delete your history and shut the feature off for good

Then there’s the software. Huawei offers a cloud computing platform with 45 different AI services—similar in scope to offerings by Western giants like Google, Amazon, and Microsoft. In the second quarter of 2019, Huawei will also release its first deep-learning framework, called MindSpore, which will compete with the likes of Google’s TensorFlow or Facebook’s PyTorch.

AI is also woven into Huawei’s ambitions to provide the 5G equipment that will connect everything from industrial machinery to self-driving cars. “We need to use AI to reduce maintenance costs,” Xu said. “Telecom networks are becoming more and more complex—70% of network failures are caused by human errors, and if we use AI in network maintenance, over 50% of potential failures can be predicted.” 

Standard-bearers

Xu’s statements on AI ethics are also, in a sense, part of an effort to lead the world’s AI development. Ensuring ethical AI will mean crafting technical standards, which will be important to shaping the future of the technology itself. The United States has exerted an outsize influence over the development of the internet through technical standards.

To that end, the Chinese Association for Artificial Intelligence, a state-run organization, set up a committee earlier this year to draft a national code of AI ethics. Several of China’s big tech companies, including Baidu, Alibaba, and Tencent, also have initiatives dedicated to understanding the impact of AI.

Agreeing on AI ethics and standards could prove a challenge as tensions between East and West escalate, however. A number of national governments, as well as organizations like the EU, are also seeking to set the rules of the road. “AI brings value as well as problems and confusions,” Xu told MIT Technology Review. “Global collaboration is needed to address these problems.”

And international collaboration is not exactly a forte of the US right now. Indeed, outside of its own borders, the American government can do only so much to hamper Huawei. Some allies are apparently tiring of US strong-arm tactics; the UK and Germany both seem increasingly unlikely to ban Huawei from supplying 5G equipment and other products and services. 

The company’s interest in ingratiating itself with wary countries also has its limits. In recent comments its CEO, Ren, contended that the international picture is changing, at least in technological terms. “If the lights go out in the West, the East will still shine,” he said. “And if the North goes dark, there is still the South. America doesn’t represent the world. America only represents a portion of the world.”

Either way, there will be Huawei.


Google is staking its claim in the next big thing after cloud computing with a new line of AI-powered hardware for developers (GOOG, GOOGL)


Google Coral hardware

  • Google introduced Coral, a line of hardware to help hackers build and experiment with AI-powered gadgetry. 
  • It's similar in principle to popular minicomputers like the Raspberry Pi, but with some Google special sauce — it uses a custom Google processor built for AI and is designed to run the Google-created TensorFlow AI software.
  • This could help Google spread the word about the already popular TensorFlow, while also staking its claim in edge computing.
  • Edge computing refers to the concept of putting more intelligence on a device, rather than in the cloud. Indeed, some believe that edge computing could be a larger market than cloud computing.

Google has quietly launched Google Coral, a line of relatively cheap hardware aimed at helping developers experiment with building gadgetry powered by artificial intelligence. 

On its website, Google Coral has product listings for a $150 motherboard, a $75 USB device to bring AI to existing systems, and a $25 camera that slots into the board. The listings were first spotted by The Verge.

"Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production," writes Google in a blog post announcing Coral

In theory, it's more than a little bit like the Raspberry Pi, the pioneering $35 minicomputer, which is mega-popular among hackers as an easy and cheap way to build experimental hardware and other oddities. 

In practice, the Coral lineup appears to come with lots of Google special sauce.

The processor on the Google Coral developer board is an Edge TPU, a chip specifically designed by the search giant to bring AI to low-powered devices like cameras and home appliances. It's also designed to run TensorFlow Lite, a version of Google's very popular open source AI framework designed, again, for low-powered devices. 

It's important to note that these devices aren't actually much good at training AI algorithms — as The Verge notes, you'll need much more powerful hardware for that. Rather, they're good for putting those algorithms to work and for helping gather the real-world data needed to refine them.
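As a rough sketch of what putting those algorithms to work looks like on a device, here is the standard TensorFlow Lite interpreter flow in Python. The model file and the input data are placeholders, and the commented-out Edge TPU delegate line is an assumption about how a Coral board would be wired in rather than a tested recipe.

# Hedged sketch: run a TensorFlow Lite model for on-device inference.
# "model.tflite" and the input data are hypothetical placeholders; the Edge TPU
# delegate lines are an assumption about a Coral setup, not a tested recipe.
import numpy as np
import tensorflow as tf

# On a Coral board one would typically load an Edge TPU delegate, e.g. (assumption):
# delegate = tf.lite.experimental.load_delegate("libedgetpu.so.1")
# interpreter = tf.lite.Interpreter("model.tflite", experimental_delegates=[delegate])
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake camera frame shaped to whatever the model expects (placeholder data).
frame = np.random.random_sample(tuple(input_details[0]["shape"])).astype(
    input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()                 # inference happens on the device itself
scores = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(scores)))

The model that produces model.tflite would still be trained on far more powerful hardware; the device only runs the finished network, which is the division of labor described above.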

And this may be the real significance of Google Coral, as the company looks to stake its claim in so-called edge computing, the market that many industry insiders believe could be bigger than the cloud. 

Read more: The CEO of Hewlett Packard Enterprise tells us why the company is 'under-appreciated' and how it can beat Amazon in a market that's bigger than cloud computing

The big idea behind edge computing is to bring more intelligence to devices like phones, TVs, appliances, factory robots, and even self-driving cars and other vehicles. While the cloud brings unprecedented levels of supercomputing power to anything with an internet connection, there's a serious latency problem; you don't want your self-driving car waiting to get a response from the server while it figures out whether to stop at a traffic light.

The solution, then, is to give the device (or car, or robot) enough computing power to make decisions on its own. The massive processing power of the cloud can help formulate, analyze, improve, and generally fine-tune the algorithm, while the device itself has enough AI to run the algorithm quickly and accurately. 

Hack away and spread the gospel of TensorFlow

Cloud players like Microsoft Azure and Amazon Web Services both already have their own plays for edge computing, while legacy companies like Intel and Hewlett Packard Enterprise see the opportunity to gain ground after largely losing out in cloud computing. Indeed, Intel offers its own cheap AI hardware to developers.  

For Google's part, TensorFlow and its Lite variant — open source projects that are free to use — have basically become the standard software for powering artificial intelligence, with the Facebook-created PyTorch as its primary competition. In mid-2018, Microsoft even bought a startup powered by the Google-created TensorFlow.

On Wednesday, Google also announced that TensorFlow had been downloaded 41 million times as of November, and that TensorFlow Lite is running on 2 billion phones and other mobile devices. Google itself uses TensorFlow Lite to run the Google Assistant, Google Photos, and even Google Search on phones.

Which is a very long way to come back around to Google Coral. By reaching out to developers with tools that make it relatively cheap and easy to hack away at new hardware projects, it could very well spread the gospel of TensorFlow and the Edge TPU. 

That's good for Google in the long haul, because while TensorFlow might be free software, Google Cloud offers developers plenty of services for powering these devices on the backend. Indeed, Google says in its blog entry that Coral is made to integrate nicely with Google Cloud's internet of things (IoT) backend services. 

That, in turn, only stands to boost Google Cloud's reputation as the best place to run TensorFlow apps, which could help it build its credibility in both AI and edge computing — a plus as it pushes against the leading Amazon Web Services and second-place Microsoft Azure clouds. 

It's a playbook that's worked for Google before: Kubernetes, a very popular open source tool for managing large-scale cloud infrastructures, became a cloud standard because developers love it so much. If developers come to love Google Coral, too, it could make Google Cloud a more attractive place for developers in the next big thing.

Meanwhile, Google's rivals are doing their own kinds of outreach to AI developers. Microsoft recently resurrected the Xbox's failed Kinect accessory as a $400 AI-powered camera for developers, while Amazon is letting developers program their own self-driving toy cars.

