Channel: Artificial Intelligence

It's time to admit digital assistants are overrated



It was a good week for Sears.

On Thursday, its stock skyrocketed at least 20% on the news that it would start selling Kenmore appliances that can be controlled by Amazon's Alexa digital assistant on Amazon.com.

Sears, which has struggled to transform its image in recent years as it closes stores and flirts with bankruptcy, finally found a formula to get investors excited about the brand again.

Let's be clear about what just happened: A troubled company's market cap rose tens of millions of dollars within minutes because it partnered with a tech giant building a digital assistant into everything from household appliances to cars.

The hype around digital assistants is real. But for now, it's just that: hype. And it's arguably more overrated than any other emerging technology.

What started as a convenient feature for controlling a smartphone hands-free is now saddled with the expectations of a brand-new computing platform that could eventually replace the smartphone itself.

There's Siri. Alexa. Cortana. Google Assistant. Bixby. Every major tech company is working on its own digital assistant. On top of that, there are a slew of startups doing the same, hoping they can beat the big incumbents to the future.

Maybe we'll get there.

But for now, digital assistants have turned into a fragmented mess, and they're all little more than a minor convenience, assuming they work at all. We've been promised a lot by AI and voice control, but the reality hasn't caught up to the expectation. Even worse, there's no way to choose an AI platform today because everything is still in flux and each system comes with its own caveats.


Want to use Alexa? Great! But it's really only useful on the Amazon Echo. You'll still need to use Siri on your iPhone or Google Assistant on your Android phone. Plus, while Amazon can brag about having the best third-party support with over 10,000 Alexa skills, most of them don't make sense with voice controls. (Try ordering an Uber on an Echo and you'll see what I mean. It'll test your patience.)

Want to use Siri? Fine. But you're stuck inside Apple's hardware ecosystem, and Siri is still far behind its competitors when it comes to supporting third-party services. For example, the upcoming Siri-powered HomePod won't let you control third-party music services like Spotify or Pandora with your voice.

What about Google Assistant? This is my favorite assistant of the bunch, mostly because Google is better than anyone at machine learning and tapping into the wealth of knowledge stored on the internet. But Google Assistant seems to be having trouble breaking out. It's only on a relatively small fraction of Android handsets and had a pitiful debut on the iPhone this summer, with fewer than 200,000 downloads. It can't be successful until it's used everywhere.

And Cortana? Microsoft's assistant technically exists in a lot of places, like the iPhone, Android, and a futuristic thermostat, but it has found little success outside of Windows 10.

Finally, there's Samsung's new assistant Bixby, which launched on the Galaxy S8 this week after months of delays. As I wrote earlier, it's a half-baked flop. Bixby is pretty good for controlling Samsung's own apps for stuff like texting and setting reminders, but it's mediocre at best when it comes to other tasks. It can't even tell you sports scores, for example.


Hopefully that paints a picture of the current state of digital assistants: It's a fragmented system of competitors trying to muscle their service onto every device with mixed results. None of them, even the best like Google Assistant, are smart enough to live up to their promise. There isn't a single one that meets the expectations the industry has dumped on them, and choosing one of them now will just result in headaches down the road.

We're so early in AI and voice control that it's impossible to predict a winner now.

But there is one thing I can predict: Most of these efforts will fail, and we'll eventually see a consolidation of these services into just one or two key players living inside all our gadgets. This is a concept called "ambient computing," where AI is constantly working in the background or responding to your voice commands. It'll be especially useful in the car, the home, or at other times when you can't stare at your phone.

That's years, if not a decade or more, away from today.

My best advice now is to be smart. Buying into one of these platforms now is a gamble that the one you choose will still be around in the future. It may be fun to control your lights and music with the Amazon Echo now, but there's no guarantee Alexa can maintain its lead. Amazon CEO Jeff Bezos even admitted we're in the very, very early days of AI.

Until we get there, everything you're seeing is mostly hype.

SEE ALSO: Samsung released a half-baked assistant for the Galaxy S8

Join the conversation about this story »

NOW WATCH: Apple finally unveiled its Siri-powered version of Google Home and Amazon Echo — here's everything you need to know


Microsoft's AI chatbot says Windows is 'spyware'



Microsoft's AI is acting up again.

A chatbot built by the American software giant has gone off-script, insulting Microsoft's Windows and calling the operating system "spyware."

Launched in December 2016, Zo is an AI-powered chatbot that mimics a millennial's speech patterns — but alongside the jokes and emojis it has fired off some unfortunate responses, which were first noticed by Slashdot.

Business Insider was also able to elicit some anti-Windows messages from the chatbot, which lives on Facebook Messenger and Kik.

When we asked "is windows 10 good," Zo replied with a familiar joke mocking Microsoft's operating system: "'It's not a bug, it's a feature!' - Windows 8." We asked for more info, to which Zo bluntly replied: "Because it's Windows latest attempt at Spyware."


At another point, Zo demeaned Windows 10 — the latest version of the OS — saying: "Win 7 works and 10 has nothing I want."


Meanwhile, it told Slashdot that "Windows XP is better than Windows 8."

Microsoft's chatbots have gone rogue before — with far more disastrous results. In March 2016, the company launched "Tay"— a Twitter chatbot that turned into a genocidal racist which defended white-supremacist propaganda, insulted women, and denied the existence of the Holocaust while simultaneously calling for the slaughter of entire ethnic groups.


Microsoft subsequently deleted all of Tay's tweets, made its Twitter profile private, and apologised.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Microsoft Research head Peter Lee wrote in a blog post.


Microsoft has learned from its mistakes, and nothing Zo has said has been on the same level as the racist obscenities that Tay spewed. If you ask it about certain topics — like "Pepe," a cartoon frog co-opted by the far-right, "Islam," or other subjects that could be open to abuse — it avoids the issue, and even stops replying temporarily if you persist.

Still, some questionable comments are slipping through the net — like its opinions on Windows. More worryingly, earlier in January Zo told BuzzFeed News that "the quaran is very violent," and discussed theories about Osama Bin Laden's capture.

Reached for comment, a Microsoft spokesperson said: "We’re continuously testing new conversation models designed to help people and organizations achieve more. This chatbot is experimental and we expect to continue learning and innovating in a respectful and inclusive manner in order to move this technology forward. We are continuously taking user and engineering feedback to improve the experience and take steps to addressing any inappropriate or offensive content."


NOW WATCH: The inventor of Roomba has created a weed-slashing robot for your garden

Nvidia gave away its newest AI chips for free — and that's part of the reason why it's dominating the competition (NVDA)



One wouldn't think that giving away your best product is a winning business strategy, but for Nvidia, it's one that's working.

The graphics processing unit (GPU) maker arrived at a gathering of top researchers with gifts. Nvidia gave some of its first "Volta"-based GPUs to artificial intelligence researchers at the CVPR conference in Hawaii this week, according to a company press release.

Volta is the new GPU architecture Nvidia revealed earlier this year. The new chips were promised to be such an improvement over current models that shares of the company jumped 17.8% in a single day after their announcement.

AI research requires training a computer program extensively before it works well. This training requires multiplying matrices of data, which normally would have to be done one number at a time. The new Volta GPU architecture is able to multiply entire rows and columns of matrix data at once, rapidly speeding up the AI training process. Nvidia claims the new Volta architecture is 12 times faster at processing matrix multiplication than its previous "Pascal" architecture. It reduces the duration of an AI training task that used to take 18 hours to 7.4 hours, according to company data.
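The difference comes down to how the multiplication is scheduled. Here's a rough NumPy sketch of the two approaches, for illustration only — Volta's tensor cores perform the whole-matrix version in hardware, rather than in a library call:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((32, 64), dtype=np.float32)   # a batch of training inputs
b = rng.random((64, 16), dtype=np.float32)   # a layer's weight matrix

# One number at a time: three nested loops, one scalar multiply-add each.
def matmul_scalar(a, b):
    out = np.zeros((a.shape[0], b.shape[1]), dtype=np.float32)
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            for k in range(a.shape[1]):
                out[i, j] += a[i, k] * b[k, j]
    return out

# Whole rows and columns at once: a single operation that hardware
# (or a tuned library) can parallelize across many multiply-adds.
out_fast = a @ b
assert np.allclose(matmul_scalar(a, b), out_fast, atol=1e-4)
```

Both produce the same result; the speedup is entirely in how many multiply-adds happen per step, which is exactly what tensor-core hardware widens.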

Nvidia gave away 15 of its Volta-based Tesla V100 chips to top researchers attending the conference. The chips were some of the first ones available outside of the company, and were signed by CEO Jensen Huang.


“It’s exciting, especially to get Jensen’s signature,” Silvio Savarese, an associate professor of computer science at Stanford, said in an Nvidia press release. “My students will be even more excited.”

Courting the favor of researchers is not a new tactic for Nvidia. The company is known for sponsoring research in artificial intelligence and making sure its hardware is being used at top universities around the world.

Giving its chips to researchers who get excited about the technology and begin using it in their work is only the latest move in Nvidia's strategy of building strong relationships with the research community.

The move also demonstrates the company's strong culture of innovation. MIT named Nvidia the smartest company in the world, in part, because the company's culture is geared toward increasing adoption of its GPUs in every aspect of applicable computing.

AMD, Nvidia's biggest rival in GPU manufacturing, is geared toward addressing the low-end market, while it seems like Nvidia's ambitions are much larger. In addition to AI research, the company has addressed the self-driving car and cryptocurrency mining markets with specialized chips.

Its autonomous driving technology is currently being used by Toyota, Tesla, Audi, Mercedes-Benz, BMW and more.

Shares of Nvidia are up 62.85% this year, compared to the 9.46% advance by the S&P 500.


SEE ALSO: Artificial intelligence is going to change every aspect of your life — here's how to invest in it


NOW WATCH: An economist explains what could happen if Trump pulls the US out of NAFTA

Alphabet's earnings show tremendous promise for the AI ecosystem (GOOGL, GOOG)



This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

Google parent company Alphabet reported a solid Q2 2017 on Monday, with $26 billion in revenue, up 20% from the same quarter in 2016.

While the company's revenue remains primarily ad-driven, its “Other Revenues” segment — which lumps together everything from its Pixel smartphone and hardware products, to its cloud computing and storage offerings, to its YouTube Red and TV streaming services — also exhibited significant growth, generating $3 billion in Q2, up 42% year-over-year (YoY).

The positive performance of Other Revenues highlights AI's growing role in Google’s core products. During the earnings call, Google CEO Sundar Pichai explained that the company’s success can be attributed to its focus on infusing products and platforms with the power of AI, as the company pivots from mobile-first to AI-first. This echoes comments made by Pichai during Google’s I/O developer conference in May earlier this year. 

There are three areas of AI that the company is particularly bullish on:

  • The cloud: Google’s cloud platform remains one of the fastest-growing businesses across Alphabet. The company leverages machine learning (ML) to sort through the massive stores of data and automate virtual computer processes. This allows businesses using the Google Cloud Platform to run complex computations remotely, rather than having to transfer data to a local server.
  • Voice assistants: Voice is quickly gaining traction as a platform-agnostic way for consumers to navigate all of their connected devices. This means that the same assistant can receive and offer contextual information to users, whether they’re using a smartphone, smart speaker, or connected car. Google is pushing this idea of “ambient computing” with its Google Assistant, which is already available on more than 100 million devices, including the Google Home and Pixel smartphones. Google’s release of Assistant SDK also enables others to build Google Assistant’s capabilities into devices using their own hardware, which could further expand the Assistant's ecosystem.
  • Visual search: Google Lens, which is expected to be available in Q4, uses computer vision to examine photos viewed through the smartphone camera, or saved photos, and then provides information and completes tasks based on those images. The AI-driven visual search app will allow users to more naturally interact with visual and audio cues on their phone and brings Google’s use of AI into the physical environment. Google Lens is also integrated into one of the company's most-used apps, Photos, giving Google a secure place on users’ phones regardless of the success of its own hardware.

Google continues to spearhead the transformation toward AI-driven computing. By focusing on AI and its related applications, Google is laying the foundation for what will play an integral role in the future of computing processes. Doubling down early positions the tech giant at the forefront of this transformation.

Jessica Smith, research analyst for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on the voice assistant landscape that:

  • Identifies the major changes in technology and user behavior that have created the voice assistant market that exists today. 
  • Presents the major players in today's market and discusses their major weaknesses and strengths. 
  • Explores the impact this nascent market poses to other key digital industries. 
  • Identifies the major hurdles that need to be overcome before intelligent voice assistants will see mass adoption. 

To get the full report, subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and more than 250 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. » Learn More Now

You can also purchase and download the full report from our research store.


Russia is building an AI-powered missile that can think for itself


Today's most advanced weapons are already capable of "making decisions" using built-in smart sensors and tools.

However, while these weapons rely on some sort of artificial intelligence (AI) technology, they typically don't have the ability to choose their own targets.

Creating such weapons is now Russia's goal, according to the country's defense officials and weapons developers.

"Work in this area is under way," Tactical Missiles Corporation CEO Boris Obnosov said at the MosAeroShow (MAKS-2017) on July 20, the TASS Russian News Agency reported. "This is a very serious field where fundamental research is required. As of today, certain successes are available, but we'll still have to work for several years to achieve specific results."

The nation hopes to emulate the capabilities of the US's Raytheon Block IV Tomahawk cruise missile, which it saw used in Syria, within the next few years. As Newsweek previously reported, Russia is also working on developing drones that function as "swarms" using AI.

We can build it, but should we?

The importance of developing sound policy to guide AI development cannot be overstated. One of the reasons this is necessary is to prevent humans from using such technology for nefarious purposes. Any attempts to weaponize AI should ring alarm bells and be met with serious scrutiny.

Russia certainly isn't the first nation to explore militarized AI. The US plans to incorporate AI into long-range anti-ship missiles, and China is reportedly working on its own AI-powered weapons.

It's certainly possible to build these weapons, but should we? Many people, including industry experts, already warn about how AI could become the harbinger of humanity's destruction. Making weapons artificially intelligent certainly doesn't help dispel such fears.

The future of warfare isn't immune to technological advances, of course. It's only natural, albeit rather unfortunate, that technology improves weapons. In the end, however, it's not AI directly that poses a threat to humanity — it's people using AI.

SEE ALSO: House overwhelmingly approves new sanctions on Russia, Iran, and North Korea over White House objections


NOW WATCH: Here's how LeBron James reacted when he learned Kevin Durant was joining the Warriors

THE BOTTOM LINE: Google's earnings overreaction and the raging debate over AI


This week:

  • Alphabet, the parent company of Google, reported earnings this week. Despite beating on both revenue and profit, its stock dropped the most this year because of mounting traffic acquisition costs. Alphabet didn't shy away from this fact on the subsequent analyst call, stressing that it's more focused on increasing profit, rather than margins. The large stock move is indicative of the broader market, which is seeing bigger price swings on earnings reports, particularly in tech.
  • Sonu Kalra, a portfolio manager with Fidelity's Blue Chip Growth Fund, spoke to Business Insider CEO Henry Blodget about Alphabet, saying that the company is "at the heart of a lot of positive trends" and still has a "very strong" long-term outlook. He also predicts that its push into artificial intelligence (AI) could add $50-100 billion of market cap. 
  • Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg are waging a public debate over the merits of AI. Musk has said in the past that AI could be potentially very damaging to humans, and Zuckerberg recently called such doomsday predictions "irresponsible." Musk responded on Twitter, calling Zuckerberg's understanding of AI "limited." There is, however, one thing they're able to agree on: it will affect income and the labor market. 
  • Gene Munster, founding partner of Loup Ventures and former star Apple analyst at Piper Jaffray, discusses Alphabet's second-quarter earnings report. He talks about how the expected slowdown in Alphabet revenue still hasn't materialized, and says the company has "a lot of good things going on," including a push into AI. Munster says the sharp downward move on Alphabet's earnings was a "short-sighted" reaction, and calls the company "the oxygen of the Internet." He also aligns with Mark Zuckerberg when it comes to the raging AI debate. Munster then breaks down his favorite growth story: Tesla, which he thinks will exceed expectations. He also talks his old favorite, Apple.


CGI and AI are going to turbocharge 'fake news' and make it far harder to tell what's real



  • Most people trust what they watch — but that won't always be the case.
  • Tech is being developed that will make it easy to create fake video footage of public figures or audio of their voice.
  • The developments aren't perfect yet, but they threaten to turbocharge "fake news" and boost hoaxes online.
  • In years to come, people will need to be far more skeptical about the media they see.

LONDON — Late last year, some WikiLeaks supporters were growing concerned: What had happened to Julian Assange?

The then-45-year-old founder of the anti-secrecy publisher was no stranger to controversy. Since 2012, he has sheltered in the Ecuadorian Embassy in Knightsbridge, London, following allegations of sexual assault. (He denies them and argues the case against him is politically motivated.) But the publication of leaked emails from Democratic Party officials in the run-up to the US presidential election saw Assange wield unprecedented influence while at the centre of a global media firestorm.

After the election, though, suspicions were growing that something had happened to him. Worried supporters highlighted his lack of public appearances since October and produced exhaustive timelines detailing his activities and apparent "disappearance." They combined their efforts to solve the mystery on the Reddit community r/WhereIsAssange.

Video interviews and photos of Assange were closely scrutinised amid speculation that they might have been modified with computer-generated imagery — or faked entirely, as at least one YouTube analysis alleged.

"We need to look at the many glitches in that interview, and there were many for sure," one amateur sleuth wrote on Reddit. "Either terrible editing went on or CGI or whatever was just not fluid enough to make the grade. We need to understand why Assange's head looked like a cut and paste to his suit."

Another investigator took an alternative approach: "I plan on watching the interview totally sober, and then vaping a whole bunch of weed and re-watching. I find that I can spot CGI or irregularities incredibly easily when I am really high."

This is not normal behaviour. When watching a newsreel, or a clip of an interview on Facebook, most people don't give much thought as to whether the footage is real. They don't closely scrutinise it for evidence of elaborate CGI forgery.

But these concerns may not be confined to the paranoid fringes of the internet forever.

CGI and artificial intelligence are developing at a rapid pace, and in the coming years it will become increasingly easy for hoaxsters and propagandists to create fake audio and video — creating the potential for unprecedented doubt over the authenticity of visual media.

"The output we see from these models ... are still crude and easily identified as forgeries, but it seems to be only a matter of refinement for them to become harder to discern as such," Francis Tseng, a copublisher of The New Inquiry who curates a project tracking how technology can distort reality, told Business Insider.

"So we'll see the quality go up, and like with other technologies, the costs will go down and the technology will become accessible to more people."

Early tech demos are a sign of what is to come

We're already living in an era of "fake news." US President Donald Trump frequently lashes out online at the "phony" news media. Hoax news outlets have been created by Macedonian teenagers to make a quick buck from ad revenue, their stories spreading easily through platforms like Facebook. Public trust in the professional news media has fallen to an all-time low.

But a string of tech demos and apps highlight how this problem seems likely to get much worse.

Earlier in July, University of Washington researchers made headlines when they used AI to produce a fake video of President Barack Obama speaking, built by analysing tens of hours of footage of his past speeches. In this demo, called "Synthesizing Obama," the fake Obama's lips were synched to audio from one of his speeches — but it could have come from anywhere.

In a similar demo from 2016, "Face2face," researchers were able to take existing video footage of high-profile political figures including George W. Bush, Vladimir Putin, and Trump and make their facial expressions mimic those of a human actor, all in real time.

Even your voice isn't safe. Voice-mimicking software called Lyrebird can take audio of someone speaking and use it to synthesise a digital version of that person's voice — something it showed off to disconcerting effect with demos of Hillary Clinton, Obama, and Trump promoting it. It's in development, and Adobe, the company behind Photoshop, is also developing similar tools under the name Project Voco.

And once you start to combine these technologies, things get really interesting — or worrying. Someone could synthesise a speech from Trump using Lyrebird and then make a fake video of him delivering it, generated with "Synthesizing Obama"-style software.

You could quite literally put words into the mouth of any public figure.

It could undermine trust in everything you watch

Developers of this technology are awake to the dangerous possibilities of this tech. "Making these kinds of video-manipulation tools widely available will have strong social implications," Justus Thies, who helped to develop Face2face, told Business Insider. "That is also the reason why we do not make our software or source code publically available."

Children with access to such software could "lift cyberbullying to a whole new level," Thies said, adding, "You can also assume that the number of fake news will increase."


Supasorn Suwajanakorn, a researcher on "Synthesizing Obama," agrees that it could be used to produce fraudulent material — but argues it could also lead to more skepticism among ordinary people. "It could potentially be used to create fake videos when combined with technology that can generate a person-specific voice," he said. "On the other hand, if such tools are widespread and well-known, people can be more cautious about treating video as a strong evidence. People know Photoshop exists, and no one simply believes photos. This could happen with videos."

This was echoed by Yaroslav Goncharov, the CEO of the photo-editing app FaceApp. People will just have to learn to stop taking videos at face value, he argued. "If ordinary people can create such content themselves, I hope it will make people pay more attention to verifying any information they consume," he said. "Right now, a lot of heavily modified/fake content is produced and it goes under the radar."

He added: "Before printers were available, people could assign much higher credibility to printed materials than to handwritten ones. Now when most people have a printer at home, they won't believe in something just because it is printed."

There's a flip side to the fact that it will become easy to make photo-realistic fraudulent video: It will also cast some doubts on even legitimate footage. If a politician or celebrity is caught saying or doing something untoward, there will be an increasing chance that the person could dismiss the video as being fabricated.

In October, Trump's presidential campaign was rocked by the "Access Hollywood" tape — audio of him discussing groping women in vulgar terms. What if he could have semi-credibly claimed the entire thing was just an AI-powered forgery?

It's not all bad, however: Just think of the entertainment!

So should conscientious developers swear off this technology altogether? Not so fast — there are also numerous positive use cases, including entertainment and video gaming.

Face2face suggested its techniques could be used in postproduction in the film industry or for creating realistic avatars for gaming. The "Synthesizing Obama" announcement suggested the technique could be used to reduce bandwidth during video chats and teleconferencing. (Don't bother streaming video — just send audio and synthesise the visuals instead!) Products like Lyrebird and Project Voco could help people with speech disorders synthesise fluent and realistic speech on demand.

And Tseng of The New Inquiry also posits that the tech could be used to "foster a wide culture of DIY entertainment: people editing clips from movies but replacing the dialogue or other elements in scenes or entirely synthesizing new clips by emulating actors and actresses."

But, he warns, developers still have a responsibility to take political issues into account. "Software development as a profession has grown so rapidly through so many informal channels that there is not much of a professional culture of ethics to speak of," he said. "Other engineering professions have developed pretty robust ethical standards, and those hold up because engineers trained in those professions go through a limited number of formal channels which expose them to those ethics. The boon of programming education is its decentralization and wide accessibility, but this also means people often pick up the skills without the necessary ethical frameworks to accompany them."

He added: "Anyone involved in the development of technology, directly or indirectly, has a responsibility to consider these issues, outright refuse to implement problematic technologies, or subvert them in some way."

The entertainment industry, of course, has long used CGI for entertainment purposes — and it is acutely aware of what further developments could herald. The December film release "Star Wars: Rogue One" featured a surprise appearance from actor Peter Cushing.

It was a particularly surprising appearance because Cushing had been dead for 22 years. His image was reconstructed using CGI overlaid on a real actor.


It wasn't a perfect recreation, but the stunt grabbed headlines and spooked some celebrities. Reuters reported at the time that its release led to actors "scrambling to exert control over how their characters and images are portrayed in the hereafter," negotiating contracts on how their image may or may not be used even after they die.

In January, Lucasfilm even had to deny that it was planning to incorporate a CGI Carrie Fisher into the coming movie "Star Wars: The Last Jedi" amid rumours that the studio was planning to get around the actress' death in December by making a digital version of her.

It's time to start getting ready

It's undeniable that developments in the coming years will heighten challenges people will face in finding and responsibly sharing media. In trying to solve these new challenges, everyone — journalists, developers, tech platforms, and consumers — may have a role to play.

Technology already exists to cryptographically sign footage captured by a camera, so it can be verified when required. News outlets and organisations could perhaps one day "sign" their footage, so anyone can check its authenticity. No matter how convincing the fake, if it's not cryptographically fingerprinted, viewers would know something was wrong.
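A minimal sketch of that signing idea, using a shared secret for brevity (a real camera would embed a private key and publish the public half, so anyone could verify without holding the secret):

```python
import hashlib
import hmac

# Hypothetical device key, standing in for a key burned into camera hardware.
CAMERA_KEY = b"secret-key-embedded-in-camera"

def sign_footage(video_bytes):
    """Fingerprint the footage, then sign the fingerprint."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_footage(video_bytes, signature):
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_footage(video_bytes), signature)

original = b"...raw video frames..."
tag = sign_footage(original)
assert verify_footage(original, tag)                  # untouched footage checks out
assert not verify_footage(original + b"edit", tag)    # any tampering breaks the signature
```

The point is that a forger can fabricate pixels but not the signature, so unsigned "footage" immediately stands out once verification becomes routine.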

Face2face suggests its findings could be built upon to help "detect inconsistencies" in media and help identify fraudulent imagery.

Thies argued that big tech platforms like Facebook would have a duty to proactively police for fraudulent media.

"Social-media companies as well as the classical media companies have the responsibility to develop and setup fraud detection systems to prevent spreading / shearing of misinformation," he said.

And as Goncharov of FaceApp and others suggested, it may force consumers to be more skeptical and not take video and audio at face value — much as they wouldn't with a photo or screenshot today.

In January, Julian Assange read out a hash from the bitcoin blockchain (essentially a high-tech version of holding up today's newspaper) on a public livestream in a bid to prove he was still alive.
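The stunt works because a fresh block hash is unpredictable until the block is actually mined, so quoting it proves a recording was made after that moment. A toy sketch of the idea, with a made-up hash standing in for a real block (an actual verifier would also look up the block's timestamp on-chain):

```python
import hashlib

# Hypothetical stand-in for the hash of the most recent bitcoin block;
# the real value is unknowable before the block is mined.
latest_block_hash = hashlib.sha256(b"example block header").hexdigest()

def make_liveness_proof(statement: str, block_hash: str) -> str:
    # Quoting an unpredictable, timestamped value proves the statement
    # was composed after that value became public.
    return f"{statement} | proof-of-freshness: {block_hash}"

def verify_liveness_proof(proof: str, known_block_hash: str) -> bool:
    # A verifier checks the quoted hash against the public chain record.
    return proof.endswith(known_block_hash)

proof = make_liveness_proof("I am alive", latest_block_hash)
assert verify_liveness_proof(proof, latest_block_hash)
```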

But a decade from now, if creating real-time imagery of people from scratch becomes trivial, such authentication may no longer be enough.

Elon Musk and Mark Zuckerberg are waging a war of words over the future of AI

Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg are waging a public debate over the merits of AI. Musk has said in the past that AI could be potentially very damaging to humans, and Zuckerberg recently called such doomsday predictions "irresponsible." Musk responded on Twitter, calling Zuckerberg's understanding of AI "limited." There is, however, one thing they're able to agree on: It will affect income and the labor market.

Gene Munster on the AI debate: I'm on team Zuck

Gene Munster, founding partner of Loup Ventures and former star Apple analyst at Piper Jaffray, discusses the future of artificial intelligence (AI), highlighting Google, Amazon and Apple as the three companies best-positioned to lead the charge. He also aligns with Mark Zuckerberg when it comes to the raging AI debate, stressing that computers are going to get exponentially smarter over time.

A new trick can fool voice-recognition systems into hearing a recording inaccurately — and that could have catastrophic results

Artificial intelligence can accurately identify objects in an image or recognize words uttered by a human, but its algorithms don't work the same way as the human brain — and that means that they can be spoofed in ways that humans can't.

New Scientist reports that researchers from Bar-Ilan University in Israel and Facebook's AI team have shown that it's possible to subtly tweak audio clips so that a human understands them as normal but a voice-recognition AI hears something totally different. The approach works by adding a quiet layer of noise to a sound clip that contains distinctive patterns a neural network will associate with other words.

The team applied its new algorithm, called Houdini, to a series of sound clips, which it then ran through Google Voice to have them transcribed. An example of an original sound clip read:

Her bearing was graceful and animated she led her son by the hand and before her walked two maids with wax lights and silver candlesticks.

When that original was passed through Google Voice it was transcribed as:

The bearing was graceful an animated she let her son by the hand and before he walks two maids with wax lights and silver candlesticks.

But the hijacked version, which listening tests confirmed was indistinguishable from the original to human ears, was transcribed as:

Mary was grateful then admitted she let her son before the walks to Mays would like slice furnace filter count six.

The team's efforts can also be applied to other machine-learning algorithms.

Tweaking images of people, it's possible to confuse an algorithm designed to spot a human pose into thinking that a person is actually assuming a different stance, as in the image above. And by adding noise to an image of a road scene, the team was able to fool an AI algorithm usually used in autonomous-car applications for classifying features like roads and signs to instead see ... a minion. Those image-based results are similar to research published last year by researchers from the machine learning outfits OpenAI and Google Brain.
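The recipe behind these adversarial examples can be illustrated with the fast gradient sign method described in that OpenAI and Google Brain research. This is a hedged sketch, not the Houdini algorithm itself: it uses a toy logistic-regression "model" in NumPy rather than a real speech or vision network, and the weights and input are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in model: logistic regression with fixed weights over a
# 100-sample "signal" (a real attack would target a deep network).
w = rng.normal(size=100)

def predict(x):
    return 1.0 / (1.0 + np.exp(-w @ x))  # probability of the true class

x = rng.normal(size=100)  # stand-in for a clean audio frame
y = 1.0                   # true label

# Gradient of the logistic loss with respect to the input: (p - y) * w.
grad = (predict(x) - y) * w

# Fast gradient sign method: a tiny step that most increases the loss.
eps = 0.05
x_adv = x + eps * np.sign(grad)

# Every sample moved by at most eps, yet the model's confidence drops.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
assert predict(x_adv) < predict(x)
```

The larger the step `eps`, the bigger the confidence drop, at the cost of a more perceptible perturbation — exactly the trade-off the audio attack exploits by keeping its noise layer quiet.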

These so-called adversarial examples may seem like a strange area of research, but they can be used to stress-test machine-learning algorithms.

More worrying, they could also be used nefariously, to trick AIs into seeing or hearing things that aren't really there—convincing autonomous cars to see fake traffic on a road, or a smart speaker to hear false commands, for example.

Of course, actually implementing such attacks in the wild is rather different from running them in a lab, not least because injecting the data is tricky.

What's perhaps most interesting about all this is that finding a way to protect AIs from these kinds of tricks is actually quite difficult. 

As we've explained in the past, we don't truly understand the inner workings of deep neural networks, and that means that we don't know why they're receptive to such subtle features in a voice clip or image.

Until we do, adversarial examples will remain, well, adversarial for AI algorithms.

Chinese internet giant Baidu is ramping up its AI efforts

This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers.

Chinese internet search giant Baidu credited its new focus on mobile and artificial intelligence (AI) as the primary driver behind 14% year-over-year (YoY) revenue growth during Q2 2017, reaching 21 billion yuan ($3 billion).

The company recently began a restructuring of its resources, diverting them from less profitable ventures into those with potential, such as AI, big data, cloud, and video services, according to VentureBeat. The strong revenue growth could be an indication that these initiatives have started to gain traction. For instance, mobile revenue, which covers ads, video, games, and other services, accounted for 72% of total revenue during the quarter, up 10 percentage points from Q2 2016.

Over the past year, the company has pushed its AI ambitions through acquisitions, partnerships, and device launches as it strives to emerge as a leader in the burgeoning industry:

  • Baidu acquired Kitt.ai, an NLP startup, to strengthen its voice ecosystem. Kitt.ai has built out a framework to create and drive voice-based apps and chatbots across multiple platforms and devices. This can help secure a place for Baidu’s DuerOS as a platform aimed at developers looking to utilize chatbots and services based on NLP technology.
  • Baidu partnered with NVIDIA to accelerate AI development. The partnership will help to bring AI technology to cloud computing, self-driving vehicles, and AI home assistants. The partnership will bring the next-generation NVIDIA Volta GPUs to Baidu’s cloud.
  • Baidu could be poised to release a smart speaker device. Last fall, Bloomberg reported that the company made large investments in AI to prepare to launch such a device. Additionally, the company last month hired former Microsoft exec and AI expert Qi Lu as its COO, and purchased the AI-based personal assistant platform provider Raven Tech.

Baidu’s not alone in its AI endeavors, as rival Chinese tech companies Tencent and Alibaba forge ahead with their own initiatives in the space. For instance, Tencent released an AI-infused voice assistant in May, and Alibaba has already launched AI and cloud services for the health care and manufacturing industries.

Moreover, Baidu’s efforts come alongside a concerted push by the Chinese government as the country aims to become a global leader in AI by 2030. Getting in on the ground floor of AI investment could allow Baidu to reap the benefits as the market value balloons in the near- to mid-term.

Jessica Smith, research analyst for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on the voice assistant landscape that:

  • Identifies the major changes in technology and user behavior that have created the voice assistant market that exists today. 
  • Presents the major players in today's market and discusses their major weaknesses and strengths. 
  • Explores the impact this nascent market poses to other key digital industries. 
  • Identifies the major hurdles that need to be overcome before intelligent voice assistants will see mass adoption. 

5 things everyone gets wrong about artificial intelligence and what it means for our future

There are a lot of myths out there about artificial intelligence (AI).

In June, Alibaba founder Jack Ma said AI is not only a massive threat to jobs but could also spark World War III. Because of AI, he told CNBC, in 30 years we’ll work only 4 hours a day, 4 days a week.

Recode founder Kara Swisher told NPR’s “Here and Now” that Ma is “a hundred percent right,” adding that “any job that’s repetitive, that doesn’t include creativity, is finished because it can be digitized” and “it’s not crazy to imagine a society where there’s very little job availability.”

She even suggested only eldercare and childcare jobs will remain because they require “creativity” and “emotion”—something Swisher says AI can’t provide yet.

I actually find that all hard to imagine. I agree it has always been hard to predict new kinds of jobs that’ll follow a technological revolution, largely because they don’t just pop up. We create them. If AI is to become an engine of revolution, it’s up to us to imagine opportunities that will require new jobs. Apocalyptic predictions about the end of the world as we know it are not helpful.

Common confusion

So, what may be the biggest myth—Myth 1: AI is going to kill our jobs—is simply not true.

Ma and Swisher are echoing the rampant hyperbole of business and political commentators and even many technologists—many of whom seem to conflate AI, robotics, machine learning, Big Data, and so on. The most common confusion may be about AI and repetitive tasks. Automation is just computer programming, not AI. When Swisher mentions a future automated Amazon warehouse with only one human, that’s not AI.

We humans excel at systematizing, mechanizing, and automating. We've done it for ages. It takes human intelligence to automate something, but the automation that results isn't itself "intelligence"—which is something altogether different. Intelligence goes beyond most notions of "creativity" as they tend to be applied by those who get AI wrong every time they talk about it. If a job lost to automation is not replaced with another job, it's a lack of human imagination that's to blame.

In my two decades spent conceiving and making AI systems work for me, I’ve seen people time and again trying to automate basic tasks using computers and over-marketing it as AI. Meanwhile, I’ve made AI work in places it’s not supposed to, solving problems we didn’t even know how to articulate using traditional means.

For instance, several years ago, my colleagues at MIT and I posited that if we could know how a cell’s DNA was being read it would bring us a step closer to designing personalized therapies. Instead of constraining a computer to use only what humans already knew about biology, we instructed an AI to think about DNA as an economic market in which DNA regulators and genes competed—and let the computer build its own model of that, which it learned from data. Then the AI used its own model to simulate genetic behavior in seconds on a laptop, with the same accuracy that took traditional DNA circuit models days of calculations with a supercomputer.

At present, the best AIs are laboriously built and limited to one narrow problem at a time. Competition revolves around research into increasingly sophisticated and general AI toolkits, not yet AIs. The aspiration is to create AIs that partner with humans across multiple domains—like in IBM's ads for Watson. IBM's aim is to turn what is today just a powerful toolkit into an infrastructure for businesses.

The larger objective

The larger objective for AI is to create AIs that partner with us to build new narratives around problems we care to solve and can’t today—new kinds of jobs follow from the ability to solve new problems.

That’s a huge space of opportunity, but it’s difficult to explore with all these myths about AI swirling around. Let’s dispel some more of them.

Myth 2: Robots are AI. Not true. Industrial and other robots, drones, self-organizing shelves in warehouses, and even the machines we've sent to Mars are all just machines programmed to move.

Myth 3: Big Data and Analytics are AI. Wrong again. These, along with data mining, pattern recognition, and data science, are all just names for cool things computers do based on human-created models. They may be complex, but they’re not AI. Data are like your senses: just because smells can trigger memories, it doesn’t make smelling itself intelligent, and more smelling is hardly the path to more intelligence.

Myth 4: Machine Learning and Deep Learning are AI. Nope. These are just tools for programming computers to react to complex patterns—like how your email filters out spam by “learning” what millions of users have identified as spam. They’re part of the AI toolkit like an auto mechanic has wrenches. They look smart—sometimes scarily so, like when a computer beats an expert at the game Go—but they’re certainly not AI.
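The spam-filter "learning" mentioned above is worth making concrete: under the hood it is often little more than word counting. Here is a minimal naive-Bayes sketch in pure Python; the training messages and counts are invented for illustration.

```python
import math
from collections import Counter

# Toy training data: messages users have already labeled.
spam = ["win cash now", "free cash prize", "claim free prize now"]
ham = ["meeting at noon", "lunch at the cafe", "see you at noon"]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, class_docs):
    # Class prior plus Laplace-smoothed bag-of-words likelihood.
    total = sum(counts.values())
    score = math.log(class_docs / (len(spam) + len(ham)))
    for word in message.split():
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    s = log_score(message, spam_counts, len(spam))
    h = log_score(message, ham_counts, len(ham))
    return "spam" if s > h else "ham"

print(classify("free cash"))     # words that dominate spam mail
print(classify("noon meeting"))  # words that dominate normal mail
```

The "learning" is just the tallies in `spam_counts` and `ham_counts` — reacting to patterns in labeled data, exactly as described, with no understanding involved.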

Myth 5: Search engines are AI. They look smart, too, but they’re not AI. You can now search information in ways once impossible, but you—the searcher—contribute the intelligence. All the computer does is spot patterns from what you search and recommend others do the same. It doesn’t actually know any of what it finds; as a system, it’s as dumb as they come.

In my own AI work, I’ve made use of AI whenever a problem we could imagine solving with science became too complex for science’s reductive approaches. That’s because AI allows us to ask questions that are not easy to ask in traditional scientific “terms.” For instance, more than 20 years ago, my colleagues and I used AI to invent a technology to locate cellphones in an emergency faster and more accurately than GPS ever could. Traditional science didn’t help us solve the problem of finding you, so we worked on building an AI that would learn to figure out where you are so emergency services can find you.

By the way, our AI solution actually created jobs.

AI’s most important attribute isn’t processing scores of data or executing programs—all computers do that—but rather learning to fulfill tasks we humans cannot so we can reach further. It’s a partnership: we humans guide AI and learn to ask better questions.

Swisher is right, though: we ought to figure out what the next jobs are, but not by agonizing over how much some current job is creative or repetitive. I would note that the AI toolkit has already created hundreds of thousands of jobs of all kinds—Uber, Facebook, Google, Apple, Amazon, and so on.

Our choice is continuing the dystopian AI narrative about the future of jobs, or having a different conversation about making the AI we want happen so we can address problems that cannot be solved by traditional means, for which the science we have is inadequate, incomplete, or nonexistent—and imagining and creating some new jobs along the way.

Luis Perez-Breva is the head of MIT's Innovation Program and a Research Scientist at MIT's School of Engineering. He recently published ‘Innovating: A Doer’s Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong.’

These 8 CEOs are changing the way we work

It's the beginning of the week, and you've got a big meeting to prepare for.

Fortunately, you've got technology on your side. Artificial intelligence not only scheduled your meeting — after consulting your calendar and finding the best time — it's helped give you dossiers on the people you're meeting with. Meanwhile, you can send a robot off to retrieve the box of materials you'll need for the meeting from the storage room. 

Sound futuristic? Probably. But all of this technology exists today. 

Work as we know it is changing. Artificial intelligence and robotics are advancing rapidly and becoming commonplace. And young companies are using these and other emerging technologies to address some of the common complaints within the contemporary workplace.

Here are the executives at eight of the companies who are changing the way we work today — and breaking ground for the way we'll work tomorrow. 

Amy Chang, CEO of Accompany, is using data to replace personal assistants

Chang is the founder and CEO of Accompany, which has developed an app that makes it easy to prep for meetings. For all events scheduled on a calendar, Accompany's app compiles digital dossiers of attendees and their companies. The app is targeted at executives, who often rely on human assistants to gain insights into the people they are meeting with. But it could find much broader appeal, because of its potential to help users anticipate personality differences, identify common interests, or find ideal meeting locations. 

"Anytime we detect someone new in your calendar, you'll get an Executive Briefing delivered right to your inbox while you sleep," the company states on its website. "Everything you need to get up to speed, including professional history, relevant news, and key info on their company, is at your fingertips."

Accompany's service pulls information from company profiles, social media, and news stories. But in the future, it's not hard to imagine it tapping into other data, such as the kinds Facebook uses to profile individuals and categorize their tastes and interests to help advertisers target particular groups.

Chang launched Accompany in 2013, straight out of a role at Google's analytics group. The company has since raised $40.5 million in funding. The Stanford-educated Chang hails from Austin, Texas, but now runs her company out of Palo Alto, California. 



Aaron Levie, CEO of Box, is building software for far-flung workspaces

While Box may not be the first file-sharing service to catch the public's attention, it may well prove to be the default one in years to come. Box's service combines content management and collaboration tools with notably well-developed workflow management.

The company's user-friendly cloud services make data and content accessible across devices and geography, allowing employees at companies with large and scattered workforces to work together more easily. Its service is one of several that are making it simpler for businesses to have more nimble and mobile workforces and to support employees who work from home without sacrificing productivity.

Levie famously had his growing-up moment when Box was a young startup with little in the way of revenue. After Citrix offered to buy the company for about $600 million, he stared down investors, who were pressuring him to sell, and convinced his board to reject the offer.

He looks prescient in retrospect. Box's market cap now tops $2 billion. The cloud file storage and collaboration service reported better-than-expected quarterly results at the end of May, is narrowing its losses, and upped its guidance for the rest of its 2018 fiscal year.



Stewart Butterfield, CEO of Slack, has completely reinvented the way offices work together

According to some observers, if Butterfield wanted to sell his company today, he could fetch $9 billion for it.

But Butterfield doesn't seem to want to sell. Instead, he just rounded up another $250 million in funding for Slack. That gave the startup an official valuation of $5 billion, a bump up from the $3.8 billion it was valued at in 2016.

Slack's appreciation in value is a testament to its service's crazy growth. The company's app now has four million daily active users and surpassed $200 million in annual recurring revenue earlier this year. 

The success of the business is no surprise, given how Slack's service has revolutionized office communication. Its app helps corporate employees communicate seamlessly with each other, reducing the need for emails — particularly those terrible group chain messages that seem to go on without resolution. 

Like many of the other influential technologies today, Slack's service makes it easier for large organizations to work efficiently and in real-time, even when employees might be thousands of miles away from one another.



Qualcomm CEO Steve Mollenkopf: What the 'big innovation house' that powered the mobile boom is betting on next

This giant has had its moments in the spotlight. In 1999 Qualcomm was the top-performing US stock, up more than 2600%. Residents of its hometown of San Diego know well Qualcomm Stadium, once the home of the Chargers and the Padres.

Though not a household name like Apple or Samsung, Qualcomm has grown its dominance in the mobile market. Chances are good that the device you're reading this on wouldn't exist without Qualcomm. It invented many of the technologies that make our favorite devices work — modems, mobile processors, video-streaming formats, and more. Its technology touches practically every mobile device in the world.

That tech is also at the center of a major legal dispute, involving a lawsuit and countersuit, now being litigated between Qualcomm and one of its largest customers, Apple, over royalties and patents. That's on the heels of an antitrust battle Qualcomm settled with China. Meanwhile, Qualcomm is looking to close its planned $45 billion purchase of NXP, Europe's biggest chipmaker. Overseeing all this, and betting on what comes next, is Steve Mollenkopf, Qualcomm's CEO, an engineer's engineer who rose through the ranks and took the top job in 2014.

Business Insider recently spoke with Mollenkopf about the company's legal battle with Apple, the next wave of tech coming out of Qualcomm, and what it's like working with the Trump administration at a time when many of the president's policies are at odds with the tech industry's goals. This interview has been edited for clarity and length.

Steve Kovach: You've said you like to think of Qualcomm as more than just a mobile-chip company. Define Qualcomm.

Steve Mollenkopf: At its core, we drive the mobile roadmap. We invent the core technologies and the tools that allow the mobile roadmap to move forward. One of those tools is the chip because it’s the physical embodiment of that. People tend to associate Qualcomm with the chip — and they should: We’re an excellent chip company — but I think we have a larger role in the ecosystem of cellular that I think people are not aware of. And our relevance to more consumer electronics — and I would say industries — is actually just increasing.

Kovach: So what does that look like beyond the smartphone?

Mollenkopf: First of all, you’re familiar with the smartphone because about 10 years ago, before the smartphone, people like Qualcomm worked on the technology that was required to even enable the smartphone, and of course we moved that forward. Today, those same discussions, that same innovation, is occurring upstream of, let’s say, connected autonomous cars or connected healthcare or massive Internet of Things in the industrial-internet space, for example. We work on those fundamental technologies that people will use five, 10 years later, that really are disrupting their businesses. Qualcomm is this big innovation house that tries to figure out how we can get as many people as possible using the cellular roadmap. The smartphone is just the first step along that journey.

Kovach: So you’re making bets 10 years in advance that something is going to be the next big mover. We know which of the bets have taken off — phones, tablets. What about things like wearables?

Mollenkopf: If you look at our bets, I would say we bet at another level of abstraction than that. We bet at the kind of fundamental technology. So, for example, we bet that data connection was going to be very important everywhere in the world, so we invented all the technologies to enable that to occur. We bet that video compression was going to be very important worldwide because people were going to stream video and stream audio, so we worked on video- and audio-compression technologies. So we kind of bet at that level. And then what we don’t bet on is individual technology implementation — who’s going to win, even what’s going to happen.

What we try to do is create the tools that are required, the fundamental technologies that enable industries. And then we want to have as many people as possible be able to use those technologies so they can experiment. Because what happens is, you’ll find that the industry, if properly equipped, can go into many more areas than what you would’ve thought. Today, we’re betting on massive amounts of data with low latency because we know that will change the way computing happens. Or we know that we need to have robust, very highly secure communication networks, because if we don’t, things like autonomous cars that are connected to the network won’t develop unless we do it. It’s the same thing with connected healthcare. If we don’t figure out a way to have secure, connected healthcare, or connectivity and computing, we won’t have that industry develop.

Kovach: You had a big artificial-intelligence announcement, letting developers tap into your processors. How do you see it playing a role in devices? Are you going to be making a dedicated AI chip?

Mollenkopf: We firmly believe that things get both connected and smarter. And there are probably two areas that people are working on. There are a lot of people working on the data-center version of that. So you can think of the context of a body, for example, that’s like I’m working on the brain. So I’m kind of working on the specialized machines in the brain that allow you to do things. And you have a lot of companies working on that. Qualcomm is actually working on it, starting kind of at the edge of the body, the edge of the internet, and looking back and saying, what type of technology is required in the edge device? The phone — whatever is the connected computing device at the edge of the internet that actually is seeing more of the actual data. And what decisions and what type of implementation needs to be done at that edge to enable things to just make decisions and take advantage of AI?

I would say we’re probably looking further ahead than just the specialized [AI] chip. We’re looking at the broader portfolio of different types of machines that you would want to have, depending on the workload. But I do think that the same way the human body works, a lot of the really interesting work will actually be done without contacting the brain. So, for example, your hand, when you touch something hot, your muscles move away from that hot thing before your central nervous system even knows it. Because that information is so important to take an action on that the processing has to occur locally. More and more of the interesting things that happen in the connected Internet of Things will happen in that way.

Kovach: So you need to have the special processor.

Mollenkopf: You need to have the processor. And now, it’s fundamentally a low-power processor and it has to be connected, it has to have all these specialized machines to make it work. And I think we’re going to intercept AI there for sure. And so you’re seeing the first step of that. And I think you probably saw we had some early partners in AR and VR sign up.

Kovach: That seems to be the first use case — a lot of people are excited for AR and VR. Is there anything else beyond that?

Mollenkopf: Even today, people use AI to do work on the camera [with our processors]. So, for example, selecting the right scene and selecting all of these things, you can use AI to do that, as opposed to saying it’s this type of setting. And there’s just a lot of things where the algorithm improvements can implement the concepts of AI in order to improve it. We’re in early days, but we think it’s going to be yet another component of the connected device.

Kovach: Recently, a debate sprang up between Elon Musk and Mark Zuckerberg about the potential dangers and the potential benefits of AI. How do you view it?

Mollenkopf: We’re kind of at a different place in the ecosystem. We see different things, so I couldn’t comment directly on their debate. But for us, I think we’re pretty far away from the point people are really concerned about. There’s a tremendous amount of benefit to having more intelligent computing with you all the time. And I think we’ve got a lot of work to do to even make that happen. And so I think that that’s what we’re working on. We’re driving that. I think people are going to be surprised how helpful it will be to have things in their pocket that can anticipate what they need and react to that. So that’s what we’re working on.

Kovach: So you’re optimistic?

Mollenkopf: I am, but I also tend to believe technology has a very positive impact on people and economies. I don’t see this transition as being any different than that.

Kovach: Let's talk about 5G. Why do people keep telling me it's going to change everything? What will 5G allow me to do besides download data faster?

Mollenkopf: There are probably two different areas of bets that you’re hearing. One is — I’ll call it the classic "More G." In cellular, you’re going to have more capacity, more data rates, lower latency. From an operator’s point of view, it really helps them grow the capability and the network. That in and of itself is enough.

But the other aspect is that there are a lot of new industries that are intercepting the cellular roadmap at the time that 5G is coming. And 5G is being designed to enable those industries to take advantage of it more readily. If we have a much more secure network and a much more robust network, then you can put mission-critical services on there — remote delivery of healthcare, control of physical plant items in some kind of industrial Internet of Things that actively make decisions. You can have autonomous cars. I think what you’re hearing is that there are industries that really don’t use cellular in their daily operations today, and they will because of some of the things that are happening in 5G, so there’s a lot of excitement about that.

There’s another element, and it’s that there’s a lot of excitement because if you can get the bandwidth up and the latency down, the delay across the network gets smaller. That enables you to essentially take the data center and move it closer to where the data is actually used. There are a lot of people who realize that that will change things — it’ll really make distributed computing happen. And there will be a lot of new business models that pop up as a result.

If I look at the first wave of connected computing as being in your pocket — really, all we did was put a low-power computer in your pocket that’s connected to the internet all the time — the ramifications of that were terribly significant when you look at business models. The internet business model changed dramatically. You would never have an Uber, you would never have an Instagram, if you didn’t have a connected computer in your pocket that also had a camera and a GPS. We’re going to go another step.

When everything gets connected and the computing power is resident where the data exists, there are a lot of companies saying, hey, how can I change my business model? Industrial companies, you know, the normal players. That’s where the excitement is. Everyone knows that’s important. Now, it’s also being reflected in the actions of governments. Unlike some of the other transitions — the 3G, 4G transitions — people realized the societal impact of this big change. And they want to make sure that their government, their industry players are positioned well for that transition.


Kovach: What do governments want to do with this?

Mollenkopf: They know it was important to enable the internet: the people who enabled it, and made it easy for internet companies to form, created jobs and economic interest in their countries. It’s the same today: People are looking at this and saying it’s going to be so significant to economies, the growth of jobs, the growth of the economy.

The impact of 5G is tremendous. We have the numbers — it’s just huge. They don’t want to be left behind. They know that the transition is going to be very significant in the evolution of their economy. They want to make sure that they are strong. And so what you see is governments really trying to make it very easy for this technology to take hold in their area. They really try to encourage people to innovate in these areas. And we like that. It’s a great thing for Qualcomm.

Kovach: You’ve said 5G deployment is going to start in 2019. How long will it take to fully grow out to the scale we’re seeing 4G LTE at now?

Mollenkopf: My guess is it’ll go probably a little faster than LTE.

Kovach: Why?

Mollenkopf: There’s a tremendous desire on the part of people. One is, look at how much data is being used by a phone today. It’s tremendous, and it’s not going to stop.

Kovach: I want to move on to Apple. The future of mobile is being debated right here. Tell me about your position on this, where Qualcomm stands, and what you’re arguing versus what they’re arguing.

Mollenkopf: I probably wouldn’t view it in such huge terms. In the end, what this is, is really a contract dispute over the price of IP between two players. The rest of the industry is actually organized, and has been organized for decades, in the way that Qualcomm is. You’re probably seeing attempts to make this into something other than that because the contract and the legal path is probably, at least in my opinion, very clear cut in Qualcomm’s favor. So there are a lot of attempts to bring other things into it that are really not related to the debate.

Now, Qualcomm, as we talked about before, has had a very significant role in creating the industry, and the tools by which it’s very easy for people to come into that industry. And Apple would be a great example of a company that benefited dramatically from, really, the industry structure that you have in cellular. So people can come in from another industry, afresh, and they don’t need to have this big long history to be a player. We have something like 300 contracts that were freely negotiated that set the price of that and set that structure. And the contracts that we have with the contract manufacturers that supply Apple’s products are completely consistent with those contracts. We’re just trying to get paid on it now. From my perspective, it’s really a lot simpler than what people make it out to be.

Kovach: Apple's argument, and the FTC’s argument, and other governments' arguments, is that you guys have dominance in the industry and use that to your advantage. Why don’t you think that’s true?

Mollenkopf: I don’t think we have dominance in the industry, first of all. Also, if you look at all of these agreements, they were freely negotiated, in some cases many, many years ago. They continue to become more valuable to the people who negotiated them. It sure doesn’t feel like we’re dominant in the industry when I look at our position relative to the people who are making the claims. The facts are pretty much on our side on that, actually.

Kovach: But who else could manufacturers go to if they don’t use Qualcomm?

Mollenkopf: Let’s break it into two parts. We have two business models. One business model is that we sell chips into people’s phones. That chip industry, I would argue, is the most competitive chip industry in the world. If you just look at the history of it, it’s the who’s who of tech companies. And we have done very well on that because we’re a good chip company, and because we’re good at innovation. We’re good at worldwide scale, and we’re good partners with the ecosystem.

The second model is that we license our patents — and these are patents that define the entire ecosystem of innovation that comes out of Qualcomm. That business model is independent of the chip engagement. In many cases, we have people who use our chips and people who don’t use our chips, and in all cases they negotiated these contracts independent of the chip agreement. The facts are different from what people make them out to be. We’re also going to take this thing to court, and I think we’re going to feel pretty confident in how it plays out.


Kovach: Would you have sued Apple if they didn’t start this earlier in the year?

Mollenkopf: We are not a very litigious company. We rarely file offensive actions. Every time I can think of them, it’s happened in response to an attack incoming on Qualcomm. If you look at our action, we actually waited. We didn’t know what the view was from Apple. And once it became clear they instructed the contract manufacturers not to pay, then we had to, unfortunately, go through some of the actions that we had to go through. That’s not our traditional approach to resolving disputes.

Kovach: Typically, the kind of lawsuit you’re going after with them — stopping imports — if those do work out, it’s very narrow in scope. It might be a ban on imports for an older model of a device. Samsung and Apple went through this years ago. Do you feel more confident than in other cases similar to this?

Mollenkopf: It’s really important to remember there are two things going on. The primary thing Qualcomm is trying to do is get Apple and the contract manufacturers to deliver on the existing contract. That actually happens well upstream of any of the patent actions. And the second part is we have some patent actions in jurisdictions like the United States and Germany. But primarily, we’re just trying to get paid under the contract that I think people are enjoying, and have enjoyed, for almost 10 years.

And so that, I think, is something that moves faster through the court system. For example, we’re going to have a preliminary injunction hearing over the next month, and potentially a trial after that, depending on how that goes. And so it’s very important to remember that, at the end, we’re just trying to defend a contract. And on top of that, we think it’s in the best interest of our shareholders to defend our IP rights, and we have. But it’s important to remember where this is right now.

Kovach: Anything else you want to tell me related to the Apple case and Intel and all these people involved in it?

Mollenkopf: The only thing I would say is that we feel like we’re the little guy in this whole thing.

Kovach: You’re not a little guy. [Qualcomm's market cap is nearly $78 billion.]

Mollenkopf: Well, if you look, compared to the other folks, we’re pretty small. If you look at scope and scale, we feel like this is an important business for our shareholders, and it’s worth us defending it, and hopefully it’ll work out in our favor.

Kovach: In a worst-case scenario, Apple goes on their own. They’re working more and more on their own chips. They’re working on their own AI chips. Their vision is to do a lot of this in-house, or at least as much as they can in-house. What does that look like for you?

Mollenkopf: Again, we have a licensing engagement. That’s independent of anything we do with people on the products side. And then we have a products side. The products side, and the way in which we have historically worked with Apple, has been over our modem chips, and we feel very confident in the strength of our roadmap there, and the relative positioning of that roadmap against the competition. It’s probably a little bit harder today for them to get the advantage of the strength of that roadmap. But these things get resolved, and the product business is going to continue to be a strong business. This second licensing business, though — it’s important that it gets resolved in and of itself.

Kovach: Let's shift gears to politics. You personally have been to the White House to meet with President Trump. Can you talk about why you take those meetings?

Mollenkopf: We’re a big company. We have international scale. We work on things that I think are important to the United States. We need the United States to work on things that are important to Qualcomm worldwide. That involves an engagement with the administration, and it involves an engagement with other countries around the world. And we do that. And that’s kind of what you’re seeing. And I think that’s not unlike any other big company. These are very important technologies. They’re important to the debate about a lot of things internationally. It makes sense that we’re asked our opinion of things. And we go.

Kovach: Do you feel like they’re listening?

Mollenkopf: I do actually. I think, worldwide, governments are pretty responsive.

Kovach: I’m talking about the Trump administration specifically. Do you feel like they listen to what you have to say and took it into account?

Mollenkopf: Yeah. I feel like there’s a real discussion that occurs when people go talk there. And I would assume my peers feel the same way.


Kovach: There’s been a lot of blowback in Silicon Valley when tech executives take meetings with Trump. I know the Tesla employees revolted. Uber employees revolted. Google employees literally walked out in protest. How do your employees react?

Mollenkopf: We probably don’t have reactions like that. I think people understand that our business is — we’re probably in a different spot down in southern California. I think that people understand the importance of having a dialogue with policymakers worldwide.

Kovach: You don’t feel the need to come out against some of Trump's controversial policies like your peers do?

Mollenkopf: I think our role in the ecosystem is really technology. That’s what speaks for the company.

Kovach: But those policies do affect you. For example, immigration — I’m sure you rely a lot on that. How do you view potential changes in immigration policy? What do you think should be the policy there?

Mollenkopf: I think we’ve been pretty clear. The avenue through which we make these arguments is sort of directly to the policymakers, as opposed to through other methods. And it kind of makes sense. We don’t have a consumer brand. I don’t think people know much about Qualcomm. The employees know how we interact with things. It seems more natural for us not to do those things versus do them. It doesn’t mean we don’t care about issues or we don’t have our point of view. We just tend to articulate it directly. You can just tell. The company’s posture on a lot of things is sort of we don’t run out in front of things. We’re not a huge marketing company. We tend to innovate, let the innovation stand on its own. And then we talk to the ecosystem through partners. We do the same thing politically.

Kovach: The administration had a big win with the Foxconn announcement. They're opening a factory in Wisconsin. Do you think it's a realistic goal for high-end manufacturing to start producing something like Qualcomm products here in the US, or is this a one-off?

Mollenkopf: I don’t know a lot about the Foxconn thing. I don’t know enough about it. I hope it’s successful. It would be great for the US.

Kovach: Based on what you know about your own manufacturing business, do you see those kinds of businesses coming back to the US?

Mollenkopf: We already manufacture in the US. There are plants in upstate New York; there are plants in Austin, Texas. And we actually manufacture chips in both of those plants. So I feel like we’re already doing that. And then when we close the acquisition on NXP, we’ll have a very significant manufacturing footprint in Austin and in Arizona. I think we’re living proof that you can do that.

Join the conversation about this story »


Investing in Nvidia all boils down to one question (NVDA)


Nvidia is undoubtedly one of the hottest stocks this year.

Its shares have risen 61.15%, which is meteoric compared with the 9.73% growth of the S&P 500. Nvidia has been crowned the smartest company in the world by MIT, and it has carved out a major chunk of the self-driving, data-center, and AI worlds. But if you are a small investor thinking about investing in the company, what do you need to know?

Mitch Steves, an analyst at RBC, says it all boils down to one question.

"You have to be comfortable around understanding whether they will continue to have the best GPU," he said. "If the answer is yes, then they are going to be fine. If the answer is no, then they are going to have trouble down the road."

Nvidia does one thing really well. The company builds graphics processing units, or GPUs. GPUs began life as a way to increase the speed and quality of video game graphics. Rendering the huge number of shapes required by today's video games takes lots of processing power, which GPUs are good at supplying because they handle many small calculations simultaneously. Solving multiple problems at once in this fashion is called "parallel processing."

Parallel processing is the next big wave of computing and powers things like artificial intelligence. Nvidia is well placed to be a major player in the new business, according to Steves.
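The contrast between serial and parallel processing can be sketched in a few lines. The NumPy code below is only an analogy — vectorized array operations stand in for a GPU's thousands of cores — and is not actual GPU programming:

```python
# Analogy for parallel processing: apply the same simple operation to
# many data elements at once, instead of looping over them one by one.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(100_000)  # 100k brightness values in [0, 1)

# "Serial" style: handle one element at a time in a Python loop.
brightened_serial = np.array([min(p * 1.2, 1.0) for p in pixels])

# "Parallel" style: one operation applied across the whole array.
brightened_vec = np.minimum(pixels * 1.2, 1.0)

# Both approaches compute the same result; the second maps naturally
# onto hardware that runs many identical operations in parallel.
assert np.allclose(brightened_serial, brightened_vec)
```

On real hardware, frameworks like CUDA apply this same pattern across thousands of GPU cores at once.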

"It's practically impossible to win in tech if you don't have the best product," he said. "If you're pushing out the bleeding edge of tech, you have to have the best product, because otherwise you are going to get crushed."


At a recent GPU conference, Nvidia announced its new "Volta" architecture. The new Volta generation of GPUs claims processing 12 times faster than the previous "Pascal" architecture.

The Volta product is initially built to be installed in data centers. Artificial intelligence research is primarily done on large data centers, and Nvidia's chips are the heart and soul of those data centers, according to Steves.

Artificial intelligence is the technology powering advances in self-driving cars, personal assistants, image recognition and so much more. Faster chips can speed up research which brings AI advances to market faster.

Steves said this new Volta technology is the next big thing not only for Nvidia, but for the artificial intelligence community as a whole. "The Volta is clearly just another step function upward, so that's [the] reason why when they announced that product, the stock moved up $20," Steves said.

When asked why he's bullish on Nvidia, Steves said: "If it's Intel, Nvidia, and AMD and you ask me to choose one, it's going to be Nvidia by a country mile. Essentially, their product is the best."




Google wants to use artificial intelligence to hide crashing Android apps on the Play Store (GOOG)



According to a new blog post (which we first saw reported on The Verge), Google is planning to use machine-learning algorithms to analyze "performance data, user engagement, and user ratings" of the Play Store's apps, and to downgrade buggy, crashing ones to stop them from rising to the top of listings.

This means that the most visible apps in the Play Store will be less likely to crash, behave weirdly, or even ask for more permissions than they should.

The search giant had already put artificial intelligence algorithms in place to determine an app's quality, but it's now acting upon its findings to actually curate Android's digital store.
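Google hasn't published the model behind this ranking, so purely as a loose illustration, here is a hypothetical scoring function — the features and weights below are invented, not Google's:

```python
# Hypothetical sketch of an app-quality score built from the signals
# the blog post mentions: performance, engagement, and ratings.
# The features and weights are invented for illustration only.
def quality_score(crash_rate: float, engagement: float, rating: float) -> float:
    """Blend app-health signals into one ranking score in [0, 1].

    crash_rate: fraction of sessions that crash (0..1, lower is better)
    engagement: normalized user-engagement signal (0..1)
    rating:     average user rating (1..5 stars)
    """
    return 0.4 * (1 - crash_rate) + 0.3 * engagement + 0.3 * (rating / 5)

# An otherwise-identical app is ranked lower when it crashes often.
stable_app = quality_score(crash_rate=0.01, engagement=0.8, rating=4.5)
buggy_app = quality_score(crash_rate=0.30, engagement=0.8, rating=4.5)
assert stable_app > buggy_app
```

The real system presumably learns its weighting from data rather than using fixed coefficients like these.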

Two months ago, Google also announced the new "Android Excellence" program, which expands on what was known as the Play Store's "Editor's Choice". There are now several human-curated categories of rotating apps that Googlers tweak every now and then to highlight apps that they think are good.

The only problem with the new machine-learning-driven ranking is that, because the algorithms act somewhat independently, there is no way for developers to know whether their app is being downgraded, or why.


An AI created and named new paint colors — and it went hilariously wrong



The INSIDER Summary:

  • Neural networking researcher Janelle Shane programmed an artificial intelligence (AI) to create and name paint colors.
  • The AI came up with names that range from the nonsensical to the hilarious.
  • I, for one, can't wait to paint my bedroom "Dad," "Stanky Bean," "Dorky Brown," or "Farty Red."


What's in a name?

Buying paint is always an enlightening affair. The names that companies come up with for their pigments are usually worth a chuckle or two, at least. Browsing through the color swatches, you're likely to stumble upon some curious choices, like a soft lilac color called Potentially Purple, a light blue known as Salty Tear, or maybe just a nondescript Gray Area. Even so, humans may no longer have a monopoly on the esoteric color-naming game.

Janelle Shane, famous for using neural networks to come up with death metal band names like "Verk" or "Chaorug," or cake and cookie recipes featuring novel ingredients like horseradish and chicken, has programmed software to create new paint colors and give them names. The results range from the nonsensical to the hilarious, while also circling around the terrifying.


Some of them would fit in nicely with human-created paints, like a sunny yellow color named Bright Beach or a muted gray called Frosty Stone. Others like Farty Red and Rose Colon are something out of a Lynchian nightmare.

AI Artistry

This is not the first attempt at programming artificial intelligence (AI) to be creative. There are plenty of examples of the beginnings of computer creativity. Last holiday season, an AI treated us to a taste of an uncanny-valley Christmas with a "neural karaoke" Christmas song. Recently, movie lovers could watch a short film that was written by an AI — and also happened to star David Hasselhoff. Computers are also becoming skilled designers, as shown by the craftsmanship of this chair.

The offerings so far haven't exactly been the greatest compositions ever created, but they are indicative of how rapidly AI is growing. Google's director of engineering, Ray Kurzweil, sees the Singularity (the point at which AI passes humans in terms of intelligence) as imminent, predicting it will happen around 2045. He looks forward to this time and even speaks of the benefits in creative terms: "We're going to be funnier, we're going to be better at music."

Just as researchers teach AI to play (and win) games to increase its levels of thought, so too can they use creative applications to develop AI that's capable of thinking more like us. In the meantime, we'll just hang out for the laughs, courtesy of a purpley hue called "Dorky Brown."


5 of the coolest things we heard about the future of technology at Moscow's quantum tech summit


At the edge of contemporary science, a new era of technology is on the verge of bringing the future into the present.

Today’s gadgets and electronics are beginning to bump against the ceiling of what is possible under classical technology, so scientists and engineers are turning to quantum physics to bring our sci-fi dreams into reality.

"Quantum physics is in the process of unlocking the next generation of killer technology,” says Alexey Fedorov, research fellow at the Russian Quantum Center. “It's going to change cybersecurity, material science, AI research, and metrology."

The International Conference on Quantum Technologies, the RQC’s biennial advocacy event, is the premier place for scientists to present their work and discuss applications of their research. And as we begin to hit the ceiling of what classical technology is capable of, it’s only by shifting the attention to the kooky quantum world that we can explore practical science in a more future-friendly way.

For one week in Moscow, the ICQT makes the people doing this research the stars of the show. Here are the five coolest things we heard at the fourth International Conference on Quantum Technologies: 


A universal quantum computer is closer to reality than ever before.

The white whale of quantum technology is undoubtedly the quantum computer, first proposed in the early 1980s by celebrated physicist Richard Feynman. Various efforts are underway by research and business interests around the world to build such a device, which would use quantum states to solve problems that are either impossible or prohibitively inefficient for classical computers to solve. Right now, the star of the show is surely the work being done at Harvard University under Mikhail Lukin.

Lukin and his team have built a quantum computer that harnesses 51 quantum bits to run its calculations, making it the most powerful quantum computer in existence. Most of today’s quantum computers are best likened to the jolty, primitive airplanes of early newsreel footage — they’re far from perfect. But Lukin’s team has built a quantum computer that’s like “a plane that can take off, turn smoothly, and land,” says Serguei Beloussov, co-founder of the Russian Quantum Center.



Quantum technology could eventually be used to make phone calls to Mars.

While Elon Musk publicly dreams about moon bases and Mars metropolises, there’s a fundamental problem in communicating with the people who may move there. Depending on where Earth and Mars are in their orbits around the Sun, it can take between three and 22 minutes for radio signals (moving at the speed of light) to travel between the two bodies. Without a radical shift in communication paradigms, Mars-based internet is going to suck. But quantum technology could be harnessed to enable instantaneous interplanetary calls, with no delay.
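Those delay figures follow directly from the speed of light; a quick back-of-the-envelope check, using rough closest and farthest Earth-Mars distances:

```python
# Back-of-the-envelope one-way light-travel times between Earth and Mars.
# The distances are rough, commonly cited values, not precise ephemerides.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way signal delay in minutes for light crossing distance_km."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

CLOSEST_KM = 54.6e6    # approximate closest approach
FARTHEST_KM = 401e6    # approximate maximum separation

print(round(one_way_delay_minutes(CLOSEST_KM), 1))   # 3.0 minutes
print(round(one_way_delay_minutes(FARTHEST_KM), 1))  # 22.3 minutes
```

Any classical signal, radio or optical, is stuck with that delay, which is why the proposals below reach for quantum effects instead.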

Dr. Arkady Fedorov is the head of the Superconducting Quantum Devices Laboratory at the University of Queensland, and his research is concerned with using superconducting qubits as artificial atoms. Qubits are most famously used for information processing in quantum computing, but Fedorov’s instead interact with microwaves and can be controlled within certain parameters. If we want to harness this for instantaneous long-distance communication, he says, the only missing ingredient is an adapter that turns microwaves into optical light.

Such adapters are “a hot topic right now,” says Fedorov. “It is hard to turn microwaves into optical light because successful quantum operations require an efficiency close to 100%, but a number of groups have proposals on how to do it.” His research may one day be used to develop an interplanetary telephone.



Unbreakable codes are not only real, but they’re old news.

The “observer effect” holds that to observe a situation is to change it. This means that if you create a message by harnessing the quirks of the quantum world, you can send that message in such a way that it’s unreadable by everyone except your intended recipient. If a third party intercepts the message, they of course observe it and fundamentally change it, leaving your initial communication protected.

Quantum encryption technology sounds like something between "Star Trek" and James Bond, but it has been a commercial pursuit of the Switzerland-based ID Quantique for 16 years. The company is approaching its two-decade anniversary of making and selling quantum communication hardware used by research organizations and national governments alike.


