Channel: Artificial Intelligence

A lot of people who make over $350,000 are about to get replaced by software


Artificial intelligence is poised to automate lots of service jobs. The White House has estimated there's an 83% chance that someone making less than $20 an hour will eventually lose their job to a computer. That means gigs like customer-service rep could soon be extinct.

But it's not just low-paying positions that will get replaced. AI could also wipe out high-earning jobs, including those in the top 5% of American salaries.

Fast.

That's the theme of New York Times reporter Nathaniel Popper's new feature, "The Robots Are Coming for Wall Street."

The piece is framed around Daniel Nadler, the founder of Kensho, an analytics company that's transforming finance. By 2026, Nadler thinks somewhere between 33% and 50% of finance employees will lose their jobs to automation software. As a result, mega-firms like Goldman Sachs will be getting "significantly smaller."

That's because Kensho does analytics work — previously an artisanal skill within Wall Street — at high speeds. Instead of sorting through news clippings to create a report, Kensho generates them from its database of finance analytics — essentially doing the work of researchers and analysts algorithmically.

Type in "Syrian Civil War" into Kensho and you'll get a number of data sets showing how major assets like oil and currencies reacted to events in the conflict, Popper reports. The minutes-long search "‘would have taken days, probably 40 man-hours, from people who were making an average of $350,000 to $500,000 a year," says Nadler.

Goldman is actually a huge investor in Kensho. It will be interesting, to say the least, to see how that investment pays off.

When speaking with Tech Insider about Google's huge AI victory in the game of Go, Brown University computer scientist Michael L. Littman explained that in any game with fixed rules, computers would win.

"What we're finding is that any kind of computational challenge that is sufficiently well-defined, we can build a machine that can do better," Littman says. "We can build machines that are optimized to that one task, and people are not optimized to one task. Once you narrow the task to playing Go, the machine is going to be better, ultimately."

Perhaps machines are just more optimized for certain types of white-collar finance work, too.



The former Prime Minister of Norway says people will need to be 'taken care of' when robots take their jobs


Gro Brundtland, the former Prime Minister of Norway, issued a warning to governments and technology companies around the world on Tuesday as they race to develop the latest artificial intelligence (AI) systems.

Those that see their jobs displaced by robots in coming decades will need to be looked after, said Brundtland, who served three terms as Prime Minister of Norway (1981, 1986–89, and 1990–96).

"People who are not able to adapt or re-train into the society that is there have to be taken care of," said Brundtland at the UBS Future of Work conference in London when she was asked if humans need to prepare for an AI revolution.

Self-thinking computers and machines are poised to take five million human jobs by 2020, according to the World Economic Forum. They also have the potential to reduce energy consumption and significantly advance scientific research into diseases like cancer.

The former Labour politician made her comments after Wired editor David Rowan gave a presentation where he claimed that robots will replace jobs across all walks of life, from taxi drivers and accountants to surgeons and solicitors.

"I’m sure that what he (David Rowan) is describing is something that is coming," said Brundtland.

Brundtland said that it's up to governments to regulate new sci-fi technologies as they start to turn into a reality. "That’s my assessment of this," she said.

"Although I’m 78, as I’m listening to the future of the working life and society, I realise people have to be younger than me to be those who are going to carry through these things," she continued. "But many of the same principles that have dominated my own thinking are still there."


Researchers are figuring out how to make virtual assistants understand your feelings


Artificial intelligence (AI) is all about getting a machine to mimic a human in every way: thought, speech, movement. That’s why one of the tests for AI is the Turing test: whether a robot can fool a human into thinking it is conversing with another of its own species.

An integral part of accomplishing this is making the AI recognize human emotions. So one research lab is working on the next iteration of virtual assistants, those that can recognize and react to emotional cues.

SRI International, the birthplace of Siri, is working on better chatbots and phone assistants that can detect agitation, confusion, and other emotional states, and respond accordingly.

The technology, called SenSay Analytics, is envisioned to analyze human behaviour that can indicate emotion, like typing patterns, speech tone, facial expressions, and body movements. This would then be used to tailor the machine’s reaction. For example, a virtual assistant, via phone or face-to-face, would slow down when the customer is confused, or try to explain what it is doing. Current tech can actually detect emotion already, but it is the reaction side that SRI is trying to polish.
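As a rough illustration of that reaction side, here is a minimal sketch. The emotion label is assumed to come from an upstream detector (the hard part, and the part SenSay actually supplies); the response styles are invented for illustration.

```python
# Minimal sketch of emotion-adaptive responses. The detector is assumed;
# SenSay's real models for tone, typing, and facial cues are not public,
# and these styles are invented for illustration.

RESPONSE_STYLES = {
    "confused": {"rate": 0.8, "prefix": "Let me go over that more slowly. "},
    "agitated": {"rate": 1.0, "prefix": "I understand this is frustrating. "},
    "neutral":  {"rate": 1.0, "prefix": ""},
}

def respond(message: str, detected_emotion: str) -> dict:
    """Tailor speaking rate and wording to the user's detected emotional state."""
    style = RESPONSE_STYLES.get(detected_emotion, RESPONSE_STYLES["neutral"])
    return {"text": style["prefix"] + message, "speech_rate": style["rate"]}

print(respond("Your refund was issued on Tuesday.", "confused"))
# {'text': 'Let me go over that more slowly. Your refund was issued on Tuesday.',
#  'speech_rate': 0.8}
```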

Being like "Her"

Ever watched "Her"? The movie depicts the logical endgame of this technology: a natural-language assistant that can interact with the user and predict their needs, without a humanoid body.

Although the film also showed the imperfection of human-like AI, the movie illustrates the need for adequate language and emotional understanding in the technology. In the future, our chatbots would have to be able to break down the nuances of language and human emotion, in order to actually understand what humans are saying.

These developments from SRI could be a piece of that puzzle, interpreting verbal and non-verbal cues in order to gauge and react to emotion.


Amazon Echo will bring artificial intelligence into our lives much sooner than expected


What’s all the fuss about the voice-activated home speaker that Amazon is due to release in the UK and Germany in late September?

This gadget has been available in the US for over a year and has proven a minor hit, with sales estimates between 1.6m and 3m.

But these figures belie the potential impact this kind of artificial intelligence device could have on our lives in the near future.

Echo doesn’t just let you switch on your music by voice command.

It’s the first of what will be several types of smart home appliances that work beyond simple tasks like playing music or turning on a light.

It uses an artificial intelligence assistant app called Alexa to allow users to access the information and services of the internet and control personal organisation tools.

You can order a pizza or a taxi, or check the weather or your diary, all just by speaking to Alexa. In this way, it is similar to Apple’s Siri but has advances in microphone and AI technology that make it significantly more accurate than past devices in understanding and executing commands – and from anywhere in your home that it can hear you.

I’ve been living with Amazon Echo for a year now, having imported it from the US via eBay. It’s an astonishing piece of kit that has to be experienced to see exactly why it has the potential to make the idea of a personal assistant smart home hub successful. It’s not surprising that Amazon’s CEO Jeff Bezos has said it is potentially the fourth core Amazon service after its marketplace, cloud services and mobile devices.

Many of us have already become used to poor voice-recognition software and error-prone requests on our mobile devices. But Amazon started developing a high-precision microphone and more sophisticated voice recognition system a full 12 months before its competitors and has gained a significant headstart. The big difference with other AI assistants is that instead of a single piece of software, Alexa uses 300 of its own apps (which Amazon calls “skills”) to provide the device’s different capabilities. This creates a system that is far more integrated and sophisticated yet simple to use with minimal setup.
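The skills model is easy to picture in code: a parsed voice command is routed to one of many small, single-purpose handlers rather than one monolithic program. The sketch below uses invented intent names and is not Amazon's actual Alexa implementation.

```python
# Hedged sketch of the "skills" idea: route a parsed command to one of many
# small handlers. Intent names and handlers are invented for illustration.

def order_pizza(slots):  return f"Ordering a {slots.get('size', 'medium')} pizza."
def get_weather(slots):  return f"Fetching the weather for {slots.get('city', 'your area')}."
def check_diary(slots):  return "You have two appointments today."

SKILLS = {
    "OrderPizzaIntent": order_pizza,
    "GetWeatherIntent": get_weather,
    "CheckDiaryIntent": check_diary,
}

def handle_utterance(intent: str, slots: dict) -> str:
    """Dispatch one parsed utterance to the matching skill, if any."""
    skill = SKILLS.get(intent)
    return skill(slots) if skill else "Sorry, I don't know how to do that yet."

print(handle_utterance("GetWeatherIntent", {"city": "London"}))
```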

This is a very significant development in the rise of the connected home, which is coming as we move from PCs and mobile devices to the era of the internet of things, when computer chips will be in objects all around us. Echo is arguably the first successful product to bridge that gap. Its working voice recognition service and connected sensors essentially link your home to a marketplace supply chain that services many (if not all) of your needs.

It’s still early days for this kind of device, but it raises the question of how other shops, banks and entertainment companies might need to respond to the technology, because it could effectively place a middle-man between them and their customers. If you want to order something, instead of going to the company that provides it directly, you just go to Amazon through your Echo. It’s what the IT industry might call an “aggregator” or a “service broker platform”. This is the much-spoken-of but near-mythical goal of many tech companies who want to become the service provider of all other services.


Any downsides?

The US feedback on Echo has been very strong from early adopters.

In my experience, the argument that it doesn’t have a screen and therefore is harder to interact with disappears when you actually use the device.

The voice interaction is natural and if there is a problem with the system it’s more to do with learning the range of “skills” the device can perform than getting them to work.

A device that is constantly listening for your commands (although, the company is at pains to make clear, not the rest of your unrelated conversations) will no doubt raise concerns about privacy, just as all our smart devices do.

Echo and Alexa work through the existing security protocols that many people already use when online shopping or accessing cloud web services through Amazon.

But how secure these systems really are – and their potential for misuse – may come under greater scrutiny once Amazon (or any smart home company) has access not just to our bank details but our private conversations, too.

Echo represents a new kind of interface that will likely make voice-activated services, along with the emerging concepts of virtual and augmented reality, the cutting-edge way we interact with computers in 2017 and beyond.

Google has already launched Google Home in the US (a full year later) and other firms are developing similar solutions.

The astonishing thing about this is that it’s a vision of the future that’s arriving much sooner than expected.

We’re still far from general artificial intelligence, with machines fully able to think and perform like humans, but the days of the keyboard and mouse are numbered.


Tech billionaire Mike Lynch: 'You're seeing the beginning of a new age'


Mike Lynch is making a bet on robot lawyers.

This Wednesday, the tech billionaire investor announced an investment in Luminance, a newly launched startup that uses artificial intelligence to read contracts, helping law firms with the arduous process of due diligence for mergers and acquisitions (M&A).

It's not a "sexy" piece of technology, Lynch argues — but one that has huge implications for the way we live our lives, and is indicative of a quiet revolution in artificial intelligence.

"It's like seeing a steam engine for the first time. What this is is probably an example of what's going to be changing a lot of things. If you can get machine technology to be reading contracts, it's going to be changing a lot of the world around us ... you're seeing the beginning of a new age."

Lynch, 52, is best known for founding Autonomy, an enterprise search company sold to Hewlett-Packard in an $11 billion (£8.3 billion) deal that has left the British entrepreneur embroiled in ongoing lawsuits. He has since founded the venture capital firm Invoke Capital — the vehicle through which the investment in Luminance was made.

This week, Business Insider sat down with the investor to discuss Luminance, Brexit, his augmented reality plans, and why he likes having an "unfair advantage."

A multi-million dollar investment in a Cambridge team

Mike Lynch is an investor in Luminance — but was also instrumental in helping create it.

"The bit that makes it possible is the machine learning, and that was being done by some research people at Cambridge, and I actually have a connection because my PhD a long, long time ago was in machine learning," Lynch said. "I was introduced to them, and what they were doing looked great, but I said to them 'look, you gotta go and meet some real world people.'

"So they started getting real data and they met up with [law firm] Slaughter and May, and basically the machine learnt from Slaughter and May how to do these thing and at that point they made a little company. They got a CEO who is a lady who'd actually been involved in a lot of M&A deals over their career and we funded it, and it's been developing the product, and today it comes out into the bright lights of day."

Invoke is Luminance's sole investor, and invested a figure in the "low millions," Lynch said. The startup's valuation is not being disclosed. Its tech has already been used on live deals, and it has a contract signed with Slaughter and May. While the focus is — for now — on M&A analysis, the ultimate vision is more ambitious.

Is this the end of lawyers? (No.)

Long-term, the plan is to apply Luminance's contract-reading technology to other use cases. Analysing procurement deals for their relative value, or continually monitoring extant contracts to make sure they stay compliant with changes in the law, are two examples Lynch gives.

"This is a generic technology, that's really the big story today. If you think of how much of the world is out there handling contracts at the moment, reading them, negotiating them ... these are technologies that will automate and support all those tasks in the future. So although not quite as sexy as driverless cars, it's likely to have a very big impact."

This is, Lynch said, "the beginning of a new age."

So is this the beginning of the end for the human legal profession? "No," he countered. "The thing about being a lawyer is you have all this education, and you do all this highly skilled stuff, and then you spend a lot of time doing the grunt work. What we're trying to do is make sure the lawyers are doing the clever bit, and that's what the clients want."

"We like an unfair advantage."

Luminance is an example of what the investor calls a "fundamental technology."

At Invoke, the "first thing we look for is fundamental technology. We like an unfair advantage. So don't bring us your great new social media shoe store. And the second thing is because we've done a lot of this in the past, is we're only interested in things that we think can be very big."

How big? "We look at businesses that we think can be billion-dollar-plus size."

Will Luminance hit that in five years? "We don't put a timescale on it, but hey — five years is a long time in our world. Someone once told me tech years should be in dog years."

Invoke has other unannounced projects still in the pipeline — including one focusing on augmented and virtual reality, another super-buzzy area. "We've got an interesting one which is all to do with augmented reality and VR — you'll see when we announce that one," Lynch said. "It's cool."

 

Brexit: "If we don't start to give those messages soon, I do think it could have some effect."

Lynch has also been involved in the political sphere, acting as a scientific advisor to former British leader David Cameron. He still holds the same role under new Prime Minister Theresa May, he told Business Insider, though he has yet to be called up to help her: "I suspect the Prime Minister is probably busy with some quite short term things right now," he said with a chuckle.

He was also vocal in the run-up to the referendum on Britain's membership of the European Union, calling Brexit "lunacy."

"I’ve sat in rooms where overseas organisations have made decisions about where they’re going to put a factory or whether they are going to invest in a R&D centre,"he said in an interview with Leaders In."At the moment, they think about it should it be Holland, should it be Britain. It’s never going to be Britain if we’re outside the EU. They just will not do that."

Post-Brexit, he takes a more diplomatic tone — but clearly still has serious reservations. "I think it's going to present a significant number of short-to-medium-term challenges, and the outcome will be dependent on how we handle those."

Whether or not it affects how he does business depends on the exact terms of the UK's withdrawal from the EU. "For example, in tech we're probably less concerned about tariffs, because of the way tech works. But we're very concerned about being able to get the best talent from around the world, so these are the kind of questions the government has to think about."

And will it threaten Britain's status as tech capital of Europe? "It all comes down to what decision is made. If we start to make noises that we welcome talent from wherever, be it the EU or the rest of the world ... then I think we'll be fine. If we don't start to give those messages soon, I do think it could have some effect."


Google just unveiled new tools to fight harassment online (GOOG, GOOGL)


Google is doing its part to fight online harassment with new tools powered by artificial intelligence.  

A new piece from Wired's Andy Greenberg describes the technology, which was created by Google subsidiary Jigsaw. Jigsaw was previously Google's think-tank division and was spun out in February to focus on projects that use technology to solve geopolitical problems.

Called Conversation AI, Jigsaw's new software is aimed at blocking vitriolic or harmful statements online. The software uses machine learning to automatically catch abusive language, giving it an "attack score" from 0 to 100 (with 100 being extremely harmful and 0 being not at all harmful). The technology will first be tested in the comments section of The New York Times, and Wikipedia also plans on using it, though the company hasn't said how, according to Wired.

The technology will eventually be open-source, so websites or social media platforms could use it to catch abuse before it even hits its intended target. According to Wired, Conversation AI can "automatically flag insults, scold harassers, or even auto-delete toxic language."

It's not clear how accurate the technology is quite yet — Greenberg discovered some distinct flaws in the software in his own tests, but Google told Wired the system achieves 92% certainty with a 10% false-positive rate, and that it will continue to improve over time.


Artificial intelligence that spots abuse and harassment could be the answer to internet trolls


It’s being used to fight ISIS, and now, an app developed by a subsidiary of Google is tackling another kind of vitriol — online trolls.

Jigsaw, an organization that once existed as Google’s think tank, has now taken on a new life of its own and has been tasked with using technology to address a range of geopolitical issues.

The latest software to come out of the group is an artificial intelligence tool known as Conversation AI.

As Wired reports, “the software is designed to use machine learning to automatically spot the language of abuse and harassment — with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators.”

Conversation AI learns and automatically flags problematic language, and assigns it an “attack score” that ranges from 0 to 100. A score of 0 suggests that the language in question is not at all abusive, whereas a score of 100 suggests that it is extremely harmful.

And it looks like it’s working. As Wired notes, “Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10-percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack.”
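For a sense of how a site might act on such scores, here is a minimal sketch. The thresholds are invented for illustration, and the scoring model itself is assumed to exist upstream.

```python
# Sketch of acting on Conversation AI-style "attack scores"
# (0 = benign, 100 = extremely harmful). The scoring model is assumed;
# the thresholds below are invented for illustration.

def moderate(attack_score: float) -> str:
    """Map an attack score to a hypothetical moderation action."""
    if attack_score >= 90:
        return "auto-delete"            # near-certain abuse
    if attack_score >= 60:
        return "hold for human review"
    if attack_score >= 30:
        return "warn the author"
    return "publish"

for text, score in [("Great article!", 4), ("You are an idiot.", 93)]:
    print(f"{text!r} -> {moderate(score)}")
```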


Currently, the plan is to test Conversation AI first in the New York Times’ comments section (though perhaps YouTube would be a better place to start), and Wikipedia also plans on making use of the software, though it’s unclear how.

“I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” Jigsaw founder and president Jared Cohen told Wired, “to do everything we can to level the playing field.”

Eventually, Conversation AI will become open source so that any site can make use of its anti-trolling capabilities to protect its users. So advanced is the technology already that it can “automatically flag insults, scold harassers, or even auto-delete toxic language.”

So look out, internet trolls of the world. It looks like your days of abuse may be numbered.


Facebook has built a world-class, murderous, Doom-playing AI (FB)


Researchers from Facebook have built a highly efficient piece of artificial intelligence (AI), programmed to slaughter anything in its path.

Luckily, it can only play "Doom".

A team from the social network blasted its way to glory this week in a contest to build programs capable of autonomously playing the classic first-person shooter game.

Called VizDoom, it's an irreverent test of skill for AI researchers, focused on eight-player deathmatches. Think your software is smart? Prove it: Defeat your rivals in a bloody eight-player shoot-out.

(Engadget's Aaron Soupporis has talked to some of the human teams, and written a deep dive on their strategies and the outcome of the matches.)

AI, of wildly varying levels of sophistication, has been in games since the earliest days of the medium. But what sets VizDoom apart is the kind of input the AI competitors are given. Traditionally, computer-controlled entities will be fed a stream of incomprehensible (in human terms) data: Coordinates, shifting variables, internal data (like maps) and so on, which they respond to according to preset rules.

In contrast, the VizDoom competitors are only given access to the "screen buffer" — a.k.a. the contents of the computer screen — to decide what to do, in the same way human players do. And the competing AIs also "learn" via constant reinforcement, adapting their strategies over time as they figure out what works and what doesn't.

In short, it's forced to play exactly as a human does.
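For the curious, the loop below shows roughly what that setup looks like with the open-source vizdoom Python package that the contest is built on. The random policy is a stand-in for the entrants' trained agents, and the bundled "basic" scenario stands in for the tournament maps.

```python
# Roughly what a contest agent's loop looks like with the open-source
# `vizdoom` package. The random policy is a stand-in for a trained agent.
# (Module attributes such as `scenarios_path` can vary between versions.)
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config(f"{vzd.scenarios_path}/basic.cfg")  # tiny map: one enemy
game.init()

actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # move left, move right, attack

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    pixels = state.screen_buffer          # the raw screen: all the agent sees
    reward = game.make_action(random.choice(actions))  # a real entrant learns from this
game.close()
```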

There were two categories of play in the competition: Facebook's F1 team won the first, and chipmaker Intel's IntelAct team won the second.

The one that Facebook won was simpler. It's a deathmatch that takes place on a map that AI competitors have played on and trained on in advance, with only one weapon available — the rocket launcher. Ammo and medkits (health) are scattered around to collect as required.

You can watch them battle in the video below. By and large, they're pretty good.

But the second is more complex — and makes Intel's achievement especially impressive. It takes place on a map the AIs have never seen before, with a variety of mystery weapons and items in it. The competitors can't just put their pre-match training into effect — they have to actively learn the map in their battle for supremacy.

As you can see in the video below, this is where the competitors' failings become more apparent. PotatoesArePrettyOx (in the top right) struggles to walk in a straight line — when it's not just shooting the wall.

At first glance, Intel's and Facebook's achievements seem both impressive and slightly pointless. But you have to remember that this isn't really about Doom. The teams are stacked with incredibly sophisticated researchers, at the cutting edge of AI development. Just like Google DeepMind's successes playing the board game Go, VizDoom is an accessible way to demonstrate to the world the nascent possibilities of artificial intelligence tech — and also encourage friendly rivalries between professionals at different companies.

But that said, developing AI designed to destroy everything it sees with explosives in the most efficient way possible won't do much to assuage the fears of people worried about the rise of "killer robots."



Hollywood bigshots gave this startup founder the idea to write music with robots


When an amateur filmmaker wants to soundtrack a video, they just pick one of their favorite songs and hope it doesn't get zapped with a copyright takedown.

But for professional content, like the web videos from publications that are flooding Facebook, or for commercials, that's not an option. You need music that sounds good, that's appropriate for the video, and most importantly, is affordable to use legally.

That's what Hollywood producers kept telling Drew Silverstein when he worked alongside Hollywood composers like Christopher Lennertz. Silverstein could write music, but having a human write music is a long and expensive process, which means it's simply not practical for many uses.

He's one of the founders of Amper Music, a startup writing software that writes music when given a few descriptors — "dark and epic," or "happy classic rock."

After feedback from producers, he decided to hunker down and create a "creative artificial intelligence," as he likes to call it. 

"I built the algorithm initially in a massive Microsoft Excel spreadsheet. It took forever, but it worked," Silverstein told Business Insider.

Now Amper Music is a three-year-old startup with a handful of employees and seed funding from Brooklyn Bridge Ventures, Two Sigma Ventures, and Advancit Capital, with a product that generates an original song in a web browser in seconds, while doing the algorithmic processing in the cloud.

And the music it writes sounds good — not so good you'd want to listen to it on purpose, but more than good enough to soundtrack a car commercial. Silverstein says Amper's music has passed "blind taste tests," and listeners usually can't tell that it was made by a computer. 

Listen for yourself: Here's an example of a "dark epic" song Amper's algorithm wrote: 

 And here's a song that might be better for a reality TV show: 

Commodity music 


Silverstein is aware that people might see his software as a threat to working musicians. That's not how he approaches it — instead, he sees machine generated music as totally different from music written for expression.

He calls what his software writes "commodity music." He thinks that music valued for its end purpose, say, soundtracking a viral video, won't encroach on artistic space. 

"Everyone at the company is a musician or audio professional," he explains, so his staff sees Amper as more of a tool, for creators to use, as opposed to a push-button robot threatening to take performers' jobs. 

The Amper web app allows a film maker or creator to set the mood, style, and even emphasis points of a short song — "basically, you can have the same conversation you could have with Hans Zimmer," a famous Hollywood composer, Silverstein says.  

Press "render" and in minutes you'll have a custom song you can use freely. Silverstein won't reveal too many details about the algorithm that Amper uses to write its music, citing it as the company's main trade secret. 

Silverstein's secrecy is understandable: several other companies are trying to replace stock music. Google recently revealed Magenta, an open-source project that uses Google's machine learning expertise to create "compelling art and music." And British startup Jukedeck is working on a similar product as well.

Amper's founder isn't too worried about competition yet, and sees it as a reflection that there's a real demand for custom, cheap, royalty-free music generated by creative artificial intelligence. "Creative AI isn't a mainstream thing yet, but our peers and investors think it's a big deal, and it will be a big deal," he says. 


Google says its AI software is now almost as good at translating Chinese as a human (GOOG)


Google is super-charging its translation software with artificial intelligence (AI).

The Californian tech giant has announced that it is implementing its neural networking AI tech into Google Translate — with radically improved results.

Simply put, neural networks are AI modelled on connections in the human brain, capable of learning and improving over time.

Google has been aggressively introducing the tech into its core products in attempts to improve efficiency. It has been used for everything from reducing the power bill of the company's enormous data centres, to defeating the human world champion of ancient Chinese board game Go in a highly publicised bout.

It's now being used in Google Translate, starting with Chinese-to-English translations, with plans for other language pairs in the works.

The Google Neural Machine Translation system (or GNMT, as Google is catchily calling it) is reducing translation errors by between 55% and 85%, the company says.

And according to its research, its translations score only just below those of human translations. Take a look at the chart below: A theoretical perfect translation scores a 6, but these basically never happen, even with human translators. But humans still translate much more effectively than traditional phrase-based translation software. GNMT manages to close the gap considerably.

[Chart: translation quality scores for phrase-based translation, GNMT, and human translation across language pairs]

That said, it's definitely not perfect, and Google admits as much. "GNMT can still make significant errors that a human translator would never make," two research scientists for the Google Brain Team wrote in a blog post, "like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page."

The GNMT software is live now on all Chinese-to-English translations — of which Google says it processes 18 million a day.



US tech giants have formed a group to promote the safe development of AI


Google, Facebook, Amazon, IBM, and Microsoft have come together to form a new organisation in a bid to ensure that artificial intelligence (AI) is developed safely, ethically, and transparently.

The organisation — announced on Wednesday and known as the Partnership on Artificial Intelligence to Benefit People and Society, or simply Partnership on AI — will aim to address some of the challenges that AI presents to people and society, while also figuring out how humanity can best take advantage of new technologies in the field, which has advanced rapidly in the last few years.

The Partnership, unveiled by AI executives from the founding companies during a telephone briefing with journalists on Wednesday, said it will carry out research and recommend how others in the field of AI should go about developing their systems.

During the call, the consortium dismissed concerns that this was effectively an attempt by the tech industry to self-regulate AI without government involvement.

AI has been tipped to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. But it's also been described by the likes of renowned scientist Stephen Hawking and tech billionaire Elon Musk as something that could wipe out humans altogether.

Ensuring AI's positive impact

Programming a machine to learn from its environment in a particular way is now the job of many software developers around the world. But some people are concerned that these self-aware machines could be programmed in a way that allows them to develop intelligence that could be used to cause harm, while others, such as Oxford University professor Nick Bostrom, think superintelligent machines will outsmart humans within a matter of decades.

Mustafa Suleyman, cofounder and head of applied AI at Alphabet-owned research lab DeepMind, said it's "critical" for technology companies to start engaging with the public on how they're developing AI. "The positive impact of AI will depend on the level of public engagement," he said, adding that the new organisation will complement Google's own AI ethics board, which was created after the search giant acquired DeepMind for a reported £400 million in 2014, but has been kept under wraps ever since.

Yann LeCun, director of AI research at Facebook, and Ralf Herbrich, director of machine learning science and core machine learning at Amazon, said AI has the potential to improve the lives of millions of people, while also stating how crucial the technology is to the future of their platforms.

"As researchers in industry, we take very seriously the trust people have in us to ensure advances are made with the utmost consideration for human values," said LeCun. "By openly collaborating with our peers and sharing findings, we aim to push new boundaries every day, not only within Facebook, but across the entire research community. To do so in partnership with these companies who share our vision will help propel the entire field forward in a thoughtful responsible way."

The Partnership will be funded and supported by the founding companies, who compete with one another across many other parts of their businesses. The location of the organisation's headquarters is yet to be decided, as are job roles and staff numbers, although decisions on these matters are expected to be made in the coming weeks.


Through the Partnership on AI consortium, the US tech giants will carry out AI research into areas like:

  • ethics, fairness and inclusivity
  • transparency, privacy, and interoperability
  • collaboration between people and AI systems
  • and the trustworthiness, reliability and robustness of the technology.

There will be equal representation of corporate and non-corporate members on the Partnership's board, which will meet on a yet-to-be-determined basis. Academics and non-profits will also be invited to join, as will other organisations looking to monitor the development of AI, such as Elon Musk's OpenAI. The Partnership said it is already in membership discussions with the likes of the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence (AI2).

"In the coming weeks and months we’ll be announcing non-corporate members of our Partnership and their roles," said Suleyman.

8-point plan

Murray Shanahan, professor of cognitive robotics at Imperial College London, endorsed the formation of the Partnership, saying: "A small number of large corporations are today the powerhouses behind the development of sophisticated artificial intelligence. The inauguration of the Partnership on AI is a very welcome step towards ensuring this technology is used wisely."

Yoshua Bengio, a professor at the University of Montreal and scientific director at IVADO, an organisation that aims to partner industry professionals and academic researchers, added: "Bringing together the major players in the field is the best way to ensure we all share the same values and overall objectives to serve the common good."

The Partnership on AI shares the following tenets:

  1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
  2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
  3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
  4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
  5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
  6. We will work to maximise the benefits and address the potential challenges of AI technologies, by:
    1. Working to protect the privacy and security of individuals.
    2. Striving to understand and respect the interests of all parties that may be impacted by AI advances.
    3. Working to ensure that AI research and engineering communities remain socially responsible, sensitive and engaged directly with the potential influences of AI technologies on wider society.
    4. Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
    5. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
  7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
  8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.


Machine learning has boosted Google's translation capabilities to near-human levels


No one would accuse Google Translate, the favored tool of unscholarly high school language students everywhere, of being an inaccurate interpreter.

The 10-year-old internet interpreter can fluently translate more than 100 tongues, recognize foreign restaurant menus and signage, and differentiate between dialects in real time.

But there’s always room for improvement, and in Translate’s case, it’s occurring through machine learning.

The project is called Google Neural Machine Translation, or GNMT, and it isn’t strictly speaking new. It was first employed to improve the efficiency of single-sentence translations, explained Google engineers Quoc V. Le and Mike Schuster, and did so by ingesting individual words and phrases before spitting out a translation. But the team discovered that the algorithm was just as effective at handling entire sentences — even reducing errors by as much as 60 percent. Better still, it was able to fine-tune its accuracy over time. “You don’t have to make design choices,” Schuster said. “The system can entirely focus on translation.”

In a whitepaper published on Monday, the Google Brain team detailed the ins and outs of GNMT. Under the hood is long short-term memory, or LSTM, a neural networking technique that works a bit like human memory. Conventional translation algorithms divide a sentence into individual words which are matched to a dictionary, but LSTM-powered systems like Google’s new translation algorithm are able to “remember,” in effect, the beginning of a sentence when they reach the end. Translation is thus tackled in two stages: GNMT breaks down sequences of words into their syntactical components, and then translates the result into another language.
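A bare-bones version of that encoder-decoder idea can be sketched in a few lines of PyTorch. This is a toy under stated assumptions: GNMT itself is far deeper, adds attention, and runs on custom hardware.

```python
# Bare-bones encoder-decoder with LSTMs: the encoder's final hidden state
# "remembers" the whole source sentence, and the decoder generates the
# target from it. A toy sketch -- not Google's actual GNMT architecture.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, memory = self.encoder(self.src_emb(src_ids))   # whole-sentence summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), memory)
        return self.out(dec_out)                          # next-token scores

model = TinySeq2Seq(src_vocab=8000, tgt_vocab=8000)
logits = model(torch.randint(0, 8000, (1, 12)), torch.randint(0, 8000, (1, 9)))
print(logits.shape)  # torch.Size([1, 9, 8000])
```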

GNMT’s approach is a boon for translation accuracy, but historically, such methods haven’t been particularly swift. Google, however, has employed a few techniques that dramatically boost interpretation speed.

As Wired explains, neural networks usually involve layered calculations — the results of one feeds into the next — a speed bump which Google’s model mitigates by completing what calculations it can ahead of time. Simultaneously, GNMT leverages the processing boost provided by Google’s specialized, AI-optimized computer chips it began fabricating in May. The end result? The same sentence that once took ten seconds to translate via this LSTM model now takes 300 milliseconds.

And the improvements in translation quality are tangible. In a test of linguistic precision, Google Translate’s old model achieved a score of 3.6 on a scale of 6. GNMT, meanwhile, ranked 5.0 — just below the average human score of 5.1.

It’s far from perfect, Schuster wrote. “GNMT can still make significant errors that a human translator would never make, like dropping words … mistranslating proper names or rare terms … and translating sentences in isolation rather than considering the context of the paragraph or page.” And prepping it required a good deal of legwork. Google engineers trained GNMT for about a week on 100 graphics processing units — chips optimized for the sort of parallel computing involved in language translation. But Google is confident the model will improve. “None of this is solved,” Schuster said. “But there is a constant upward tick.”

Google isn’t rolling out GNMT-powered translation broadly yet — for now, the method will remain relegated to Mandarin Chinese. But the search giant said it’ll begin AI-powered translations of new languages in the coming months.

GNMT may be the newest product of Google’s machine learning experiments, but it’s hardly the first. Earlier this year, AlphaGo, software produced by the company’s DeepMind division, became the first AI in history to beat a human grand master at the ancient game of Go. Earlier this summer, Google debuted DeepDream, a neural network with an uncanny ability to detect faces and patterns in images. And in August, Google partnered with England’s National Health Service and the University College London Hospital to improve treatment techniques for head and neck cancer.

Not all of Google’s artificial intelligence efforts are as high-minded. Google Drive uses machine learning to anticipate the files you’re most likely to need at a given time. Calendar’s AI-powered Smart Scheduling suggests meeting times and room preferences based on the calendars of all parties involved. And Docs Explore shows text, images, and other content Google thinks is relevant to the document on which you’re working.


IBM is making a major $200 million investment in Watson



IBM says it will be investing $200 million to improve and expand its Watson IoT data analytics platform in Munich, as the company looks to add collaboration centers with its clients, reports ZDNet.

The popular data analytics platform now has over 6,000 customers, up from 4,000 only eight months ago.

IBM says heightened demand for a variety of use cases has triggered the necessary improvements. The company added Schaeffler, a German industrial supply company, as a customer recently in a large deal that centers on using machine learning to monitor and optimize wind energy use in connected vehicles and trains.

Further, IBM is also working with Thomas Jefferson University Hospital in Philadelphia to create Watson-powered hospital rooms that enhance the patient experience and ensure optimal medical outcomes.

Watson’s popularity will continue to grow along with the IoT in the coming years. IBM opened the German site for Watson last December in a move that centered on its IoT use cases for the vast volumes of data that IoT devices collect. It now appears that IBM is seeing these investments come to fruition in a wide variety of use cases. The company could see its investments continue to pay dividends as devices collect increasingly large amounts of data as the IoT grows.

Thanks to technology such as Watson, the IoT Revolution is picking up speed. And when it does, it will change how we live, work, travel, entertain, and more.

From connected homes and connected cars to smart buildings and transportation, every aspect of our lives will be affected by the increasing ability of consumers, businesses, and governments to connect to and control everything around them.

Imagine “smart mirrors” that allow you to digitally try on clothes. Assembly line sensors that can detect even the smallest decrease in efficiency and determine when crucial equipment needs to be repaired or replaced. GPS-guided agricultural equipment that can plant, fertilize, and harvest crops. Fitness trackers that allow users to transmit data to their doctors.

It’s not science fiction. This “next Industrial Revolution” is happening as we speak. It’s so big that it could mean new revenue streams for your company and new opportunities for you. The only question is: Are you fully up to speed on the IoT?

After months of researching and reporting this exploding trend, John Greenough and Jonathan Camhi of BI Intelligence have put together an essential report on the IoT that explains the exciting present and the fascinating future of the Internet of Things.  It covers how the IoT is being implemented today, where the new sources of opportunity will be tomorrow and how 16 separate sectors of the economy will be transformed over the next 20 years.

The report gives a thorough outlook on the future of the Internet of Things, including the following big picture insights:

  • Devices connected to the Internet will more than triple by 2020, from 10 billion to 34 billion. IoT devices will account for 24 billion, while traditional computing devices (e.g. smartphones, tablets, smartwatches) will comprise 10 billion.

  • Nearly $6 trillion will be spent on IoT solutions over the next five years.

  • Businesses will be the top adopter of IoT solutions because they will use IoT to 1) lower operating costs; 2) increase productivity; and 3) expand to new markets or develop new product offerings.

  • Governments will be the second-largest adopters, while consumers will be the group least transformed by the IoT.


And when you dig deep into the report, you’ll get the whole story in a clear, no-nonsense presentation:

  • The complex infrastructure of the Internet of Things distilled into a single ecosystem

  • The most comprehensive breakdown of the benefits and drawbacks of mesh (e.g. ZigBee, Z-Wave, etc.), cellular (e.g. 3G/4G, Sigfox, etc.), and internet (e.g. Wi-Fi, Ethernet, etc.) networks

  • The important role analytics systems, including edge analytics and cloud analytics, will play in making the most of IoT investments

  • The sizable security challenges presented by the IoT and how they can be overcome

  • The four powerful forces driving IoT innovation, plus the four difficult market barriers to IoT adoption

  • Complete analysis of the likely future investment in critical IoT infrastructure: connectivity, security, data storage, system integration, device hardware, and application development

  • In-depth analysis of how the IoT ecosystem will change and disrupt 16 different industries




I bought Google Home instead of Amazon's Echo — here's why (GOOG, GOOGL, AMZN)


I've been sold on voice-controlled digital assistants since I was a little kid.

How could I not be? I grew up with "Back to the Future 2" on VHS and "Star Trek: The Next Generation" in primetime. 

More to the point, how could anyone not be? The concept of handling casual tasks by voice rather than touch is incredibly appealing. It's no surprise that Amazon's Echo device, with its voice controlled assistant named "Alexa," is such a hit. Being able to play music, control lighting, and order an Uber — all through voice, quickly — is a huge deal.

It's the actual promise of home automation: saving you the time, in aggregate, of not doing millions of menial tasks. 

Even with that promise, I hesitated with the Echo.

It's expensive, at $179, and Amazon has a way of funneling all of its products into a means of increasing revenue on Amazon.com. It's a question of intent — as a consumer, I don't trust that Amazon is creating a product for the product's own sake, and I don't trust that Amazon will continue to support it in the long run. And that pushes me off of dropping nearly $200 on a total luxury item.

But when Google announced the price and release date for Google Home on Tuesday, I was intrigued once again by digital assistants. $129? And it uses Google's excellent, proven voice recognition software?


Simply put: Google Home is a speaker with two microphones mounted on top, which it uses for hearing your commands. Say, "Okay, Google: Play The Bee Gees." Just like that, you're ha, ha ha, ha, stayin' aliiiiiive.

But here's what really sold me: a simple, obvious feature called "My Day." It's a daily briefing. Bear with me here.

"We designed a feature called 'My Day,' that (with your permission) summarizes important topics and activities for you in a really simple way. It's a great thing to try with the morning coffee," Rishi Chandra, a senior product manager at Google explained on-stage Tuesday.

The demo is short. Chandra says, "Okay, Google: Good morning!" 

Google Home responds accordingly with the following:

"Good Morning, Rishi! It is 7:32 AM. The weather in San Francisco currently is 59 degrees and cloudy, with a high of 65 degrees. Your commute to work is currently 59 minutes with moderate traffic if you take US 101 South. Today at 5PM you have Bollywood hip-hop dance class. By the way, remember to cook dinner tonight for the kids. Have a good one!"

Really simple! Really obvious! So useful!

This is the base level stuff I've wanted from personal AI assistants since I was 10. Alexa doesn't do it. Siri doesn't do it. Cortana doesn't do it.

It's what Google already basically does for me on my phone (I have a Nexus 5X), but in a much, much easier way: by voice, no phone needed!
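To show how mechanical the briefing itself is once the data is in hand, here is a toy sketch that reassembles the demo response above. The weather, commute, and calendar lookups are assumed away; Google's actual pipeline is not public.

```python
# Toy sketch of assembling a "My Day" style briefing from separate services.
# The data below is hard-coded from the on-stage demo; in a real system each
# piece would come from a weather, traffic, or calendar lookup.

def my_day_briefing(name, time_str, weather, commute, events, reminders):
    """Stitch service results into one spoken-style morning briefing."""
    parts = [
        f"Good morning, {name}! It is {time_str}.",
        f"The weather in San Francisco currently is {weather['temp']} degrees "
        f"and {weather['sky']}, with a high of {weather['high']} degrees.",
        f"Your commute to work is currently {commute['minutes']} minutes with "
        f"{commute['traffic']} traffic if you take {commute['route']}.",
    ]
    parts += [f"Today at {when} you have {what}." for when, what in events]
    parts += [f"By the way, remember to {r}." for r in reminders]
    return " ".join(parts + ["Have a good one!"])

print(my_day_briefing(
    "Rishi", "7:32 AM",
    {"temp": 59, "sky": "cloudy", "high": 65},
    {"minutes": 59, "traffic": "moderate", "route": "US 101 South"},
    [("5 PM", "Bollywood hip-hop dance class")],
    ["cook dinner tonight for the kids"],
))
```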


Google Home does all the other personal AI assistant stuff, of course:

  • It sets calendar events, timers, and dinner reservations. 
  • It plays your music (from a variety of services) out of its multi-directional speaker setup.
  • It employs Google/Google Maps to answer questions and give you directions.
  • It works with various connected home devices, like Phillips Hue light bulbs and Nest thermostats.

And all of that is fantastic, but the fact that Google realizes how important it is for its assistant to actually be an assistant — unlike Apple's Siri, Amazon's Alexa, and Microsoft's Cortana — is really meaningful. And the fact that Google already nails this so, so well with Google Now gives me faith that this ethos will carry over to Google Home.

Google's also talking big game about the "Google Assistant" AI software built into Google Home (as well as Google's new phone, the Pixel). It's a seeming evolution of the Google Now concept, which is the best part of owning an Android phone.


Since Google services — Gmail, Maps, Calendar, etc. — are so tightly integrated in Android, Google Now takes pieces from each and turns it into incredibly useful, predictive information. For instance, I take the same train line to work pretty much every time, around the same time in the morning. Google Now automatically tells me about delays and closures. It knows if the gates have changed on my upcoming flight before I do, and it tells me.

With Google Assistant, I expect an extension of that already useful functionality. It takes the amazing, predictive stuff that Google Now already does, and it turns that into a conversation. 

Amazon's Echo, for all its functionality, doesn't take that same approach. It's an assistant in that it can do things for you — play music, set timers, etc. — but it's not predictive, it's reactive. That's a crucial difference, and it's why I pre-ordered a Google Home on Tuesday.



I've owned an Amazon Echo for nearly a year now — here are my 19 favorite features (AMZN)

I activated my Amazon Echo for the first time last December. It's quickly become one of my favorite tech gadgets ever.

Google unveiled a similar device on Tuesday, called Home, but Echo is the device it's trying to emulate. Amazon's speaker, which responds to either "Alexa" or "Amazon," is extremely quick to respond and understands commands much better than anything else I've used.

Thanks to its excellent audio system, with seven microphones for listening and a 360º omni-directional audio grille for speaking, Amazon Echo works exceedingly well wherever I am in my home. I can hear it — and it can hear me — perfectly.

Amazon Echo has completely transformed the way I live in my apartment. There's just so much you can do with Echo. Take a look.


"Alexa, what time is it?"

Honestly, the best use cases for Amazon Echo are the simplest ones. With the Echo, I don't need to bother searching for my phone just to get the time — you can ask for the time from anywhere in your house and get the answer immediately. It's a small thing, but it totally makes a difference when you're rushing in the morning.



"Alexa, how's the weather outside today?"

Again, it's a simple task, but it's way quicker and better than pulling out your phone and opening your favorite weather app. Amazon Echo will not only tell you the current temperature, but also the expected high and low temperatures throughout the day, and other conditions such as clouds and rain.



"Alexa, set a timer for 10 minutes."

Amazon Echo is the perfect cooking or baking companion because it's totally hands-free. When the timer's up, a radar-like ping will sound until you say "Alexa, stop."




Google's most ambitious new product isn't its fancy new phone (GOOGL, GOOG)


It's a big week for Google.

The search giant unveiled a slew of new products: a phone, a virtual reality headset, updates to its Chromecast line, and a new type of wireless router. It even held a big event, with press invited to its Mountain View, California campus.

While the new phone — the Pixel — is nice, and the new VR headset — Daydream View — is a look to the future, Google's most ambitious new product announced on Tuesday was actually a small speaker with a bizarre, slanted top.


It's called Google Home, and it's an in-home personal assistant/multidirectional speaker. You speak — "Okay, Google" — and it listens. "How do I get from here to Roosevelt Island on the subway?" Google Home has an answer, using Google Maps and up-to-date MTA route information, and it's going to tell you.

All you have to do is ask.


Like Amazon's Echo, it's meant to serve a role previously occupied only by fictional AI characters: to perform casual tasks by voice alone. But Google Home has some fascinating new additions to the concept, and a price point $50 below the Echo.

Here's everything we know about Google Home thus far:

SEE ALSO: Google unveils its newest major product: the Google Home speaker

DON'T MISS: I bought Google Home instead of Amazon's Echo — here's why

Let's be real: Price matters so much when it comes to new types of technology. Thankfully, Google Home is an affordable $129.



Google Home is meant to fit seamlessly into your life. Simply say, "Okay, Google," and your wish is its command.

Here's just a short list of the stuff Google Home can do:

- Set calendar events, timers, and dinner reservations.
- Play your music (from a variety of services) out of its multidirectional speaker setup.
- Use Google/Google Maps to answer questions and give you directions.
- Control various connected home devices, like Philips Hue light bulbs and Nest thermostats.



It listens for the command "Okay, Google," which it can hear using the top-mounted microphones.
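
Assistants like this typically parse speech into an intent plus a few parameters ("slots"), then route the request to a handler. Here's a toy Python sketch of that pattern; the intent names and handlers are made up, and this isn't Google's actual Assistant API:

```python
# Toy intent router. Intent names, slots, and handlers are invented
# for illustration; this is not Google's actual Assistant API.

def handle(intent: str, slots: dict) -> str:
    if intent == "set_timer":
        return f"Timer set for {slots['minutes']} minutes."
    if intent == "play_music":
        return f"Playing {slots.get('artist', 'your mix')}."
    if intent == "set_light":
        # A real system would call the bulb vendor's cloud API here.
        return f"Turning the {slots['room']} lights {slots['state']}."
    return "Sorry, I can't help with that yet."

print(handle("set_light", {"room": "kitchen", "state": "on"}))
```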




Samsung just bought the AI firm run by the co-creator of Apple's Siri



SEOUL (Reuters) - Tech giant Samsung said on Thursday it is acquiring U.S. artificial intelligence (AI) platform developer Viv Labs Inc, a firm run by a co-creator of Apple's Siri voice assistant program.

Samsung said in a statement it plans to integrate the San Jose-based company's AI platform, called Viv, into its Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.

Financial terms were not disclosed.

Technology firms are locked in an increasingly heated race to make AI good enough to let consumers interact with their devices more naturally, especially via voice.

Alphabet's Google is widely considered to be the leader in AI, but others, including Amazon.com, Apple, and Microsoft, have also launched their own offerings, such as voice-powered digital assistants.

Samsung, the world's top smartphone maker, is also hoping to differentiate its devices, from phones to fridges, by incorporating AI. 

The acquisition of Viv could help the Korean firm shore up its competitiveness at a time when Google's new Pixel smartphones - armed with the U.S. firm's voice-powered digital assistant - threaten Samsung and other smartphone makers who are largely reliant on the Android operating platform.

"Viv brings in a very unique technology to allow us to have an open system where any third-party service and content providers (can) add their services to our devices' interfaces," Rhee In-jong, Samsung's executive vice president, told Reuters in an interview.

The executive said Samsung needs to "really revolutionize" how its devices operate, moving towards using voice rather than simply touch. "We can't innovate using only in-house technology," Rhee said.

Viv chief executive and co-founder Dag Kittlaus, a Siri co-creator, and other top managers at the firm will continue managing the business independently following the acquisition.

Rhee, without naming any targets, told Reuters that Samsung will continue to look for acquisitions to bolster its AI and other software capabilities.

(Editing by Kenneth Maxwell)


Marc Andreessen on what everybody gets wrong about the US economy



Venture capitalist Marc Andreessen gave a long interview about artificial intelligence that Vox published Wednesday, but the most interesting part was actually about something else: the bifurcated US economy.

From Silicon Valley, the economy looks great. Tech wages are plump, housing prices are skyrocketing, and construction cranes are everywhere, while the five most valuable companies in the US are related to tech: Apple, Alphabet (Google), Microsoft, Amazon, and Facebook.

But in much of the country, wages are stagnant, good jobs are scarce, and people's paychecks are being eaten up by skyrocketing prices.

Overall, growth is sluggish and interest rates have been close to zero for eight years. What's going on?

Andreessen argues that there are actually two economies side by side, and the poorly performing one is dragging everything else down.

Prices are dropping rapidly in some industries: consumer electronics and computer gear, food, and media.

People look at these changes and blame innovation for killing jobs or shipping them overseas, and then they blame the economy's sluggishness on those lost jobs.

But as Andreessen pointed out, prices are rising rapidly in other sectors: mainly healthcare and education. He believes these rising prices cancel out the benefits of technological innovation, making the entire economy sluggish.


Why are these industries so slow to innovate? Because, as he puts it:

"You've got monopolies, oligopolies, cartels, government-run markets, price-fixing — all the dysfunctional behaviors that lead to rapid increase in prices. The government injects more subsidies into those markets, but because those are inelastic markets, the subsidies just cause prices to go up further, which is what is happening with higher education."

So in Andreessen's view, the answer is to set markets free — by eliminating government-enabled distortions on one hand and busting up monopolies or oligopolies on the other. Then, he said, lowered prices should lift all boats.
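
Andreessen's subsidy point is easier to see with a toy model. The Python sketch below uses made-up linear supply and demand curves and reads "inelastic" as inelastic supply (the usual story for higher education, where seats can't expand quickly). When supply is inelastic, a subsidy to buyers passes almost one-for-one into the price:

```python
# Toy subsidy-incidence model with linear curves (all numbers invented).
# Demand with a per-unit subsidy s paid to buyers: Q = a - b*(P - s)
# Supply:                                          Q = c + d*P
# Equilibrium (seller) price:                      P* = (a - c + b*s) / (b + d)

def equilibrium_price(a, b, c, d, s=0.0):
    return (a - c + b * s) / (b + d)

a, b, c = 100.0, 1.0, 10.0     # demand intercept/slope, supply intercept

for d, label in ((10.0, "elastic supply"), (0.1, "inelastic supply")):
    p0 = equilibrium_price(a, b, c, d)
    p1 = equilibrium_price(a, b, c, d, s=20.0)  # 20-per-unit subsidy
    print(f"{label:>17}: price {p0:6.2f} -> {p1:6.2f} (+{p1 - p0:5.2f} of 20.00)")
```

With nearly fixed supply, sellers capture most of the subsidy as a higher price, which is the tuition dynamic Andreessen describes.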

He also disagrees that increased automation will kill jobs. Rather, he thinks it will increase the need for people to provide higher levels of service than machines can. His evidence: there are more retail clerks and bank tellers even as those industries get more automated.

Read the whole interview here »

SEE ALSO: Tech billionaires are asking scientists for help breaking humans out of the computer simulation they think they might be trapped in


Apple cofounder Steve Wozniak says he's not concerned about AI anymore (AAPL)



PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have all said artificial intelligence (AI) has the potential to harm humanity if it's not developed in the right way.

But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's no longer concerned about AI. He said he reversed his thinking on AI for several reasons.

"One being that Moore’s Law isn’t going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can out think humans they can’t be as intuitive and say what will I do next and what is an approach that might get me there. They can’t figure out those sorts of things.

"We aren’t talking about artificial intelligence actually getting to that point. [At the moment] It’s sort of like it magically might arise on its own. These machines might become independent thinkers. But if they do, they’re going to be partners of humans over all other species just forever."

Wozniak's comments contrast with what Swedish philosopher Nick Bostrom said at the IP Expo tech conference in London on the same day.

The academic believes that machines will achieve human-level artificial intelligence in the coming decades, before quickly going on to acquire what he describes as "superintelligence," which is also the title of a book he authored on the topic.

Bostrom, who heads the Future of Humanity Institute at the University of Oxford, thinks that humans could one day become slaves to a superior race of machines. This doomsday scenario can be avoided, however, if self-thinking machines are developed from the very beginning in a way that ensures they're going to act in the interest of the human race.

Bostrom said this doesn't mean we have to "tie its hands behind its back and hold a big stick over it in the hope we can force it to our way" but rather developers and tech companies must "build it in such a way that it's on our side and wants the same things as we do."


Samsung's latest purchase is its smartest yet


Exploding phones aside, Samsung has a major puzzle to solve.

The company makes more phones than anyone else and dominates the Android ecosystem. But when it comes to moving beyond the smartphone to the next major computing paradigm, it doesn't have much to go on.

Artificially intelligent assistants like Apple's Siri and Amazon's Alexa are becoming more and more capable — to the point where it won't be long before they allow us to wean ourselves off the smartphone screen.

That's why Samsung's acquisition of Viv, an AI startup run by the same team that built Apple's Siri, is the smartest purchase Samsung has made.

Historically, Samsung has tried to build everything in-house, without looking to acquire talent and technology from the outside. (It tried its own digital assistant called S Voice several years ago, but it bombed.)

But in recent years, Samsung has changed its attitude. It bought the smart home company SmartThings in 2014, which it'll use as a platform to connect all its appliances together. It also bought LoopPay, a Boston-based startup that now powers a lot of the technology behind Samsung Pay.

Viv is the best purchase of them all.

Even though the product hasn't launched yet, what we've seen so far is impressive. During a demo at TechCrunch's Disrupt conference in May, Viv's CEO Dag Kittlaus showed that the assistant is more than a way to get you basic information like news and weather. It's an open platform that any developer can build into. Instead of a separate app for everything, you just tell Viv what you want to do, from ordering flowers to booking a hotel.
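
That "open platform" pitch (one assistant that any third-party service can plug into) can be sketched as a simple intent registry. The Python below is purely hypothetical; Viv hasn't published its actual developer interface:

```python
# Toy open-assistant registry: third parties register handlers for the
# intents they can serve. All names invented; not Viv's actual API.

HANDLERS = {}

def provides(intent):
    """Decorator a third-party developer would use to claim an intent."""
    def register(fn):
        HANDLERS[intent] = fn
        return fn
    return register

@provides("order_flowers")
def flower_shop(req):
    return f"Ordering {req['bouquet']} for delivery to {req['recipient']}."

@provides("book_hotel")
def hotel_site(req):
    return f"Booking {req['nights']} nights in {req['city']}."

def assistant(intent, req):
    handler = HANDLERS.get(intent)
    return handler(req) if handler else "No registered service handles that yet."

print(assistant("order_flowers", {"bouquet": "tulips", "recipient": "Mom"}))
```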

The Viv demo at Disrupt already felt far ahead of what we've seen Siri do, and it fills a major gap in Samsung's product portfolio. Samsung may rely on Android for a lot of its success, but for now, Google's new Assistant will remain on Google's own hardware like the new Pixel phones and Google Home speaker. 

Buying Viv isn't a way to attract users. Samsung will continue to sell boatloads of phones in the future. But eventually, assistants like Viv will be so standard that shipping a phone without capable AI will be as stupid as shipping one with an exploding battery.

Samsung doesn't have the time to build its own AI. It needs something now, and Viv is the answer. 

SEE ALSO: Google is going to win the next major battle in computing

