
Here's how Samsung's Viv acquisition will help it compete with Apple and Google (AAPL, GOOG)

This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

Samsung announced on Wednesday that it has acquired Viv Labs, the AI company founded by the team behind Apple's Siri, The Wall Street Journal (WSJ) reports.

While specific details about the deal have not been revealed, Viv Labs' virtual assistant, dubbed Viv, is expected to be integrated into Samsung's existing connected devices — most prominently its smartphones — while Viv Labs itself continues to operate independently of Samsung. The acquisition should make both Samsung's devices and the Viv platform more competitive against existing operating systems (OSs) and their virtual assistants, such as Siri, Google Assistant, and Microsoft's Cortana. 

The acquisition helps Samsung on a number of fronts:

  • It helps Samsung stay relevant within the smartphone market. During its Made by Google event on Tuesday, Google revealed that its Google Assistant would not be included in Android 7 updates. This leaves Samsung without a device-integrated AI component, a feature that's rapidly becoming part and parcel of smartphones.
  • It bolsters Samsung’s software. Samsung is adding more and more features to its software so it can better compete with other OSs, notes WSJ. In 2015, the company bought mobile payments startup LoopPay in order to launch Samsung Pay, a mobile payments system similar to Apple Pay and Android Pay. Adding its own virtual assistant is just another piece of that puzzle.
  • It furthers Samsung’s efforts in building Tizen OS into a competitive platform. Viv will likely become the primary interface that connects Samsung’s widely varied device portfolio, which spans smartphones, connected TVs, smartwatches, and its home hub. Adding Viv also means that Samsung controls even more of what’s running on its flagship smartphones, making it less reliant on Android. This is essential for Tizen OS to become comparable to other major platforms.

Samsung’s massive market share gives Viv the perfect launching point to reach ubiquity, according to Viv co-founder Dag Kittlaus. Samsung’s unrivaled smartphone shipments, in particular, are advantageous for the AI. That’s because AI relies on massive amounts of constantly updated user data, and at the moment smartphones are the most prolific source of it. The acquisition will give Viv plenty of room to grow its data banks, making it increasingly powerful and, in time, competitive with existing AI platforms.

To receive stories like this one directly to your inbox every morning, sign up for the Apps and Platforms Briefing newsletter. Click here to learn more about how you can gain risk-free access today.



Artificial intelligence-powered malware is coming, and it's going to be terrifying

Imagine you've got a meeting with a client, and shortly before you leave, they send you over a confirmation and a map with directions to where you're planning to meet.

It all looks normal — but the entire message was actually written by a piece of smart malware mimicking the client's email mannerisms, with a virus attached to the map.

It sounds pretty far out — and it is, for now. But that's the direction that Dave Palmer, director of technology at cybersecurity firm Darktrace, thinks the arms race between hackers and security firms is heading.

As artificial intelligence becomes more and more sophisticated, Palmer told Business Insider in an interview at the FT Cybersecurity Summit in London in September, it will inevitably find its way into malware — with potentially disastrous results for the businesses and individuals that hackers target.

It's important to remember that Palmer is in the security business: It's his job to hype up the threats out there (present and future), and convince customers that Darktrace is the only one that can save them. Darktrace is a $500 million (£401 million) British firm with an AI-driven approach to defending networks: it creates an "immune system" for customers that learns how a business operates, then monitors for potential irregularities.

But with that in mind, Palmer provides a fascinating insight into how one of the buzziest young companies in the industry thinks cybersecurity is going to evolve.

Smart viruses will hold industrial equipment to ransom

Ransomware is endemic right now. It's a type of malware that encrypts everything on the victim's computer or network, then demands a bitcoin ransom to decrypt it. If they don't pay up in a set timeframe, the data is lost for good.

AI-infused ransomware could turbo-charge the risks these attacks pose — self-organising to inflict maximum damage, and going after new, even more lucrative targets.

"[We'll] see coordinated action. So imagine ransomware waiting until it's spread across a number of areas of the network before it suddenly takes action," Palmer said.

"I'm convinced we'll see the extortion of assets as well as data. So factory equipment, MRI scanners in hospitals, retail equipment — stuff that you'd pay to have back online because you can't actually function as a business without it. Data's one thing and you can back that up, but if your machine stops working then you're not going to be making any more money."

Malware will learn to mimic people you know

Using recurrent neural networks, it's already possible to teach AI software to mimic writing styles — whether that's clickbait viral news articles or editorial columns from The Guardian. Palmer suggests that in the future, malware will be able to look through your correspondence, learn how you communicate, and then mimic you in order to infect other targets.
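To make the style-mimicry idea concrete, here is a minimal sketch of the underlying principle: a toy Markov-chain text generator that learns word transitions from a sample of someone's writing. This is an illustration only, far simpler than the recurrent neural networks mentioned above, and the input file name is hypothetical.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=40):
    """Walk the learned transitions to emit text in the source's style."""
    prefix = random.choice(list(model.keys()))
    out = list(prefix)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:          # dead end: this prefix was never continued
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical input: a plain-text dump of the target's past emails.
sample = open("target_emails.txt").read()
print(generate(build_model(sample)))
```

Even this crude version produces text that superficially resembles its training sample; a neural model trained on someone's full correspondence would be far more convincing.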

"Nadia's got something on her laptop that can read all her emails, reads her messages, can read her calendar, and then sends people messages in the same communication style she uses with them. So Nadia's always very rude to me so she'll send jokey messages ... but to you she'll be extremely polite. So you would receive, maybe, a map of this location of where to meet from Nadia — because it can see in her calendar that we're due to meet. And you'd open it, because it'd be relevant, it'd be contextual — but that map would have a payload attached to it."

It's like a more sophisticated version of a "CFO email" scam or "trust attack" — where a scammer, purporting to be a company employee, sends an email to the target asking them to make a money transfer. The FBI estimates that such attacks have cost businesses $2.3 billion (£1.84 billion) over the last three years.

The worst hacks won't be the most noticeable ones

In December 2015, a Ukrainian power station was knocked offline by an unprecedented hack. 80,000 people lost power as a result, and Russian state-sponsored hackers are believed to be responsible. It's a spectacular example of how vulnerable the modern world is to hack attacks — but Palmer thinks the most destructive hacks in the future may be far less visible.

"If you can disable an oil rig, people are going to notice. Everyone's going to get around to trying to fix it. If you really wanted to try and harm an oil and gas firm, to my mind what you would do is have your self-hunting, self-targeting malware go in there and then start to change the geophysical data on which they decide where they're going to buy mining rights. And over a long time you can make sure they're buying drilling rights in the wrong places, those wells are coming up drier than they should be, and do really serious harm to their business in a way they're much less likely to notice and be able to respond to."

He added: "You might think, 'okay, that's a good idea, we should go and look at our databases, and see if there's any funny software there.' But the attacks of the future could just as likely be in their internet of things sensors, their submarines, their scanning equipment that's collecting [the data] in the first place, and good luck finding those attacks."

It's the dark side of the artificial intelligence revolution

We're in the early days of an artificial intelligence revolution. The technology is being used for everything from self-driving cars to treating cancer, and we're only just scratching the surface right now. But as it becomes ever-more advanced and ever-more accessible, it is, inevitably, going to be used for ill.

What's the timeframe for all of this? "I reckon you could train a neural network in the next 12 months that would be smart enough to [carry out a trust attack] in a rudimentary way," Palmer said. "And if you look at the progress people like Google DeepMind are making on natural speech and language tools, it's in the next couple of years."

The future is on its way, and there's nothing you can do about it.


Apple cofounder Steve Wozniak dismisses AI concerns raised by the likes of Stephen Hawking and Nick Bostrom (AAPL)

PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have called out artificial intelligence (AI) as one of the biggest threats to humanity's very existence.

But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's not concerned about AI. At least, not anymore. He said he reversed his thinking on AI for several reasons.

"One being that Moore’s Law isn’t going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can out think humans they can’t be as intuitive and say what will I do next and what is an approach that might get me there. They can’t figure out those sorts of things.

"We aren’t talking about artificial intelligence actually getting to that point. [At the moment] It’s sort of like it magically might arise on its own. These machines might become independent thinkers. But if they do, they’re going to be partners of humans over all other species just forever."

Wozniak's comments contrast with what Swedish philosopher Nick Bostrom said at the IP Expo tech conference in London on the same day.

The academic believes that machines will achieve human-level artificial intelligence in the coming decades, before quickly going on to acquire what he describes as "superintelligence," which is also the title of a book he authored. 

Bostrom, who heads the Future of Humanity Institute at the University of Oxford, thinks that humans could one day become slaves to a superior race of artificially intelligent machines. This doomsday scenario can be avoided, he says, if self-thinking machines are developed from the very beginning in a way that ensures they're going to act in the interest of humans. 

Commenting on how this can be achieved, Bostrom said this doesn't mean we have to "tie its hands behind its back and hold a big stick over it in the hope we can force it to our way." Instead, he thinks developers and tech companies must "build it [AI] in such a way that it's on our side and wants the same things as we do."


The UK is totally unprepared for our robot future, MPs warn

Over 50 robots named "Alpha" dance during the opening ceremony of the sixth Shandong Cultural Industries Fair (SDCIF) at Jinan International Convention & Exhibition Center on August 25, 2016 in Jinan, Shandong Province, China. The robots are connected to cellphones instructing them to perform different actions according to various musical sounds.

Advances in robotics and artificial intelligence (AI) are going to completely change how we live and work but the UK government is totally unprepared, MPs have warned.

The Science and Technology Committee released a report on Wednesday warning that the UK government "does not yet have a strategy" for equipping citizens with the skills they need to flourish in a world where AI is more prevalent.

It also has no strategy for dealing with the social and ethical dilemmas that AI advances present, according to the report.

Acting chair of the Science and Technology Committee, Dr Tania Mathias MP, said in a statement: "Artificial intelligence has some way to go before we see systems and robots as portrayed in the creative arts such as Star Wars. At present, 'AI machines' have narrow and specific roles, such as in voice-recognition or playing the board game 'Go'.

"But science fiction is slowly becoming science fact, and robotics and AI look destined to play an increasing role in our lives over the coming decades. It is too soon to set down sector-wide regulations for this nascent field but it is vital that careful scrutiny of the ethical, legal and societal ramifications of artificially intelligent systems begins now."

The Committee — comprised of 14 MPs appointed by the House of Commons — pointed out that AI systems are already starting to transform our everyday lives, calling out driverless cars and computers that can help doctors to diagnose patients as examples.

But advances in AI raise a host of questions for society, according to the Committee, particularly around ethics, transparency, and privacy.

As a result, the Committee is calling on the government to create a new "Commission on Artificial Intelligence" at the Alan Turing Institute, headquartered at the British Library in London, to examine the social, ethical, and legal implications of recent and potential developments in AI.

UK is well-placed to lead on AI

The UK is poised to become a world leader in this type of "intellectual leadership" on AI, the Committee said, adding that UK engineers have developed improved automated voice recognition software, predictive text keyboards on smartphones, and autonomous vehicles.

While UK AI startups like DeepMind (acquired by Google for a reported £400 million) and Magic Pony Technologies (acquired by Twitter for a reported £122 million) often punch above their weight, the UK government is failing to deliver leadership in the field of AI, according to the Committee.

"Government leadership in the fields of robotics and AI has been lacking. Some major technology companies — including Google and Amazon — have recently come together to form the 'Partnership on AI'," said Mathias. "While it is encouraging that the sector is thinking about the risks and benefits of AI, this does not absolve the government of its responsibilities. It should establish a 'Commission on Artificial Intelligence' to identify principles for governing the development and application of AI, and to foster public debate."

In terms of robots taking people's jobs, there are conflicting views, the Committee says. Nevertheless, it believes that "a much greater focus" is needed on adjusting the UK's education and training systems to deliver the skills that will enable people to adapt and thrive as the new technology comes to fruition.

Rob McCargow, artificial intelligence leader at PwC, hailed the report as "the first step in the right direction."

He added: "We need to ensure all parties come together to develop the necessary regulation for building trusted and transparent AI systems to support future economic growth. Having the right standards in place is essential to take advantage of AI for the good of humankind, but we can't just think about this from a UK point of view: AI has no regard for international borders, so we need a coherent global approach to regulation."

McCargow stressed that the UK must prioritise funding for R&D in the field of AI, especially after Brexit happens.

"Developing the right skills to ensure we can continue to innovate is important," said McCargow. "One school of thought is to equip the workforce of the future purely with digital skills, but because AI has the potential to democratise access to technology and code for us, humans will need to focus on creativity and critical thinking."

On AI ethics, McCargow said: "We need to ensure there is diversity in the field at the point of technology creation. If the workforce creating these first forays into AI systems isn't representative of the population, how can we ensure we're creating unbiased products that are relevant to everyone?"

TechUK, the trade body that represents UK technology companies, also welcomed the report.

"Like all new powerful technologies, robotics and AI will bring great changes, and it is essential that they are used in a way that enhances the lives of ordinary people and strengthens the society that we live in," said Sue Daley, head of big data and analytics at TechUK.

"Business, academia, citizens and government all have a role to play in ensuring we have an informed and balanced debate about the potential impact of these new technologies and how we can ensure we all benefit from their development and use."


Huawei has formed a strategic partnership to develop AI

This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

Huawei’s R&D arm, Noah’s Ark Laboratory, announced on Tuesday a partnership with the University of California, Berkeley’s artificial intelligence (AI) lab, focused on researching AI in all its forms.

Initially, the Chinese smartphone company will provide UC Berkeley with $1 million in funding as the lab covers areas like deep learning, reinforcement learning, machine learning, natural language processing (NLP), and computer vision. Huawei is likely investigating AI in order to ensure it's not left behind by rival smartphone companies like Samsung, Apple, and Google, all of which have begun implementing AI into their devices.

As hardware shipments begin to decelerate, hardware companies are looking at AI as the next growth platform. Consumers will increasingly find the value in AI as the technology becomes more sophisticated. Therefore, it will become imperative for hardware companies to have an offering of some sort, Cyanogen executive chairman Kirt McMaster told Bloomberg. In line with this, the number of investments in AI is growing rapidly, according to data from CB Insights.

AI is already being pushed as the technology that will power the next generation of consumer-facing products, such as chatbots, search, and camera functionality. Here are a few recent examples:

  • During its hardware event last week, Google spent the vast majority of the conference outlining the many ways its virtual assistant, Google Assistant, is being integrated into its smartphones, messaging app, and connected home device.
  • Samsung acquired Viv, which it intends to integrate into future models of its smartphones and connected devices to rival other connected virtual assistants, like Siri.
  • Microsoft is betting that AI will usher in the next stage of consumer interaction, called conversational commerce. The company is also integrating AI into all of its software offerings, including its Office 365 suite, its third-party keyboard SwiftKey, and camera app Pix.

To receive stories like this one directly to your inbox every morning, sign up for the Apps and Platforms Briefing newsletter. Click here to learn more about how you can gain risk-free access today.


The White House is preparing for the future of artificial intelligence

This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

The White House released a lengthy report Wednesday on artificial intelligence (AI) and its potential impact on multiple industries.

Understanding and pinpointing the possibly harmful factors that underlie the rapid evolution of AI will help prepare future government bodies, businesses, and users for the broad deployment of AI.

The report, titled “Preparing For The Future Of Artificial Intelligence,” highlighted four key areas of AI and offered recommendations for each: 

  • AI and regulation: Regulatory procedures can provide a sense of accountability and protect user data. Nevertheless, it’s important that these regulations do not inhibit innovation within the space. The report recommends that agencies draw on technical experts at the senior level when setting regulatory policies to provide the appropriate level of touch for potential regulations.
  • Research and the workforce: AI is becoming a bigger part of everyday life. Thus, it will be necessary for the workforce to become familiar with the ins and outs of the technology. It’s in the interest of schools, universities, businesses, and the government to begin implementing programs aimed at educating the current and next generation of workers in how to use and apply AI across various fields. The White House concurrently released a companion report focused on federally funded research and development in AI.
  • Fairness, safety, and governance: While education will be key for preparing users of AI, it’s also important that this education provides an ethical foundation that safeguards against improper application of AI and explores security, privacy, and safety.
  • Global considerations and security: While the US leads global efforts in developing AI technology, international engagement will be necessary to fully explore the many applications of AI across various industries. Governments will also need to work together to reach a standardized policy for the use of AI in things like warfare, cyber security, and sharing user data. 

To receive stories like this one directly to your inbox every morning, sign up for the Apps and Platforms Briefing newsletter. Click here to learn more about how you can gain risk-free access today.


Cambridge has opened a £10 million research facility to examine the morality and governance of AI

The University of Cambridge has opened a £10 million research centre to explore the impact of artificial intelligence, Wired reports.

The Leverhulme Centre for the Future of Intelligence — first announced last December and funded with a research grant from the Leverhulme Trust — will study the impacts of this "potentially epoch-making technological development, both short and long term."

The centre's new website details a list of projects that its researchers will look at. Projects include:

  • Science, value, and the future of intelligence
  • Policy and responsible innovation
  • The value alignment problem
  • Kinds of intelligence
  • Autonomous weapons — prospects for regulation
  • AI: Agents and persons

The centre, which is due to start work this month and will eventually have its own building on Mill Lane in the heart of Cambridge, also writes on its website that its aim is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world.

Led by Cambridge philosophy professor Huw Price, the centre will work in conjunction with the university’s Centre for the Study of Existential Risk (CSER), which is funded by Skype cofounder Jaan Tallinn and looks at emerging risks to humanity’s future, including climate change, disease, warfare, and artificial intelligence.

Zoubin Ghahramani, deputy director of the new centre and professor of information engineering at Cambridge, said in a statement last December: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars.

"We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."

Think it sounds interesting? Well, you could be in luck. The centre is currently looking for PhD students to focus on the "study of values and intelligence."

Stephen Cave, director of the centre, told Business Insider: "So far, the core team is about 15 people. We are now starting to recruit post-doctoral researchers (about 12 in the first year). But not all of these people will be based in Cambridge, and not all of those who are based in Cambridge will be physically in our premises (because they already have offices)."


Apple wants to use AI to turbo-charge your iPhone's battery life (AAPL)

Apple wants to use artificial intelligence (AI) to turbo-charge the battery life of your iPhone.

In an interview with The Nikkei Asian Review published Monday, CEO Tim Cook said that the company is looking at integrating AI technology across its products to improve them. While the tech is most commonly associated with chatbots like Apple's Siri or Google's Assistant, it can be used "in ways that most people don't even think about," Cook said.

A key example he gave was making your devices last longer in between charges: "We want the AI to increase your battery life."

Earlier this year, Google announced it had integrated AI into its power-hungry data centres — and it managed to produce savings of up to 40% by automatically figuring out the most efficient way to run things.

Apple (and others) could try and achieve similar gains in smartphones and other commercial devices by using AI to optimise battery life and reduce power consumption.
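Neither Google nor Apple has detailed how such a system would work on a device, but the broad pattern is a control loop in which a learned model predicts the energy cost of candidate settings and the controller picks the cheapest one that still meets demand. A toy sketch under those assumptions (the frequency levels and power model are invented for illustration):

```python
# Toy model-driven power management loop (illustrative only; not
# Apple's or Google's implementation; the numbers are invented).

CPU_LEVELS = [0.4, 0.8, 1.2, 1.9]        # candidate clock speeds, GHz

def predicted_power(freq_ghz):
    """Assumed learned model: dynamic power grows roughly with frequency cubed."""
    return 0.3 + 0.5 * freq_ghz ** 3     # watts

def choose_frequency(required_ghz):
    """Pick the lowest-power setting that still meets demand."""
    feasible = [f for f in CPU_LEVELS if f >= required_ghz]
    if not feasible:
        return max(CPU_LEVELS)           # saturate if demand exceeds capacity
    return min(feasible, key=predicted_power)

print(choose_frequency(0.7))             # -> 0.8, the cheapest feasible level
```

The interesting part in a real system is the learned model: instead of a hand-written formula, it would be trained on telemetry so its predictions track the actual hardware.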

Battery life is a constant pain for just about every smartphone on the market. The iPhone lasts around a day, and even the most energy-efficient consumer devices last only two or three — far less than the dumbphones of the pre-smartphone era, which could last a week or so on a single charge.

The race for ever-thinner and more-powerful devices means that any gains in energy efficiency or battery tech from year to year typically just result in more features being crammed into a device, rather than meaningful improvements in battery life.

As such, any improvements — whether from AI, or anywhere else — will be sure to be welcomed by consumers.



REVIEW: Google's first phone makes Siri look trivial (GOOG, GOOGL)

Android has always been a mess.

Its greatest strength — the openness and ability for any phone maker to freely adopt and modify the software — is also its greatest weakness. It has caused fragmentation, spotty or missed updates, and major security concerns. After all these years, the companies that make Android phones have shown no signs of cleaning things up.

So Google decided to fix the debacle it helped create.

The Pixel, the first smartphone designed by Google from the ground up, is the antidote to most of Android's problems. The phone, which starts at $649 and goes on sale this week, highlights Google's ambition to take back control of Android and finally prove it can be a streamlined and easy-to-use platform.

And it worked.

The Pixel is an excellent phone, and it's what Android should have been from the beginning. Google has finally figured out that it's not just enough to make great software. You also need to pair it with excellent hardware. Yes, that should be obvious. And yes, that has been Apple's philosophy for decades. But it's the truth.

Google also has a major advantage over Apple. It has always been better at software and services, and nothing proves that more than Google Assistant, the new digital helper that lives inside the Pixel and future Google-made products like the Google Home speaker.

The Pixel phone is a taste of a future in which hardware matters less and the artificial intelligence that powers it takes precedence. And no one is better positioned to take advantage of that future at the moment than Google.

'OK Google, when's my next flight?'

The new Google Assistant functions a lot like Siri. Tap and hold the home button, and the Assistant pops up to ask what you need.

Google Assistant pulls information from everything you do in Google's services, from Search to Gmail to Calendar to Photos. The more Google services you use, the better Assistant becomes at helping you.

My favorite example over the past few days: I asked Assistant when my next flight was and it gave me the answer, complete with the Delta flight number and scheduled takeoff time. I never told Google about my flight. It just knew based on the confirmation email Delta sent me when I booked.
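Google hasn't published exactly how Assistant extracts that information, but the simplest version of the idea is structured extraction from known email templates. A hedged sketch using a regular expression (the email text and pattern are invented; a production system would rely on airline-specific templates or structured markup embedded in the email, not one regex):

```python
import re

# Invented example message for illustration.
email = "Your Delta flight DL 423 departs JFK on Nov 2 at 7:15 AM."

match = re.search(
    r"(?P<airline>Delta)\s+flight\s+(?P<number>[A-Z]{2}\s?\d+).*?"
    r"departs\s+(?P<origin>[A-Z]{3}).*?at\s+(?P<time>[\d:]+\s?[AP]M)",
    email,
)
if match:
    print(f"{match['airline']} {match['number']} "
          f"from {match['origin']} at {match['time']}")
# -> Delta DL 423 from JFK at 7:15 AM
```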

That's just one tiny example, but it's an important one. Assistant is smart enough to understand context across a variety of services to get you that one thing you want. It's shocking and magical when it works, and it's just the first step in Google's ambition to create a personalized Google for everyone. It's not there yet, but after spending a few days with Assistant and the Pixel, I can tell Google is better equipped to make AI work for users than any other company.

Assistant’s capabilities are so broad and varied that it's impossible to list them all here. I haven't even come close to unlocking everything Assistant can do, but I was routinely surprised whenever I dreamed up something new to ask.

Pull up the photos I took from my latest trip to San Francisco. Done. Give me the fastest route home. Done. Remind me to chat with my boss when I get to work tomorrow. Done. Play that Calvin Harris and Rihanna song. Done.

Then there's the ability to tap into Google's vast knowledge of the web and deliver answers to the questions you ask. What time is the next presidential debate? Did the Jets win? Are there any good ramen restaurants near me, and can I get a reservation?

I could go on and on, but you probably get the idea. Google has tens of billions of answers logged into its system, and it can pull even more from trusted sources like Wikipedia if it's stumped. It's almost always able to get you what you're looking for, though I did experience some rare cases in which it would pull up a standard list of Google search results.

And when you couple Assistant with Google Now, Google's proactive helper that delivers information and alerts based on what Google knows about you, the Pixel turns into more than just a phone that responds to your swipes and taps. The Pixel is constantly working for you, delivering what you want before you even know you want it.

Apple should be embarrassed that Siri, which had a five-year head start on Google Assistant, is nowhere near as capable.

Still, there were some flaws with Assistant. It could send emails and text messages but couldn't read ones sent to me (a feature that should be coming soon). It also couldn't tell me when my next Amazon order was expected to arrive, even though that information appeared in Google Now. Those things can easily be fixed over time, and Assistant will continue to get smarter and learn new skills the more people use it.

There are also some obvious privacy concerns. Assistant is so good because it knows so much about you. So you have to let a piece of yourself go and have a high level of trust that Google won't misuse or abuse all that personal information that makes Assistant work so well. It will most likely scare off some people, and I don't blame them for it. But for me, it's a fair price to pay for a tool that makes my life so much easier.

Android perfected

The other benefit to the Pixel is Android. This isn't the modified Android you've experienced on phones from Samsung or LG. It's "pure" Android, delivered the way Google intended it. And it's really, really good.

This latest version is called Nougat, and it sports a clean design and all the standard features you'd expect from a high-end phone.

But the real bonus is that Pixel will be the only phone that receives new versions of Android as soon as they are available. That's almost unheard-of for Android devices. Even the Nexus phones Google has helped other manufacturers develop over the years have struggled to deliver timely updates.

The Pixel comes with the promise that you are buying a phone that will continue to improve over time. Timely updates are one of the biggest things keeping users locked into the iPhone, and it's refreshing to see them finally come to Android, which at last feels on par with iOS. The next great challenge will be to expand that philosophy to the rest of the Android ecosystem, but I'm not very optimistic that can happen. From now on, if you want the best of Android, your best bet will be to buy a phone straight from Google.

Just another phone

The hardware is easily the least exciting part about the Pixel. Everything here is pretty standard. It comes in two sizes, one with a 5-inch screen and an "XL" model with a 5.5-inch screen. There's a fingerprint sensor, a super-sharp screen, fast charging (if you use the included wall plug), and a standard headphone jack.

That doesn't make the hardware bad. It just goes to show that the real draw of the Pixel comes from the software.

But the Pixel is missing two features that are becoming standard in premium phones: wireless charging and water resistance. Neither is a must-have, but if Google is making you pay this much for a phone, it would have been nice to include something like that.

The design is also shockingly similar to that of the iPhone 7, so much so that one of my colleagues thought I had two iPhones sitting on my desk when he took a quick glance. It's also noticeably thicker, which is probably why Google was able to brag that the Pixel doesn't have an unsightly camera bump.

That's the truly disappointing thing about the Pixel's hardware. The iPhone 7 design already feels dated, and it's even worse that Google borrowed so heavily from it. I would have liked to see some creativity design-wise.

That said, the camera does stand out. I'm not confident enough to back Google's claim that the Pixel has the best smartphone camera ever, but it's definitely right up there. As with everything about the Pixel, some extra AI is built into the camera, in this case to help you find the best shot when you take several in a row. You won't be disappointed.
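Google hasn't said how that best-shot selection works; one classic heuristic it might resemble is ranking a burst of frames by sharpness, for example by the variance of the Laplacian. A sketch assuming OpenCV is available (the file names are placeholders):

```python
import cv2  # OpenCV, an assumed dependency for this sketch

def sharpness(path):
    """Variance of the Laplacian: higher means more in-focus detail."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

burst = ["shot_1.jpg", "shot_2.jpg", "shot_3.jpg"]  # placeholder burst files
print("keep:", max(burst, key=sharpness))
```

A real pipeline would likely also score faces (eyes open, smiling) and exposure, not just blur, but the select-the-best-frame structure is the same.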

Even better: Google will give you unlimited storage for all your high-resolution images taken with the Pixel, a welcome treat when Apple gives you only a few measly gigs of free iCloud storage.

Conclusion

The Pixel is the best of Android and the best alternative to the iPhone. It's also just a first step as Google accelerates its hardware ambitions and takes development seriously for the first time. Google is finally ready to push Android forward and do it right, and the Pixel is an amazing start.

Hardware is easy. Anyone can make a really nice phone these days and even do it on the cheap. The real challenge is making the phone do more for you through AI and other useful services. The Pixel is proof Google isn't just up for that challenge. It can beat the competition on the first try.


The tech industry is making big money off people's fear of AI

Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity.

Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.

So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?

There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks.

However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real.

For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company with contributions from tech titans, such as Amazon, to prevent an evil AI from bringing about the end of humanity. Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm.

Listening to this, it seems the end may indeed be nigh unless we act before it’s too late.

The role of the tech industry

Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement? The cynic might say that the AI doomsday vision has taken on religious proportions. Of course, doomsday visions usually come with a path to salvation.

Accordingly, Kurzweil claims we will be virtually immortal soon through nanobots that will digitise our memories.

And Musk recently proclaimed that it’s a near certainty that we are simulations within a computer akin to The Matrix, offering the possibility of a richer encompassing reality where our “programs” can be preserved and reconfigured for centuries.

Tech giants have cast themselves as modern gods with the power to either extinguish humanity or make us immortal through their brilliance. This binary vision is buoyed in the tech world because it feeds egos – what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends?

No longer are tech figures cast as mere business leaders, but instead as the chosen few who will determine the future of humanity and beyond.

For Judgement Day researchers, proclaiming an "existential threat" is not just a call to action; it also attracts generous funding and an opportunity to rub shoulders with the tech elite.

So, are smart machines more likely to kill us, save us, or simply drive us to work? To answer this question, it helps to step back and look at what is actually happening in AI.

Underneath the hype

The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach.

Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power. What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example capture the king in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
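To see what "maximising an objective" means in a game, here is a minimal minimax sketch for tic-tac-toe, where the objective function is simply +1 for a win, -1 for a loss, and 0 for a draw:

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                        # board full: a draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score = -minimax(board, opponent)[0]  # opponent minimises our score
        board[m] = " "
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Exhaustive search is fine on a 3x3 board; perfect play is a draw.
print(minimax(list(" " * 9), "X"))            # -> (0, 0)
```

The hard part in real applications is rarely the search; it is writing down an objective that actually captures what you want, which is the point the next paragraph makes.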

In other cases, it may be harder to define the objective and this is where AI could go wrong. However, AI is more likely to go wrong for reasons of incompetence rather than malice. For example, imagine that the US nuclear arsenal during the Cold War was under control of an AI to thwart sneak attack by the Soviet Union.

Through no action of the Soviet Union, a nuclear reactor meltdown occurs in the arsenal and the power grid temporarily collapses.

The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence the president is being coerced. Missiles released. End of humanity.

The AI was simply following its programming, which led to a catastrophic error. This is exactly the kind of deadly mistake that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than an evil AI turning on us — no different than an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.

Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, this essentially follows the arc of history where humans use available technologies to kill one another.

There are real dangers from AI but they tend to be economic and social in nature. Clever AI will create tremendous wealth for society, but will leave many people without jobs. Unlike the industrial revolution, there may not be jobs for segments of society as machines may be better at every possible job.

There will not be a flood of replacement "AI repair person" jobs to take up the slack. So the real challenge will be how to properly assist those (most of us?) who are displaced by AI. Another issue will be how people care for one another as machines permanently displace entire classes of labour, such as healthcare workers.

Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice.

A recent report by the UK’s House of Commons Science and Technology Committee on the risks of AI, for example, focuses on economic, social and ethical concerns. The take-home message was that AI will make industry more efficient, but may also destabilise society.

If we are going to worry about the future of humanity we should focus on the real challenges, such as climate change and weapons of mass destruction rather than fanciful killer AI robots.

Bradley Love, Professor of Cognitive and Decision Sciences, UCL

This article was originally published on The Conversation. Read the original article.


The least educated people in the world are in denial about how robots will take over their jobs

The least educated people in the professional world are also the least concerned about the risk of technology taking their jobs, according to a new survey released by salary benchmarking site Emolument.

Emolument surveyed 900 people working across "several industries and countries" and found that those with no university education were least likely to think that technology is a risk to their job. Just 18% of people with no degree answered yes when asked: "Is technology putting your job at risk?"

On the other hand, 40% of people with a master's degree in finance answered yes.

Here's Emolument's chart showing different answers by degree level:

[Chart: Emolument survey responses by degree level]

While those with the least education may not be particularly worried about losing their jobs to robots, they should be.

Recent research from the Oxford Martin School, the University of Oxford's policy school, titled "Technology at work: V2.0," concluded that 35% of jobs in the UK are at risk of being replaced by automation, 47% of US jobs are at risk, and across the OECD as a whole an average of 57% of jobs are at risk. In China, the risk of automation is as high as 77%.

As my colleague Oscar Williams-Grut wrote earlier in the year, "most of the jobs at risk are low-skilled service jobs like call centres or in manufacturing industries."

Another report, released at 2016's annual World Economic Forum conference, argued that automation "will lead to a net loss of over 5 million jobs in 15 major developed and emerging economies by 2020," adding that the majority of jobs will be lost by low-skilled workers, generally those with lower levels of education.

Along with looking at how people with different levels of education perceive the threat of technology to their jobs, Emolument also looked at different job areas, finding that those in financial services — where computer programmes and algorithms are now doing much of the work — are most worried, while engineers are the least.

Check out the table below:

[Table: Emolument survey results on perceived job risk from technology, by job function]


How Google embarrassed Apple (GOOG, AAPL)

This week didn't look good for Apple.

Google's new Pixel phone launched to positive reviews, largely because of the phone's new digital helper called Google Assistant.

As I wrote in my review of the new Google Pixel, it’s relatively easy to make a high-end smartphone these days. The real challenge is lighting it up with unique software that helps you do more.

And the new Google Assistant accomplishes just that.

Right out of the gate, Assistant is noticeably smarter and more capable than Siri, a stark embarrassment for Apple, which had a five-year head start on Google. AI and voice control are considered to be the next big step in how we compute (just look at the early success of the Amazon Echo), and Google has already pulled ahead.

Assistant is so good because it taps into Google’s vast network of products and pulls them together into a single, all-knowing app. The more Google services like Calendar, Photos, and Gmail you use, the smarter Assistant gets.

It’s also better at answering questions than the competition, thanks to its ability to tap into Google’s vast Knowledge Graph and deliver the single answer to the question you ask. Google Assistant has so many impressive skills that it’s impossible to list them all now.

I’m still discovering new capabilities after almost two weeks with the Pixel. Here’s how I put it in my review:

I haven't even come close to unlocking everything Assistant can do, but I was routinely surprised whenever I dreamed up something new to ask.

Pull up the photos I took from my latest trip to San Francisco. Done. Give me the fastest route home. Done. Remind me to chat with my boss when I get to work tomorrow. Done. Play that Calvin Harris and Rihanna song. Done.

Then there's the ability to tap into Google's vast knowledge of the web and deliver answers to the questions you ask. What time is the next presidential debate? Did the Jets win? Are there any good ramen restaurants near me, and can I get a reservation?

I could go on and on, but you probably get the idea. Google has tens of billions of answers logged into its system, and it can pull even more from trusted sources like Wikipedia if it's stumped. It's almost always able to get you what you're looking for, though I did experience some rare cases in which it would pull up a standard list of Google search results.

The shame here is that Siri had a five-year head start on Google Assistant, and Apple totally blew it. Siri struggles to answer even the simplest of queries. It wasn’t until two tech columnists recently pointed out those flaws that Siri quickly learned the answers to some of the questions they were griping about. Curious!

The reality is Apple can’t be reactive and improve Siri every time someone blogs about its flaws. 

Luckily, the pieces are in place, as Apple has acquired a series of AI and machine learning companies over the last year or so. Most notably, it bought the UK-based startup Vocal IQ, which as I reported earlier this year had technology that allowed users to control a phone or computer completely with voice. That also jibes with Apple’s near-term goal to make Siri fully capable of controlling the iPhone within the next few years, as Bloomberg’s Mark Gurman reported.

But for now, Google Assistant is clearly in the lead, and that lead will only get wider as more people use it and increase its intelligence.


Here’s what a computer is thinking when it plays chess against you

A co-lead at Google's Big Picture data visualization group has created an online version of chess called the Thinking Machine 6, which lets you play against a computer and visualize all of its possible moves. While the computer may not be the most advanced player, the program gives you an inside look into how your artificial opponent's mind works.
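The Thinking Machine's own code isn't described here, but the heart of any such visualization is enumerating the tree of possible moves. A sketch using the python-chess library (an assumed dependency, not necessarily what the project uses) that counts the positions reachable at each depth:

```python
import chess  # python-chess: `pip install python-chess` (assumed dependency)

def count_positions(board, depth):
    """Count positions reachable `depth` half-moves ahead (a 'perft' count)."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)                      # make the move...
        total += count_positions(board, depth - 1)
        board.pop()                           # ...then undo it
    return total

board = chess.Board()
for depth in (1, 2, 3):
    print(depth, count_positions(board, depth))  # 20, 400, 8902 from the start
```

The explosive growth of those counts is exactly why the visualization looks like a swarm: even a few moves ahead, the machine is weighing thousands of possible futures.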


11 thought-provoking questions raised by 'Westworld'

Note: Spoilers are ahead for previously aired Westworld episodes, as is some potentially spoiler-y speculation for future episodes.

Something is wrong in "Westworld."

HBO's sci-fi western drama — a serialized reboot of Michael Crichton's 1973 thriller of the same name — depicts a fantastical robot-filled "theme park" of the future.

Westworld guests can interact with artificially intelligent "hosts"— gunslingers, brothel madams, a farmer’s daughter, Native Americans, and more — taking part in all the sex and violence that can be jammed into these characters’ storylines. And all of it teed up by the people who are essentially Westworld's game designers.

But as visitors ride, terrorize, shoot, and sleep with the park's robot hosts, the designers operating behind the scenes soon discover that something is off.

Along the way, Westworld’s story brushes up against all kinds of uneasy questions — mainly scientific and philosophical — about the complex intersection of technology and people.

While we can't say where the show is going, or whether it will ever answer any of these questions, here are some of the most interesting ones we’ve spotted so far.

Do we all live in a simulation?

Everyone in Westworld wakes up to go about their day — working, drinking, fighting, whatever it may be — without knowing that their entire existence is a simulation of a “real world” created by the park’s designers.

Physicists and philosophers say that in our world, we can’t prove we don’t live in some kind of computer simulation.

Some think that if that is the case, we might be able to "break out" by noticing any errors in the system, something the Westworld robots seem to be brushing up against.



Can we control artificial intelligence?

Each time the park wakes up (or the simulation restarts?), the hosts are supposed to go about their routines, playing their roles until some guest veers into the storyline. The guest might go off on an adventure with the host — or they might rape or kill them. In any case, when the story resets, the hosts' memories are wiped clean.

Supposedly.

For some reason, a few hosts seem to remember their disturbing past lives. This may be related to a “software update” created by park founder Dr. Robert Ford (played by Anthony Hopkins) or it may have something to do with his mysterious co-founder, Arnold.

Luckily, and for a variety of reasons, AI researchers today believe out-of-control AI is a myth and that we can control intelligent software. Then again, few computer and linguistic scientists thought machines could ever learn to listen and speak as well as people — and now they can on a limited level.



How far off are the intelligent machines of Westworld?

Behind the scenes at Westworld's headquarters, advanced industrial tools can 3D-print the bodies of hosts from a mysterious white goop. Perhaps it's made of nanobots, or some genetically engineered tissue, or maybe it's just plastic that's later controlled by as-yet-undisclosed advanced technology.

There's a lot of mystery here, and as we find out in one episode (when a host smashes his own head in with a rock), the "thinking" part of the machines is definitely located in the head. But what's it made of? What powers these strange constructs? How are the batteries recharged, if at all? And can the hosts feel pain and pleasure — and if so, how?

These automatons seem like an engineer's dream as well as her nightmare.

Nothing like this exists in the real world, but researchers and entrepreneurs are working hard to advance soft robots, ultra-dense power sources, miniaturized everyday components (some down to an atomic scale), and other bits and pieces that might ultimately comprise a convincing artificial human.




You can now go on a 'Stranger Things' scavenger hunt using Google's new messaging app

Google is using its new messaging app Allo to send people on a "Stranger Things"-inspired scavenger hunt.

Today only in New York City — that's Friday, October 28 — users of Allo, Google's AI-powered messaging app, can embark on a scavenger hunt to locate Barb, one of the show's characters who mysteriously disappeared. 

According to Engadget, if you ask Google's Assistant within Allo "Where is Barb?", it will reveal the first location of the hunt, which will take you throughout the city.

The hunt could result in winning prizes, like a BMX bike, a Pentax camera, or a Panasonic boombox, according to Engadget. 

Google itself was mysterious about the scavenger hunt, writing only this on its official blog:

Google Allo will help you unlock your powers today in New York City. Stay tuned to Google on Twitter for a hint on where the drop-off from Hawkins National Laboratories will take place.

Google is also adding new features to the Allo app, including a new "Stranger Things"-inspired sticker pack, the option to reply to messages directly from your notifications on both Android and iOS, and split-screen mode, which is a feature of Android N. The option to draw on photos, which was only available for Android users at launch, is now available on iOS.

The Allo app was the first Google product to use Google's Assistant, which uses AI to help answer questions and provide information, much like Apple's Siri and Amazon's Alexa (Google has now added the Assistant to its new Pixel phone and Google Home device, too). Allo reached 5 million downloads in the Google Play store by the end of September, five days after it launched. But the company is mostly letting the app's user base grow organically, opting not to make it standard on its new phone or to require it of Android phone manufacturers. The app is now No. 96 on the Play store's list of top free apps. 



Watch a haunting MIT program transform photos into your worst nightmares

The smarter artificial intelligence (AI) software gets, the more fearful the world's brightest contemporary minds seem to grow.

SpaceX and Tesla founder Elon Musk said in 2014 that "we're summoning the demon" with the technology. And just last week Stephen Hawking, during the opening of the Leverhulme Centre for the Future of Intelligence, also spoke of his trepidation about ever more powerful computer algorithms.

"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity,"Hawking said. "We do not yet know which."

Preying on these fears, three researchers at MIT Media Lab took a page from "Black Mirror" and created the Nightmare Machine.

"We know that AI terrifies us in the abstract sense. But we wondered: [...] can AI elicit more powerful visceral reactions more akin to what we see in a horror movie?"Pınar Yanardağ, a data scientist and member of the project, told Business Insider in an email. "That is, can AI creatively imagine things that we find terrifying?"

Here's how the team developed its scary software, plus some images and animations of it at work.


The team first borrowed an algorithm that can transform ordinary images into the styles of famous artists, like Vincent van Gogh.



Once trained on source material, the algorithm can recognize discrete stylistic elements and separate them into layers.

Source: Business Insider



The MIT Media Lab team adapted the algorithm, editing some of the source code and then training it with some ... more macabre source material. They did so several times to create different filters.
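
The researchers haven't published their code, but the underlying technique, neural style transfer (Gatys et al., 2015), is well documented. Here is a minimal PyTorch sketch of that general approach, not the Nightmare Machine's actual implementation; the file names, layer indices, and loss weights are illustrative assumptions.

# A minimal sketch of neural style transfer (Gatys et al., 2015) in PyTorch.
# Assumed inputs: "photo.jpg" (the content image) and "macabre.jpg" (the
# style source). File names, layer indices, and loss weights are
# illustrative, not the Nightmare Machine's actual configuration.
# Requires torchvision >= 0.13 for the `weights=` argument.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # only the image is optimized, never the network

load = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

def features(x, layers):
    # Collect activations at the requested layer indices of VGG-19.
    out = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out[i] = x
    return out

def gram(f):
    # Gram matrix of a (1, C, H, W) feature map: channel-to-channel
    # correlations, which capture texture and palette but not layout.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load(Image.open("photo.jpg").convert("RGB")).unsqueeze(0).to(device)
style = load(Image.open("macabre.jpg").convert("RGB")).unsqueeze(0).to(device)

content_layers = {21}              # one deep layer preserves the photo's structure
style_layers = {0, 5, 10, 19, 28}  # shallow-to-deep layers carry the style

target_c = features(content, content_layers)
target_s = {i: gram(f) for i, f in features(style, style_layers).items()}

img = content.clone().requires_grad_(True)  # start from the photo itself
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    acts = features(img, content_layers | style_layers)
    loss = sum(F.mse_loss(acts[i], target_c[i]) for i in content_layers)
    loss = loss + 1e6 * sum(F.mse_loss(gram(acts[i]), target_s[i]) for i in style_layers)
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)  # keep pixels valid; a real run would also
                          # normalize inputs with ImageNet statistics

The key design choice is the Gram matrix: by matching feature correlations rather than the features themselves, the optimizer copies an image's texture and palette, the "scary" part, without copying its layout. Swapping in macabre style images, as the MIT team did, is what turns ordinary photos into nightmares.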




How Google embarrassed Apple (GOOG, AAPL)



This week didn't look good for Apple.

Google's new Pixel phone launched to positive reviews, largely because of the phone's new digital helper called Google Assistant.

As I wrote in my review of the new Google Pixel, it’s relatively easy to make a high-end smartphone these days. The real challenge is lighting it up with unique software that helps you do more.

And the new Google Assistant accomplishes just that.

Right out of the gate, Assistant is noticeably smarter and more capable than Siri, a stark embarrassment for Apple, which had a five-year head start on Google. AI and voice control are considered to be the next big step in how we compute (just look at the early success of the Amazon Echo), and Google has already pulled ahead.

Assistant is so good because it taps into Google’s vast network of products and pulls them together into a single, all-knowing app. The more Google services like Calendar, Photos, and Gmail you use, the smarter Assistant gets.

It’s also better at answering questions than the competition, thanks to its ability to tap into Google’s vast Knowledge Graph and deliver the single answer to the question you ask. Google Assistant has so many impressive skills that it’s impossible to list them all now.

I’m still discovering new capabilities after almost two weeks with the Pixel. Here’s how I put it in my review:

I haven't even come close to unlocking everything Assistant can do, but I was routinely surprised whenever I dreamed up something new to ask.

Pull up the photos I took from my latest trip to San Francisco. Done. Give me the fastest route home. Done. Remind me to chat with my boss when I get to work tomorrow. Done. Play that Calvin Harris and Rihanna song. Done.

Then there's the ability to tap into Google's vast knowledge of the web and deliver answers to the questions you ask. What time is the next presidential debate? Did the Jets win? Are there any good ramen restaurants near me, and can I get a reservation?

I could go on and on, but you probably get the idea. Google has tens of billions of answers logged into its system, and it can pull even more from trusted sources like Wikipedia if it's stumped. It's almost always able to get you what you're looking for, though I did experience some rare cases in which it would pull up a standard list of Google search results.

The shame here is that Siri had a five-year head start on Google Assistant, and Apple totally blew it. Siri struggles to answer even the simplest of queries. It wasn’t until two tech columnists recently pointed out those flaws that Siri quickly learned the answers to some of the questions they were griping about. Curious!

The reality is that Apple can’t simply be reactive, improving Siri only when someone blogs about its flaws.

Luckily, the pieces are in place, as Apple has acquired a series of AI and machine learning companies over the last year or so. Most notably, it bought the UK-based startup Vocal IQ, which as I reported earlier this year had technology that allowed users to control a phone or computer completely with voice. That also jibes with Apple’s near-term goal to make Siri fully capable of controlling the iPhone within the next few years, as Bloomberg’s Mark Gurman reported.

But for now, Google Assistant is clearly in the lead, and that lead will only widen as more people use it and feed its intelligence.



Facebook AI director Yann LeCun explains how he hires the smartest minds in the world (FB)



US tech giants like Facebook, Google, Amazon, and Microsoft are investing hundreds of millions of dollars in artificial intelligence as they look to make their platforms and personal assistants smarter.

Part of this effort involves finding and hiring the brightest minds in the world. But with so many large companies involved in the so-called "AI race," it's not always easy to recruit the best talent.

Yann LeCun, the director of Facebook AI Research and one of the world’s most prominent AI academics, told Business Insider last week that he employs certain tactics to get people to come and work for him.

"There’s various things ... but a lot of it is nurturing relationships with academic laboratories that have a track record of producing interesting students," said LeCun, who is also a professor at New York University.

He went on to say that allowing scientists to publish their work — something that Apple does not do — is also key. "It’s very important for a scientist because the currency of the career as a scientist is the intellectual impact," he said. "So you can’t tell people 'come work for us but you can’t tell people what you’re doing' because you basically ruin their career. That’s a big element, which I think we pioneered within this context."

LeCun could not be drawn on how much Facebook is willing to pay the top AI people who hold expertise in fields like machine learning, computer vision, mobile robotics, and computational neuroscience. However, online forums suggest the tech giants are willing to pay the best candidates salaries that run into the hundreds of thousands of dollars.

"[Salary] is important," said LeCun. "Particularly when there is a competitive situation with Microsoft, DeepMind, Google etc. But the other fundamentals have to be right. If they’re not right, people are just not even considering coming to work for you."

LeCun added that the FAIR group, whose members refer to themselves internally as the "FAIRies," is now about 75 people strong, with offices in New York, Palo Alto, Seattle, and Paris.

"The role of FAIR is to advance the science and the technology of AI and do experiments that demonstrate that technology for new applications like computer vision, dialogue systems, virtual assistants, speech recognition, natural language understanding, translation, things like that," he said.

"There’s a lot of basic science behind it which is not particularly geared towards an application; it’s more about making progress and understanding intelligence and AI.

"Then we work very closely with another group, which is about twice our size, called applied machine learning. They turn the science into visible technology and build platforms for the company that product groups can use to deploy AI-based services in the company."

LeCun's comments were made as part of a longer interview which will be published on Business Insider at a future date.



Facebook's AI director explained why some of the world's brightest minds might not want to work for Apple (FB, AAPL)



Apple is one of the most secretive technology companies in the world, with the Cupertino tech giant typically only providing public updates when it has a new product to announce.

Part of this secrecy involves getting employees to sign strict contracts that forbid them from talking to their friends and family about certain aspects of their work, particularly research and development (R&D) activities.

Unlike Facebook and Google, which let employees publish their academic breakthroughs in scientific journals and on blogs, Apple prevents its staff from talking about their research both online and offline. They're allowed to attend conferences but they don't give talks about what Apple is working on and they generally only disclose their employer when they're asked to.

This approach could be hindering the company's ability to hire some of the world's smartest minds, based on what Facebook AI director Yann LeCun said last week.

Describing how he gets the most talented software engineers in the world to come and work on Facebook's AI efforts, LeCun said: "Offering researchers the possibility of doing open research, which is publishing their work.

"In fact, at FAIR [Facebook Artificial Intelligence Research], it’s not just a possibility, it’s a requirement," he said in London. "So, [when] you’re a researcher, you assume that you’re going to publish your work. It’s very important for a scientist because the currency of the career as a scientist is the intellectual impact. So you can’t tell people 'come work for us but you can’t tell people what you’re doing' because you basically ruin their career. That’s a big element."

Apple's secrecy was cited by Bloomberg last October as something that's holding back the company's AI development efforts.

"Apple is off the scale in terms of secrecy," Richard Zemel, a professor in the computer science department at the University of Toronto, told Bloomberg. "They’re completely out of the loop."

Despite being a highly secretive company, Apple was still able to hire Carnegie Mellon University's (CMU) Russ Salakhutdinov, one of the world’s leading talents in AI, as director of AI research. Although Salakhutdinov works for Apple, he still holds his position at CMU. This suggests that Apple is willing to let some of its staff remain a part of the academic community, providing they don't talk about what they do for the iPhone maker. 

Apple declined to comment.



Experts are worried that advancements in AI could threaten humanity



Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see “No, Experts Don't Think Superintelligent AI is a Threat to Humanity”).

After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom’s “main source of data on the advent of human-level intelligence” consists of surveys on the opinions of AI researchers. He then surveys the opinions of AI researchers, arguing that his results refute Bostrom’s.

It’s important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it’s important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”

Thus, in our view, Etzioni’s article distracts the reader from the core argument of the book and directs an ad hominem attack against Bostrom under the pretext of disputing his survey results. We feel it is necessary to correct the record. One of us (Russell) even contributed to Etzioni’s survey, only to see his response completely misconstrued. In fact, as our detailed analysis shows, Etzioni’s survey results are entirely consistent with the ones Bostrom cites.

How, then, does Etzioni reach his novel conclusion? By designing a survey instrument that is inferior to Bostrom’s and then misinterpreting the results.

The subtitle of the article reads, “If you ask the people who should really know, you’ll find that few believe AI is a threat to humanity.” So the reader is led to believe that Etzioni asked this question of the people who should really know, while Bostrom did not. In fact, the opposite is true: Bostrom did ask people who should really know, but Etzioni did not ask anyone at all. Bostrom surveyed the top 100 most cited AI researchers. More than half of the respondents said they believe there is a substantial (at least 15 percent) chance that the effect of human-level machine intelligence on humanity will be “on balance bad” or “extremely bad (existential catastrophe).” Etzioni’s survey, unlike Bostrom’s, did not ask any questions about a threat to humanity.

Instead, he simply asks one question about when we will achieve superintelligence. As Bostrom’s data would have already predicted, about two-thirds (67.5 percent) of Etzioni’s respondents plumped for “more than 25 years” to achieve superintelligence—after all, more than half of Bostrom’s respondents gave dates beyond 25 years for a mere 50 percent probability of achieving mere human-level intelligence. One of us (Russell) responded to Etzioni’s survey with “more than 25 years,” and Bostrom himself writes, of his own surveys, “My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates.”

Now, having designed a survey where respondents could be expected to choose “more than 25 years,” Etzioni springs his trap: he asserts that 25 years is “beyond the foreseeable horizon” and thereby deduces that neither Russell nor indeed Bostrom himself believes that superintelligent AI is a threat to humanity. This will come as a surprise to Russell and Bostrom, and presumably to many other respondents in the survey. (Indeed, Etzioni’s headline could just as easily have been “75 percent of experts think superintelligent AI is inevitable.”) Should we ignore catastrophic risks simply because most experts think they are more than 25 years away? By Etzioni’s logic, we should also ignore the catastrophic risks of climate change and castigate those who bring them up.

Contrary to the views of Etzioni and some others in the AI community, pointing to long-term risks from AI is not equivalent to claiming that superintelligent AI and its accompanying risks are “imminent.” The list of those who have pointed to the risks includes such luminaries as Alan Turing, Norbert Wiener, I.J. Good, and Marvin Minsky. Even Oren Etzioni has acknowledged these challenges. To our knowledge, none of these ever asserted that superintelligent AI was imminent. Nor, as noted above, did Bostrom in Superintelligence.

Etzioni then repeats the dubious argument that “doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.” The argument does not even apply to Bostrom, who predicts that success in controlling AI will result in “a compassionate and jubilant use of humanity’s cosmic endowment.” The argument is also nonsense. It’s like arguing that nuclear engineers who analyze the possibility of meltdowns in nuclear power stations are “failing to consider the potential benefits” of cheap electricity, and that because nuclear power stations might one day generate really cheap electricity, we should neither mention, nor work on preventing, the possibility of a meltdown.

Our experience with Chernobyl suggests it may be unwise to claim that a powerful technology entails no risks. It may also be unwise to claim that a powerful technology will never come to fruition. On September 11, 1933, Lord Rutherford, perhaps the world’s most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but “moonshine.” Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later. Surely it is better to anticipate human ingenuity than to underestimate it, better to acknowledge the risks than to deny them.

Many prominent AI experts have recognized the possibility that AI presents an existential risk. Contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent consciousness. Rather, the risk arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it. We invite the reader to support the ongoing efforts to do so.

Allan Dafoe is an assistant professor of political science at Yale University.

Stuart Russell is a professor of computer science at the University of California, Berkeley.


