


The world's best poker bot is learning, now crushing humanity again


For a few days, it looked like the humans had it figured out. Four poker pros facing off against the Libratus AI in a 20-day no-limit Texas Hold'em competition pulled back from an early $193,000 deficit with big wins on days four and six, bringing the deficit down to $51,000, with one human, Dong Kim, up $33,000.

"It took us a while to study and get an understanding of what was going on," one of the pros, Jason Les, wrote in an email.

But then the bot started winning again, and winning big. By the end of day 10, it was up a likely insurmountable $677,000, with all of the humans down six figures. (You can see the latest here.)

What happened? Simply put, the bot is learning.

"We can't talk about Libratus's techniques in detail until the match is over," bot co-creator Tuomas Sandholm wrote in an email. "However, I can say this. Libratus's algorithms runs all the time on the supercomputer, so the supercomputer keeps outputting improved strategies every day."

By the end of January, AI will likely have beaten humans in yet another competition. For what it's worth, AI still hasn't beaten humans in group no-limit Hold'em, but we can't imagine it will be long.

Sandholm and Noam Brown created Libratus at Carnegie Mellon University. Their previous bot, Claudico, failed to beat human pros in a 2015 competition.

Teaching AI poker could have significance outside of the poker world (although it has already transformed how humans play poker). For one thing, it's a way of teaching AI to work with incomplete information, which comes up in real world situations like negotiations.

"In the real world, all of the relevant information is not typically laid out neatly like pieces on a chessboard," Brown wrote in an email. "There will be important information that is missing or hidden, and the AI needs to be able to handle that."



Elon Musk may be gearing up for his strangest announcement yet on artificial intelligence


Elon Musk hasn't given up on his vision to add a digital layer of intelligence to our brain.

The Tesla and SpaceX CEO teased on Twitter Wednesday morning that he may have an announcement coming next month about "neural lace," a concept he first brought up at Vox Media's Code Conference in June.

Musk first described neural lace as a brain-computer system that would link human brains with a computer interface. It would allow humans to achieve "symbiosis with machines" so they could communicate directly with computers without going through a physical interface.

Musk has said a neural lace will help prevent people from becoming "house cats" to artificial intelligence.

Musk has strong opinions about the risk of artificial intelligence, saying it could turn evil and pose a threat to humanity. One time, he went so far as to compare AI to "summoning the demon."

"I don't love the idea of being a house cat, but what's the solution? I think one of the solutions that seems maybe the best is to add an AI layer," Musk said in June. "A third, digital layer that could work well and symbiotically."

In 2015, Musk and Y Combinator's Sam Altman founded OpenAI, a nonprofit with the mission to advance "digital intelligence in the way that is most likely to benefit humanity as a whole."

The neural lace concept, which Musk last said he was "making progress" on in August, seems to fit with OpenAI's mission. But we'll have to wait for more details.



Nest looks to enhance AI and machine learning


This story was delivered to BI Intelligence IoT Briefing subscribers. To learn more and subscribe, please click here.

Nest is looking to improve the artificial intelligence (AI) and machine learning used in its products, as indicated by the recent appointment of Yoky Matsuoka as the company's chief technology officer.

Matsuoka, who previously cofounded Alphabet's X unit before becoming the VP of technology at Nest, will leave her current position at Apple to employ her expertise in AI and robotics for the company, according to Recode. This move will likely lead to Nest introducing the Google Assistant to its devices, better positioning the voice assistant within the competitive smart home market.

Nest's new focus on AI and machine learning could have two key implications for the company:

  • Nest could improve its bottom line following a tumultuous period. While the company’s device sales remained healthy through 2016, co-founder and CEO Tony Fadell resigned in June after reports surfaced that the company was failing to meet Alphabet’s revenue expectations, and that Fadell’s management style was turning off some employees. Fadell was replaced by Marwan Fawaz, the former head of Motorola Mobility. Since then, Alphabet moved Nest’s developers into a new group shared with Google, a move that BI Intelligence said at the time could help smooth the rollout of the Google Home, as well as make the company more profitable.
  • The device maker could integrate Google Assistant into its devices. Amazon recently partnered with the voice recognition startup Sensory to bring Alexa to non-Echo devices, while also expanding to LG’s smart fridges and some Ford vehicles. Matsuoka’s strong AI background indicates that Alphabet could task her with overseeing a cross-company partnership to introduce Google’s AI assistant to Nest’s products, making the voice assistant more competitive with Alexa and other players in the voice assistant market. 

As the smart home market continues to mature, it will be important to monitor Nest's progress. AI will be critical to many sectors moving forward, and adding staff with experience in the technology will be key to the company's long-term success.

And yet, the U.S. smart home market has yet to truly take off. In its current state, we believe the smart home market is stuck in the 'chasm' of the technology adoption curve, in which it is struggling to surpass the early-adopter phase and move to the mass-market phase of adoption.

There are many barriers preventing mass-market smart home adoption: high device prices, limited consumer demand and long device replacement cycles. However, the largest barrier is the technological fragmentation of the smart home ecosystem, in which consumers need multiple networking devices, apps and more to build and run their smart home.

John Greenough, senior research analyst for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on the U.S. smart home market that analyzes current consumer demand for the smart home and barriers to widespread adoption. It also analyzes and determines areas of growth and ways to overcome barriers.

Here are some key takeaways from the report:

  • Smart home devices are becoming more prevalent throughout the US. We define a smart home device as any stand-alone object found in the home that is connected to the internet, can be either monitored or controlled from a remote location, and has a noncomputing primary function. Multiple smart home devices within a single home form the basis of a smart home ecosystem.
  • Currently, the US smart home market as a whole is in the "chasm" of the tech adoption curve. The chasm is the crucial stage between the early-adopter phase and the mass-market phase, in which manufacturers need to prove a need for their devices.
  • High prices, coupled with limited consumer demand and long device replacement cycles, are three of the four top barriers preventing the smart home market from moving from the early-adopter stage to the mass-market stage. For example, mass-market consumers will likely wait until their device is broken to replace it. Then they will compare a nonconnected and connected product to see if the benefits make up for the price differential.
  • The largest barrier is technological fragmentation within the connected home ecosystem. Currently, there are many networks, standards, and devices being used to connect the smart home, creating interoperability problems and making it confusing for the consumer to set up and control multiple devices. Until interoperability is solved, consumers will have difficulty choosing smart home devices and systems.
  • "Closed ecosystems" are the short-term solution to technological fragmentation. Closed ecosystems are composed of devices that are compatible with each other and which can be controlled through a single point.

In full, the report:

  • Analyzes the demand of US consumers, based on survey results
  • Forecasts smart home device growth through 2020
  • Determines the current leaders in the market
  • Explains how the connected home ecosystem works
  • Examines how Apple and Google will play a major role in the development of the smart home
  • Names some of the companies mentioned in this report, including Apple, Google, Nest, August, ADT, Comcast, AT&T, Time Warner Cable, Lowe's, and Honeywell

To get your copy of this invaluable guide, choose one of these options:

  1. Subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND over 100 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> START A MEMBERSHIP
  2. Purchase the report and download it immediately from our research store. >> BUY THE REPORT

The choice is yours. But however you decide to acquire this report, you’ve given yourself a powerful advantage in your understanding of the smart home market.


Apple is tipped to join an AI ethics group that includes Google, Facebook, and Amazon (AAPL)


There was one big name missing from the Partnership on Artificial Intelligence (AI) member list when the research consortium was announced last September. Google, Facebook, Amazon, IBM, and Microsoft all pledged to work together to ensure AI is developed safely and ethically, but Apple refused to get involved.

Now it looks like the world's largest company, a tech giant renowned for keeping its research efforts secret, may have reconsidered its decision. Citing sources with knowledge of the situation, Bloomberg reported on Thursday that Apple is set to join the elite club, going on to say that its admission could be announced as early as this week.

Apple has been gradually building up its AI and machine learning capabilities and buying a succession of small AI startups. Last October, it hired Ruslan Salakhutdinov, one of the big guns of AI research.

Unlike Google, DeepMind, and Facebook, Apple has traditionally prevented its top researchers from publishing their work in open forums. Last November, Yann LeCun, Facebook's head of AI, said this could make some engineers think twice about working for the iPhone maker.

"So, [when] you’re a researcher, you assume that you’re going to publish your work," said LeCun. "It’s very important for a scientist because the currency of the career as a scientist is the intellectual impact. So you can’t tell people 'come work for us but you can’t tell people what you’re doing' because you basically ruin their career. That’s a big element."

AI has been tipped to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. But it's also been described by the likes of renowned scientist Stephen Hawking and tech billionaire Elon Musk as something that could wipe out humans altogether.

Partnership on AI, a non-profit organisation, has pledged to work to advance public understanding of AI and formulate best practices on the challenges and opportunities within the field.

Apple did not immediately respond to Business Insider's request for comment.


4 poker pros lost $1.8 million to an AI program


When it comes to poker, humans have traditionally had the upper hand over computers.

But this week, it was announced that four of the world's best poker players lost nearly $1.8 million (£1.4 million) to an artificial intelligence (AI) program developed by scientists from Carnegie Mellon University (CMU).

The professional players — Dong Kim, Jimmy Chou, Daniel McAulay, and Jason Les — took on the "Libratus" AI agent at a version of poker called no-limit heads-up Texas hold 'em.

The marathon match, held at Rivers Casino in Pittsburgh, Pennsylvania, lasted for 20 days, but in the end the AI won $1,766,250 (£1,408,743) over 120,000 hands.

It involved the human players staring at a computer screen for 10 hours a day and being repeatedly trounced by Libratus, according to The Register.

The pros will split a $200,000 (£159,000) prize purse based on their respective performances during the event.


The victory is being hailed as a major breakthrough by those who developed the AI. Tuomas Sandholm, cocreator of Libratus and a machine learning professor at CMU, described the event as a landmark moment.

The researchers said that the victory was only possible thanks to a supercomputer, which the AI used to compute its strategy before and during the event.

In a statement on the university's website, Sandholm described how Libratus improved as the match went on.

"After play ended each day, a meta-algorithm analysed what holes the pros had identified and exploited in Libratus' strategy," Sandholm said. "It then prioritised the holes and algorithmically patched the top three using the supercomputer each night. This is very different than how learning has been used in the past in poker. Typically, researchers develop algorithms that try to exploit the opponent's weaknesses. In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy."

Andrew Ng, chief scientist of Chinese tech giant Baidu, compared the victory to DeepMind's AlphaGo agent beating Lee Se-dol at Go, and to IBM's Deep Blue, the first chess-playing programme to beat a human world champion.


Stephen Hawking and Elon Musk backed 23 principles to ensure humanity benefits from AI


Cosmologist Stephen Hawking and Tesla CEO Elon Musk endorsed a set of principles this week that have been established to ensure that self-thinking machines remain safe and act in humanity's best interests.

Machines are getting more intelligent every year and researchers believe they could possess human levels of intelligence in the coming decades. Once they reach this point they could then start to improve themselves and create other, even more powerful AIs, known as superintelligences, according to Oxford philosopher Nick Bostrom and several others in the field.

In 2014, Musk, who has his own $1 billion AI research company, warned that AI has the potential to be "more dangerous than nukes" while Hawking said in December 2014 that AI could end humanity. But there are two sides to the coin. AI could also help to cure cancer and slow down global warming.

The 23 principles designed to ensure that AI remains a force for good — known as the Asilomar AI Principles because they were developed at the Asilomar conference venue in California — are broken down into three categories:

  1. Research issues
  2. Ethics and values
  3. Longer-term issues

The principles, which refer to AI-powered autonomous weapons and self-replicating AIs, were created by the Future of Life Institute.

The non-profit Institute — founded in March 2014 by MIT cosmologist Max Tegmark, Skype cofounder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna — is working to ensure that tomorrow's most powerful technologies are beneficial for humanity. Hawking and Musk are on the board of advisors.


"Artificial intelligence has already provided beneficial tools that are used every day by people around the world," wrote the Future of Life on its website. "Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead."

The principles were developed off the back of the Beneficial AI conference that was held earlier this month and attended by some of the most high profile figures in the AI community, including DeepMind CEO Demis Hassabis and Facebook AI guru Yann LeCun.

At the conference, Musk sat on a panel alongside Hassabis, Bostrom, Tallinn, and other AI leaders. Each of them was asked in turn what they thought about superintelligence — defined by Bostrom in an academic paper as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

When the panel was asked if superintelligence is possible, everyone said yes, except Musk, who appeared to be joking when he said no.

When asked whether superintelligence will actually happen, seven of the panel said yes, while Bostrom said "probably" and Musk again joked "no."

Interestingly, when the panel was asked whether it wanted superintelligence to happen, there was a more mixed response, with four people opting to respond "it's complicated" and Musk saying that it "depends on which kind."

The 23 Asilomar AI Principles

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.


AI leaders: Machines will quickly outsmart us when they achieve human-level intelligence


Machines will quickly become significantly smarter than humans when they achieve human level intelligence, according to a high-profile panel of artificial intelligence (AI) leaders.

A YouTube video released by the Future of Humanity Institute this week shows Elon Musk, the billionaire cofounder of Tesla, SpaceX and PayPal, talking on a panel earlier this month alongside the likes of DeepMind CEO Demis Hassabis, who sold his company to Google for £400 million in 2014, and Oxford philosopher Nick Bostrom.

Musk and the rest of the panel were asked how long it will take for superintelligences to be developed once machines achieve human-level intelligence — something that is likely to happen within a matter of decades, according to Bostrom.

"Once we get to human level-AI, how long before we get to where things start taking off?" asked MIT professor and panel moderator Max Tegmark, citing an "intelligence explosion." Tegmark added: "Some people say days or hours. Others envision it will happen but it might take thousands of years or decades."

Musk replied: "I think if it [an AI] reaches a threshold where it's as smart as the smartest, most inventive human, then it really could be a matter of days before it's smarter than the sum of humanity."

Others on the panel predicted that it would more likely take several years for machines to become superintelligent but none of them said it will take more than 100 years.


"I think it partly depends on the architecture that end up delivering human-level AI," said Hassabis. "So the kind of neuroscience inspired AI that we seem to be building at the moment, that needs to be trained and have experience and other things to gain knowledge. It may be in the order of a few years, possibly even a decade."

Bostrom replied: "I think some number of years but it could also be much less."

Tegmark went on to say that "the timescale is something that makes a huge difference. If things happen quicker than society can respond then it's harder to steer and you kind of have to hope that you've built in good steering in advance."

He then asked the panel, which also included Skype cofounder Jaan Tallinn and futurist Ray Kurzweil, whether they would like to see the onset of superintelligence occur gradually so that society can adapt.

"Slow is better than faster," Tallinn replied.

Last October, Bostrom said that DeepMind is winning the race to develop human-level AI. The company, which employs approximately 400 people in King's Cross, is perhaps best known for developing an AI agent that defeated the world champion of the ancient Chinese board game Go. However, it's also applying its AI to other areas, including healthcare and energy management.

View the full video here:



JEFF SACHS: Here are the fiscal policies we need to implement so robots don't take our jobs


Jeffrey Sachs, the director of The Earth Institute at Columbia University and the author of "Building the New American Economy," discusses the robot revolution and what he says America should do in response. 


FULL TRANSCRIPT:

The automation of robots and artificial intelligence is actually pretty well advanced in certain industries. If you — if you've had a chance to go to an auto plant. If there's one nearby, go take a tour. Shake hands with the robots. You'll see an assembly line that's already robotics. The income now is shifting more and more to capital and away from workers. And that's part of this general widening of inequality in the United States, and there's more of that to come.

So, are robots a bad thing? No. Because robots enable us to be more productive, have a larger economy, and, if we use it right, to have more leisure time, to be able to do more good things, to have a safer, cleaner environment, say, with self-driving vehicles. But, we better share the benefits so they don't all end up in the hands of a few very, very rich capital owners. That's the basic point: yes, expand the productivity of the economy, use the technologies, but make sure that the income distribution doesn't go wild.

One increasingly common idea, for example, is to make sure that every young person is given a certain amount of basically financial assets or capital. It's like saying everyone can own at least one robot, so that as the robots get better and better in the future everyone's sharing in the benefits of an expanding economy.

So I'm not afraid of the technology, per se — in fact, I love it. I think it's absolutely amazing that we can have artificial intelligence systems, and smart robots, and voice recognition, and machine learning for translation, and so many other fantastic breakthroughs. But the market economy will not distribute the results of that fairly. And a lot of people will get hurt unless we say, "We're all in this together — let's all share the benefits," through our fiscal systems and through being smart and decent in how we approach the robotics revolution.


DeepMind: AIs have the potential to become 'aggressive' or work in teams (GOOG)


Artificial intelligence (AI) agents have the potential to become aggressive or work in teams, according to researchers at DeepMind.

A paper released by five computer scientists from the London-based company, which is owned by Google, used games to look at how AIs behave alongside one another.

Joel Leibo, a research scientist at DeepMind and the lead author on the paper, told Business Insider on Thursday: "We were interested in the factors affecting cooperation."

When asked about AI aggression, Leibo stressed: "We have to be careful not to anthropomorphise too much. These are toy problems aimed at exploring cooperative versus competitive dynamics."

Describing the study in a blog post on the DeepMind website, the researchers said that they used two basic video games called "Wolfpack" and "Gathering" to analyse the behaviour of AI agents.

"We needed two different research environments in which we could vary aspects of the learning challenge between them and observe their impact on the emergence of cooperation," Leibo told Business Insider. "For example, in Gathering it is easier to learn to implement a cooperative policy while in Wolfpack, it is harder. This difference cannot be captured by the classical models of social dilemmas in game theory."

Gathering is a two-player game that requires each player to collect as many apples as possible from a central pile. In the game, players have the option to "tag" their opponent with a laser beam so they can no longer collect apples. A video of the game can be seen below.

The second game, "Wolfpack," requires two players to find a third player in an environment littered with obstacles. It's possible to earn points either by capturing the prey or by being close to the prey when it is caught. A video can be seen below.

The DeepMind researchers found that AIs can behave in an "aggressive manner" or in a team when there is something to be gained. They also noted that AIs altered their behaviour to become more friendly or antagonistic depending on the situation in the game and what was at stake.

In the DeepMind blog post, the researchers wrote: "As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet — all of which depend on our continued cooperation."


European politicians have voted to rein in the robots


European politicians have voted in favour of a controversial report calling for regulation on robots and artificial intelligence (AI).

The vote, which took place in France on Thursday, was based on a report from the Legal Affairs Committee, which warned that there is a growing need for regulation to address increasingly autonomous robots and other forms of sophisticated AI.

The report passed 396-to-123, with 85 abstentions.

"MEP's (Members of the European Parliament) voted overwhelmingly in favour of the report," said a spokesperson for the European Parliament. "The report is not legislative but provides recommendations for the Commission. Now it goes to the Commission to act upon."

The report calls for a European agency for robotics and AI, as well as a supplementary fund for people involved in accidents with autonomous cars.

While MEPs voted in favour of the report, they rejected demands for a basic income for workers who lose their jobs and a tax on robots, Politico reports.

Before the vote, Mady Delvaux, the author of the report and the Socialists and Democrats member in the Legal Affairs Committee, put forward her recommendations to MEPs in Strasbourg.

After the vote, Delvaux said in a statement: "Although I am pleased that the plenary adopted my report on robotics, I am also disappointed that the right-wing coalition of ALDE, EPP and ECR refused to take account of possible negative consequences on the job market."

Politicians are concerned that robots will wipe out millions of jobs worldwide. There are also fears that superintelligent machines could harm humanity if they're not programmed in the right way.

"The next generation of robots will be more and more capable of learning by themselves," Delvaux said in an interview published on the European Parliament website.

"The most high-profile ones are self-driving cars, but they also include drones, industrial robots, care robots, entertainment robots, toys, robots in farming," said Delvaux.

"We have to monitor what is happening and then we have to be prepared for every scenario."

Tech firms and AI gurus who are keen to make the smartest machines possible will likely see any form of government regulation around AI as a setback at this stage.

Delvaux added: "We always have to remind people that robots are not human and will never be. Although they might appear to show empathy, they cannot feel it. We do not want robots like they have in Japan, which look like people. We proposed a charter setting out that robots should not make people emotionally dependent on them. You can be dependent on them for physical tasks, but you should never think that a robot loves you or feels your sadness."

Delvaux also believes that a separate legal status should be created for robots.

"When self-learning robots arise, different solutions will become necessary and we are asking the Commission to study options," she said. "One could be to give robots a limited 'e-personality' [comparable to 'corporate personality, a legal status which enables firms to sue or be sued] at least where compensation is concerned.

"It is similar to what we now have for companies, but it is not for tomorrow. What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years."


Google built an AI that will play piano duets with you (GOOG)


Google is trying to create artificial intelligence (AI) capable of making art — and it has now taught it to play piano.

A new experiment from the Californian technology giant lets you play musical duets with a piano-playing AI.

The project, called "A.I. DUET," uses neural network technology to learn how to play the instrument in response to the user's input.

You just play a tune — as simple or complex as you want — and then the AI plays a response to your tune back to you.

"You don’t even have to know how to play piano — it's fun to just press some keys and listen to what comes back,"Google employee Alexander Chen wrote in a blog post.

"We hope it inspires you — whether you’re a developer or musician, or just curious — to imagine how technology can help creative ideas come to life."

I gave it a go, and the results were mixed: Sometimes, it sounded fantastic, like there was a real pianist responding to my music. Other times, it was nonsensical, jarring, or overly simple.


But what makes this experiment so interesting — and different to traditional piano-playing computer programs — is how it works, using neural networks. Its creators didn't program specific responses into it — they just gave it a load of music, and from that it taught itself how to respond to different tunes.

"We played the computers tons of examples of melodies. Ove time, it learns these fuzzy relationships between tones and timings, and built its own map based on the examples it's given,"Google employee Yotam Mann said in a video. "So in this experiment you play a few notes, they go to the neural net, which basically decides based on those notes and all the examples it's been given some possible responses."

He added: "It picks up on stuff like key and rhythm that you're implying, even though I never explicitly programmed the concepts of key and rhythm."
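
Google has not spelled out A.I. DUET's internals here, but what Mann describes (learning tone-and-timing relationships from example melodies, then emitting plausible responses) is next-note prediction. Below is a minimal sketch of that idea in PyTorch; the architecture, hyperparameters, and toy melody are all invented for illustration, timing is ignored, and none of it is Google's Magenta code.

    import torch
    import torch.nn as nn

    NUM_PITCHES = 128                              # MIDI pitch range

    class NextNote(nn.Module):
        """Tiny LSTM that predicts the next note from the notes so far."""
        def __init__(self, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(NUM_PITCHES, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.out = nn.Linear(hidden, NUM_PITCHES)

        def forward(self, notes):                  # notes: (batch, time)
            h, _ = self.lstm(self.embed(notes))
            return self.out(h)                     # next-note logits per step

    model = NextNote()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    melody = torch.tensor([[60, 62, 64, 65, 67, 65, 64, 62]])  # toy C-major line
    for _ in range(200):                           # deliberately overfit one tune
        logits = model(melody[:, :-1])
        loss = loss_fn(logits.reshape(-1, NUM_PITCHES), melody[:, 1:].reshape(-1))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    # "Duet": continue whatever the user just played, one note at a time.
    with torch.no_grad():
        reply = melody[:, -4:]
        for _ in range(4):
            next_note = model(reply)[:, -1].argmax(dim=-1, keepdim=True)
            reply = torch.cat([reply, next_note], dim=1)
    print(reply.tolist())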

You can play it directly from your computer, using your keyboard or your mouse. Or if you're more musically inclined, you can plug a proper musical keyboard straight into your computer, and play with the AI that way.

And this isn't just a fun toy for anyone to play with — though it is that as well. It's part of a larger project from Google to try and create art and music using AI. The project is called Magenta, and it's all open source, so anyone interested can download the code and experiment with it for themselves.

There's a video of Yotam Mann talking about the experiment below, and you can play with it here »


European Commission: 'Attempting to regulate AI as portrayed in Hollywood movies is probably too far-fetched'


Politicians should not attempt to regulate the kinds of artificial intelligence (AI) that are portrayed in Hollywood movies like "Ex Machina," according to the head of a technology-focused department at the European Commission.

Roberto Viola, director general of DG Connect, the European Commission department that regulates communications networks, content, and technology, put forward his views on AI regulation in a blog post on Thursday.

His remarks came after the European Parliament voted overwhelmingly in favour of a report calling for more regulation in the fields of robotics and AI. The report warned that there is a growing need for regulation to address increasingly autonomous robots and other forms of sophisticated AI. It passed 396-to-123, with 85 abstentions.

Responding to the report, Viola wrote: "While many of these robots and AI systems are impressive and have progressed a lot recently, they are still very far from exhibiting intelligent, human-like behaviour or are indistinguishable from a human. In other words: they don't pass the Turing test yet. This futuristic vision would need a debate at a different level, including asking very profound ethical questions."

He added: "We have to be cautious and address concrete problems we are facing today and carefully assess if the current legislation is fit for purpose. For instance, now attempting to regulate human-like artificial intelligence as portrayed in Hollywood movies like 'Ex Machina' or 'I, Robot' is probably too far-fetched and speculative."

Viola's comments will likely be welcomed by tech firms and AI gurus who are keen to press ahead with AI developments and make the smartest machines possible. They will no doubt see any form of government regulation around hypothetical human-like AI as a setback at this stage.

Politicians are concerned that robots will wipe out millions of jobs worldwide. Some in the AI community have also expressed fears that superintelligent machines could harm humanity if they're not programmed in the right way.

"We have to monitor what is happening and then we have to be prepared for every scenario," said the author of the report, Mary Delvaux, in an interview published on the European Parliament website.


Why Facebook removed a line about monitoring terrorists on 'private channels' from Mark Zuckerberg's company manifesto (FB)


On Thursday, Mark Zuckerberg published a nearly 6,000-word letter about the future of Facebook.

The Facebook founder and CEO's lengthy manifesto mainly focused on Facebook's globalist mission to connect the world and develop "the social infrastructure for community" everywhere.

In one part of the letter, Zuckerberg talked about using artificial intelligence to keep terrorists and their propaganda off Facebook.

"Right now, we're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization," he wrote.

But tucked within an earlier version of the letter, which was shared with news outlets before it was published, was another line about using AI to monitor terrorists on "private channels." Mashable first spotted the change.

Here's the original version of Zuckerberg's comment on AI (emphasis added):

"The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all — including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global. It will take many years to develop these systems."

The Associated Press originally published the paragraph that included the mention of monitoring private channels, but its story has since been updated "to substitute a quote on artificial intelligence to reflect what was actually in the manifesto."

A Facebook spokesperson told Business Insider on Friday that the line was removed "because we are not yet sure exactly how AI will be used in the future," and that the company strongly values encryption.

“The line talking about the long term promise of AI was removed from the final version of the letter because we are not yet sure exactly how AI will be used in the future," the spokesperson said. "But our intention is definitely to use AI to fight terrorism. As noted in the letter, we will do so in ways that protect people’s privacy — we are strong advocates of encryption and have built it into the largest messaging platforms in the world — WhatsApp and Messenger.”

It's common for social networks to combat terrorist propaganda on their platforms. Still, the idea that Facebook could one day use AI to monitor seemingly private conversations suggests that the company is willing to scan accounts for potential terrorist activity in the future.



This website turns your cat doodles into photos — and the results can be nightmarish


Proponents of artificial intelligence say it will be used for everything from helping treat diseases to driving your car.

It can also make some seriously mind-bending art.

Using Google's machine-learning framework TensorFlow, Christopher Hesse has built a website that transforms doodled pictures of cats into photo-esque images.

Anyone can have a go for free on Edges2cats, and the results are varied. Some are surprisingly realistic. Others are hellish nightmare fuel.

Here's an example provided by Christopher Hesse. Looks pretty normal, right? Pretty good, even. Most don't look like this.



Now here's an example drawn by my colleague James Cook. Its eyes are screaming.



The results get even more surreal if you tweak cats' physiology.




Microsoft's new AI can code by stealing bits of code from other software (MSFT)


A new artificial intelligence (AI) program built by Microsoft and Cambridge University researchers is able to solve programming problems — by stealing code from other programs.

It's called DeepCoder, and by cribbing bits of code from other existing software, it can solve programming challenges its developers throw at it.

It was first reported on by New Scientist, and you can read the researchers' full paper below.

Right now, DeepCoder is basic, with limits to what it can do — but it's a significant step forward, and it already has potential (simple) real-world applications.

"We have found several problems in real online programming challenges that can be solved with a program in our language," the researchers wrote in a paper on their findings, adding that it "validates the relevance of the class of problems that we have studied in this work."

DeepCoder works by looking at inputs and outputs for different bits of code it has been given. It learns which piece can produce what output, and places them together accordingly to create new programs capable of solving the problem at hand.
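
The published system pairs this idea with a neural network that predicts which primitives are likely to appear in the solution, sharply pruning the search. Stripped of that learned guidance, the core loop is an enumerative search over compositions of primitives until one matches every input-output example. A minimal Python sketch, with an invented primitive set standing in for DeepCoder's actual domain-specific language:

    from itertools import product

    # Invented stand-ins for a DSL's primitives.
    PRIMITIVES = {
        "sort":    sorted,
        "reverse": lambda xs: list(reversed(xs)),
        "double":  lambda xs: [2 * x for x in xs],
        "drop1":   lambda xs: xs[1:],
    }

    def synthesise(examples, max_len=3):
        """Return the first pipeline of primitives consistent with all examples."""
        for length in range(1, max_len + 1):
            for pipeline in product(PRIMITIVES, repeat=length):
                def run(xs, pipeline=pipeline):
                    for name in pipeline:
                        xs = PRIMITIVES[name](xs)
                    return xs
                if all(run(inp) == out for inp, out in examples):
                    return pipeline
        return None

    # Recover "sort descending" from input-output examples alone.
    examples = [([3, 1, 2], [3, 2, 1]), ([5, 4, 9], [9, 5, 4])]
    print(synthesise(examples))                    # -> ('sort', 'reverse')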

Building AI capable of sophisticated coding would be a major breakthrough, one that could revolutionise the development of software. Anyone who wishes to build a program could simply tell it what they want it to do, even without technical expertise.

"A dream of artificial intelligence is to build systems that can write computer programs," the researchers wrote.

Here's the full paper on DeepCoder:


The Chinese government is funding a new lab from China's most powerful AI company


Baidu, a Chinese tech giant making rapid advances in the field of artificial intelligence (AI), has received funding from the Chinese government for a new research project, Quartz reports.

AI has the potential to be one of the most influential technologies that humanity has ever invented. Governments around the world are giving it increasing amounts of attention, and Silicon Valley tech giants like Google and Facebook are putting hundreds of millions of dollars into AI research.

Now China's National Development and Reform Commission, a government organisation tasked with growing and restructuring China's economy, is pumping money into a deep learning lab that will be led by Baidu, according to the Quartz report, which is based on a post on Baidu's Chinese WeChat account.

The lab won't have a single physical presence but instead it will be a "digital network of researchers" working on problems in their respective fields from wherever they happen to be based in China, according to The South China Morning Post.

The lab's research areas reportedly include:

  • computer vision
  • biometric identification
  • intellectual property rights
  • and human-computer interaction

Baidu will reportedly work with Tsinghua University and Beihang University, along with other Chinese research organisations.

Lin Yuanqing, head of the Baidu Deep Learning Institute, and Xu Wei, a computer scientist at Baidu, will reportedly work on developing the lab, as will two representatives from the Chinese Academy of Sciences.

Baidu also has an AI research lab in Silicon Valley, which is home to the company's chief scientist, Andrew Ng.

The amount of funding allocated by China's Development and Reform Commission has not been disclosed.

Baidu did not immediately respond to Business Insider's request for comment.


Watch a computer beat one of the world's best 'Super Smash Bros.' players


Pretty soon, there won't be any games left in which the world's best human players can still beat a computer program.

After artificial intelligence managed to best human experts in both chess and Go, one of the next games that machine-learning researchers have set their sights on is the classic Nintendo game "Super Smash Bros."

Here's video of a computer program — an artificial intelligence "agent" — beating some of the world's top players at "Super Smash Bros. Melee," a GameCube game that came out in 2001.

The computer is playing as Player 2, Captain Falcon: 

The AI agent playing as the black Captain Falcon was developed by a team led by Vlad Firoiu, a grad student at MIT studying artificial intelligence. The team published their findings and methods in an unreviewed paper on ArXiv earlier this week.

What's notable is that the computer is hanging tough with real human players in what is a very complicated game — and it didn't require the resources of a big Google subsidiary to get to that stage, only a few researchers and a technique called deep reinforcement learning. That's because of "a recent explosion in the capabilities of game-playing artificial intelligence," according to the paper.

Vlad's AI, nicknamed Philip, uses well-known and well-studied "off-the-shelf" algorithms, including Q Learning and Actor-Critic, to teach a computer to play "Super Smash Bros." from experience. These aren't video game-specific algorithms, but they can learn how to play as Captain Falcon just the same, given enough data. 
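
Q-learning, the first algorithm named there, is standard enough to show in full on a toy problem. Here is a minimal tabular version on an invented five-state chain; the actual Smash agents approximate the Q function with neural networks and read their state from the game's memory, but the update rule is the same idea:

    import random
    from collections import defaultdict

    ACTIONS = ["left", "right"]
    GOAL = 4

    def step(state, action):
        """Walk a 5-state chain; reaching state 4 pays 1 and ends the episode."""
        nxt = min(GOAL, max(0, state + (1 if action == "right" else -1)))
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    q = defaultdict(float)                       # (state, action) -> value
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    def pick(state):
        """Epsilon-greedy action choice with random tie-breaking."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        best = max(q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if q[(state, a)] == best])

    for _ in range(500):                         # episodes
        state, done = 0, False
        while not done:
            action = pick(state)
            nxt, reward, done = step(state, action)
            # The Q-learning update: move toward the reward plus the
            # discounted value of the best action in the next state.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt

    print([pick(s) for s in range(GOAL)])        # mostly 'right' once trained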

In fact, the data come from the computer playing itself over and over again, which is similar to how DeepMind's AlphaGo learned to beat the best Go players.

The computer doesn't play exactly like a human, though. It was "hard to discern any real trial or error it was doing, let alone what strategies it was applying," Mafia, also known as James Lauerman, told Business Insider. (He starts playing in the above video at 3:00, and advises that the "situations that take place between 3:00 and 3:40 are nuts.")

"What I think is most interesting is the edge play. He gets me on the ropes on the right edge and I somehow get back. But I remember thinking, how is it so smooth sometimes?" he said. 

Ultimately, "it was weird," the 50th-ranked player in the world said.

Caveats

One reason why Mafia may have thought the agent was "moving so fast he's kinda just chilling in the same spot" is that the computer program has faster reflexes than humans.

That's one of several simplifications the researchers decided to make to run their experiment.

Here are a few other ways that the match is not like a real competitive match:

  • The agents react faster than humans: "The main criticism of our agents is that they play with unrealistic reaction speed: 2 frames (33ms), compared to over 200ms for humans," the authors write.
  • Another issue with the simulation is that it does not understand projectile attacks at all. Nearly every character — except for Captain Falcon — has an attack that sends a projectile. 
  • The agent doesn't play Smash Bros. the way a human does, by looking at the pixels on the screen and reacting. Instead, the software reads where the characters are from the game's memory that "consists of things like player positions, velocities, and animation states," Firoiu told Business Insider. 
  • There were also odd strategies that could beat the computer. "One particularly clever player found that the simple strategy of crouching at the edge of the stage caused the network to behave very oddly, refusing to attack and eventually KOing itself by falling off the other side of the stage," the researchers wrote. 

What's next? 

Still, the research has a lot of interesting findings that may mean more for AI researchers than Smash pros.

One interesting finding was that transfer learning, a hot topic in deep learning, applied across characters. This means that an AI agent trained on, say, a character like Fox McCloud, found its skills also applied to characters like Captain Falcon and Peach as well.

"I suspect transfer learning works because many of the fundamentals (how to move, attacking in the direction of the opponent when they are close) are broadly applicable to any character," Firoiu said. 

Another interesting finding: The difficulty of training a computer to play with a given character corresponded with how difficult pros think each character is to play. 

"The data also reveals the overall ease of playing each character — Peach, Fox, and Falco all trained fairly quickly, while Captain Falcon was significantly slower than the rest. This to some extent matches the consensus of the SSBM community," the researchers wrote. 

Firoiu says he plans to continue working on the project. His next step is to play with human-level reaction time. "This would eliminate some of the odd strategies the bot uses, and force it into the realm of play a human could relate to," he said.

Check out the entire paper on ArXiv, and you can look at the researchers' code on GitHub.



The UK government is planning to pump £17.3 million into AI and robotics research


The UK government is planning to announce new measures to help artificial intelligence (AI) and robotics researchers commercialise their breakthroughs.

The Department for Culture, Media, and Sport (DCMS) announced on Monday that it will include a number of AI-related proposals in its upcoming Digital Strategy document, which will be unveiled in Parliament on Wednesday.

As part of the Digital Strategy, DCMS said it expects to announce an AI review that will be led by Southampton University professor Wendy Hall and ex-IBM scientist Jérôme Pesenti, who is now the CEO of London healthcare startup Benevolent.AI.

The government is also expected to announce a £17.3 million investment into robotics and AI that will be given to UK universities via the Engineering and Physical Sciences Research Council (EPSRC).

"There has been a lot of unwarranted negative hype around AI but it has the ability to drive enormous growth for the UK economy, create jobs, foster new skills, positively transform every industry and retain Britain’s status as a world leader in innovative technology," said Hall in a statement.

"I’ll focus on making recommendations and proposing actions that industry and government can take to promote the long-term growth of the AI sector to ensure this incredible technology makes a positive economic and societal contribution."

The government department did not specify how long the review is likely to take.

The UK has built a reputation for being one of the most advanced countries in the world when it comes to the development of AI. DeepMind, a London AI lab acquired by Google in 2014 for £400 million, is leading the race to develop human-level AI.

Culture Secretary Karen Bradley said in a statement: "We are already pioneers in today's artificial intelligence revolution and the Digital Strategy will build on our strengths to make sure UK-based scientists, researchers and entrepreneurs continue to be at the forefront.

"Technologies like AI have the potential to transform how we live, work, travel and learn, and I am pleased that Professor Dame Wendy Hall and Jérôme Pesenti will be leading this review."

Last November, Accenture estimated AI could add in the region of £654 billion ($814 billion) to the UK economy by 2035.


The Japanese tech billionaire behind SoftBank thinks the 'singularity' will occur within 30 years


The singularity — the point at which machine intelligence surpasses our own and goes on to improve itself at an exponential rate — will happen by 2047, according to Masayoshi Son, the Japanese tech mogul leading SoftBank.

Son was speaking on Monday at the Mobile World Congress conference in Barcelona, Spain.

He said: "I totally believe this concept. In next 30 years this will become a reality."

Son went on to say that our world will fundamentally change as a result of so-called superintelligences that will be able to learn and think for themselves, TechCrunch reports.

"There will be many kinds," said Son, whose company spent $32 billion (£26 billion) acquiring UK chip designer ARM last year. "Flying, swimming, big, micro, run, two legs, four legs, 100 legs."

Son added that he expects one computer chip to have the equivalent of a 10,000 IQ within the next 30 years, Bloomberg reported.

Japan's second richest man went on to highlight how SoftBank plans to invest in the next generation of technology companies that are developing AI with a new $100 billion (£80 billion) tech fund, which was announced last October and is called the SoftBank Vision Fund. Apple and Qualcomm have contributed to the fund, as has the sovereign wealth fund of Saudi Arabia.

"I truly believe it's coming, that's why I'm in a hurry – to aggregate the cash, to invest," said Son. "It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions. Is it good or bad?"

Son added: "I think this superintelligence is going to be our partner. If we misuse it it's a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on."

Son is not the only person who thinks superintelligent machines will become a reality. A panel of AI leaders including Elon Musk, the founder of PayPal, Tesla and SpaceX, and Demis Hassabis, the cofounder and CEO of Google DeepMind, agreed that superintelligence is likely to be developed in the coming decades.

The panel, which took place in January at the Future of Life Institute's Beneficial AI conference, had varying opinions about the exact time frame and the potential risks that could come about through such a breakthrough, while Hassabis voiced concerns about whether tech companies will work together in the lead-up to the intelligence explosion he anticipates.

