
These are the research projects Elon Musk is funding to ensure A.I. doesn’t turn out evil


[Image: Arnold Schwarzenegger as the Terminator, 1991]

A group of scientists just got awarded $7 million to find ways to ensure artificial intelligence doesn't turn out evil.

The Boston-based Future of Life Institute (FLI), a nonprofit dedicated to mitigating existential risks to humanity, announced last week that 37 teams were being funded with the goal of keeping AI "robust and beneficial."

Most of that funding was donated by Elon Musk, the billionaire entrepreneur behind SpaceX and Tesla Motors. The remainder came from the nonprofit Open Philanthropy Project.

Musk is one of a growing cadre of technology leaders and scientists, including Stephen Hawking and Bill Gates, who believe that artificial intelligence poses an existential threat to humanity. In January, the Future of Life Institute released an open letter, signed by Musk, Hawking, and dozens of big names in AI, calling for research on ways to keep AI beneficial and avoid potential "pitfalls." At the time, Musk pledged to give $10 million in support of the research.

The teams getting funded were selected from nearly 300 applicants to pursue projects in fields ranging from computer science to law to economics.

Here are a few of the most intriguing proposals:

Researchers at the University of California, Berkeley and the University of Oxford plan to develop algorithms that learn human preferences. That could help AI systems behave more like humans and less like rational machines. 

A team from Duke University plans to use techniques from computer science, philosophy, and psychology to build an AI system with the ability to make moral judgments and decisions in realistic scenarios.

Nick Bostrom, Oxford University philosopher and author of the book "Superintelligence: Paths, Dangers, Strategies," wants to create a joint Oxford-Cambridge research center to develop policies that would be enforced by governments, industry leaders, and others to minimize risks and maximize the long-term benefits of AI.

Researchers at the University of Denver plan to develop ways to ensure humans don't lose control of robotic weapons — the plot of countless sci-fi films.

Researchers at Stanford University aim to address some of the limitations of existing AI programs, which may behave totally differently in the real world than under testing conditions.

Another researcher at Stanford wants to study what will happen when most of the economy is automated, a scenario that could lead to massive unemployment.

A team from the Machine Intelligence Research Institute plans to build toy models of powerful AI systems to see how they behave, much as early rocket pioneers built toy rockets to test them before the real thing existed.

Another Oxford researcher plans to develop a code of ethics for AI, much like the one used by the medical community to determine whether research should be funded.

Here's the full list of projects and descriptions.

SEE ALSO: Google: The artificial intelligence we're working on won't destroy humanity

SEE ALSO: A Chinese artificial intelligence program just beat humans in an IQ test
