The government has released a report highlighting some of the benefits and challenges likely to arise from advances in artificial intelligence being made by companies like Google, DeepMind, and Facebook.
Written by Chief Scientific Adviser Sir Mark Walport and published on Thursday, the report provides an overview of the current state of AI before highlighting its potential to fuel innovation and improve government services.
The report, titled "Artificial intelligence: opportunities and implications for the future of decision making", also looks at how government should "manage and mitigate" any negative effects of AI.
"It is important to recognise that, alongside the huge benefits that artificial intelligence offers, there are potential ethical issues associated with some uses," Walport writes. "Many experts feel that government has a role to play in managing and mitigating any risks that might arise."
Walport also wants to open up the conversation on AI to ensure that scientists in the field gain public trust. "Public trust is a vital condition for artificial intelligence to be used productively," he writes.
He believes that this can be achieved by introducing what he describes as "effective oversight." Walport adds: "Effective oversight will contribute to demonstrating trustworthiness. But at its core, trust is built through public dialogue."
Self-learning machines have come a long way over the last couple of decades as computer chips have become more powerful, enabling systems to process larger quantities of data and learn from it.
Philosophers such as Nick Bostrom and scientists like Stephen Hawking have raised concerns that highly intelligent computers could pose a serious threat to humanity if they're not developed in the right way.
Companies like DeepMind and Facebook openly publish significant quantities of the AI research carried out by their employees. However, firms like Apple and Amazon, which are also trying to make advances in the field, remain relatively secretive about their AI work.