Google is doing its part to fight online harassment with new tools powered by artificial intelligence.
A new piece from Wired's Andy Greenberg describes the technology, which was created by Google subsidiary Jigsaw. Jigsaw was previously Google's think-tank division and was spun out in February to focus on projects that use technology to solve geopolitical problems.
Called Conversation AI, Jigsaw's new software is aimed at blocking vitriolic or harmful statements online. The software uses machine learning to automatically catch abusive language, assigning each statement an "attack score" from 0 to 100, with 100 being extremely harmful and 0 being not at all harmful. The technology will first be tested in the comments sections of The New York Times, and Wikipedia also plans to use it, though the company hasn't said how, according to Wired.
The technology will eventually be open-source, so websites or social media platforms could use it to catch abuse before it even hits its intended target. According to Wired, Conversation AI can "automatically flag insults, scold harassers, or even auto-delete toxic language."
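Wired's description suggests a simple threshold-style pipeline: score a comment for harmfulness, then flag it, warn the author, or delete it outright depending on how toxic it looks. The sketch below illustrates that flow in Python as a rough assumption of how a site might wire it up; the attack_score function, the word list, and the thresholds are all hypothetical stand-ins, since the article doesn't detail Jigsaw's actual model or API.

```python
# Hypothetical sketch of acting on a 0-100 "attack score".
# The scorer is a toy keyword heuristic, NOT Jigsaw's actual model or API.

ABUSIVE_TERMS = {"idiot", "moron", "trash"}  # illustrative word list only


def attack_score(comment: str) -> int:
    """Return a 0-100 harmfulness score (placeholder for a real ML model)."""
    words = comment.lower().split()
    if not words:
        return 0
    hits = sum(1 for word in words if word.strip(".,!?") in ABUSIVE_TERMS)
    return min(100, int(100 * 5 * hits / len(words)))  # crude scaling


def moderate(comment: str) -> str:
    """Map the score to the actions Wired describes: publish, flag, or auto-delete."""
    score = attack_score(comment)
    if score >= 90:
        return "auto-delete"      # extremely harmful
    if score >= 50:
        return "flag for review"  # likely abusive, needs a human look
    return "publish"


if __name__ == "__main__":
    for text in ["Great article, thanks!", "You are an idiot and a moron."]:
        print(f"{moderate(text):>15}  <-  {text}")
```

A real deployment would swap the keyword heuristic for the trained model and tune the cutoffs against the false-positive rate Google cites below.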
It's not clear how accurate the technology is just yet. Greenberg found some distinct flaws in the software in his own tests, but Google told Wired that Conversation AI has 92% certainty and a 10% false-positive rate, and that it will continue to improve over time.