Facebook relies on a mix of human checkers and software to confirm which posts should be removed
Facebook said today that its artificial intelligence (AI) is becoming increasingly adept at keeping terrorist content off the social network.
Monika Bickert, Facebook’s head of global policy management, and Brian Fishman, its head of counter-terrorism policy, also wrote that 99 per cent of the Al Qaeda and Islamic State material Facebook removes is first detected by the company itself rather than by its users.
“A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda,” they wrote.
The company said it had focused on Al Qaeda and Islamic State up until now because they represented the “biggest threat globally”, but cautioned that expanding the efforts to other groups was “not as simple as flipping a switch”.
But it acknowledged that while these efforts are “bearing fruit”, more work was needed to identify other groups.
Facebook boss Mark Zuckerberg first detailed his AI-based plan in February, saying it would take “many years” to fully develop the required systems.
According to reports, Facebook relies on a mix of human checkers and software to confirm which posts should be removed, but it said that the task was now “primarily” being carried out by its automated systems.
The technologies reportedly include photo and video-matching, in which previously identified imagery used by terrorist groups is automatically detected when it is reposted.
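As a rough illustration of the matching idea described above, the sketch below fingerprints known imagery and flags exact re-uploads. This is an assumption for explanation only, not Facebook's actual system: production systems use perceptual hashing that tolerates re-encoding, resizing and cropping, whereas this toy version uses exact byte-level SHA-256 matching, and the function and variable names are invented.

```python
import hashlib

# Toy sketch of photo/video-matching (NOT Facebook's real pipeline):
# previously identified imagery is fingerprinted, and any later upload
# with the same fingerprint is flagged as a repost.

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying a piece of media by its bytes."""
    return hashlib.sha256(data).hexdigest()

# Database of fingerprints of previously identified terrorist imagery.
known_hashes: set[str] = set()

def flag_known_content(upload: bytes) -> bool:
    """Return True if this upload matches previously identified imagery."""
    return fingerprint(upload) in known_hashes

# Seed the database with one previously identified item, then check uploads.
known_hashes.add(fingerprint(b"previously-identified-image-bytes"))
print(flag_known_content(b"previously-identified-image-bytes"))  # True
print(flag_known_content(b"new-unseen-image-bytes"))             # False
```

A real deployment would replace the exact hash with a perceptual hash so that trivially altered copies still match.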