
Here's why it's so hard for tech companies to screen for "extremist content"

By Nathaniel Mott, written on January 29, 2015

From The News Desk

During a meeting with the European Parliament yesterday about a counterterrorism action plan, Google's public policy manager likened sifting through the 300 hours of video uploaded to YouTube every minute to "screening a phone call before it's made."

The scale of this issue highlights the difficulties technology companies like Google, Facebook, and Twitter face when governments ask them to screen -- or pre-screen, in this instance -- the massive amounts of content added to their websites every moment.
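To put that figure in perspective, a rough back-of-the-envelope calculation (a sketch, using only the 300-hours-per-minute rate Google cited) shows how quickly the volume outruns any manual review:

```python
# Back-of-the-envelope: how much video lands on YouTube per day,
# using only the 300 hours-per-minute figure cited above.

HOURS_UPLOADED_PER_MINUTE = 300          # figure cited by Google

minutes_per_day = 60 * 24                # 1,440 minutes in a day
hours_per_day = HOURS_UPLOADED_PER_MINUTE * minutes_per_day

# How many years of round-the-clock viewing arrive each day?
years_of_footage_per_day = hours_per_day / (24 * 365)

print(f"{hours_per_day:,} hours uploaded per day")            # 432,000 hours
print(f"about {years_of_footage_per_day:.0f} years of footage per day")  # ~49 years
```

In other words, every single day YouTube takes in roughly half a century's worth of continuous footage, before a human reviewer has watched a frame of it.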

Yet these companies are coming under increasing pressure to monitor their users. France, for example, is considering a law that would hold Facebook and Twitter liable as "accomplices" to terrorists if they fail to remove extremist content from their services.

And the companies are increasingly vigilant in the United States, where each has warned law enforcement agencies about threats against police officers following the killing of two New York Police Department officers on December 20, 2014.

There is a limit to what these companies can do, however, especially as more people use their services. Even some of the world's best-funded intelligence agencies struggle to manage the data generated every day; private companies won't fare much better.

[illustration by Brad Jonas]