
Flickr's and Google's algorithms are behaving badly

By Nathaniel Mott, written on May 20, 2015

From The News Desk

Flickr and Google are offering a case study in the dangers of faulty algorithms.

Flickr's problem stems from a system that automatically tags images shared to its service with what it sees in the photos. A black-and-white photo is tagged "blackandwhite," for example, while a nature shot can be tagged with "outdoor."

The problem is that this system automatically tagged several images -- one of a black man, another of a white woman -- with "ape." It also labeled the Dachau concentration camp with "jungle gym" and Auschwitz's entrance with "sport."

Here's a Flickr spokesperson denying blame for the problem in a statement to the Guardian:

We are aware of issues with inaccurate auto-tags on Flickr and are working on a fix. While we are very proud of this advanced image-recognition technology, we’re the first to admit there will be mistakes and we are constantly working to improve the experience. If you delete an incorrect tag, our algorithm learns from that mistake and will perform better in the future. The tagging process is completely automated – no human will ever view your photos to tag them.

Google's problem doesn't have to do with photos; it has to do with the Google Maps results displayed whenever someone searches for racist terms like "n***** king" and "n***** house." And some of the top results for those queries lead to the White House. Apparently enough racists have clicked on directions to the White House, or pages relating to it, that Google's algorithms thought it would be best to return information about the establishment for those queries.

Here's the statement from a Google spokesperson, again to the Guardian:

Some inappropriate results are surfacing in Google Maps that should not be, and we apologise for any offence this may have caused. Our teams are working to fix this issue quickly.

Notice how this spokesperson doesn't talk up the malfunctioning system, actually apologizes for its algorithm being so offensive, and then says a fix is coming. It's almost as if Google is, like, taking responsibility for the problem.

Flickr responded to criticism by saying its robot's just a baby and will continue to learn; Google responded by apologizing and promising to fix the damn robot. Only one of those responses is appropriate when an algorithm behaves badly.

[illustration by Brad Jonas]