This morning, a mass shooting occurred at Sandy Hook Elementary in Newtown, Connecticut, where 26 people were reportedly killed, including 20 children. Once police initially identified the suspect in this unspeakable tragedy as 24-year-old Ryan Lanza, the race was on for citizen and professional journalists to scour social networks for photos, Tweets, and status updates from the suspect. (We can only assume this digital manhunt was intended to offer insight into what kind of person would do such a thing, even though such questions always go unanswered.)

As it turned out, a Ryan Lanza hailing from Newtown, CT who looked to be around 24 years old had a Facebook page. As links to the page circulated around Twitter, professional news organizations caught on: Slate Tweeted a link to the page with the comment, “CNN names suspect as Ryan Lanza the likely FB page.” (Slate later retracted the Tweet.) BuzzFeed published Lanza’s Facebook picture with the irresistible headline, “First Possible Photo of Suspected Sandy Hook Shooter” (the page has been removed, but the URL is still live). Meanwhile, Gawker hedged its bets slightly with the headline, “Is this Ryan Lanza, the Connecticut school shooter?” (Gawker also changed its story, but proof of the original headline is still in the URL.) And it wasn’t just new media outlets. According to Reuters’ Matthew Keys, an anchor for WCBS-TV in New York said, “That is the face of the young man, Ryan Lanza,” as the station flashed the Facebook photo.

I think you know where this is going: The Ryan Lanza whose name and face were suddenly plastered all over social networks and television screens was not the same Ryan Lanza who was killed at the scene and identified by police as the suspected shooter. And now, astoundingly, police have revealed that the shooter’s name is actually Adam Lanza, not Ryan Lanza.

Although the Ryan Lanza who was unfairly “doxxed” this morning had a private profile, his Facebook friend Andrew Fletcher began publishing screenshots of his Wall where Lanza proclaimed his innocence:

[Screenshot of Ryan Lanza’s Facebook Wall, where he proclaims his innocence]

Facebook hate pages devoted to the death of Ryan Lanza also began to surface with the wrong Ryan Lanza’s photo attached:

[Screenshot of a Facebook hate page using the wrong Ryan Lanza’s photo]

The press didn’t make things better by spreading the Facebook profile prematurely, and there’s a lively, nuanced discussion about that taking place on Twitter and elsewhere. But even if the press hadn’t dropped the ball so colossally, it’s likely that the hate pages would still have propagated. It no longer takes press credentials to broadcast a rumor to a billion people. With that in mind, do platforms like Facebook, which enable these broadcasts, have a responsibility to curb rumor-inspired hatred? Should Facebook have locked profiles belonging to people named Ryan Lanza so only the user could post content to the wall? And should Facebook be more proactive in removing (at least provisionally) hate pages like the ones shown above?

Okay, before you start crying foul about free speech and the First Amendment, it’s not like this would be the first time a platform has momentarily suspended activity to allow time for rumors to be debunked. In October, after unsubstantiated rumors that Microsoft planned to buy Netflix drove Netflix’s stock up, Nasdaq halted trading in the company’s shares. And that’s hardly the first time Nasdaq has halted a stock to address a rumor. Nasdaq rules dictate that it “may implement a temporary trading halt to allow for even dissemination of the information. A trading halt provides the public with an opportunity to evaluate material information and consider it in making investment decisions.” Sounds like a sensible policy. If a platform is willing to suspend activity to protect a company’s market value from momentary panic, why isn’t the same courtesy extended when someone’s reputation and livelihood are at stake?

For Facebook to suspend profiles in situations like this, even temporarily, would probably be taking things too far. After all, when the press is doing its job, social networks can be helpful tools for verifying information. Fast Company’s social media editor Anjali Mullany wrote on Twitter that she doesn’t see the point of blocking profiles: “To prevent the press from looking for info? That seems wrong… I think the greater responsibility lies with the press not tripping over itself to be first in an unhelpful way.”

But what about an alert sent to every Facebook user named Ryan (or now Adam) Lanza that says, “Hey, you might encounter a lot of hate thrown your way today, so you may want to update your profile page or privacy settings accordingly”? Or for those with private accounts, “Here’s how you can post a public message if you’d like to say something.” It would avoid the slippery slope of censorship while giving potential victims of misdirected hate a chance to preemptively defend themselves.

Yes, we can blame the press or blame human nature or blame the mob mentality for Internet rumors. But Facebook arguably wields more power to inform or misinform when it comes to identities than any single press outlet. And when you host discussions for one billion users, you might want to take responsibility when people use your platform to ignite a digital lynch mob, especially when all it takes is 10 minutes to save someone’s reputation.

[Illustration by Hallie Bateman]