The racists next door: Why Nextdoor's racial profiling "problem" isn't the company's fault -- it's society's

By Michael Carney , written on March 30, 2015

From The News Desk

Last week, on Fusion, neighborhood social network Nextdoor made the kind of headlines every startup founder dreads: “Nextdoor... is becoming a home for racial profiling.” Ouch.

Questions swirled around the Pando office, like: How widespread is this behavior, and is the company doing everything it can to combat it? Is this just another salacious, clickbait headline, or is it more serious, like Nextdoor’s equivalent of MySpace’s pedophile moment or Airbnb’s meth-head moment?

I spoke to Nextdoor, hoping to get to the bottom of these questions. What I found is that the company has taken most of the usual measures you’d hope to see from a company committed to fostering a safe, inclusive online community. Users can flag inappropriate comments and behavior, and tag and annotate exactly why they’re offensive. Any comment flagged as inappropriate gets reviewed by a real, live human being – either a member of the company’s staff or a Neighborhood Lead – and users who violate the company’s member guidelines, and thus are deemed “unneighborly,” have their accounts suspended.

And yet, only one quarter of one percent of all the messages shared on Nextdoor have ever been flagged as abusive, according to Nextdoor’s head of communications, Kelsey Grady.

Still, racial profiling certainly happens on Nextdoor, at least in isolated incidents. The Fusion story proved as much – although, to be fair, the documented case was far from the most offensive example I’ve ever seen. But to say that the online platform is “home” to such behavior, or any other type of systematic abuse, suggests that it’s commonplace and tolerated. And judging by the data, that simply is not the case.

There remains the possibility that Nextdoor’s users, or at least the vast majority of them, are such raging racists that heaps of blatantly offensive behavior go unflagged. But that doesn’t make much sense either. Nextdoor has reached large scale, representing some 55,000 active neighborhoods, which means that its membership likely maps fairly closely to the US population in terms of demographics and ideology. Surely there are some racists on the platform, but unless you think most of America is not only deeply racist, but also willing to behave as such openly and in public, it’s unlikely that Nextdoor would be a venue where that behavior is widely tolerated.

And that raises another characteristic that predisposes Nextdoor against bad behavior: every user registers with their real name and address, verified by postcard or reverse phone lookup. The platform is composed of micro-communities restricted to a few short blocks (or in some rural areas, miles) around each user’s home. These are people’s neighbors. They run into one another at the grocery store, at church, and at the local school. That’s about the least likely group to which you’d expect someone to mouth off with a blatantly racist (or homophobic, or sexist, or otherwise offensive) rant.

But, says Nextdoor, it’s not just the hyper-local community dynamics and content moderation that combat abuse. The company’s Neighborhood Operations Team (NOPS) works closely with the leader or leaders of each community, who can be either the community’s creator or any highly active user appointed to the role by the creator.

Nextdoor Director of Neighborhood Operations Gordon Strause provided Pando with the following statement:

Our members use Nextdoor every day to build stronger and safer neighborhoods. We are committed to fostering a sense of community in the many diverse places they live. The site guidelines we’ve established help ensure that members hold themselves and their neighbors to appropriate neighborly behavior. Members are expected to refrain from using profanity or posting messages that will be perceived as discriminatory or racist in any way. Members can flag content and contact Nextdoor about any behavior on the site that they believe to be harmful to a neighbor or the neighborhood. A violation of our site's guidelines could result in the immediate suspension of a member's account.

I asked the company whether it actively monitors the conversations on its platform; that is, do humans or computers read every word and proactively seek out abusive behavior? After all, this is something that Whisper, Secret, and other online communities have taken to doing in an effort to combat the inevitable abuse that comes with opening your virtual doors to anyone and everyone with an internet connection. The answer is no.

According to Grady, Nextdoor chooses not to decide for its community members what constitutes “unneighborly” behavior. The company also doesn’t inject any content into the site. Each local community acts as a virtual town square that is purely a result of the members who gather there. This approach came out of the company’s earliest days, when the product was still in beta and the company’s co-founders were actively involved in the first 150 or so communities. That experience convinced the founders that people were, in fact, using Nextdoor to build stronger and safer offline communities, as opposed to using it as a forum for spreading hate. But it also led to the understanding that each community, and thus its behavior, culture, and tone, is entirely unique. Injecting Nextdoor’s own involvement into those communities would undermine their authenticity, the company claims. So Nextdoor takes a hands-off approach and allows its users to self-police.

All of which brings us back to the core question: Is the behavior of some users on Nextdoor -- like describing the appearance, including race, of an “unusual” character knocking on doors in your neighborhood -- inherently racist? That’s a nuanced question that’s difficult to answer definitively. Certainly, racial profiling is a very real problem, and there’s often racism underlying any suspicion of “the other.” And yet, this kind of behavior -- reports of “suspicious” people who “just happen” not to be white -- isn’t exclusive to Nextdoor. Rather, it’s exactly the kind of talk you hear at offline neighborhood watch groups.

Nevertheless, it seems, at least based on my conversations, that Nextdoor is taking these accusations seriously -- as they should. Grady admits that the company was surprised by the story. But Nextdoor claims its Neighborhood Operations Team is working with community leaders, both in the community that was the subject of the Fusion story and in others across its platform, to assess whether end users feel racial profiling or other abusive behavior is a problem. Unsurprisingly, Grady tells me that the overwhelming response from these leaders is that racial profiling is not a significant issue:

It is our priority and we will continue to be vigilant in supporting our members so that they have constructive and respectful dialogues on Nextdoor. As deemed necessary, Nextdoor will evolve our member guidelines and our neighborhood moderation tools to meet the needs of our members.

As an active Nextdoor user, I can tell you that my local community mostly focuses on missing cats, raccoon mischief, and recommendations for local service providers. There has been the occasional safety or security issue, like reports of break-ins or a burst pipe while a neighbor was out of town. But it’s hardly been a forum for wild fearmongering or racially themed discussions, at least in my experience. Then again, maybe my neighborhood is not emblematic of most.

Point being, while Nextdoor’s “MySpace pedophiles” moment may still be on the horizon, this isn’t it.

[photo by n i c o l a]

[Disclosure: Michael Carney has accepted a position as an associate at Upfront Ventures that begins in April. To the best of Pando’s knowledge, the companies in this post and their competitors have no affiliation with Upfront. This post went through Pando’s usual editorial process.]