Apple is famous for reaching out to unsatisfied customers. In fact, customers that don’t provide a glowing review of their Apple experience will reportedly receive a call from a store manager within 24 hours.
But is bending over backwards for cranky customers worth Apple’s time?
The problem, according to a report from analytics firm Foresee, isn’t that Apple takes too much time or energy to address customer concerns. It’s that Apple uses a faulty metric for identifying unhappy customers.
For years, Apple has used the “Net Promoter Score” (NPS) to identify customers who are likely to spread negative feedback about the company. The test is simple: customers are asked, on a scale of zero to ten, “How likely is it that you will recommend this product or service to a friend or colleague?” An answer of 6 or lower labels that customer a “detractor,” who will then receive a call from a manager to talk it out. Other companies that use the Net Promoter Score include American Express, Charles Schwab, and General Electric.
But Foresee says the NPS methodology overstates the number of true detractors. Foresee uses a metric called the “Word of Mouth Index” (WOMI), which, in addition to asking customers how likely they are to recommend a product, also asks, “How likely are you to discourage people” from using it. Only customers who answer 9 or 10 are labeled “true detractors.”
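To make the difference concrete, here is a rough sketch of the two scoring rules as described above. The function names and the sample scores are mine, not Foresee's or Apple's:

```python
def nps_label(recommend_score):
    """NPS question: "How likely are you to recommend us?" on a 0-10 scale."""
    if recommend_score <= 6:
        return "detractor"   # flagged for a manager's follow-up call
    if recommend_score <= 8:
        return "passive"
    return "promoter"

def is_true_detractor(discourage_score):
    """WOMI's extra question: "How likely are you to discourage people?"
    Only an answer of 9 or 10 counts as a "true detractor"."""
    return discourage_score >= 9

# A lukewarm customer: unenthusiastic, but not out badmouthing the brand.
print(nps_label(5))          # "detractor" -- NPS triggers a callback
print(is_true_detractor(2))  # False -- WOMI leaves them alone
```

The gap between the two models is exactly this lukewarm middle: NPS lumps them in with the haters, while WOMI only flags customers who say they would actively steer people away.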
So by how much does Foresee say Apple overcounts its haters? Under the NPS model, 12 percent of respondents are labeled detractors. Meanwhile, WOMI puts that number far lower, at only 1 percent. So what’s the impact of this purported discrepancy? Foresee’s report estimates the potential financial cost of calling all those false positives.
“In the Apple example, a store manager making $60,000 a year who spends one hour a day following up on detractors (at $29/hr) costs Apple $7,500 annually. With more than 400 stores worldwide, this adds up to $3 million per year, $2.1 million of which is lost productivity because it is being spent following up with detractors that are not True Detractors.”
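For what it’s worth, the report’s per-store and total figures roughly check out. A quick sketch, assuming about 260 working days a year (the report doesn’t state its day count):

```python
hourly_rate = 29         # the report's $29/hr figure for a $60,000/yr manager
hours_per_day = 1        # time spent following up on detractors
workdays_per_year = 260  # assumption: ~52 five-day weeks
stores = 400

per_store = hourly_rate * hours_per_day * workdays_per_year
total = per_store * stores
print(per_store)  # 7540    -- close to the report's $7,500 per store
print(total)      # 3016000 -- close to the report's $3 million total
```

(The $2.1 million split between real and wasted follow-up time is the report’s own breakdown.)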
Now that math might be a little silly. Is each hour of the manager’s work really worth the same amount of money? But okay, we see their point. Still, there are a couple of problems with Foresee’s survey questions. First, research suggests that “negative” questions, like Foresee’s “discourage” question, can easily lead to mistakes: someone sees “Do you not like Apple?” and their brain automatically reads it as “Do you like Apple?” Second, on Foresee’s zero-to-ten scale, only two of the eleven possible answers (9 or 10) count as negative sentiment, whereas NPS counts seven of them (0 through 6), so of course Foresee’s results come out less negative. Finally (and this goes for the NPS test as well), relying too heavily on survey results when making business decisions is risky because of “non-response bias”: the only data comes from people who chose to respond to the survey, which may make the sample unrepresentative of customers as a whole.
In Apple’s defense, I do question Foresee’s definition of negativity a bit. It only counts someone as a “True Detractor” if they answer 9 or 10, but if I answered 7 or 8 when asked about discouraging people from buying a product, I’m probably not crazy about it either. And given the biases inherent in the survey, it’s better to err on the side of doing too much for customers than not enough.
That said, it seems a bit severe to assume that just because someone answers a 6 or lower on the NPS, it means they’re going to trash that brand. I don’t care at all about the vast majority of brands and would likely rank, say, Red Bull or Oreo as 5s because, like, whatever man. But if Red Bull called me after the fact asking why I don’t love them more, well, then I might have something negative to say.
In other words, using surveys to assess customer satisfaction may leave much to be desired. Perhaps the future of customer satisfaction surveys lies with data-intensive startups like Retention Science. But if you must use surveys, and allocate company resources based on the results of these surveys, make sure you ask the right questions.
[Image via LJR.MIKE on Flickr]