Numbers don’t lie … or have you got the wrong customer survey?
Having been in the data-gathering and number-crunching game for more decades than I care to confess to, I have always been somewhat bemused by the legions of managers who seem to equate numbers on a page with absolute truth. I suppose Nature invented this breed to balance that equally large pool of managers who pay no attention to numbers whatsoever.
Lately, this balance seems to have shifted to the true believer side with the growing popularity of the Net Promoter Scam—oh, excuse me—the Net Promoter Score (NPS).
For the uninitiated, NPS is a customer survey method wherein respondents are asked how likely they are to recommend the subject company to others. The question is scored on a zero-to-ten scale: those answering with a nine or ten are considered to be highly desired promoters, while those dastardly sorts answering with a six or less are deemed to be detractors.
The appeal of this approach lies in its utter simplicity. With a silver-bullet question, it promises to reveal just what your customers think of you. It all boils down to a single number: the percentage of promoters minus the percentage of detractors. As you might have guessed by now, I am not a big fan of the NPS. My concerns about it are many, beginning with the fact that organisations and relationships are complicated, and distilling a dynamic, complex stew of human experiences and perceptions down to a simple, single number seems problematic at best.
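The arithmetic really is that simple. A minimal sketch, using made-up response data (the 0–10 scale and the 9–10 promoter and 0–6 detractor cut-offs are as described above):

```python
def net_promoter_score(scores):
    """Return NPS: the percentage of promoters minus the percentage of detractors."""
    promoters = sum(1 for s in scores if s >= 9)   # answered 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # answered 6 or less
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of survey responses
responses = [10, 9, 9, 8, 7, 6, 5, 10, 3, 9]
print(net_promoter_score(responses))  # 5 promoters, 3 detractors → 20.0
```

Note what the single number hides: respondents answering seven or eight simply vanish from the score, and very different mixes of responses can yield the same result.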
We conduct annual customer surveys for a global company, and contrary to our advice they require the NPS question to be asked and scored in each survey. Our primary sponsor is Japanese; he runs the client’s Japanese operations. We have every reason to believe that he is excellent at what he does and that the Japanese divisions are among the best in the company. Yet we couldn’t help notice his embarrassment at the low NPS scores we kept seeing for his domain.
How could this be? By other estimates, these operations should be near the top of the heap. Then it struck us. The Japanese tend to avoid responding with extreme scores on surveys for cultural reasons, so the nines and tens we might expect to see from, say, American clients were largely absent from the Japanese respondents, even though they might hold our client company in equal or even higher regard. Suddenly, the business of comparing one country’s results with another’s was highly suspect. All of the countries in the survey have their own cultural norms.
Of course we could apply weights to the scores for each country to account for the cultural differences. We know the Japanese, the Dutch and the Australians tend to avoid giving top scores. But knowing this as a generalisation doesn’t tell us what those weights should be, and especially not for a specific survey intended for a specific kind of professional.
So what do we know? We know we should not use such data to compare one unit with another unless we are certain that cultural norms are not a factor.
But I would go further.
Survey results are never exact, nor do they provide the absolute truth. Yet when properly analysed, good surveys can provide potent indicators of issues and can tell us the direction and magnitude of change.
We should neither worship numbers nor ignore them. As with most things in life, the middle course is best.