A strange kind of grief overwhelms me when I think about using NPS. I will admit, the idea of NPS is great: having a single easy-to-understand, widely accepted measure which can be used to gauge customer satisfaction sounds like a Product Manager’s dream, but the execution usually falls fantastically short. Here’s why NPS sucks, and how to make it better.
So what exactly is an NPS score? If you’ve ever taken a survey you will most likely have seen the following question:
On a scale of 0–10, how likely are you to recommend <product or service> to a friend or colleague?
Your NPS (or Net Promoter Score) is the percentage of respondents who rated your product or service a 9 or 10 (known as Promoters) minus the percentage who rated it 0 to 6 (known as Detractors). Passives, the people who gave a rating of 7 or 8, have no direct effect on the score, though they do dilute the percentages.
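That arithmetic can be sketched in a few lines of Python (a hypothetical helper, not part of any official NPS tooling):

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 ratings.

    Promoters (9-10) count for, Detractors (0-6) count against,
    and Passives (7-8) only dilute the percentages.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 Promoters, 1 Passive, 3 Detractors out of 8 responses
print(nps([10, 9, 9, 10, 7, 3, 6, 0]))  # 12.5
```

Note that the score can range from -100 (all Detractors) to +100 (all Promoters).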
Seems like a fairly useful means to gauge user delight, surely?
Here’s why NPS sucks
Have you hit your target persona?
NPS doesn’t care whether your responder is a power user or a new user, or whether their job role (or general demographic) matches your target. If you spent months designing and building a product for “Jackie the Developer” and then an Ops person gives you a low score, does it make a difference to your roadmap? If someone who fits your user persona perfectly gives you a low NPS, then you need to listen and listen good. Everything else is noise that will skew your result.
Recommendations are contextual
Context is everything. Simply asking if I would recommend a product to a friend or colleague isn’t good enough. Would I recommend the restaurant across the street from me? That’s hard to say without knowing the circumstances. Would I recommend it to a friend who is in the East Village of Manhattan, looking for a reasonably priced sandwich for lunch? Sure. Would I recommend it for a romantic dinner for two when traveling from uptown? No. Context is key and details matter; remember that when writing the question.
Cultural differences can impact your NPS
A recent study into this phenomenon yielded interesting results: response styles differed markedly among the participating countries. People from Brazil and China, for example, often gave extreme responses, while Japanese respondents leaned toward midpoint answers. The researchers concluded that data must be analyzed per country, and that you must not assume countries in the same region respond in the same way.
Why did you give that rating?
The biggest no-no when it comes to collecting NPS is to get a rating without a reason. An NPS score on its own is effectively useless unless you understand the motivation behind each rating. Without a reason it’s impossible to know where to improve the product: perhaps the score was low because the user struggled to get the software installed, which would point to an issue with the documentation rather than the product itself. Without that motivation you could spend valuable time and energy trying to guess what went wrong.
Sample size can be too small to be valuable
I typically find response rates to NPS surveys are low, with an average of 10–20% of people responding. Small result pools produce erratic results. In fact, there is a formula for calculating the margin of error of an NPS based on the number of responses, and with small numbers that margin can be 10 or 20 points: enough to render the NPS useless. Make sure you have a large enough sample of responses for any score to be meaningful.
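One common way to approximate that margin of error treats each response as +1 (Promoter), 0 (Passive) or -1 (Detractor) and applies the standard error of the mean. A sketch under that assumption (the function name is mine, not a standard API):

```python
import math

def nps_margin_of_error(promoters, passives, detractors, z=1.96):
    """Approximate 95% margin of error for an NPS, in points.

    Each response scores +1 (Promoter), 0 (Passive) or -1 (Detractor);
    the margin is z times the standard error of the mean, scaled to
    NPS points (x100).
    """
    n = promoters + passives + detractors
    p_pro = promoters / n
    p_det = detractors / n
    variance = p_pro + p_det - (p_pro - p_det) ** 2
    return 100 * z * math.sqrt(variance / n)

# 30 responses split 12/9/9 give an NPS of +10,
# but the error band is roughly +/- 30 points
print(round(nps_margin_of_error(12, 9, 9), 1))  # 29.7
```

With the same 12/9/9 split scaled up to 3,000 responses, the margin shrinks to about 3 points, which is why sample size matters so much.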
OK, so now we’ve talked about the shortcomings of NPS, let’s talk about how to make it work for you.
People assume NPS is a single question. In fact, making NPS multiple questions makes it significantly more valuable for you.
- Be sure to ask your NPS question with context! “How likely are you to recommend this software to someone using this product for the first time?” or “How likely are you to recommend this software to someone with previous RDBMS experience?” This makes sure people are in the right mental context when providing their answers.
- Make sure to ask the follow up question of why someone gave a particular score or rating so you can understand their motivations.
- Be sure to ask demographic questions so you can filter out the noise. “What is your role?” “How long have you been using the product?” “What is the size of your company?”, etc.
- Always ask for an optional email address from the respondent so you can follow up on any answers that are interesting or confusing.
- Finally, make sure you have a large enough sample size for your results to be meaningful. For me personally, anything less than 100 responses is essentially a throw-away exercise.
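Pulling those suggestions together, filtering responses to your target persona and enforcing a minimum sample size might look like this (the field names and the 100-response threshold are illustrative, not from any particular survey tool):

```python
# Hypothetical survey records; "score" and "role" are illustrative field names.
responses = [
    {"score": 9, "role": "developer"},
    {"score": 3, "role": "ops"},
    {"score": 10, "role": "developer"},
    {"score": 7, "role": "developer"},
]

MIN_SAMPLE = 100  # below this, treat the score as noise

def persona_nps(responses, role):
    """NPS restricted to the target persona; None if the sample is too small."""
    scores = [r["score"] for r in responses if r["role"] == role]
    if len(scores) < MIN_SAMPLE:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(persona_nps(responses, "developer"))  # None: only 3 developer responses
```

Returning nothing at all for an under-sized sample is deliberate: a misleading number is worse than no number.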
In summary, being mindful of your audience and your wording can have a huge impact when conducting your survey. NPS can be extremely valuable when used in the right way.
So… on a scale of 0–10, how likely are you to recommend this article to a friend or colleague?