Simply put, good customer experience surveys should tell you what to improve, how to improve it, and where to start.


In 2003, an article in the Harvard Business Review introduced the world to the concept of NPS, or ‘Net Promoter Score.’ In the article’s title, Frederick Reichheld called the methodology “The One Number You Need to Grow.” Unfortunately, while the methodology has become surprisingly popular – and many have benefitted from the information it provides – many also seem to have misinterpreted that article as ‘The Only Number You Need to Know.’ As a researcher, I would argue that, in most cases, companies can do much better by using a more thoughtful and targeted approach to conducting loyalty or brand experience research.

What is NPS?

NPS has become a very popular tool for understanding the power of your brand and uncovering new ways to make it more meaningful to clients. At its heart, the process is simple. Respondents are asked: “On a scale of zero to ten, how likely are you to recommend our business to a friend or colleague?” From this question, customers are classified as brand promoters, passives or detractors. Customers are then asked a follow-up question: “Why did you rate our business the way that you did?”
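Under the standard NPS convention – scores of 9–10 count as promoters, 7–8 as passives, and 0–6 as detractors, with the final score being the percentage of promoters minus the percentage of detractors – the calculation can be sketched in a few lines (the example responses below are made up for illustration):

```python
def classify(score: int) -> str:
    """Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores: list[int]) -> float:
    """NPS = % promoters minus % detractors, on a -100 to +100 scale."""
    buckets = [classify(s) for s in scores]
    promoters = buckets.count("promoter") / len(buckets)
    detractors = buckets.count("detractor") / len(buckets)
    return round(100 * (promoters - detractors), 1)

# Ten hypothetical responses: 5 promoters, 3 passives, 2 detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # -> 30.0
```

Note that the single number hides the distribution: a score of 30 can come from very different mixes of promoters and detractors, which is one reason the follow-up “why” question matters.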

Overall, the method is simple and can be very powerful, as it provides a way to understand your brand and determine where to make targeted gains based on direct customer feedback. The approach can be particularly useful to smaller companies and those without access to more sophisticated market intelligence or brand insight tools.

Key Frustrations

While the basic idea of giving a direct voice to customers is a very good one, we frequently hear several relatively consistent frustrations with the methodology.

First, while the methodology is designed to facilitate easy comparisons between your brand and many others – including similar brands within your industry and beyond – the reality is that reliable competitive data is very difficult to collect. While many companies collect and calculate NPS scores, they are frequently reluctant to share them. Furthermore, companies often change the question to suit their purposes, or include a mix of clients and non-clients in the survey sample, so the final scores can vary wildly. In some cases, marketers may not even be aware of how the surveys were implemented or why comparisons fail. Situations like this simply provide a number with very little context. Is a score of +20 good or bad? The answer very much depends on the context.

Second, the limited nature of NPS surveys frequently leaves good information behind. Customers don’t typically think about the details, specific issues or logistics that add up to a positive overall brand or customer service experience – particularly for more complex, higher-value or experience-oriented products. Nor should they. When only high-level or open-ended questions are asked, customers rarely articulate the full set of details behind their overall view. Capturing that level of detail is possible, but it usually requires more specific questions tied to targeted aspects of the experience or to areas where improvement efforts are under way. From a company’s point of view, the frustration is that NPS surveys frequently do a poor job of highlighting immediate priorities for improvement and are, at best, reactive. The problem typically worsens over time as targeted improvements are made without identifying deeper issues. In these cases, a few simple but more targeted questions can provide a much deeper level of insight without making the survey onerous for the customers who respond.

Finally, not all consumers or customers are sufficiently engaged with or knowledgeable about a brand or category to give strong, useful feedback. Some worry that their recommendations may be ill-informed, while others have simply never been asked. We even see moderately humorous responses to the question of why clients won’t recommend, such as “I’m just not that boring,” or “I try to not go to parties where people talk about their insurance policies.” Beyond this, growing concern about the privacy of financial accounts can also create challenges. All of these factors introduce variability in the response data that makes brands difficult to compare. Even among well-liked brands, the fact that some customers will never be convinced to recommend makes it difficult to track brand progress, further compounding the frustration of using this approach for useful brand comparisons.

A Better Way

So, what do we at Environics Research typically recommend as part of an effective feedback program? In general, I look for five key elements:

  1. Customize your survey and ask questions that matter. Develop your own questions, and focus on what matters to both you and your customers. Every question should reflect your customers’ expectations about the experience and any improvements they would like to see, and directly inform your goals.
  2. Mix high-level and more specific tactical questions. This will help you to understand not only how you are doing at a high level, but also what is driving results at a tactical level. We can even run analysis to determine how much each question’s score impacts your overall rating – a process that can help you to better allocate your precious time and resources.
  3. Keep your survey short and switch it up. Your questions should be quick enough to hold your clients’ attention and reflect their amount of engagement in the buying process. Cut questions ruthlessly, but remember to add targeted new questions as your goals and client needs change.
  4. Connect questions to the experience as directly as possible. Ideally, your questions should follow up on a direct contact experience while it is still fresh in a customer’s mind. This will help you to better understand the impact you are having on your brand.
  5. Track your results on an ongoing basis. It is amazing how frequently our clients come to us and ask if a change (sometimes even a subtle one) has made a difference to customer experience. Through a mix of targeted questions and continuous monitoring, we can look at specific time periods and offer deeper insight into what really motivates customers.

Simply put, good customer experience surveys should tell you what to improve, how to improve it, and where to start. The good news is that making some targeted improvements to the way we run these surveys can add tremendous value for decision-makers without over-burdening the customers who take the time to offer feedback.

If you are experiencing some of the frustrations described above, and are interested in understanding how custom data can better drive your business decisions beyond NPS, you may be interested in our Touchpoint Evaluation process. The bottom line is that good questions drive good decisions and good brand results.

