March 19, 2020

How to Measure Customer Satisfaction

Why, exactly, is customer satisfaction such a hot topic these days? Why have countless authors written countless posts about its significance? 

It could be because today’s customers have set their own expectations for how they want to be treated; the brand’s job is to meet those expectations every time. It might be because it costs companies so much less to retain an existing customer than it does to acquire a new one. 

The truth is, there are a million reasons to focus on customer satisfaction, from kindness to cost. But we aren’t here to debate its significance, or even to provide a historical perspective on the rise of customer satisfaction as a corporate discipline. Today, we’re here to talk about just one really valuable component of customer satisfaction: how you measure it. 

Measuring Customer Satisfaction: Indulging the Indicators

It’s always the same when you try to measure any one particular aspect of your business: You need something to measure against. In other words, you need to establish a baseline and start setting goals. But what you want to measure should be dictated by why you want to measure it. So, why do you want to measure satisfaction?

First, there are the simple reasons. If customer satisfaction generally trends positive, publicizing your metrics can boost morale. Your customers can act as evangelists and referrers, leading to a nice boost in new revenue, too. But the outcome of sharing your metrics is not the real reason to measure satisfaction. 

You should measure customer satisfaction because of what it will teach you.  

Even the worst metrics reveal opportunities for improvement across your service or product. The metrics you choose to measure should act as leading indicators for some parts of your business and key performance indicators for others. For instance, if your customer service department sent out a customer satisfaction survey, the results of that survey might very well be that team’s key performance indicator. But the same result could indicate another action down the line for Sales. Satisfaction could be a leading indicator for revenue, and dissatisfaction a leading indicator for churn.

Measuring customer satisfaction is all about making connections in order to learn — connections between what your customers say and what they do, and connections between what you say and do and how your customers feel as a result.

So how do you approach it, in order to make the most of that opportunity?

Quantitative vs. Qualitative Data

In most other business scenarios, data is quantitative when it’s objective and implicit, and qualitative when it’s subjective and explicit. Customer satisfaction data is different: it’s almost always subjective, whichever form it takes. 

Yes, the number of likes on a customer-service-focused Facebook post is an objective, quantitative measurement. It may also indicate you have generally satisfied customers. But there’s also a possibility it isn’t related to your customers at all. 

Good customer satisfaction data can be quantitative or qualitative, and implicit or explicit. But what good is any of it, unless it’s subjective? After all, it’s your customers’ experience we’re talking about here. 

Tried and True Quantitative Data

The quantifiable aspects of customer satisfaction are simple to conceptualize. Common metrics include customer satisfaction score (CSAT), Net Promoter Score (NPS) and Customer Effort Score (CES). Generally, companies working to obtain these metrics follow relatively standardized approaches: 

Customer satisfaction score

A customer satisfaction score is explicit information pulled from a survey sent to the customer. Such a survey asks the customer directly about their experience and satisfaction level and uses a scale for responses (1 to 5, “not satisfied at all” to “very satisfied,” for instance). 
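
To make that concrete, here is a minimal sketch of one common way to turn those 1-to-5 responses into a single CSAT percentage: report the share of respondents who answered 4 or 5. The function name and sample responses are hypothetical, and some teams simply average the responses instead.

```python
# Hypothetical sketch: one common way to turn 1-to-5 CSAT survey
# responses into a score. Some teams average the responses instead;
# this version reports the share of "satisfied" answers (4 or 5).

def csat_score(responses):
    """Return CSAT as the percentage of respondents answering 4 or 5."""
    if not responses:
        return 0.0
    satisfied = sum(1 for r in responses if r >= 4)
    return round(100 * satisfied / len(responses), 1)

# Example survey results after a support interaction (made up):
print(csat_score([5, 4, 3, 5, 2, 4, 5]))  # -> 71.4
```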

You can use a CSAT survey any time there has been an interaction between your business and the customer, regardless of the scale of that interaction. You might send one after completing a single phase of a project, after delivering customer service over a support-line call or when a customer exits your onboarding process. However, the CSAT has one failing: it’s tied to a single experience, not the whole experience with your brand or company. 

For that big-picture insight, look to the Net Promoter Score (NPS). 

Net Promoter Score

You’re probably familiar with NPS. At the very least, you’ve received one of these surveys yourself. This score was developed to loosely quantify word-of-mouth marketing and help you gauge your customers’ overall satisfaction with the brand, product or service, as well as their broader loyalty to your company. 

It depends on the answers to a single question: “On a scale of 0–10, how likely are you to recommend our product/services to a friend?” Based on their answer, respondents are categorized into three buckets:

  • Promoters are respondents who scored 9–10.
  • Passives are respondents who scored 7–8.
  • Detractors are respondents who scored 0–6.

To determine the overall NPS, subtract the percentage of detractors from the percentage of promoters. By examining this quantitative data year over year, or perhaps quarter over quarter (depending on your service/product cycle), you can begin to discover trends and use the information to fuel better experiences for your customers.
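
To make the arithmetic concrete, here is a minimal sketch of that subtraction, assuming the bucket definitions above. The sample scores are invented.

```python
# Minimal sketch of the NPS calculation described above.
# Scores are the 0-10 answers to the "how likely to recommend" question.

def nps(scores):
    """Subtract the percentage of detractors (0-6) from promoters (9-10)."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# Invented quarterly results; NPS can range from -100 to +100.
print(nps([10, 9, 9, 8, 7, 6, 10, 3, 9]))  # -> 33.3
```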

However, like the CSAT, the NPS is limited in scope. It can’t account for the exact moments of satisfaction or dissatisfaction or allow you to understand whether or not the referral will actually happen in real life. It largely acts as a way for you to gauge customer sentiment overall, over time. And there’s one more score that might contribute to that sentiment: Customer Effort Score. 

Customer Effort Score

Customer Effort Scores came into the mainstream in 2010 after an article in HBR detailed a study about customer loyalty. As HubSpot put it, “The article is illuminating, if not for the quality and depth of the research than for the against-common-sense finding: The easiest way to increase customer loyalty is not through ‘wowing your customers,’ but rather through making it easier to get their job done.”

Survey respondents are asked how much effort they had to expend, in a format such as “How easy was it to get your problem solved?” 

A Customer Effort Score can be measured on a scale, usually from 1 (so easy I didn’t even realize I had a problem) to 5 (I feel like I just ran a marathon). 
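
If it helps to see the rollup, here is a hypothetical sketch that treats the score as a simple average of those 1-to-5 responses (a common convention, though not the only one). On the scale described above, lower means less effort.

```python
# Hypothetical sketch: averaging 1-to-5 effort responses into a single
# Customer Effort Score. On the scale above, lower means less effort.

def customer_effort_score(responses):
    """Return the mean effort rating, rounded to one decimal place."""
    if not responses:
        return None
    return round(sum(responses) / len(responses), 1)

# Invented responses to "How easy was it to get your problem solved?"
print(customer_effort_score([1, 2, 2, 4, 1, 3]))  # -> 2.2
```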

Like NPS and CSATs, CES has its limitations. While often tightly tied to loyalty, it can be less applicable to the holistic brand experience than to a single moment in time. The best approach for quantitative data gathering is to create a feedback loop that incorporates all three survey types at opportune moments. (More on that later.)

Of course, quantitative feedback is only one half of the satisfaction picture. Qualitative information counts, too. 

A Qualitative Approach to Customer Satisfaction Measurement

Qualitative information is just as valuable to your understanding of your customers. It’s the information that reveals more, that digs underneath. As such, obtaining the information requires you to ask important questions and pay attention to various channels — places you may not initially turn to for data. 

Asking the “why” questions

Qualitative data can be considered more personal to the respondent, and their willingness to share it shouldn’t be taken lightly. It requires your customers to ask something of themselves: to create a story about their experience. Asking why they gave the quantitative feedback they did can dig at that story. 

When gathering qualitative information, you should strive for the why. It’s what you can ask, and it’s what you can learn. Why did your customers decide to stop doing business with you? Why, on a scale of 1–10, did they rate your product an 8? What are they looking for? These questions can all be posed as “open fields” on surveys you’re already using. 

It’s important to note that these “why” questions can also be posed outside of quantitative surveys on their own. And it turns out that in some cases, this information can be garnered without even asking for it. 

Reviews, conversations and social trends

Much of your qualitative information about customer satisfaction can be discovered out in the open, exposed to the public. Check review pages for your company, log conversations between your service team and your customers and examine social posts featuring your product for trends. Information about how your customers feel is everywhere; you just have to look for it. 

In the end, there’s no debate here. You can’t just collect one type of feedback or the other. You need both quantitative and qualitative data to get any sort of true understanding of customer sentiment.

Designing Your Feedback Program

Start by identifying places in the customer lifecycle where you are already collecting feedback. The channel could be anything, from social media to review sites. The feedback might be quantitative or qualitative in nature. Categorize it accordingly. 

Next, examine what questions you’re asking, if any, and what questions you may have the opportunity to ask without adding too much friction. Consider the phrasing, the meaning. Consider the outcome of collecting the information. How is your company using the information today? How will you use it in the future?

Develop a set of goals for each metric. These goals should be time-bound and relevant. Set your baseline, choose where to ask the questions and then you’re ready to dive into a whole new world of measuring customer satisfaction. 
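
As a purely illustrative way to keep those pieces organized, you might record each metric’s channel, baseline, time-bound goal and review cadence in a lightweight structure like the hypothetical one below; every name and number here is a placeholder.

```python
# Purely illustrative: a lightweight record for each metric in a
# feedback program. Every channel, baseline and goal is a placeholder.
from dataclasses import dataclass

@dataclass
class MetricPlan:
    metric: str          # e.g. "CSAT", "NPS", "CES"
    channel: str         # where the question is asked
    baseline: float      # current value, measured before the program starts
    goal: float          # time-bound target
    goal_deadline: str   # when the goal should be hit
    review_cadence: str  # how often the team reviews the data

feedback_program = [
    MetricPlan("CSAT", "post-support-call email", 78.0, 85.0, "2020-Q4", "monthly"),
    MetricPlan("NPS", "quarterly in-app survey", 22.0, 35.0, "2021-Q2", "quarterly"),
    MetricPlan("CES", "onboarding exit survey", 2.8, 2.0, "2020-Q4", "monthly"),
]
```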

The great news is, there’s always something to learn. But there are definitely ways this effort can go awry. What should you avoid as you build your program?

How Not to Measure Customer Satisfaction

As with all data you gather, you’ve got to be careful. Not all information is clean or reliable. Not all questions are the right ones to ask. Don’t fall prey to these common mistakes: 

  1. Biased feedback due to survey format
  2. Obtrusiveness
  3. Ignoring the information

Mistake #1 is so easy to make. It’s not a death knell for your measurement program, but if you can avoid introducing bias into your questions, that’s the best route. There are many ways for a survey to introduce bias. It might be sent only to customers you know had a great experience. Or it might use a leading question. You want your questions to be specific, not leading. You want to get the answer, not give it. 

It’s also quite easy to become obtrusive with surveys. If they’re sent too frequently or at inopportune times, your surveys can annoy more customers than they solicit real feedback from. In some cases, that causes low response rates; in others, it results in false negatives.  

Finally, perhaps the biggest mistake you can make with regard to customer satisfaction data is gathering the information and then never using it. As mentioned, you must take your goals into account when designing a program. You need a good reason to ask your customers to put effort into providing their stories. Have one! Decide that you’ll review the data at a certain cadence and what decisions you will allow it to shape at those times. 

Want more info about NPS and how to delight your customers? Subscribe to our blog for regular updates.

Karin Krisher

Karin is Content Lead at New Breed. She specializes in developing content strategy and copy at every point in the creation process, from persona design to final edits.
