How to Use Feedback Surveys to Improve Customer Retention

The most common reasons customers churn are perceived indifference and poor service. Together, these account for 82% of customer departures!

During this event, you’ll learn how to reduce churn using customer feedback. From building the survey to analyzing the data and acting on the results, attendees will leave with all the information they need to put a churn-stopping feedback program in place.

You'll learn:

  • The pros and cons of the 3 primary types of customer feedback surveys
  • How to share feedback analysis that people actually act on
  • How to join survey data with other data sets for rich insights on the customer experience
  • And more…


Peter: Hello, everybody. Thank you so much for joining us today to learn how to use customer feedback surveys to improve retention for your business. I'm going to introduce my co-host in a moment, but first, I want to run through a few housekeeping details, and then we will get to the introductions.

So if you have any questions during the event, and we hope you do because we very much want to make this a discussion as we wrap up our slides, please ask them at any time through the interface on your screen. We'll save them for the Q&A at the end. Also, if you're somebody who loves to tweet, we're taking questions via Twitter as well. If you ask a question via Twitter, just use the hashtag on the screen, #retainthem, and we'll see it and respond to your tweets. And if you do tweet, we'll enter you into a draw to win a dozen delicious cupcakes, which should brighten all of your days. We'll announce the winner after the Q&A wraps up, so please do stay through the whole presentation and the discussion afterwards.

And the final housekeeping note here is that the event is being recorded today, and we'll have that out to you before the weekend. So please, I'd love for you to focus on the discussion and your questions; you can always go back and re-listen in the future. Okay, housekeeping out of the way. It's great to meet you all. My name is Peter Marinari, and I am the Director of the Customer Success Team here at RJMetrics. We work with hundreds of the most exciting businesses on the Internet to help them understand how to use data to improve the performance of their business, and one of those factors is their retention.

I am so excited to introduce you to my co-presenter today. Her name is Lori Gauthier, and she's from Zendesk. Lori, I'd love to give you a chance to introduce yourself a little bit. So many of our RJMetrics clients are Zendesk users, as are we at RJMetrics. So this is something that we love to talk about, and it's really a lot of fun to be on the phone with you today. So why don't you introduce yourself and also let us know a little bit about Zendesk. And I'll advance the slides.

Lori: Sure. Thanks, Peter, and hello to everybody tuning in. I'm super excited to be here. I'm very happy to be a part of today's webinar, and I hope to share a bit of the knowledge I've gained from conducting research for the past dozen years. That research began at Stanford, where I obtained a PhD in communication, but then broadened considerably as I worked with clients in retail, real estate, and various other industries.

And it continues now at Zendesk, where I'm a Director of Marketing Research and where I investigate the very important and occasionally complicated relationship between customers and employees. Zendesk is a cloud-based customer service platform designed to bring organizations and customers closer together. And it's doing so at a time when the voice of the customer has never been more powerful.

Our customers include Uber, Pinterest, Adobe, Fox, ESPN and SBC. And we believe very strongly in the power of data to help these and all of our customers develop and retain relationships that are more meaningful, more personal, and more productive. So let's get one slide closer to doing just that by passing the webinar back to Peter.

Peter: Thank you so much, Lori. I'll actually offer the same kind of introduction for RJMetrics, for those of you who may not be familiar with our business. We build data infrastructure and analytics software for online businesses, and we work with clients across a variety of sectors. You're going to hear about one of them, Harris Farm Markets, in a moment, and I'm not going to spoil that too much. But they're one of Australia's biggest grocery store chains and a mutual client of both RJMetrics and Zendesk.

I'm going to use how they've run customer feedback surveys through Zendesk to double their repeat orders. But I also want to tell you a little bit more about RJMetrics: we're a wonderful way for you to consolidate your data from several disparate sources to do just this kind of analysis. And I think the example will go a long way to prove that point to you.

But before we even do that, I'm going to kick us off by taking a few slides to look at the state of customer retention today. As many of you in customer success are aware, the retention game is challenging and it's always changing. The first slide we're going to look at here is on the e-commerce side of things; in the next slide I'll address the Software as a Service side.

So this is actually one of the first things I ever look at with a customer when they first come into RJMetrics, but I look at it at the single-customer level. Here it is at an aggregate level, from our benchmark research of more than 300 e-commerce companies. What you're seeing on the slide here is that the first bar, the one that goes up to 100%, is the first purchase, and the second bar is the second purchase. And it sits at 32%. So only 32% of e-commerce customers typically come back and make that second purchase.

All of that acquisition money and effort that you put in to get the customer in the door and convert them, two thirds of it frequently evaporates for e-commerce companies. And it's just as challenging for our friends on the SaaS side. For Software as a Service, this one is from the Pacific Crest Survey, a really robust research project into the inner workings of SaaS businesses. And we see here a slightly different view than the last slide: that was about repeat purchases, while this is about customers who are retained, based on contract length.

And we can see that companies that offer contract lengths of less than one year experience 12% annual revenue churn. That is incredibly painful; you're not going to be able to scale a SaaS business while defending against that kind of churn, especially in your earliest years, when you're trying to quickly scale your customer base and your annual revenue. So those were two quick hits, and that's the kind of thing we talk about all the time with our customers at RJMetrics.

But here's what this webinar is about: why are these customers leaving? A lot of you may have some theories about this. Maybe they found a better price, or a better product because a competitor released something new. Maybe they had an issue with your service, or maybe they just forgot about you because you weren't continuing to engage them. But this is a much more productive discussion when we have data around it. So let's attack it with data, as we do here at RJMetrics.

The research around this says that the most common reason customers churn is that they believe the company doesn't care about them. That's insane to me, and I think to many of you as customer success professionals as well, because nothing we do or say is ever intended to communicate that we don't care about a customer. It's the antithesis of what we believe in and what our job is. But the other interesting thing is that the idea that somebody doesn't care about you is a very nebulous sentiment. It's hard to put your finger on it or draw a line to what happened.

So you have to ask the next question, which is: what are the factors about your business that made a customer feel like you didn't care? It's not necessarily bad customer service; a lot of different things in this bucket could contribute to that feeling. And this is why feedback surveys are important and where they become so valuable. You need a qualitative way to figure out what "don't care" means and help the customers feel cared for. And more to the point, to help them feel cared for not as a reaction, but as a proactive action, so that they never feel uncared for.

So the role of customer feedback surveys in retention can be really powerful, and I want to break down a little bit how people can use them to their advantage. It's probably something many of you have begun to experience, and part of the reason you're with us today. So here's the first level of a customer feedback survey. Surveys are great because when they're not anonymous, you hear from one problem customer about an issue and you fix the issue. This is one-to-one customer success.

And while you're in a position to do this a lot of the time, it doesn't allow you to do two things: it doesn't allow you to scale your customer success team, and it doesn't allow you to take that one-to-many proactive action on fixing people's problems, because you're going one at a time. But with a lot of responses, you start to see patterns. We've got three problem people here; maybe their problems are similar or the same. Now we're approaching the ability to take a really one-to-many approach to customer success management. And that's what we see here.

We want to make the changes that fix an issue at a systemic level, so we're rooting out the things that make customers feel uncared for. And we want to explain this to you by showing you the way Harris Farm Markets, one of our mutual customers, has taken action in just this situation.

We actually just visited Harris Farm Markets. They're in Sydney, Australia, and for our U.S. listeners today, they're a lot like a Trader Joe's or Whole Foods in the U.S. In Australia, there are a number of very big supermarket chains, just as we have here. So they are the smaller guy, and it's really important to them to have fantastic customer service, because their reputation is very much what they trade on, as is the case for all of us. And clearly, because RJMetrics works with them, they also have an online component to their business that crosses over into the experience people physically have in the stores.

So here was the challenge for Harris Farm Markets: they were running NPS, Net Promoter Score, surveys through Zendesk, and the surveys were great, hugely valuable. They got customer feedback, and they really looked at that as core to their success; it was one of the core tenets of their approach from the first day they did online grocery delivery. But they had problems. They wanted context on the customer and what the customer's experience was, but the customer doesn't always give all the information.

If any of you have been on the receiving end of NPS surveys, sometimes you just get comments like, "It was terrible." Well, pronouns are not really our friend in that situation, because we have no idea what the "it" was. All we know is that something terrible happened to them. Furthermore, they wanted to be able to identify the customers and look at the specific order to see: was the problem with the order? Was it based on the location? Was there a certain product that went awry for them?

And that second part of the problem was a big one, because customers could not only come from all different locations, but they could also fill out the survey multiple times, which means one very grouchy customer could really skew the NPS results, and also skew their approach to problem solving. If one customer was creating all the noise, it could make one problem look disproportionately large compared to the others.

To solve this, if the answer is not becoming apparent to you, they needed a primary key: a single, universal customer ID that would link responses to a customer and fill in all of that contextual information. But how did they do that? Well, I promised I would talk a little bit about how RJMetrics can help with this, and that is what they did.

The data previously sat in a vacuum, and they brought it all together: the very rich Zendesk data, as well as MySQL data, Shopify data and Google e-commerce data. They pushed all of that to the RJMetrics Cloud BI platform. If the NPS rating was zero to six, we would sync with Zendesk and open a ticket, and then Zendesk would trigger an email alert that included all of the needed customer information, even the date the order was placed and what was ordered.
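To make the mechanics concrete, here's a minimal sketch of that join in Python. The field names (`customer_id`, `rating`, `order_date`, `items`) are illustrative assumptions, not the actual RJMetrics or Zendesk schema, and the real pipeline ran inside the Cloud BI platform rather than in application code like this.

```python
# Sketch: link NPS responses to order context via a shared customer ID
# (the "primary key"), then flag detractor ratings (0-6) for follow-up.
# Field names are hypothetical; the real integration lived in the
# RJMetrics/Zendesk platforms, not in hand-rolled code.

def link_responses_to_orders(responses, orders):
    """Attach order context to each NPS response using customer_id."""
    orders_by_customer = {o["customer_id"]: o for o in orders}
    enriched = []
    for resp in responses:
        order = orders_by_customer.get(resp["customer_id"], {})
        enriched.append({
            **resp,
            "order_date": order.get("order_date"),
            "items": order.get("items", []),
        })
    return enriched

def detractor_alerts(enriched_responses):
    """Ratings of 0-6 warrant a ticket and an email alert to an agent."""
    return [r for r in enriched_responses if r["rating"] <= 6]
```

With the order date and items attached, the agent sees the full context of the complaint instead of just a number and a pronoun.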

So now they're taking action on what they were finding in that NPS survey. What does this do? It makes it easier for a customer service agent to solve the specific problem, and also to take a step back and begin to identify the systemic issues that were causing those individual problems. They used Cloud BI to build out store-level reports, so individual managers could improve the performance of their own physical store location. And then they also did a higher-level analysis to categorize the reasons for low scores.

When they did that categorization, one common problem they found was that customers received the wrong items in their deliveries. This, of course, is the death knell of grocery delivery if it happens too often. We've all had the experience where we try to order something, we get the wrong brand or the wrong product, and then our dinner has gone kaput.

So this was really important for them to fix, and they fixed it by investing in an app and in scanners for their in-store personal shopping teams. That ensured they didn't pick the wrong product, and it solved the problem at the systemic level, too. So they have happier customers who are getting more personalized customer support at the micro level, and at the macro level, they're plugging the problem that was creating the lower NPS.

Now let's look at the result. This constant attention to improving customer satisfaction paid off, and it paid off really quickly. I don't have to narrate this, because you can see the chart climbing from Q1 of 2014 to Q1 of 2015, over the course of just a year. The number of new customers acquired increased, and the number of customers placing repeat orders saw a 2X improvement. That's incredible, and it's how we want to see our customers improving their lives with data.

So this is the perfect example of how powerful customer feedback surveys can be for a company that wants to improve its retention. We're talking about a 51% lift in new customers acquired, multiplied by the 2X improvement in repeat orders.

And I hope this has all of you feeling excited. I hope you can feel my excitement for this. This is the kind of problem I love to solve, not only as a customer success director, but also as a customer success professional working with our clients. And it's a big commitment. Nobody is here to say this is simple to do, but it can be made easier, and it can be done within a specific framework.

So what you don't want to do is ask for feedback and then do nothing; that can be worse than not asking at all. But luckily I have a true professional in this with me, and that's Lori. So I'm going to hand it over to Lori, and she's going to talk you through the mechanics of putting together a survey that will get your organization rallied around taking action on customer feedback. I'm so excited to hear that story. So, Lori, I'm turning it over to you.

Lori: Thanks so much, Peter. It's really exciting to see how data, when used effectively and when you consider the context, can make a huge difference to a company's bottom line. There are all sorts of ways you can survey for customer feedback; the options often include customer satisfaction surveys, customer effort surveys, and customer loyalty surveys. For today's webinar, since we're at crunch time here with only an hour, we'll be focusing on one of these survey tools. That's the tool Peter already mentioned, NPS, which was designed to tap into customer loyalty.

Now, unlike surveys that measure satisfaction or the effort spent during individual support transactions, NPS really zooms out. NPS surveys provide organizations the opportunity to engage with customers before they come to you, to measure the health of the overall relationship with them, and to actually take action to improve the overall customer experience.

And the question typically looks something like this. Customers are asked, "How likely are you to recommend us to someone you know?" They then choose a point on a scale from zero, not at all likely, to 10, extremely likely. Those ratings are then grouped according to NPS guidelines into one of three buckets: customers with ratings of 0 through 6 are considered detractors, 7 or 8 passives, and 9 or 10 promoters. The score itself is calculated by simply subtracting the percentage of detractors from the percentage of promoters. Passives are excluded from this equation.
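As a quick sketch of that arithmetic, the scoring rule can be written in a few lines of Python:

```python
def nps(ratings):
    """Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, passives 7-8, detractors 0-6.
    Score = % promoters - % detractors; passives count only
    toward the total, which is why they're "excluded".
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 64% promoters, 23% passives, 13% detractors -> 64 - 13 = 51
print(nps([9] * 64 + [7] * 23 + [3] * 13))  # 51
```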

So with that, you've got the data and you're set to look at it. How do you actually analyze it in a way that will help you better understand your relationship with customers? First, I recommend that you track your data at three levels. Don't just focus on the score; go beyond it. You want to look at the actual score, of course, but you also want to look at the group distributions that produce that score, and the rating distributions that produce the groups themselves.

So why these three levels? Let's take a look at this graphic to see why. First, the score: what does it actually tell you? It's a score of 51, but what do you know about your customers based only on that score? Not a lot, really. For example, you have no idea how many of your customers are actually promoters. In fact, only scores of negative 100 and positive 100 give you that kind of information. A score of negative 100 means all of your customers are detractors, a very sad state, whereas a score of positive 100 means they're all promoters.

So you definitely want to track at the group level. A score of 51, let's say, could be produced in any number of ways, like it is in the graphic here. You could have 64% promoters, 23% passives, and 13% detractors. Or you could just as easily have 51% promoters, 49% passives, and 0% detractors. Those are two very different group distributions. Which would you prefer your customers to look like?

Now again, it may not be as clear a choice as you might think based just on these group distributions, because what do you really know about that 13% detractor group, for example? You don't know much unless you drill down another layer by examining the recommendation ratings your customers provided. Remember, customers who report a rating of zero through six are considered detractors. That's 7 of the 11 available ratings on the scale, including 3 that are at or hovering near the midpoint. You've always got to keep in mind: what did the customer experience in the survey?

NPS looks at these data in what survey methodology terminology calls a "bipolar distribution": it puts people in a very negative camp versus a very positive camp, whereas the question itself really runs from a do-nothing situation to a do-something-very-positive situation. So for people who respond with four, five, or six, I don't really think of them as big detractors. I really like to focus, and definitely recommend others focus, on the people I think of as uber detractors. Those are the ones reporting ratings of zero, one, two, and three. But you'll only get that information if you pay attention to the third level of NPS data.

Now that's a whole lot of information, and I'm just about to share with you another slide that contains even more. And so I highly recommend that you tweet and get those cupcakes because the sugar will help you keep going on this and retain the information. And then please share with me because I love cupcakes.

So in addition to really looking at these three levels of data, you also of course want to track NPS over time. So this slide is looking at those three levels, and it's doing so by looking at how the scores, the distributions, and the ratings change from quarter, to quarter, to quarter. You can also of course pick a different cadence for your own surveying. Perhaps it makes more sense to do it monthly or only yearly. That's something that each organization needs to decide upon for themselves.

Now I want to share a few tips on what you should be thinking about when you're tracking NPS over time. You really need to make sure that every sample from Q1, to Q2, to Q3 and so on is representative of your entire customer base. You want each sample to look like your population, and you want each sample to look like each other. Otherwise you'll be comparing apples to oranges, which is not great, especially if your population is made up of grapefruit. So make sure that what you're looking at is really what your population of customers look like.

And then as you go from quarter to quarter and you're seeing changes, movements up and down, don't panic, and definitely don't start celebrating small changes. You want to see consistent improvement over time. Plus, sometimes measures themselves can introduce what's called random measurement error, which I know just thrilled everybody to hear. Data stuff: super interesting, thrilling verbiage. I'm obviously not a writer. I have no cute little nicknames for random measurement error. It is what it is.

And what it is, is that the measure itself, the NPS question in this case, rather than anything to do with the customers themselves, may be making the data fluctuate. Now, we don't have time during this webinar to go into why this happens. Just know for now that it does, and that the NPS question, in my experience, tends to introduce random movements of about plus or minus five points.

So what does that mean? It means an NPS of 57 is really better understood as an NPS interval of, say, 52 to 62, whereas an NPS of 64 is better understood as an interval of 59 to 69. Consider the reaction you'd have to results for Q2, Q3, and Q4 if you looked only at the scores: 57, 64, 57. That sounds like you made great improvement, and then, oh my God, you lost everything. What happened? You'll likely freak out a bit because you really don't know what's going on.

But compare the intervals, 52 to 62, 59 to 69, 52 to 62, and you'll see that the movements really aren't that big. A lot of that movement is due to the inherent volatility of the NPS question. Okay. So we've talked about random measurement error volatility. Again, I think cupcakes are due; please tweet me lots of questions, and I look forward to going into more detail with you on that.
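Here's a small sketch of that interval comparison, using the roughly plus-or-minus five points of noise described above. The helper names are illustrative, not a standard NPS API:

```python
def nps_interval(score, noise=5):
    """Treat a single NPS reading as an interval of +/- `noise` points."""
    return (score - noise, score + noise)

def could_be_noise(score_a, score_b, noise=5):
    """If the two readings' intervals overlap, the movement between
    them may be random measurement error rather than a real change."""
    a, b = nps_interval(score_a, noise), nps_interval(score_b, noise)
    return a[0] <= b[1] and b[0] <= a[1]

# Q2=57, Q3=64: intervals (52, 62) and (59, 69) overlap,
# so the 7-point jump may just be volatility.
print(could_be_noise(57, 64))  # True
```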

Now, in addition to looking at the three levels over time, I also recommend that you track NPS against other key performance indicators, for example, customer satisfaction (CSAT). If you're running a support or contact center and you need a bigger budget, tracking NPS against these stats can really make a difference. Demonstrating that your CSAT improvements correlate with your NPS improvements helps position your team as revenue generating, and revenue generators get more money than support centers. So definitely track against CSAT and any other key performance indicators for your team.
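As an illustrative sketch (the quarterly numbers below are invented, not from the webinar), a plain Pearson correlation is enough to show whether NPS and CSAT move together:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, written out with no dependencies."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical quarterly readings: an r close to 1 means CSAT gains
# track NPS gains, which supports the "revenue generating" story.
nps_by_quarter = [48, 52, 57, 61]
csat_by_quarter = [82, 85, 88, 91]
print(round(pearson(nps_by_quarter, csat_by_quarter), 3))
```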

Now, you also want to go a few levels down into the data by looking at your heavy hitters. By heavy hitters, I mean the segments most important to your organization's success. So who are your heavy hitters? Perhaps small businesses and government accounts aren't heavy hitters for you; maybe they aren't important to your organization's future.

And so, given this graphic, you see that the government accounts and the smaller businesses are the ones producing the lower scores, so don't be as concerned about them. But of course, if they are very important to your organization's future, you really need to be aware that those segments represent your at-risk customers. And what business metrics are key to your success? Revenue is universally a key metric for pretty much everybody, with the exception of nonprofit groups.

But what other metrics are important to your organization? Whatever it may be, segment NPS by that metric. It could be add-ons, it could be monthly versus annual revenue, it could be anything. Make sure you're really seeing whether your NPS groups differ across different levels of that metric. And then I also recommend that you look at the groups that tend to differ the most on NPS. When you need a big and fast improvement, focus first on the subgroups that vary the most, and look for differences where a solution is already known.
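Here's a minimal sketch of that segmentation in Python. The `plan` field and the sample data are made up for illustration:

```python
from collections import defaultdict

def nps(ratings):
    """Score = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

def segment_nps(customers, key):
    """Group customers by a business metric and compute NPS per segment."""
    groups = defaultdict(list)
    for c in customers:
        groups[c[key]].append(c["rating"])
    return {segment: nps(ratings) for segment, ratings in groups.items()}

customers = [
    {"plan": "annual", "rating": 10},
    {"plan": "annual", "rating": 9},
    {"plan": "monthly", "rating": 3},
    {"plan": "monthly", "rating": 8},
]
print(segment_nps(customers, "plan"))  # {'annual': 100, 'monthly': -50}
```

A large NPS gap between segments, like the one above, tells you which subgroup to dig into first.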

The promoters can really tell you a lot about how to fix the detractors' pain points. Pick the low-hanging fruit first; do what the data tell you are the easiest, quickest solutions to pursue. Now, a lot of people actually use NPS because of its benchmarking capability. In fact, while I was an independent consultant, the number one request I had from clients was how to use NPS for benchmarking, and it almost always came from the board of directors or the CEO having a real interest in benchmarking the company's performance against others within and outside of the industry.

And you can definitely do that. If you are a low performer in your industry, benchmark within your industry. But if you're a high performer in your industry, it's best to look at organizations outside the industry whose scores you can strive to match. Now, I will have to say that I prefer to create my own NPS benchmark, and I do that by surveying our competitors directly. That way I can be confident in the survey data, because I know how, when, and from whom the data were collected.

And I will likely even spend less money running that survey than I would purchasing external benchmarking reports. So if you're going to go with benchmarking, that's the recommendation I make: run your own survey.

And a cautionary note when you're benchmarking: remember, not all scores are the same. An NPS of seven for one organization may be very different from an NPS of seven for another organization. Different combinations of detractors and promoters can result in the same score. So in this case, which seven would you rather be?

I'm favoring the middle group right now, especially if I were to find that the detractors in that group, that 33%, were mostly reporting ratings of 4, 5, and 6. Not so many uber detractors there. But again, it all depends. So you need to move beyond the score and really look to see why that score is being produced.

Now, in addition to all the fun numbers, NPS surveys typically include a follow-up free-form question, and that question asks customers to explain their rating. Recall that Peter pointed out that an overwhelming number of the customers who leave your organization believe you just don't care. The information from this free-form follow-up question gets at explaining why your customers think that, why they think you don't care about them.

A very simple, step-one approach to analyzing this text is to use a program that's readily available on pretty much any computer: Excel. You can use filters in Excel to search on specific issues of interest. In this example, I've focused on voice, because that's one of our products that we've been tweaking quite a bit of late and just relaunched recently. But you can also search on anything that makes sense for you or for other organizations within your company. You could do the same with words like support, or spam, or crash, or fast, and so on. Essentially, search with whatever makes the most sense to you.
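The same filter idea takes a few lines of Python, for anyone working outside Excel. This is the simplest possible version, a case-insensitive substring match, so it will miss misspellings and synonyms; the sample comments are invented:

```python
def filter_comments(comments, keyword):
    """Excel-style text filter: return comments mentioning a keyword,
    case-insensitively. Plain substring matching, so "voice" also
    matches "voicemail"; refine as needed."""
    kw = keyword.lower()
    return [c for c in comments if kw in c.lower()]

feedback = [
    "Voice quality dropped twice this week",
    "Great support team",
    "voice calls keep lagging",
]
print(filter_comments(feedback, "voice"))
```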

Now, you can use Excel and other text analysis tools to identify common feedback categories. Generally speaking, I prefer to focus on the categories that include at least 10% of the sample's comments. Otherwise, you'll spend resources fixing issues that affect only a small proportion of customers. Then what you want to do is map those feedback categories against the three levels we talked about before: the Net Promoter Score, the NPS groups, and the recommendation ratings.
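Here's a sketch of that 10% cutoff, assuming each comment has already been tagged with a category (by Excel, a text analysis tool, or a human). The pair structure is an assumption for illustration:

```python
from collections import Counter

def major_categories(tagged_comments, threshold=0.10):
    """Keep feedback categories covering at least `threshold` of all
    comments; smaller buckets rarely justify the cost of a fix.
    Each item is a (comment_text, category) pair."""
    counts = Counter(category for _, category in tagged_comments)
    total = len(tagged_comments)
    return {cat: n for cat, n in counts.items() if n / total >= threshold}
```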

Clearly, with this example, when you're looking at the feedback categories by NPS, you see that the downward-driver categories are speed, product, and availability. So right away you know the biggest pain points, and you can focus on them first. But you also need to look at the text by the NPS groups. With this graphic, we can quickly identify the category with the most comments from detractors, and that's price, at 34%. That should be a focus. And given that you'll likely not be able to lower your prices, be sure to examine what promoters are saying about price as well; 39% of them are talking about price.

The promoters' comments could actually help you craft a value message for the detractors, retraining them or providing a different perspective on the value they're getting for the price they're paying.

And last, you want to map the same feedback categories against those recommendation ratings. With this graphic, we see that customers who talked about the competition also reported the lowest average rating, a six. Why is that? You want to look into the text. You want to see, maybe, which competitors are most often talked about. Now, although only 2% of comments mention competitors in this example, those comments can be quite helpful when you examine them in relation to other categories.

For example, we see from this chart that there are fairly low scores for channels and design, at ratings of 6.8. Check the comments in those two buckets to see how competitors are managing their channels and design. You can look at the competitors mentioned in one set of comments and see how those competitors manage their channels. What is the design of their interface, for example?

You can learn fairly quickly by looking at what competitors are doing well, and you can do that in a way you may not be able to had you focused only on your own organization. So zoom out, take the feedback customers are giving you about your competition, and see how you might be able to improve other pain points mentioned in the NPS feedback.

Now, a huge note of caution: when you run text analysis tools, they're really great at identifying what customers think they're experiencing, but they often fail at identifying what customers are actually experiencing. Sometimes human judgment is key to accurately uncovering the true pain points.

For example, customers might think product failures are causing their frustrations when something else entirely is to blame. Uncovering the real reasons enables companies to eliminate those pain points. Customer-identified product failures could actually be opportunities for advising customers, retraining customers, or even upselling customers. So be sure to use humans in addition to machines to get the most out of your NPS data.

Oh my God, I'm so exhausted. Cupcakes, cupcakes, cupcakes, that's what I'm focusing on. We're done with the analysis, now what? Well, we need to actually do something with all of these findings. And I recommend paying particular attention to your detractors, especially the uber detractors, the zeros, ones, twos, and threes, and to the promoters.

Now, are detractors always dissatisfied customers? Not at all. Remember, NPS is not a measure of customer satisfaction. It's a measure of recommendation likelihood. And there are all sorts of things that go through a customer's mind when they answer the NPS question. Yes, customer satisfaction is part of it, but they also have to think about, "Do I have anyone in the world to talk to about blah, blah, blah?" Blah, blah, blah being your lovely product.

Or they may also be thinking, "No matter how fabulous the blah, blah, blah is, I don't really feel comfortable recommending it to anybody, it's just not my thing." So very satisfied customers might actually give you a low NPS rating. But very rarely do they actually go as low as zero, one, two, or three. That's why that uber group of detractors is so important to focus on: they are the ones most likely to churn, much more so than your regular everyday detractors giving you scores of four, five, and six.
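For anyone implementing this segmentation, here is a small Python sketch of the standard NPS buckets with the 0-to-3 "uber detractor" group split out, plus the classic NPS score (percent promoters minus percent detractors). The sample ratings are made up.

```python
def nps_segments(ratings):
    """Bucket 0-10 ratings into NPS groups, splitting out the 0-3 'uber detractors'."""
    buckets = {"uber_detractor": 0, "detractor": 0, "passive": 0, "promoter": 0}
    for r in ratings:
        if r <= 3:
            buckets["uber_detractor"] += 1
        elif r <= 6:
            buckets["detractor"] += 1
        elif r <= 8:
            buckets["passive"] += 1
        else:
            buckets["promoter"] += 1
    return buckets

def nps_score(ratings):
    """Classic NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10, 9, 7, 6, 2, 0, 8, 9, 5, 10]
print(nps_segments(sample))
print(nps_score(sample))
```

Note that the 0-to-3 split is an extra cut on top of the standard methodology; the score itself still treats every 0-to-6 rating as a detractor.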

But don't focus on these uber detractors in a vacuum. You really need to compare and contrast them with your promoters. Retention programs often have tunnel vision. They obsess over the customers at risk; meanwhile, effective retention programs also focus on customers who are far, far away from being at risk.

How are your promoters different from your uber detractors? What issues do these two customer groups raise in your NPS feedback? What can you learn from your promoters that you can apply to your uber detractors? And don't forget to encourage your promoters to evangelize for you. They are an excellent source for leads and online reviews.

Promoters tend to refer customers like themselves, with the same needs, the same concerns, and so on. So promoters beget promoters, who are much less likely to churn. It makes a lot more sense to preempt future retention efforts by attracting new promoters to begin with. Now, there's a whole lot going on in all of this, and I want to make one last important note here, and that is, you need to apply your findings from your sample to your population at large.

If you follow up only with your sample respondents, only with the customers who answered this question for you, you will not see NPS improvement as dramatic as you would if you actually applied your findings to everybody in your customer base. That means you'll need to identify common issues, create actionable goals, and actually collaborate across teams inside and outside your department.

For example, perhaps a common issue that you identify is product reliability, where you need to create an actionable goal like "improve reliability by X percent within X months." And you've got to get commitment from engineering to meet that reliability improvement goal.

And this last point, collaborating across teams, is crucial. Systematic changes require systematic involvement; that's the need for rallying other teams to help you with these retention efforts. And speaking of collaboration, be sure to tie your survey and non-survey feedback together. Let's hear more about that from Peter, who'll walk us through how to use that non-survey feedback to reduce churn.

Peter: Thanks so much, Lori. I just have to pull out two themes there because they're so relevant to the way that we measure [inaudible 00:36:16] data for customers all the time. One that ran through all of that was that NPS is not monolithic, and it's not absolute. We've seen many companies who are extremely excited to have NPS data for the first time treat it as a monolith: it's just this one number, it can't be broken apart, it doesn't matter if somebody was a six, or a one, or a zero. And it's just not the case, and you have, you know, a data doctor here telling you about it, and we'd love to talk more about it, but I just can't emphasize enough how often it comes up and how salient a point it is.

And the other thing is that it's truly not a satisfaction measure so much as it's a promoter measure. And as long as you keep it fixed in your mind that you're measuring evangelism more than you're measuring dissatisfaction, the numbers have so much more weight and relevance. I loved hearing those themes; they're things we care a lot about here at RJMetrics too.

But now I do want to bring it back to our initial Harris Farm Markets observations. And with all of what Lori said in mind, I want to tell you, this is not really about e-commerce or produce delivery, it's about executing on the themes that Lori brought up.

Some of the questions on Twitter, which have been amazing, have been: can this apply to Ed Tech? Can this apply to physical events where we can't capture [inaudible 00:37:26] online? And I want to tell you that that's exactly what we want to talk about here: how this can apply in a general way. So let's dive a little bit deeper.

What this additional data was doing was allowing Harris Farm Markets to identify where their processes were breaking down. And you can do this without survey data. If you're using Zendesk, you're very lucky: you have a wealth of customer support data that you can use to better understand your customer lifecycle. You have the whole history of the customer's tickets, how long it's taken to respond to and resolve them, the ratings the tickets were given. You know a lot of context. And that might be the place to start with improving retention before you even send out a survey, right? Because the actual actions and interactions on your support tickets might be just as good as the survey data you're going to get, letting you proactively take action on some of the issues customers are experiencing before you even ask them for feedback.

And that's true of many sources of contextual data beyond just Zendesk or customer support data; we've got to be able to get them in one place and measure them. So another common way our customers are analyzing that support data is around products. This is a little bit more useful for e-commerce, but it also applies to SaaS. For SaaS, the products can represent different combinations of your subscription products at different price points or different platform access. So don't think that it doesn't apply to you as well.

You might find that a new product line has issues or that, in the case of e-commerce, a dress that looks one color online is a much different color in person. We've all seen the blue-and-black versus white-and-gold dress; that one would create a lot of customer support issues. But the same can be true for SaaS. When somebody thinks they're getting a certain kind of service that's described a certain way, and then they subscribe and it's not everything they thought it would be, that one product could be the one driving the questions and issues for your customers.

So one of our e-commerce clients did this analysis and found that their petite sizes were causing issues: they did not fit true to size. And they found this out by seeing that there was more return data around those products and more customer support interactions around them.

So this analysis does not have to be super complex. I know a lot of what you're hearing today sounds like, "Wow, I've got to build a big machine." But a big machine starts with one gear turning, and that's what some of these charts are about. So this can be a simple bar chart that just looks at customer support interaction types by a given product, whether that factor is size, color, or level of features available.
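As a sketch of the tally behind such a bar chart, you can count support tickets by product and interaction type with a few lines of Python; the product names and ticket tags here are hypothetical.

```python
from collections import Counter

# Hypothetical support tickets, each tagged with a product and an
# interaction type (the tags are invented for illustration).
tickets = [
    ("petite-dress", "return"),
    ("petite-dress", "sizing question"),
    ("petite-dress", "return"),
    ("standard-dress", "shipping question"),
]

counts = Counter(tickets)
for (product, interaction), n in counts.most_common():
    print(f"{product:15s} {interaction:18s} {n}")
```

A product whose tickets cluster on one interaction type, like returns on the petite sizes in the example Peter gives next, is exactly what this chart surfaces.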

The next thing to look at is how customer support impacts behavior. "Customer journey" was one of the big catchphrases of last year. But a lot of people didn't really internalize what that means, and part of it is really looking at what a customer's behavior is and how it changes after an interaction. So this analysis can be particularly valuable for SaaS companies. We did this at RJMetrics and found customers that filed support tickets had a higher lifetime value.

It's almost counter-intuitive in a way; you would think that people who ask for support more might have lower NPS scores. But it's not counter-intuitive, because it actually turned out that they were the people who were most satisfied and who engaged with the product and with our people the most. So here's a sample of what that could look like for your business: you would essentially just bucket users by lifetime value based on whether or not they filed a ticket.

So let's just look at these first blue and orange bars. In March 2014, for the people who hadn't filed tickets, customer lifetime value was shy of $40,000. For people who had filed tickets, customer lifetime value was about $50,000. And as the months proceed in this chart, you see that the blue is wavering but never really growing, whereas the orange, by the end of the chart, is showing some growth.
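A minimal sketch of that bucketing in Python, with invented customer records whose dollar figures loosely echo the chart described above:

```python
# Hypothetical customer records: lifetime value in dollars and whether
# the customer ever filed a support ticket.
customers = [
    {"ltv": 38_000, "filed_ticket": False},
    {"ltv": 41_000, "filed_ticket": False},
    {"ltv": 52_000, "filed_ticket": True},
    {"ltv": 49_000, "filed_ticket": True},
]

def avg_ltv(filed_ticket):
    """Average lifetime value for the cohort that did (or didn't) file tickets."""
    values = [c["ltv"] for c in customers if c["filed_ticket"] == filed_ticket]
    return sum(values) / len(values)

print(f"No ticket:    ${avg_ltv(False):,.0f}")
print(f"Filed ticket: ${avg_ltv(True):,.0f}")
```

In practice you would also split each cohort by month, as the chart does, to see whether the gap widens over time.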

So e-commerce companies can absolutely analyze their Zendesk data in the same way. But I have to warn you, it's just more likely that in that environment you would see a negative correlation between tickets filed and repeat purchases. Because unless you have the kind of e-commerce product where people have to ask how to use it, like a machinery type of product, people who are filing support requests are not usually filing them to tell you how great the red of the dress is.

Now we are at our promised Q&A. Oh my, do I have a lot of questions from you. So I'm just going to field these, as many as I can in our time remaining, and some combination of Lori and I will answer them. And if you have more questions, please chime in via the chat here or continue to ask on Twitter. Our behind-the-scenes moderators, Daniel and Janessa, will help get those questions to me, even though I'm the one talking.

So, Lori, here's one that I definitely want to hear you talk about. What is the difference between NPS and CSAT? And when should they each be used distinct from the other?

Lori: There are definitely big, big differences between the two. So CSAT, as [inaudible 00:42:13] suggests, is about measuring satisfaction. It's one specific issue that you're measuring, most often measured at a transactional level, for example, "How satisfied or dissatisfied were you with your interaction with today's customer support team?"

NPS, however, really zooms out again and looks at the entire relationship. And then it asks a question that is a bit more complicated, because to answer it, the respondent, the customer, needs to go through three questions in their mind. Now of course people aren't thinking, "Okay, I now need to go through three questions in my mind," but this potentially is what is happening. If they answer, let's say, "No," to any of these three questions, their rating is going to go down. So they ask themselves, "Am I happy with this product?" If no, rating goes down. "Do I know anyone to talk to about this product?" If no, rating goes down.

"Do I feel generally comfortable talking about products that I'm happy with?" If no, rating goes down. And the second bit, "Do you know somebody to talk to about it?" This is the component of an NPS question that explains why B2B companies deliver, or rather generate, much lower NPS scores than do business-to-consumer companies, B2C companies.

So for example, using ourselves, Zendesk: I don't know a lot of people who hop on the train going home and talk about customer service platforms. It's just not a topic that naturally occurs. So there are going to be fewer people who are extremely satisfied with Zendesk actually out there recommending it.

So those are the biggies. CSAT is typically transactional. It's a measure of one narrowly defined situation. NPS is much broader, it's global, and it's not just about customers who come to you with an issue, it's everybody and anybody that you want to reach out to, and it's a proactive engagement.

So I highly recommend using both. You don't want to use one or the other. There is absolutely no one measure that will do it all for you. If anyone tells you that, you tell them they're wrong.

Peter: Excellent. I want to collect a couple of questions related to the practice of NPS from Twitter. One from Dolly, who I got the chance to have a little conversation with; she was asking, should companies consider NPS data that doesn't just come from digital? And my initial answer is yes. If you're a company that spans from digital to in person, and Lori and I were talking about this a little bit before the call, absolutely, if you have an event or you have a chance to collect that information via a sweepstakes, you can try to do that.

But the question I want to pose to Lori is, we find that that frequently becomes an influence on the answer. We used to collect NPS very early in our lifetime by physically asking a person for an NPS score on the phone. And of course that creates a lift if they're really enjoying the phone conversation, regardless of how they feel about the service. So is it valid to collect NPS in person for use in this process, Lori?

Lori: I recommend not doing it because I see exactly what you just pointed out. Context matters. You've got to remember that all these numbers, each data point is a person. It's a person that's answering a question or fulfilling some kind of action for you. Whether it's the survey data or non-survey data, data are people too essentially. So you need to remember that they are not in a little cell on an Excel spreadsheet. They are out in the world, and everything that's happening around them has an opportunity to influence their decisions, their responses to a survey.

So keeping in mind that context matters, recognize that if you ask people an NPS question during a time that they are loving you for some other reason, you're going to see an inflated score. Now it's not that you absolutely shouldn't be asking that question, you just need to be very clear with yourself and everybody else that you share that score with that this is an inflated score.

Same thing if you're using NPS in a transactional situation. You need to make sure that if you ask respondents something like, "Based on today's interaction with our support team, how likely are you to recommend us to someone you know?" that NPS rating, that recommendation rating, is going to fluctuate based on that individual transaction. So you want to make sure that you're aware of it. In that case, NPS is really only about that very specific transaction. It is not a score that is about your overall relationship with the customer.

Peter: Right. And the other NPS-related question, Simon just articulated this so well in our chat. And actually it's really similar to something I was talking to Andrea about on Twitter while you were speaking, Lori, which is: should people really just be capturing NPS via that standalone question? It could be an email invitation, and I know we've seen a lot of companies do it with an online pop-up. Or can it be effective to embed the NPS question in the body of a longer survey, like making it the first or last question in a 10-question survey? Because I know once you get into survey design, a whole other host of questions comes in. So can it still be valid in that setting?

Lori: Absolutely. So for example, when we do other research on our competition, we have included the NPS question. And you have to keep in mind, as you said, that the context of the survey will matter, so you need to be really careful about when you ask the question. I tend to ask the NPS question before I get into any more specific questions. So for example, if I'm going to be asking an NPS question and a satisfaction question, maybe measuring the overall effort a customer experiences in dealing with a specific company, I'm going to ask the NPS question first, because otherwise, if I reverse the order, they're going to have satisfaction on their mind and effort on their mind. And that's okay, but what we see is you really want to ask more global questions before you ask more specific questions.

Peter: That's a wonderful guideline.

Lori: Yeah, you can absolutely embed NPS with others. I highly recommend that any time you're spending money or resources to talk to your customers or your competitor's customers, anybody in the world, that you maximize that opportunity and ask the questions that you need to ask. On that point, ask only the questions that you need to ask.

I'm not a big fan of long, drawn-out surveys. They end up reducing response rates and bringing the quality of the data down considerably. So if you have an answer to a question from somewhere else, whether it's internal, from another survey, or from wherever, don't ask the question again; don't use your customers or the general public in that way. Don't waste their time and don't waste your time. Ask only the questions you need to ask.

Peter: Wonderful. I'm going to do a couple of quick hit questions here. One was, "Does RJMetrics work in China?" Absolutely, we do have some clients in mainland China who use RJMetrics, both in e-commerce and SaaS space. I would encourage you actually to connect directly with me or with a member of our account development team after the call so we could talk through the specifics to you.

Of course there are a lot of individual questions there, like how does Google relate to you, how are you working in China, how does your event tracking relate? So that's an absolute yes, we can work in China. A quicker-hit question for Lori: do we always pair the NPS number question with an open text field? Is there any instance where we would only ask the number question?

Lori: You can definitely ask only the number question. I would just emphasize that you're not going to get much understanding of why the score is what it is. You can also follow up with a forced-choice formatted question. So for example, you might ask, "Why did you give us that rating?" and provide a specific pick list of reasons. Those reasons could be something like, "I'm not satisfied with your product." It could be, "I don't feel comfortable recommending in general." It could include, "I have nobody to talk to about your product." And then just an "other" bucket so you can capture other things that might be happening.

So you can go with the NPS question on its own, with an open feedback question, or with a forced-choice feedback question. Whatever makes sense for you.

Peter: Here's another quicker hit question, what is the normal conversion rate that somebody can expect on NPS surveys? How many people truly engage and fill these out if we're hitting all of our customers?

Lori: Okay, that's going to depend on many different factors. B2B customers tend to respond at a lower rate than B2C customers. But they also tend to be more likely to give feedback on the follow-up question, and that feedback tends to be much more constructive and more effective to act upon.

For Zendesk, we have a response rate of about 5% on our NPS question. And then about 40% of those respondents provide us comments, really valuable comments. But that depends a lot on your industry, your field, your relationship with your customers. If you have people, for example, who are really just passive, right there in the middle, they don't love you, they don't hate you, they're much less likely to respond. So you're going to see higher response rates when your customers love you and/or hate you.
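As a quick back-of-the-envelope sketch using the figures Lori mentions (5% response rate, 40% comment rate), with a hypothetical surveyed-customer count:

```python
# Rough funnel math for an NPS survey, using Zendesk's stated rates.
customers_surveyed = 10_000  # hypothetical number of customers asked
response_rate = 0.05         # ~5% answer the NPS question
comment_rate = 0.40          # ~40% of respondents leave a comment

respondents = customers_surveyed * response_rate
comments = respondents * comment_rate
print(f"{respondents:.0f} ratings, {comments:.0f} comments")
```

So at those rates, surveying 10,000 customers yields on the order of 500 ratings and 200 free-text comments to analyze, which is worth knowing before you budget time for text analysis.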

Peter: I want to ask two similar questions that have popped up over the course of the chat. They're related, so I'll give you both at once. One question is, when you're launching a new product, how long should you wait, if at all, to ask that first NPS question? And the other one is, once you've got a rhythm for NPS, is quarterly the right rhythm? Should it be more frequent, less frequent? And how should you be measuring any kind of drop-off in response rates? So address those two in whatever order makes sense.

Lori: Sure. So yeah, this is a very common question, and it's particularly tricky for product-based companies rather than service-based companies, because there could be long intervals between purchases, whereas when you have an actual service, people are using the tool quite frequently. And so it's tricky.

What I recommend, and this is the most conservative approach by the way, is that a product-only company wait at least two weeks to one month before asking the question. And then in general, for both product and service companies, I recommend only asking a specific customer the NPS question once every three months. And here's why I make those recommendations: with the NPS question, if you think about it, again, respondents are people too. What's happening with the respondent when you ask them these questions?

It's about you, it's not about them. It's about the company and what our customers can do for us essentially, "How likely are you to recommend us to someone you know?" And that can feel...especially when it's asked repeatedly like, "You know? Guess what, I already give you money. I'm happy to talk about you because you do a great job for me. But enough with this, it's not about you, you, you. It should be about me, me."

And if you ask that question too soon in the relationship, it feels really awkward. It's like you're dating someone: your first date is fabulous, the second date is pretty good, the third date feels great. And you pop the question, "Let's get married." And the other person is like, "Oh, my God, this person's crazy." And they take a random call and get out of the date ASAP. So you don't want to ask too early because it feels really weird. So yeah, just remember the respondent is a person and keep in mind how they perceive that question.

Peter: That's fun. Thank you so much for that. And, Caitlin, hopefully that was a good answer to your question. I wish we could keep going with questions, these were some of the best ones I've had on any of the webinars I've helped with at RJMetrics. But we do have to move towards wrapping up and talking about those cupcakes.

I will say that both Lori and I are on Twitter, our names are at the top, but Lori is @datadocgauthier, G-A-U-T-H-I-E-R. I am @Krisis, spelled with a K, K-R-I-S-I-S. And these are conversations that I'm sure we could continue to have with you, and I would love to. I especially saw one of my favorite questions, "Should you focus on retention or acquisition first?" And if you want, tweet me and I can unload a novel on you about that.

But I do want to keep moving to respect all of your calendars here. So we have the cupcake winner. Who is our cupcake winner, my fabulous behind-the-scenes organizers? I see that it's like a machine tabulating, except we're in Slack. Andrea Mozo [SP], I've actually been talking to you on Twitter. Andrea, you have some delicious cupcakes and a little sugar rush coming your way to help you with those Ed Tech questions. I'm really excited to have those sent to you. You can email us to claim your prize. So thanks so much for asking questions and being a great conversation partner to me.

We are going to leave you with a poll actually. And just like we said that customer satisfaction surveys and NPS are important, so is your opinion of today. But the poll we're actually asking you is about what you'd like to be in touch with us about. So if you want to learn more about Zendesk, if you want to learn more about RJMetrics, we're going to give you the chance to let us know that, and that way we can reach out to you with the right information.

So I want to thank you all so much for being a part of this. I hope that we'll continue this conversation. And, Lori, thank you so much for participating today and lending such enormous experience on the realm of NPS to this conversation.

Lori: I love, love [inaudible 00:57:09] even without getting a cupcake. So thank you very much and thanks to everybody who joined us.