
How are you measuring Case Deflection?

PatrickMartin Founding Analyst | Expert ✭✭✭

Hello TSIA Community!

I am curious to know more about how you are measuring case deflection and/or self-service success. We are currently tracking two different measures:

1- % of customers who abandoned the case creation process after consulting a document

2- % of customers who did not log a case after consulting a Support Knowledge article, whether they started logging a case or not

Both these measures are at different ends of the spectrum. The first is in the 7%-10% range and the other is in the very high 90s. The first measure clearly indicates our case deflection once the customer reaches the case creation page (because we can safely state that the customer intended to log a case), but we know for sure that our deflection rates are higher: customers search our community and content before reaching the case creation page, and can find their answers/solutions before ever needing to log a case. That should also be considered deflection, but it is not included in this measure.

The second measure is missing the intent. Our objective here was to measure Support's contribution to self-service. However, we cannot say with certainty that the customer's intent was to log a case in the first place.
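To make the two measures concrete, here is a rough sketch of how both could be computed from self-service session logs. All field names and numbers are hypothetical, purely for illustration:

```python
# Hypothetical session records: did the customer view a KB article,
# reach the case creation page, and ultimately log a case?
sessions = [
    {"viewed_article": True,  "reached_case_form": True,  "logged_case": False},
    {"viewed_article": True,  "reached_case_form": True,  "logged_case": True},
    {"viewed_article": True,  "reached_case_form": False, "logged_case": False},
    {"viewed_article": False, "reached_case_form": True,  "logged_case": True},
]

# Measure 1: of customers who consulted a document AND reached the case
# creation page, what % abandoned without logging a case?
reached = [s for s in sessions if s["viewed_article"] and s["reached_case_form"]]
measure1 = sum(not s["logged_case"] for s in reached) / len(reached)

# Measure 2: of all customers who consulted a KB article, what % did not
# log a case (regardless of whether they intended to)?
viewed = [s for s in sessions if s["viewed_article"]]
measure2 = sum(not s["logged_case"] for s in viewed) / len(viewed)

print(f"Measure 1 (deflection after intent shown): {measure1:.0%}")  # 50%
print(f"Measure 2 (self-service success):          {measure2:.0%}")  # 67%
```

The gap between the two numbers is exactly the intent problem: measure 2 counts every article reader as a potential case, while measure 1 only counts those who proved intent by reaching the form.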

That being said, I would like to know more about how you measure deflection, how you capture intent, and anything else you might want to share around this self-service measure.

Looking forward to the discussion.


  • David Baca
    David Baca Member | Guru ✭✭✭✭✭

    Hi Patrick - great topic and one that I know many of TSIA's members struggle with. To help move the conversation a bit further, I thought it might be helpful to share the definitions for Self-Service Success and Self-Service Deflection:

    Self-Service Success: the rate at which customers indicate they completed their desired self-service transaction and/or found useful information. Self-service success does not imply that the customer's need for live assistance was fully satisfied.

    Self-Service Deflection: the rate at which self-service resources and/or content eliminate a customer's need for live agent assistance.

    As you highlighted, a key to determining deflection is determining intent. One common way of determining intent is click-stream analysis: measuring the % of clicks that landed on the case creation page.

    What other deflection methods and ways of measuring Intent are out there?

  • Greg Higgins
    Greg Higgins Member | Enthusiast ✭
    edited July 2020

    This is great stuff, Patrick. At Splash, we currently have a blind spot in deflections so we simply look at average tickets per user over a period of time to inform any efficiency/quality gains from our process, people and product changes. We're currently working to measure deflections in Q3, hence my attraction to your post.

    I'm writing because I was recently reviewing Zendesk's content on deflection: https://www.zendesk.com/blog/ticket-deflection-currency-self-service/. I thought their Self-Service Score (Self-Service Score = Total users of your help center(s) / Total users in tickets) was a simple and clever metric to review. You get your current ratio and then make changes to try to increase it; an increase indicates more engagement/value with your help center relative to users who go on to submit tickets.
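    For anyone who wants to try it, the score is just a ratio you can trend over time. A minimal sketch, with made-up numbers:

```python
def self_service_score(help_center_users: int, ticket_users: int) -> float:
    """Zendesk-style Self-Service Score:
    total users of your help center(s) / total users in tickets."""
    return help_center_users / ticket_users

# Hypothetical quarter-over-quarter comparison
q1 = self_service_score(help_center_users=4000, ticket_users=1000)  # 4.0
q2 = self_service_score(help_center_users=4800, ticket_users=960)   # 5.0

# A rising score suggests more customers engage with self-service
# relative to those who go on to file tickets.
print(f"Q1: {q1:.1f}, Q2: {q2:.1f}")
```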

    I'm looking forward to hearing more from the community. As I have success implementing, I will share my results. Let me know if I can help in any other way.

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    @David Baca Thanks for the reply. I understand that the easy way to capture intent is to look at visits to your case creation page, but as I mentioned in my initial post, our community allows users to search our entire community content from the get-go, which means we are certainly "deflecting" potential cases without capturing intent. That leads me to the self-service success measure rather than case deflection.

    What I am trying to achieve here is to identify intent (or at least infer it) using AI. If our AI solution can identify the user's intent/context and tailor the experience and content relevancy, we might get closer to a precise measurement of case deflection. This is where my thought process is headed, as we now have the tools (with AI) to capture this information much more precisely than by relying only on who visits the case creation page. I am a strong believer that many customers find the information they are looking for before ever reaching the case creation page, which means we are not capturing those deflections in our ROI.

    In my particular situation, my case deflection is between 7-10%, while my self-service success is in the high 90s, so I would assume that if I can accurately detect intent through AI and report on it, I would have my true measure, or get close to it.

    My end goal here is to measure 2 things:

    1- Cost savings driven by self-service

    2- ROI on our Knowledge Management efforts, at least from a self-service perspective

    @Greg Higgins your formula from Zendesk is interesting; however, I still think it is a high-level measure that does not take intent into account. By capturing intent, we can better understand customer needs and tailor the experience accordingly. Then we can measure/identify which content is more effective in which scenarios. If you would like to discuss further, you can reach out to me directly.

  • David Perrault
    David Perrault Founding Analyst | Expert ✭✭✭

    @Patrick Martin it is indeed much more practical to measure intent when customers are in the process of opening a ticket with Support, and the most effective way to do this is to track whether KB articles presented during ticket creation stopped the customer from opening a ticket within a set period of time (you need to leave the customer time to actually follow the steps in the article).

    I also think that when a customer browses your KB outside the process of opening a Support ticket and does not open a ticket in the same set period of time, you have to assume the customer would most likely have opened a Support ticket had they not found the information they were looking for; i.e., your customers are unlikely to browse your content just for fun.

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    @David Perrault yup, that's what we are doing right now, kind of measuring both. With our solution (Coveo), we are able to detect intent with AI, and I will be working with our product management team to see if the detected intent can be made available in our reporting to see the difference between our numbers. Thanks for chiming in.

  • DavidHarbert
    DavidHarbert Founding Analyst | Scholar ✭✭

    Hi All,

    Great question. This is an area we are starting to dive deeper into, but it's still early days in our thinking and planning. We have been working hard to improve our digital experience, which has been successful (based on customer feedback) but hard to measure.

    Some challenges we are struggling with are:

    • We have two halves to our business: Consumer and Enterprise. A single end user could be either, depending on the issue they are reaching out about. Our consumer side runs on a different set of systems than our enterprise side, so providing a great experience is tricky; many enterprise customers buy and use consumer services, so it's not cut and dried.
    • I like the idea of the Self-Service Score mentioned in @Greg Higgins' comment. I'm thinking about how I might be able to use it.
    • We are also looking at those two end goals, @Patrick Martin. It is great when customers can self-serve. I'm also curious whether there is a measure around the product/solution continuing to improve so that the root query/question never needs to be asked.
    • @David Perrault you raise an interesting point about a customer generally not browsing for fun.

    What technologies are you using to share knowledge with customers? @PatrickMartin said they are using Coveo.

    Thanks for the insights.

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    @David Harbert We are using a Salesforce community with the Coveo for Salesforce package for our search, personalization and recommendation, as well as our case creation page. With Coveo, the content will be tailored to your users and can come from different sources, so this might be an option to standardize your customer experience with consumer and enterprise. Let me know if you would like to know more on how we use it.

  • DavidHarbert
    DavidHarbert Founding Analyst | Scholar ✭✭

    Thanks, Patrick. I'd love to learn more about how you are using Coveo.

  • Jaime Farinos
    Jaime Farinos Founding Member | Scholar ✭✭

    In addition to the measures mentioned, it is also worth noting the simple approach of tracking how case volume evolves over time (increase or decrease) compared to growth in the number of customers. With enough historical data, you can predict the number of cases you would expect if no self-service measures were implemented. This method, however, does not help establish which case deflection strategy is the most effective; it only shows the overall global impact.
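    One simple way to sketch that idea: compute a baseline cases-per-customer ratio from the pre-self-service period and project it forward. All figures below are hypothetical:

```python
# Hypothetical history: (customers, cases) per quarter, before
# self-service was launched.
history = [
    (1000, 500),
    (1200, 600),
    (1500, 750),
]

# Baseline cases-per-customer ratio from the pre-self-service period.
baseline_ratio = sum(cases for _, cases in history) / sum(c for c, _ in history)

# After launch: 2000 customers, but only 850 cases were actually logged.
customers_now, cases_now = 2000, 850
expected_cases = baseline_ratio * customers_now   # cases if nothing had changed
estimated_deflected = expected_cases - cases_now  # global impact estimate

print(f"Expected without self-service: {expected_cases:.0f}")   # 1000
print(f"Estimated cases deflected:     {estimated_deflected:.0f}")  # 150
```

    As Jaime notes, this only estimates the aggregate effect; it cannot attribute the deflection to any particular channel or piece of content.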

  • Peroline Moran

    Hi All,

    I am reviving this email thread - many interesting comments and directions to take.

    I've implemented a homegrown AI solution on our Customer Support Portal. Previously, customers logged in to the Support Portal only to create Support cases, since our KB is publicly accessible and doesn't require being logged in.

    Inside the Support Portal, we've introduced a Google-style search bar that invites customers to self-serve before they click on the "tickets" section, where they can either access past/existing cases or create a new one. If this first case deflection opportunity is missed (i.e., the customer hasn't used the search bar or clicked on one of the suggestions), then once inside the ticket creation form, a second case deflection attempt is made by suggesting more options to the customer based on the title of the case inquiry (before the case reaches an agent).

    I am reading about all of the different techniques for measuring case deflection. I did plan to implement the one recommended by Zendesk, as @Greg Higgins suggested, by adding up the inquiries through our AI (both on the home page and at case creation) and dividing by the total # of cases created.

    Given my specific use case of customers logging in to the Portal only to create a case, I assume I am capturing intent even by applying Zendesk's calculation. I understand that I am missing users/customers directly consulting our public KB; however, our AI solution indexes many data sources on top of the public KB. Moreover, my assumption is that I will see a change in the case creation trend versus usage of the AI tool.

    I am interested in your thoughts on the subject. Thank you, and happy 2021.

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    Hi @Peroline Moran ,

    Thanks for your insight into how you are using your homegrown solution. I believe it is fair to assume customer intent in your situation, as your customers have historically come to your portal only to log tickets. However, if you ever decide to broaden the scope of your self-service portal to include product documentation, training, communities, etc., it will become more challenging to detect intent, as customers will come to your portal for reasons other than submitting a case. Just something to look out for, depending on the strategic plan for your support portal.

  • @PatrickMartin I am solving a similar problem and landed exactly on the same note as you did, wondering how you solved the intent problem - are there any white papers on this topic?

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    @Somendra we haven't cracked that nut yet on isolating intent. However, TSIA has brought up the notions of implicit and explicit case deflection, which I believe help with this. @DavidBaca has done a lot of research on this.

  • @PatrickMartin How do you assign deflection credit when there are multiple properties involved?

  • PatrickMartin
    PatrickMartin Founding Analyst | Expert ✭✭✭

    @Somendra by properties, what do you mean exactly? Are you referring to the case submission form, community, self-service sites (docs, training, etc.)? If so, you would need to look at each separately, as they are all different, and calculate total deflection in the end. The case submission form is the easiest one (referred to as explicit deflection in the TSIA benchmark), as you know the intent and can easily measure deflected cases.

    Communities are a little trickier, as you need to isolate threads answered by customers and/or partners versus those answered by your company's resources. Of those answered by customers/partners, how many did not ultimately require a case? Those are your deflected cases.

    As for docs, training, or KBs: according to TSIA's implicit deflection definition, you measure deflection on documents (usually KBs) that were rated as useful and for which no case resulted from the session within a certain time period. This one is a little more complex to measure, as you need the right tracking to compare self-service sessions to submitted cases.
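    A minimal sketch of that implicit-deflection check, assuming you can join self-service sessions to cases by customer and timestamp (all field names, the window length, and the data are hypothetical):

```python
from datetime import datetime, timedelta

DEFLECTION_WINDOW = timedelta(days=3)  # hypothetical grace period

# Hypothetical data: KB sessions where the doc was rated useful,
# and cases submitted by the same customers.
useful_sessions = [
    {"customer": "a", "ts": datetime(2021, 1, 4, 10)},
    {"customer": "b", "ts": datetime(2021, 1, 4, 11)},
]
cases = [
    {"customer": "b", "ts": datetime(2021, 1, 5, 9)},  # within window -> not deflected
]

def is_deflected(session, cases, window=DEFLECTION_WINDOW):
    """A useful-rated session counts as an implicit deflection if no case
    from the same customer follows it within the window."""
    return not any(
        c["customer"] == session["customer"]
        and session["ts"] <= c["ts"] <= session["ts"] + window
        for c in cases
    )

deflected = [s for s in useful_sessions if is_deflected(s, cases)]
rate = len(deflected) / len(useful_sessions)
print(f"Implicit deflection rate: {rate:.0%}")  # only customer 'a' -> 50%
```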

    I hope this is what you were referring to. If not, let me know and I'll see how I can assist further.

  • @PatrickMartin - This is helpful

  • David Baca
    David Baca Member | Guru ✭✭✭✭✭
    edited July 2022

    Hello @Somendra - Going back to your original question on "intent", there are many (many) assumptions that companies make when determining self-service user intent. And while each intent method rests on educated assumptions, the resulting self-service metrics derived from those assumptions make it difficult to measure true self-service performance.

    You can learn more about TSIA's self-service research that @PatrickMartin referenced via a public webinar that Sara Johnson and I delivered last summer titled, "Create an Exceptional Self-Service Customer Experience with Knowledge Management". The On Demand webinar link is located at: https://www.tsia.com/webinars/create-an-exceptional-self-service-customer-experience-with-knowledge-management.

    And while the TSIA Self-Service Best Practices research report is only accessible to member companies with an active Support Services Research subscription, @Diane Brundage and @VeleGalovski are scheduled to meet with your Product Support Group Leader (Diana B.) later this month on a research briefing. It might be possible for you to join their research briefing call to learn more about the self-service research that we have conducted and that is available for our Support Service member companies.

    I hope you find the webinar and my community response helpful.