Metrics are important, there’s no doubt about it. With new service management tools, we can pull out, analyze, and report on nearly every aspect of the work we do. We may be tempted to keep adding more and more metrics, thinking that the more information we have, the better off we’ll be. With the amount of data we can now gather, that might be the equivalent of emailing a spreadsheet of all the businesses in town when someone asks where the nearest grocery store is.

So here’s the point: the questions your organization asks determine the metrics you should report. If we had listened to the question in my example, we would have provided a spreadsheet of only the grocery stores in town, not all the businesses. So, what does your organization want to know?
As HDI member Steve Hultquist put it in a recent SupportWorld interview, “Things that are easily measured very rarely get us to what we really want.”
The shift from quantitative to qualitative
By and large, support centers have focused on quantitative metrics: How many phone calls, emails, chat sessions, and so on were initiated? How many tickets were resolved? How long did it take? These questions are answered by the familiar metrics we often talk about: volume, average handle time, analyst utilization, and so on. While necessary for the monitoring and operation of the support center, these metrics don’t help very much—if at all—in demonstrating value. They are focused on activities.
Fixes aren’t fast if they aren’t effective
What is called for is more of a focus on qualitative metrics. These metrics are focused more on outcomes. If, for example, we analyze our tickets looking for what I call same customer/same issue, we may find that analysts, knowing that they are expected to close a high number of tickets in a day, week, or month, have been opening a new ticket for a customer instead of reopening one that already exists. Reopen rate is also key in tracking down contacts that were marked as first call/first contact resolution, only to be reopened later when the issue recurred. (In other words, the outcome was not what the ticket record would indicate, at least at first glance.)
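The analysis above lends itself to a few simple queries over ticket data. Here is a minimal sketch in Python, assuming hypothetical ticket records with fields like `customer`, `issue`, `reopened`, and `marked_fcr` (your service management tool will have its own schema and names):

```python
from collections import Counter

def reopen_rate(tickets):
    """Share of closed tickets that were later reopened."""
    if not tickets:
        return 0.0
    reopened = sum(1 for t in tickets if t["reopened"])
    return reopened / len(tickets)

def same_customer_same_issue(tickets):
    """Flag (customer, issue) pairs appearing on more than one ticket --
    a possible sign that new tickets were opened instead of reopening
    existing ones to inflate closure counts."""
    counts = Counter((t["customer"], t["issue"]) for t in tickets)
    return {pair: n for pair, n in counts.items() if n > 1}

def suspect_fcr(tickets):
    """Tickets marked first contact resolution but reopened later,
    i.e., outcomes that don't match what the record indicates."""
    return [t["id"] for t in tickets if t["marked_fcr"] and t["reopened"]]
```

For example, running `same_customer_same_issue` over a month of tickets and reviewing any pair with a count above one is a quick way to start the conversation about whether closure targets are distorting behavior.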
Audacious goal: 0 incidents, plenty of service requests
Another way to look at quality from the customer’s perspective is to watch the number of incidents versus the number of service requests (excluding password resets, which—at least in my opinion—do not add any value). A service request is, generally speaking, a request to partake of services offered. Those services add value to the flow of business in your organization. They provide access to applications, storage space, printing and databases, as well as fundamentals such as email. Incidents, on the other hand, indicate that something has broken or is defective. The entire IT department should be focused on reducing or eliminating incidents and any associated problems. The effect of incident and problem reduction is felt directly by the customer: fewer work interruptions, fewer malfunctions, fewer outages, fewer lockouts, etc. As the number of incidents decreases, you should see a decrease in impacted user minutes (IUM), which is a great metric that keeps support’s focus on the customer. It can even enable getting to the grail: cost of downtime, which shows real business impact, not just support center cost.
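Impacted user minutes and cost of downtime are straightforward to compute once incidents carry an affected-user count and a duration. A minimal sketch, assuming hypothetical incident records with `users_affected` and `duration_minutes` fields and an assumed per-user-minute cost figure (both names are illustrative, not from any particular tool):

```python
def impacted_user_minutes(incidents):
    """IUM: sum of (users affected x outage minutes) across incidents.
    Keeps the focus on customer impact rather than ticket counts."""
    return sum(i["users_affected"] * i["duration_minutes"] for i in incidents)

def cost_of_downtime(incidents, cost_per_user_minute):
    """Rough business impact: IUM times an assumed per-user-minute cost.
    The cost figure would come from the business, not from IT."""
    return impacted_user_minutes(incidents) * cost_per_user_minute
```

Tracked over time, a falling IUM number makes the effect of incident and problem reduction visible in the customer’s terms, and the cost figure translates it into the business impact the article describes.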
Accurate customer satisfaction measurement
Too often, IT departments find that despite high customer satisfaction survey results, the scuttlebutt around the organization is that IT is not good. How can that be?
Mostly, it stems from asking the wrong questions or asking the right questions the wrong way. In most cases, surveys are so focused on individual performance that the customer feels left out of the equation. Sure, Sally the analyst was knowledgeable, but why (for example) did my email inbox stop showing new mail in the first place, and why did it take IT one and a half days to fix it while I was stuck on wimpy webmail? Your customers probably would very much like to rate the reliability of the IT services you provide, as opposed to the individual performance of one analyst, and often will give inflated ratings so “the analyst doesn’t get in trouble.” Again, it is the outcomes that matter, and outcomes should always be determined—and measured—from the customer’s perspective.