
3 Reasons Your Service Desk Metrics Are Measuring the Wrong Things

Posted on February 06, 2019


You probably already know the score—there’s a lot one can measure when it comes to assessing IT service desk performance. In fact, your IT service desk probably has a good-sized basket of metrics and key performance indicators (KPIs) related to the efficiency and effectiveness of incident management and service request fulfillment, plus many others that relate to how well your service desk is being run. You might even feel like it’s akin to “death by metrics.”

However, you might also have a different feeling about your ITSM metrics and question whether you’re actually measuring, and reporting on, the right things. Or, even if you are reporting on the right things, whether they’re unfortunately hidden away among a variety of other metrics that aren’t as important.

So, if you’re questioning the validity of your current IT service desk metrics, then this blog is for you—we’ll examine the service-desk-metric status quo, along with the top three mistakes IT teams make when establishing KPIs, and finally share a simple approach for moving your metrics portfolio forward.

What ITSM Metrics Are Other IT Service Desks Measuring?

Of course, the correct answer here is: “It depends.”

For instance, for any given IT organization, it might depend on factors such as business strategies, objectives, and demands. The source of any IT service management (ITSM) best practice—such as ITIL—that’s employed also matters. The reporting and analytics capabilities of any ITSM tool that’s leveraged, as well as the organization’s size and the maturity of its ITSM capabilities and operations, are important factors too. And perhaps even the industry verticals and geographies in which it operates.

But, with something like an IT service desk, it’s safe to say that there will be a lot of similarities in at least some of the employed metrics. And hence we can look to ITSM industry research to understand what the most popular metrics are (although these aren’t necessarily the best).

The Most Commonly Adopted Metrics for IT Service Management

A good source of industry data for the IT service desk is the annual HDI Practices and Salary report, but please note that it’s likely to be biased toward North America. The latest report states that the top five metrics tracked/measured by support centers are:

  1. Average time to resolve tickets

  2. Abandonment rate (phone)

  3. Customer satisfaction with ticket resolution

  4. Customer satisfaction with support overall

  5. Average speed to answer (phone)

Other commonly adopted metrics include the average number of tickets resolved per staff member, average talk time (phone), and average handle time (phone).

So, these are—bar the customer satisfaction measures—very much related to “How quickly?” and “How many?”

Looking elsewhere, European-biased Service Desk Institute (SDI) survey data—from a 2018 report called “Measuring and Making the Most of Metrics”—shows that, in addition to geographical variance, there’s a steep drop-off in usage levels even among the most commonly used metrics. That’s a sign that the metrics employed differ significantly from organization to organization.

  1. Number of incidents (96% usage level)

  2. Number of service requests (89%) — please note that the above HDI analysis takes a combined ticket approach

  3. Customer satisfaction (74%)

  4. First contact resolution (FCR) (66%)

  5. Average resolution time for incidents (65%)

And again—bar the customer satisfaction measure—these are very much related to “How quickly?” and “How many?”, i.e. the operation of the service desk.

So, these are the metrics that are most commonly used—and indeed, they are very valuable. But why do so many ITSM teams struggle to identify and implement the kinds of improvements that matter most to the business? The answer to this question lies in the three most common mistakes committed by service desk teams when they establish their broader portfolio of metrics.

RELATED: IT Service Desk Software

Mistake #1: Your IT Service Management Metrics Are Too “Service Desk Centric”

One of the things that you probably already appreciate about your current portfolio of IT service desk metrics is that they’re inwardly focused—on the aforementioned “How quickly?” and “How many?” measures that pertain to efficiency and an inward-looking view of effectiveness.

While these might be great from an IT-organization perspective, they probably mean very little to the rest of the organization. For instance, a ticket-volume metric—say, 10,000 incidents handled last month—that’s proudly shared might be received (by business colleagues) as “What? IT has failed us 10,000 times this month?” Or an average speed to answer of 30 seconds might mean little, experience-wise, in the context of other factors such as service desk agent attitude or the total time to resolution.

Obviously, balance is key here—appreciating that metrics are connected and that doing well in one might have an adverse effect on others. But there’s still more that can be done by employing metrics that are meaningful to business colleagues. For example, a manufacturing business will likely be interested in how IT availability and the quality of IT support positively, or negatively, affect production—think about the number of widgets produced each month and any drops in volume caused by IT issues and support ineffectiveness.

This is the first area to consider for new KPIs and metrics—business-level results that can be tied back into IT service desk performance. It might be a somewhat negative snapshot of the pain IT has caused business operations in the last month, but even this is a platform from which to drive both IT and business improvement.
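To make the manufacturing example concrete, here’s a minimal sketch in Python of how IT-attributed downtime could be rolled up into a production-loss metric. Every figure, rate, and stoppage below is invented for illustration; in practice the inputs would come from your own incident and production records.

```python
# A minimal sketch of a business-level metric: production lost to IT issues.
# Every figure below is an invented placeholder.

WIDGETS_PER_HOUR = 500                               # assumed normal production rate
MONTHLY_OUTPUT_TARGET = WIDGETS_PER_HOUR * 16 * 22   # assumed: two shifts, 22 working days

# IT-attributed production stoppages last month: (description, downtime in hours)
it_stoppages = [
    ("ERP outage on production line 1", 3.5),
    ("Slow resolution of barcode-scanner incident", 1.25),
]

lost_widgets = sum(hours * WIDGETS_PER_HOUR for _, hours in it_stoppages)

print(f"Widgets lost to IT issues: {lost_widgets:,.0f}")
print(f"Share of monthly target:   {lost_widgets / MONTHLY_OUTPUT_TARGET:.2%}")
```

Even a crude calculation like this shifts the conversation from ticket counts to business outcomes.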

If this is too difficult as a first step into value-based metrics, then look to employ experience level agreement targets over more traditional service level agreement targets—with a focus on what’s important to key business stakeholders, from senior personnel to frontline employees. This is where the agreements and associated targets focus on the quality of experience as well as the quality of service. There’s more on this focus on experience in a minute.

Mistake #2: Your Value Metrics Don’t Account for Cost

Value is a very subjective topic. Not only do different people have different views as to what’s valuable—for instance, the VP of Sales might want something very different from the VP of Finance—there’s also an important business constraint: cost.

When delivering IT support, and trying to deliver the best-quality service and the highest-possible employee experience, there’ll no doubt be operational budgets that limit what can be done. It’s a little like life—where, while many of us would like a Porsche, say, most of us would struggle to justify spending the money on one (at least at certain points in our lives).

But what things cost also plays an important part in defining value—in that something might be considered “of value” until the costs involved are communicated. Then whatever was once deemed valuable might be reassessed as expendable due to disproportionate costs.

The key point here is that too few IT service desks understand what it costs to handle an incident or a service request ticket (according to the aforementioned SDI report, these cost metrics are employed by only 17 percent and 16 percent of organizations respectively). And even where costs are measured, we can’t assume they truly reflect what IT support costs: what is and isn’t included in the total cost of operations (which is then spread across incident and ticket volumes) is often inconsistent. This inconsistency is especially problematic when benchmarking IT support costs against industry figures.

For example, is the fully inclusive annual ITSM-tool cost included? Are other tools, especially those that reduce operational costs such as password-reset automation? Facilities costs—from the floor space, through utilities, to the cost of facilities-based support? All the relevant people costs—from service desk agents to the proportion of the Head of Service Delivery’s time spent on support-related matters? This is why it’s so hard to compare your costs to industry benchmarks unless you follow a very specific costing formula (and, even then, it will depend on the incident-complexity mix), and it’s also a driver for internal trend-based analysis where it’s easier to “compare apples with apples.”
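To illustrate how much those inclusion choices matter, here’s a hedged Python sketch of a fully loaded cost-per-ticket calculation. All the cost lines, amounts, and the ticket volume are invented placeholders.

```python
# A hedged sketch of a fully loaded cost-per-ticket calculation. The point is
# how much the answer moves depending on which costs you decide to include.

monthly_costs = {
    "service desk agent salaries":         45_000,
    "ITSM tool subscription":               4_000,
    "password-reset automation tooling":    1_000,
    "facilities (floor space, utilities)":  6_000,
    "management time spent on support":     5_000,
}
monthly_tickets = 10_000

salaries_only = monthly_costs["service desk agent salaries"] / monthly_tickets
fully_loaded = sum(monthly_costs.values()) / monthly_tickets

print(f"Salaries-only cost per ticket: ${salaries_only:.2f}")   # $4.50
print(f"Fully loaded cost per ticket:  ${fully_loaded:.2f}")    # $6.10
```

The gap between the two figures is exactly why benchmark comparisons mislead unless everyone involved costs the same things.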

Ultimately, once you’ve established the unit costs for IT support, plus IT service delivery, it becomes much easier to have value-based conversations. For instance, you can show how reducing the average speed to answer (the phone) by 50 percent might increase the cost per ticket by a disproportionate amount. Or you can factor in the employee and business cost of people waiting in queues, such that the increased IT support cost can be shown to be less than the productivity impact.
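As a back-of-envelope example of that trade-off, here’s a small Python sketch. All the numbers are assumptions; with your own figures the conclusion could flip in either direction.

```python
# A back-of-envelope sketch: does halving the average speed to answer pay for
# itself in recovered employee productivity? All numbers are assumptions.

calls_per_month = 5_000
current_wait_seconds = 60      # today's average speed to answer
improved_wait_seconds = 30     # target after adding agents
extra_agent_cost = 8_000       # assumed monthly cost of the extra agents
employee_cost_per_hour = 40    # assumed loaded cost of a waiting employee

queue_hours_saved = calls_per_month * (current_wait_seconds - improved_wait_seconds) / 3600
productivity_recovered = queue_hours_saved * employee_cost_per_hour

print(f"Queue hours removed per month: {queue_hours_saved:.0f}")
print(f"Productivity recovered:        ${productivity_recovered:,.0f}")
print(f"Net monthly impact:            ${productivity_recovered - extra_agent_cost:,.0f}")
```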

There’s much more that could be said about cost’s importance to understanding business value, but hopefully I’ve done enough to get you thinking here.

RELATED: Understanding the Differences Between a Help Desk & Service Desk

Mistake #3: Your Service Management KPIs Focus on Customer Satisfaction Over Customer Experience

Customer satisfaction (CSAT) is an interesting IT service desk metric. Much has been written about the issues that service desks face when measuring it—from the lack of survey completion (often at less than 10 percent) to the disparity between the target-meeting CSAT scores and how employees and customers really feel about their experiences with IT support.

The key word in that last sentence is: “experiences.” And this is an important point to understand when assessing the value of your current CSAT mechanism and feedback. Think about your CSAT status quo—does your survey really allow you to get an understanding of how well the employee or customer felt they and their issue/need were handled? Or does it merely garner more data on the mechanics of IT support operations?

Let’s use a restaurant as an example to demonstrate this. Its CSAT survey might ask:

  • Were you quickly seated at your table?

  • Did your server introduce themselves?

  • Did your food arrive in a timely manner?

  • Was the food value for money?

All of which relate to the operation of the restaurant—and what it thinks is important—rather than assessing whether it was a good experience and if the customer is likely to return (because the answers to all the above questions could be “yes” but the customer is still unhappy).

It’s also worth considering the percentage of the overall employee or customer base that the CSAT feedback covers. For example, if only 10 percent of employees or customers provide their feedback—and this might only be those who had a really great or really bad experience—then it isn’t a true reflection of service desk performance. Then what about those employees or customers who choose to find their solutions elsewhere, possibly because of a previous poor IT support experience? These people aren’t proactively adding their opinions of the IT service desk into the mix either.
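To see how loosely a low response rate bounds the truth, consider this small Python sketch (all figures are invented):

```python
# A small sketch of why low survey coverage limits what CSAT can tell you:
# with a 10 percent response rate, even a strong score among respondents
# bounds true population satisfaction only very loosely.

population = 2_000             # assumed employees served by the service desk
respondents = 200              # 10 percent response rate
satisfied = 180                # 90 percent CSAT among respondents

reported_csat = satisfied / respondents
# Bounds if every silent employee were happy / unhappy:
best_case = (satisfied + (population - respondents)) / population
worst_case = satisfied / population

print(f"Reported CSAT:            {reported_csat:.0%}")        # 90%
print(f"True satisfaction bounds: {worst_case:.0%} to {best_case:.0%}")  # 9% to 99%
```

A headline 90 percent CSAT is consistent with true satisfaction anywhere between those bounds, which is why coverage matters as much as the score itself.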

So, the current CSAT methodology is probably flawed for many IT organizations. Of course, everything called out above can be addressed through countermeasures: from making surveys easier to complete (maybe starting with sad-face/happy-face click buttons), through asking smarter questions, to proactively understanding why the service desk might currently be avoided unless a call is definitely necessary.

However, there’s another approach that’s gaining interest—one that focuses on employee experience over the traditional metrics, based on the assumption that if employees are happy with IT support, then everything else must be good too.

Finnish employee experience management experts HappySignals offer a good example of this—providing a service that allows both internal IT service desks and outsourced service providers to better understand how they’re delivering against customer needs. While it’s still a relatively new company, its customers to date have borne out the above assumption, and the aggregate employee-experience statistics shared on its website provide interesting insights into what employees think of IT support. For instance, they show that IT self-service (as it stands today) provides the worst employee experience and also causes the highest level of employee non-productivity (while a fix is sought), with the latter definitely going against the ethos of self-service.

Again, as with the above metrics related to value and cost, it’s important to employ what helps your IT organization and business the most, with the key point here being to add a suitable metric or two to your IT service desk metrics portfolio, even if just on a trial basis.

Don’t Just Add These New Metrics Into the Mix

If you’re taking the time to assess what really matters to business stakeholders, then also look at the status quo, assessing how valuable each current metric is. There will, of course, be some metrics that you deem vital to IT service desk operations but that stakeholders will never be interested in. But there will no doubt be others that are measured, and communicated, simply because they were once considered best practice (or maybe because your ITSM tool can easily produce the relevant statistics). So why are they still in play?

The trick here is to focus on what’s really important to business operations and then IT operations. Which metrics tell a meaningful story and offer a platform for improvement, or flag up serious concerns about current or future operations?

If you’ve already spent the time on understanding what business colleagues find valuable, then you’ll be in a great position to assess whether certain elements of your metric portfolio offer little insight and value (and can be killed off).
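One lightweight way to run that assessment is to score each current metric on its value to business stakeholders and to IT operations, and flag the low scorers for retirement. The Python sketch below shows the idea; the metric names, scores, and cut-off are placeholders, not recommendations.

```python
# A hedged sketch of a metric-portfolio review: score each metric for its
# value to business stakeholders and to IT operations, then flag low scorers
# as retirement candidates. All entries are invented placeholders.

metrics = {
    # name: (value to business stakeholders 0-5, value to IT operations 0-5)
    "average speed to answer":       (1, 4),
    "number of incidents":           (1, 3),
    "average talk time":             (0, 2),
    "production lost to IT issues":  (5, 3),
    "cost per ticket":               (4, 4),
}

# Review metrics from lowest to highest combined score.
for name, (business, ops) in sorted(metrics.items(), key=lambda kv: sum(kv[1])):
    verdict = "candidate to retire" if business + ops <= 3 else "keep"
    print(f"{name:<30} business={business} ops={ops} -> {verdict}")
```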

There’s still so much more to be said about improving IT service desk metrics (in the context of modern business needs) that hasn’t been touched on in the previous 2,000 words. For more information, download the Gartner report on how to design and deploy ITSM metrics that support business services.

Download Now