Service desk metrics
Tracking the right service desk metrics is straightforward. Interpreting what they mean — and knowing what to do when they move — is where most teams get stuck. This guide covers the six metrics worth tracking, what good looks like for each, and how to use them to drive decisions rather than just produce reports.
Understand what each metric is actually measuring beneath the number
Know what good looks like — industry benchmarks for FCR, SLA, and CSAT
Diagnose whether a metric movement points to process, people, or tooling
Use the health check to find the operating weakness behind the KPI gap
SLA attainment measures the percentage of tickets resolved within the agreed response and resolution targets. Most service desks target 90% or above. Consistent performance below 85% usually points to one of three causes: triage is inconsistent so priority is being set incorrectly, resourcing is insufficient relative to incoming demand, or the SLA targets themselves were set without a clear view of what the desk can realistically achieve under normal operating conditions.
The mistake most teams make is treating SLA as a single number. Breaking it down by priority, by category, and by time of week is far more useful. A desk that is hitting 92% overall but consistently breaching P2 SLAs between 4–6pm on Fridays has a resourcing and shift pattern problem, not a general performance problem. That specificity is what turns SLA data from a compliance measure into a management tool.
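That breakdown is simple to compute once tickets carry priority, open time, and an SLA outcome flag. A minimal sketch, using hypothetical ticket records and field names:

```python
from collections import defaultdict

# Hypothetical ticket records: (priority, weekday, hour_opened, met_sla)
tickets = [
    ("P2", "Fri", 16, False),
    ("P2", "Fri", 17, False),
    ("P2", "Mon", 10, True),
    ("P1", "Fri", 16, True),
    ("P3", "Wed", 14, True),
    ("P2", "Fri", 17, True),
]

def sla_by_segment(tickets, key):
    """Percentage of tickets meeting SLA, grouped by a segment key."""
    met = defaultdict(int)
    total = defaultdict(int)
    for t in tickets:
        k = key(t)
        total[k] += 1
        met[k] += t[3]  # met_sla is a bool; True counts as 1
    return {k: round(100 * met[k] / total[k], 1) for k in total}

# Overall number vs. the same data cut by priority and late-Friday window
overall = sla_by_segment(tickets, key=lambda t: "all")
by_priority_window = sla_by_segment(
    tickets, key=lambda t: (t[0], t[1], "16-18" if 16 <= t[2] < 18 else "other")
)
```

With this toy data the overall figure looks acceptable while the (P2, Fri, 16-18) segment is clearly breaching, which is exactly the pattern a single headline number hides.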
First contact resolution (FCR) measures the percentage of tickets resolved at first interaction without escalation or reopening. High-performing service desks typically achieve 70–85%. Rates below 60% usually indicate weak knowledge management, inconsistent triage, or requests being handled through the wrong channel — often the incident queue when they should be processed through request fulfilment workflows.
FCR is one of the most sensitive indicators of knowledge base quality. A desk with strong, maintained knowledge articles will consistently resolve more contacts at first touch because agents can follow a documented resolution path rather than improvising or escalating. When FCR drops without a corresponding change in ticket complexity, the first place to investigate is whether knowledge articles have become outdated or whether new issue types have not yet been documented.
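The FCR calculation itself is straightforward once each ticket records whether it was escalated or reopened. A minimal sketch, with hypothetical ticket fields:

```python
# Hypothetical records: each ticket flags whether it was escalated or reopened.
tickets = [
    {"id": 1, "escalated": False, "reopened": False},
    {"id": 2, "escalated": True,  "reopened": False},
    {"id": 3, "escalated": False, "reopened": True},
    {"id": 4, "escalated": False, "reopened": False},
    {"id": 5, "escalated": False, "reopened": False},
]

def fcr_rate(tickets):
    """First contact resolution: resolved with no escalation and no reopen."""
    first_touch = [t for t in tickets if not t["escalated"] and not t["reopened"]]
    return round(100 * len(first_touch) / len(tickets), 1)

rate = fcr_rate(tickets)  # 3 of 5 tickets resolved at first touch
```

Counting reopens as FCR failures matters: a desk that closes tickets quickly but sees them reopened is not genuinely resolving at first contact.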
CSAT measures user satisfaction with the service experience, typically via a post-resolution survey. Most service desks aim for scores above 85%. The important nuance is that CSAT measures the experience of the interaction, not just the technical outcome. A ticket that took three days to resolve but was communicated clearly throughout will often score better than one resolved in two hours with no updates provided.
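One common convention (an assumption here, as survey scales vary) is to score CSAT as the share of responses at or above a "satisfied" threshold on a 1-5 scale:

```python
# Hypothetical post-resolution survey responses on a 1-5 scale.
responses = [5, 4, 2, 5, 3, 4, 5, 1, 4, 5]

def csat_percent(responses, satisfied_threshold=4):
    """CSAT as the share of responses at or above the 'satisfied' threshold."""
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return round(100 * satisfied / len(responses), 1)

score = csat_percent(responses)  # 7 of 10 responses are a 4 or 5
```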
When CSAT is falling while SLA performance is stable, the issue is usually communication rather than resolution speed. Users are unhappy because they do not know what is happening with their issue, not because it is taking too long. That distinction matters because the fix is different — it is a communication discipline and expectation management problem, not a resourcing or technical capability problem. Improving update frequency and closure communication often lifts CSAT faster than reducing resolution time.
Backlog measures the number of open tickets not yet resolved. Unlike the other metrics, backlog is a leading indicator — it shows where the desk will be under pressure before the SLA data confirms it. A growing backlog at stable demand volume means resolution capacity is insufficient. A growing backlog at growing demand volume may mean the team is absorbing more than the operating model was designed to handle.
The most useful way to track backlog is as a trend across a rolling four-week period, segmented by priority. A P3 and P4 backlog that grows slowly over weeks often reflects deliberate prioritisation — the team is focused on higher-priority work and lower-priority tickets are ageing. Left unmanaged, this creates a visible SLA problem in the period data when those tickets breach. A weekly backlog review with a clear threshold for escalating aged low-priority tickets is one of the simplest operating disciplines a desk can introduce.
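A rolling trend of this kind can be computed from weekly open-ticket snapshots. A sketch, assuming hypothetical snapshot data and a flagging threshold:

```python
from collections import defaultdict

# Hypothetical weekly open-ticket snapshots: (week, priority, open_count)
snapshots = [
    (1, "P3", 40), (2, "P3", 44), (3, "P3", 49), (4, "P3", 55),
    (1, "P1", 5),  (2, "P1", 4),  (3, "P1", 5),  (4, "P1", 4),
]

def backlog_trend(snapshots):
    """Net backlog change per priority across the rolling window."""
    by_priority = defaultdict(list)
    for week, priority, count in sorted(snapshots):  # chronological order
        by_priority[priority].append(count)
    return {p: counts[-1] - counts[0] for p, counts in by_priority.items()}

# Flag any priority whose backlog grew by more than a chosen threshold
flagged = {p for p, delta in backlog_trend(snapshots).items() if delta > 10}
```

Here the P3 queue has grown steadily while P1 is flat — the slow-burn pattern described above, visible weeks before the SLA data shows it.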
Repeat demand measures the volume of tickets that are recurrences of previously seen issues — the same user raising the same problem again, or the same root cause affecting multiple users in different tickets. High repeat demand is the clearest operational signal that problem management is weak. The desk is resolving symptoms rather than causes, which means the same effort is being spent on the same issues week after week.
Reducing repeat demand typically requires three things: consistent identification of recurring incidents at the point of logging, a working problem register that tracks known errors and their workarounds, and knowledge articles that allow agents to resolve known issues without re-investigating. A desk that reduces repeat demand by 15% typically sees a corresponding reduction in total ticket volume, freeing up capacity for higher-value work without any increase in headcount.
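One way to measure repeat demand — a sketch, assuming tickets can be normalised to an issue signature such as category plus symptom — is to count every occurrence of a signature beyond its first:

```python
from collections import Counter

# Hypothetical tickets reduced to a normalised issue signature.
signatures = [
    "vpn/timeout", "vpn/timeout", "email/sync", "vpn/timeout",
    "printer/jam", "email/sync", "laptop/boot",
]

def repeat_demand_rate(signatures):
    """Share of tickets that repeat an issue already seen in the window."""
    counts = Counter(signatures)
    repeats = sum(n - 1 for n in counts.values() if n > 1)
    return round(100 * repeats / len(signatures), 1)

rate = repeat_demand_rate(signatures)
```

The same `Counter` also gives the candidate list for the problem register: the signatures with the highest counts are the root causes worth investigating first.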
Mean time to resolve (MTTR) measures the average time from ticket creation to closure across all tickets. It is distinct from SLA attainment — a desk can meet 90% SLA compliance while still having a high MTTR if a minority of complex tickets take a long time to close. MTTR is most useful when analysed by category or configuration item, because it reveals where the desk is systematically slow rather than just showing an average across everything.
A high MTTR in a specific category often points to a knowledge gap, a supplier dependency, or an escalation path that is adding delay without adding value. When MTTR for a particular application or service is consistently double the average, that is a signal worth investigating — either the knowledge base for that service needs improving, or the escalation to the resolver group is consistently delayed. Both are fixable without additional headcount.
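Flagging those outlier categories is a small calculation once resolution times are grouped. A minimal sketch with hypothetical data, using double the overall average as the threshold described above:

```python
from statistics import mean
from collections import defaultdict

# Hypothetical resolution times in hours, per ticket: (category, hours)
resolved = [
    ("email", 4), ("email", 6), ("crm", 30), ("crm", 26),
    ("laptop", 8), ("email", 5), ("laptop", 10),
]

def mttr_by_category(resolved):
    """Mean time to resolve, grouped by ticket category."""
    by_cat = defaultdict(list)
    for cat, hours in resolved:
        by_cat[cat].append(hours)
    return {cat: mean(hours) for cat, hours in by_cat.items()}

overall = mean(h for _, h in resolved)  # average across all tickets
slow = {cat for cat, m in mttr_by_category(resolved).items() if m >= 2 * overall}
```

In this toy data the CRM category resolves at roughly double the overall average, which is the signal worth investigating before it shows up anywhere in SLA reporting.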
Service desk metrics show what is happening at the output level but not why. A drop in FCR or a rise in backlog can have multiple causes — weak knowledge, poor triage, demand growth, or resourcing gaps. Without a structured operating assessment alongside the KPI data, the same metric movement can be misdiagnosed and lead to the wrong fix. A team that hires more staff to address a rising backlog caused by poor problem management will find the backlog grows again once the new hires are absorbed.
The missing layer is a view of the operating model underneath the numbers — how consistently processes are followed, how well knowledge is maintained, whether the right controls are in place. A maturity assessment gives that view in a structured, benchmarked form. Running it alongside the KPI data shows not just what is moving but what is structurally weak underneath — which is what makes the improvement conversation with leadership credible and the fix more likely to stick.
What are the most important service desk metrics?
The most important service desk metrics are SLA attainment, first contact resolution (FCR), customer satisfaction (CSAT), backlog trend, repeat demand rate, and mean time to resolve (MTTR). These six cover quality, speed, and stability. Volume alone is not a useful primary metric without resolution quality data alongside it.
What is a good first contact resolution rate?
A good first contact resolution rate is typically between 70% and 80%. High-performing desks with strong knowledge management often achieve 80–85%. Rates below 60% usually indicate weak knowledge, inconsistent triage, or requests being handled through the wrong channel.
What is a good SLA attainment rate?
Most service desks target 90% or above. Consistent performance below 85% usually points to triage problems, resourcing gaps, or SLA targets that were not designed to reflect the actual operating model.
How do you reduce repeat demand on a service desk?
Repeat demand is reduced by improving problem management (finding root causes rather than closing recurring symptoms), strengthening the knowledge base, and using the service catalogue to route repeatable requests through consistent workflows rather than incident queues.
Why do service desk metrics not tell the full story?
Metrics show what is happening at the output level but not why. A drop in FCR or a rise in backlog can have multiple causes. Without a structured operating assessment alongside the KPI data, the same metric movement can be misdiagnosed and lead to the wrong fix.
Next step
Run the free 10-minute health check to get a benchmarked maturity score across 7 ITIL-aligned areas. Understand the operating gaps behind your KPI data, and leave with the top 3 fixes ranked by impact.