ITSM Dashboard Metrics That Matter: The Second Reading Most JSM Teams Miss
A pragmatic guide to the ITSM dashboard metrics worth tracking — and the second reading behind each one that flips the diagnosis for JSM service desks.

Most service desk dashboards are built to be reassuring. I've come to think that's the problem.
I spent an hour last month looking at a JSM dashboard with the manager who owns it. Internal IT, about 40 agents, three projects, the usual mix of incidents, requests and a small change queue. Her executive dashboard was almost entirely green. SLA compliance: 94%. Backlog rate: 9% — low, healthy, the kind of number you put on a slide. MTTR: trending down for two quarters straight.
And her customers were unhappy. Loudly. The CIO had started forwarding complaint emails, and nobody on her team could square that with the board, because the board said everything was fine.
It took about twenty minutes to find it. The backlog rate was low — but the closure rate had quietly dropped from around 96% of inbound to roughly 71% over the same two quarters. The queue wasn't healthy. It was frozen. New work came in, old work sat, and the ratio happened to stay flat because both numbers moved together. Nine percent of "not many tickets" looks identical to nine percent of "we've stopped finishing things." The dashboard couldn't tell the difference. Neither could she, because the dashboard was the only thing she was reading.
Every metric on that board had a second reading. The first reading is what the chart shows you — the obvious one, the one the tile is built to deliver. The second reading is what the metric implies: the non-obvious consequence, the thing that flips the diagnosis. This piece walks the metrics worth putting on a JSM service desk dashboard, the second reading behind each one, and the rule for reading the whole board so the green cells stop lying to you.
Why a board full of green can still describe a service that's failing
Let's be fair first: dashboards aren't broken, and the metrics on them aren't wrong. Backlog rate, MTTR, SLA compliance, first-contact resolution — these are sensible things to measure, and a service desk that measures none of them is flying blind. The problem isn't the metrics. It's what happens to a metric the moment it becomes a target.
There's a law for this — Goodhart's Law — and it's one of those ideas that sounds like a party trick until you watch it eat a team alive: when a measure becomes a target, it ceases to be a good measure. The moment "keep the backlog under 10%" is the goal — on a slide, in a standup, in a bonus formula — the backlog number stops describing the service and starts describing the team's relationship to that number. People aren't even being cynical about it; reclassifying a ticket, pausing an SLA clock, splitting one messy problem into five clean "resolved" ones all feel locally reasonable. The aggregate just stops meaning what you think it means.
Service-management circles have a name for the result: watermelon metrics — green on the outside, red on the inside. An SLA at 95% while the people the SLA exists to protect are furious. It usually gets blamed on gaming, but I think that lets the dashboard off the hook. The deeper issue is that the board was built to answer "are we hitting our numbers?" — which is a different question from "is the service getting better or worse, and where?" The first question produces a reassuring chart by construction. The second produces an uncomfortable one on purpose. Most exec dashboards answer the first.
And there's one more thing, the one that actually does the damage day to day: every metric on the board is read in isolation, one tile at a time, when its meaning only resolves against a complement. Backlog rate is meaningless without closure rate. A falling MTTR is meaningless without the leading indicators underneath it. A created-vs-resolved ratio of 1.0 is meaningless without knowing whether you have any slack. Read alone, each tile gives you the first reading. Read in pairs, you get the second.
What "the second reading" actually is
It's a habit, not a tool. You read every metric on the board twice. The first time, you read what it measures — the obvious reading, the one the chart hands you. The second time, you read what it implies — the consequence the number is consistent with, the thing that has to be true elsewhere for this to be good news.
Put plainly:
- First-order reading — what the metric measures. The obvious value. The one the dashboard prints, the one you screenshot.
- Second-order reading — what the metric implies. The non-obvious consequence. The one that flips the diagnosis when you check it against the metric that should move with it.
A green metric isn't a verdict. It's a question: what would have to be true elsewhere for this to be good news — and is it?
The diagram below is the whole move. One number, two readings, and a fork: the first reading is the one the tile is built to give you; the second reading is the one you have to go and check.

The second reading lives in the pairing — read a metric alone and you get the face value; read it against its complement and you get the diagnosis.
The metrics that matter — and the second reading behind each
Here is the working set: the ITSM dashboard metrics actually worth a tile on a JSM service desk board, the first reading each one hands you, the second reading it's hiding, and the metric you have to read it next to before the first reading means anything. The last item is the one most teams never put on a dashboard at all — and it's the one that tends to change the most minds.

- Backlog rate — "Are we keeping up?" First reading: open tickets as a share of total raised; the single number most leaders check first. Second reading: a low backlog rate can hide rot — if closure is also low, the queue isn't healthy, it's frozen; the metric is optimising for the metric, not the outcome. Read it next to: closure rate / throughput (there's a short code sketch of this pairing just after the list).
- Age distribution — "How old is the queue?" First reading: a histogram of open tickets by age — how much of the backlog has been waiting too long. Second reading: stale tickets distort every other metric — they inflate average age, mask true MTTR, and age out of SLA in silence; sweep them before you measure anything, or every improvement runs on bad data. Read it next to: a stale-ticket count (90 days+).
- Maturity vs. performance — "Design gap or execution gap?" First reading: two scores per process — how well it's designed and how well it's executed; the gap points to where to invest. Second reading: a zero gap at low scores is the dangerous reading — you've capped both, so closing the gap won't help; you have to raise the ceiling first. Performance can't structurally exceed maturity. Read it next to: the absolute scores, not just the gap.
- Leading + lagging KPIs — "How fast, how well?" First reading: MTTR, SLA, FCR, rework, escalation — speed and quality, side by side, per process. Second reading: leading metrics predict the lagging ones; today's bad FCR is next quarter's bad MTTR. Good lagging numbers on top of deteriorating leading numbers mean you're past the peak, not winning. Read it next to: the leading set, always — never lagging alone.
- Created vs. resolved — "Demand vs. throughput?" First reading: two lines over time — tickets created, tickets resolved; the number to watch is the ratio between them. Second reading: a ratio of 1.0 isn't success — it's stasis; a team running perfectly balanced has zero shock absorption. Real health runs below 1.0 with slack, so the system can absorb release weeks, incidents, an acquisition. Read it next to: your slack / spare capacity.
- Status dwell time — "Where do tickets get stuck?" First reading: a from-status × to-status matrix — the average time tickets sit in one status before moving to the next. Second reading: the bottleneck is where tickets are idle, not where they're busy; active work feels productive, waiting feels like nothing, so managers add capacity to the visible step — and make the real constraint worse. Theory of Constraints, applied to a service desk. Read it next to: wait time vs. touch time per stage.
- WIP aging by priority — "Old tickets — risk or noise?" First reading: a stacked age histogram split by priority; old + high priority is audit risk, old + low priority is noise. Second reading: aged low-priority work is the tell that you have no slack capacity — if routine work piles up, the team has nothing left for the unexpected; the fix isn't urgency, it's automation, so humans stay free for what actually needs them. Read it next to: your automation / auto-rule coverage.
- Activity-path SLA (process mining) — "the big one." First reading: cluster every closed ticket by the exact sequence of statuses it walked through; calculate SLA compliance per cluster. Second reading: the "best path" is rarely the one in your runbook — your team has invented better workflows in spite of the documented process; the runbook is a fiction the data quietly contradicts, and path mining is the truth-check and the prompt to rewrite the process around what actually works. Read it next to: the documented workflow — and your willingness to change it.
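To make the first pairing concrete, here is a minimal sketch of backlog rate, closure rate and the created-vs-resolved ratio computed from the same issue data, which is the point: the tile and its complement should never come from different extracts. It assumes you have already pulled issues into simple records with a created date and an optional resolved date, for example from a JQL export; the record shape and the function name are illustrative, not anything JSM ships.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative issue record, not a JSM type: build these from whatever
# export or API call already feeds your backlog tile.
@dataclass
class Issue:
    created: date
    resolved: date | None = None  # None means still open

def pairing_for_period(issues: list[Issue], start: date, end: date) -> dict[str, float]:
    raised = [i for i in issues if start <= i.created <= end]
    closed = [i for i in issues if i.resolved is not None and start <= i.resolved <= end]
    open_now = [i for i in issues if i.created <= end and (i.resolved is None or i.resolved > end)]

    return {
        # First reading: open tickets as a share of everything raised in the export.
        "backlog_rate": len(open_now) / max(len(issues), 1),
        # Complements: did we actually finish the period's inbound, and at what ratio?
        "closure_rate": len(closed) / max(len(raised), 1),
        "created_vs_resolved": len(raised) / max(len(closed), 1),
    }
```

Because all three numbers come from one list of issues, a flat backlog rate sitting on top of a falling closure rate shows up immediately instead of hiding in two separate reports.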
None of these are exotic. Backlog, age, MTTR, SLA, created-vs-resolved are on most JSM boards already. The shift isn't adding metrics — it's reading the ones you have against the column on the right.
How to read a JSM dashboard: pair every metric with its complement
The rule is short: read every metric on the board twice — once for what, once for so-what — and never let a number sit on an exec dashboard without the complement that tells you whether it's real. Backlog and closure rate. Created and resolved, with the ratio called out. Age distribution and a stale-ticket count. Lagging KPIs and the leading ones that predict them. It's twice the tiles and half the false comfort — and the false comfort was the expensive part.
Two practical moves make it stick:
1. Look at the green cells, not the red ones
If you go hunting for trouble on a dashboard, the red cells are the wrong place to start. The breached SLAs, the spiking incident count, the one team with a visible backlog — those already have an owner; somebody's in a meeting about them; they're priced in. The trouble is in the green cells nobody reads twice — specifically in the gap between what a green metric measures and what it implies. A red board is a team that knows it has a problem. A green board is a team that's about to find out it had one.
2. Get an outsider to read the board cold
Take the reassuring dashboard into a room with someone who isn't responsible for those numbers — a frustrated customer, a different team's lead, your own future self in six months — and have them read it without context. Every place they go "wait, but…" is a green cell with a second reading you've been skipping. Two minutes, one outsider; it's the cheapest review you'll ever run.
The decision tree below is the test you run on any green tile.

Run this on the three greenest tiles on your board first. If you can't even name the complement, that's the gap to close before anything else.
What this means for JSM teams
If you run service management on Jira Service Management, you start with a real advantage on the first reading and a real gap on the second. Out of the box, JSM gives you queues, SLA goals, the Workload and SLA dashboards, and the Atlassian reporting widgets — and they are good at exactly the obvious version of each metric. Backlog: there. SLA compliance: there, with a clean traffic light. Resolution time: there, trending.
The complement view is the part JSM's native widgets mostly don't do. Closure rate next to backlog rate — you have to build it. Created-vs-resolved with the ratio and the slack called out — you have to build it. Age distribution with a stale-ticket sweep before the chart is trustworthy — you have to do the sweep. Status dwell time — that lives in the issue changelog, not in any default report, and JSM's status categories (To Do / In Progress / Done) collapse exactly the detail you need to see where work goes idle. And "Resolved" in a JSM workflow can mean a dozen different paths through your statuses — the default SLA report averages them all together, which is precisely what hides the activity-path reading: the one path that quietly beats your runbook.
So for a JSM team, the second reading isn't automatic — it's reporting work. Pull closure rate and throughput from the same issue data that feeds the backlog tile. Plot created and resolved as two series and annotate the ratio. Build the age histogram, and run the 90-days-plus sweep first. Reconstruct status dwell time from the changelog. Cluster closed issues by their status sequence and read SLA per cluster. That's the work that turns a JSM dashboard from "are we hitting our numbers?" into "is the service getting better, and where?" — and it's the work a dedicated JSM reporting layer exists to do, because the native widgets stop at the first reading.
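As a sketch of the two pieces that involve the changelog, the dwell-time reconstruction and the path clustering, the snippet below walks each issue's status history and groups closed issues by the exact path they took. It assumes you have already fetched issues with their changelogs expanded; the nested histories / items / fromString / toString shape matches what the Jira Cloud REST API returns for an expanded changelog, but treat the field names and the sla_met callable as assumptions to check against your own instance and SLA configuration.

```python
from collections import defaultdict
from datetime import datetime
from typing import Callable

def _ts(value: str) -> datetime:
    # Changelog timestamps look like "2024-03-01T09:15:00.000+0000".
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%f%z")

def status_path_and_dwell(issue: dict) -> tuple[tuple[str, ...], dict[str, float]]:
    """Return the status sequence an issue walked and the hours it sat in each status."""
    transitions = []
    for history in issue["changelog"]["histories"]:
        for item in history["items"]:
            if item["field"] == "status":
                transitions.append((_ts(history["created"]), item["fromString"], item["toString"]))
    transitions.sort()

    dwell: dict[str, float] = defaultdict(float)
    path: list[str] = []
    prev = _ts(issue["fields"]["created"])
    for when, from_status, to_status in transitions:
        if not path:
            path.append(from_status)  # the status the issue was created into
        dwell[from_status] += (when - prev).total_seconds() / 3600
        path.append(to_status)
        prev = when
    return tuple(path), dict(dwell)

def sla_by_path(closed_issues: list[dict], sla_met: Callable[[dict], bool]) -> dict[tuple[str, ...], float]:
    """SLA compliance per distinct status path; sla_met is your own rule for 'this issue met its SLA'."""
    met: dict[tuple[str, ...], int] = defaultdict(int)
    total: dict[tuple[str, ...], int] = defaultdict(int)
    for issue in closed_issues:
        path, _ = status_path_and_dwell(issue)
        total[path] += 1
        met[path] += int(sla_met(issue))
    return {p: met[p] / total[p] for p in total}
```

The dwell figures feed the status dwell-time view, and the per-path compliance table is the activity-path reading: sort it by volume and compare the best-performing path against the documented workflow.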
The honest tradeoff
The second reading costs you something, and it's worth being straight about what. It's slower — you can't skim a paired board the way you skim a wall of green tiles. It requires judgment — "is the complement healthy?" isn't a threshold you can RAG-colour and walk away from. It doesn't compress into a single status for the steering committee slide. And every so often it will tell your exec that the green board they've been presenting was wrong, which is not a fun conversation. A dashboard you can read in ten seconds is a dashboard that's hiding something — that's the trade.
But the alternative is the thing I watched in that QBR room: a manager with a board full of green and a CIO forwarding complaint emails, with no way to connect the two — because the connection was a second reading nobody had taken. A red board is a team that knows it has a problem. A green board, read once, is a team that's about to find out it had one. I'd take the red one.
The next step is small. Open your current dashboard, pick the three greenest tiles, and write down the complement for each — the metric that tells you whether the green is earned. Then go check. If you get stuck on one because you can't name the complement, you've found the first thing to fix.
See the second reading on your own JSM dashboard
Charts & Reports for JSM builds the complement views JSM's native widgets don't — backlog vs. closure, created vs. resolved with the ratio, age distribution, status dwell time, and activity-path SLA from the issue history. Built for Jira Service Management, on the Atlassian Marketplace.
Explore Charts & Reports for JSM →
Faizal Moidu has spent most of two decades around IT service management — building service desks, sitting in the QBRs where the dashboards get presented, and watching teams act on the first reading. He writes about FitSM, ITSM and AI for view26.