

Training completion rates are the wrong metric. Here's what to measure instead.

"87% complete" means nothing if your team can't do the job. A better framework for measuring whether training actually changed behaviour.

29 March 2026 · 7 min read

Every quarterly review I have sat in starts the same way.

“Training completion is at 87 percent.”

And then we move on. Nobody asks whether any of the training did anything. Nobody asks whether people can actually do the thing the training was about. We all nod at the number and file it away.

This is training theater. And 87 percent is its signature metric.

What completion rate actually measures

Completion rate measures one thing: did someone click through the content.

Specifically, it does not measure any of the following.

  • Whether they read any of it
  • Whether they understood any of it
  • Whether their behaviour changed
  • Whether their work got safer, faster, or more consistent
  • Whether the cost of delivering the training was worth it

A team with 100 percent completion can still have:

  • Injuries caused by the procedures the training was supposed to cover
  • Incidents traced back to skills the training was supposed to build
  • New starters asking basic questions the induction was meant to answer

I have watched an organisation celebrate hitting 94 percent completion on a compliance module - in the same quarter they were penalised for the compliance failure the module was supposed to prevent. Completion was up. Compliance was not. These were different things. Nobody noticed they were different things.

Why we measure it anyway

Two reasons.

First, it is easy to measure. Anything with a button is cheap to count. Training platforms bill their progress dashboards as insight, but what most of them surface is button-press telemetry.

Second, in a lot of industries - particularly regulated ones - “we did the training” is the documentable evidence. An inspector asks if staff are trained; you show 87 percent completion; the checkbox is ticked. The metric was designed to satisfy an external audit question, not to measure whether training worked.

That is fine as an inspection artifact. It is a disaster as a management metric.

Three metrics that actually matter

Instead of completion rate, measure these three. They take a bit more work to collect - but you will know something real at the end.

1. Pass rate on non-trivial knowledge checks

A knowledge check is a quiz that sits between the training content and doing the real task. Not a “did you watch this?” checkbox - an actual test of understanding.

The important qualifier is “non-trivial.” If 100 percent of your team passes every test on the first try, your test is too easy. Good knowledge checks have a realistic failure rate, often 10 to 30 percent first-attempt, because they are testing judgment, not memory. “What would you do if the temperature reading is two degrees over spec?” is a better test question than “what is the temperature spec?”

Track first-attempt pass rate, median attempts to pass, and questions with anomalously high failure - those identify teaching gaps.
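As a sketch of what that tracking can look like in practice - the attempt records and field names here are illustrative, not from any particular training platform:

```python
from collections import defaultdict
from statistics import median

# Each record is one quiz attempt: (person, question_id, attempt_number, passed)
attempts = [
    ("ana", "q1", 1, True),
    ("ana", "q2", 1, False), ("ana", "q2", 2, True),
    ("ben", "q1", 1, False), ("ben", "q1", 2, False), ("ben", "q1", 3, True),
    ("ben", "q2", 1, True),
]

# First-attempt pass rate across all (person, question) pairs
firsts = [a for a in attempts if a[2] == 1]
first_pass_rate = sum(a[3] for a in firsts) / len(firsts)

# Median number of attempts needed before passing each question
attempts_to_pass = defaultdict(int)
for person, q, n, passed in attempts:
    attempts_to_pass[(person, q)] = max(attempts_to_pass[(person, q)], n)
median_attempts = median(attempts_to_pass.values())

# Per-question first-attempt failure rate - the outliers mark teaching gaps
question_fails = defaultdict(list)
for person, q, n, passed in firsts:
    question_fails[q].append(not passed)
hard_questions = {q: sum(f) / len(f) for q, f in question_fails.items()}
```

All three numbers come from the same raw attempt log, which is the point: one export, three metrics that say more than completion ever will.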

2. Time-to-competence for new starters

When someone new joins, how long is it before they can do the work without asking an experienced colleague?

Most organisations have no idea. The data is collectible if you want it.

  • How many days until they complete their first solo task of each type?
  • How many questions are they asking senior staff in week one versus week four?
  • When does their quality or speed match the team average?
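A minimal sketch of how those signals might be tallied from an onboarding log - the data model here is hypothetical, but any spreadsheet with dates would do:

```python
from datetime import date

# Hypothetical onboarding record for one new starter
start_date = date(2026, 1, 5)
first_solo_tasks = {            # task type -> date of first unsupervised completion
    "intake":   date(2026, 1, 12),
    "QA check": date(2026, 1, 26),
}
questions_per_week = {1: 31, 2: 18, 3: 9, 4: 4}  # questions asked of senior staff

# Days from start until first solo task of each type
days_to_solo = {task: (d - start_date).days for task, d in first_solo_tasks.items()}

# Ratio of week-4 to week-1 questions: a falling ratio means growing independence
question_trend = questions_per_week[4] / questions_per_week[1]
```

Even this crude version gives you something completion rate never can: a number that only moves when the person can actually do the work.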

This metric is harder to game. You cannot fake “working without supervision” by clicking through a module.

3. Rate of incidents traceable to training gaps

This one is a lagging indicator - you only find out something was a training gap after the incident. But tracking it over time is the cleanest answer to “is our training working?”

For every quality incident, safety near-miss, or customer complaint, ask: was this a training gap, a process gap, a tool gap, or a person having a bad day? Count the training gap ones, month by month. If that number is flat or dropping while your training investment is constant, your training is effective. If it is rising, or flat while you are expanding training, something is off.
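Tallying that month by month needs nothing more than an incident log tagged with a root-cause category at review time. A sketch, with made-up data:

```python
from collections import Counter

# Each incident tagged with its root-cause category during the review
incidents = [
    ("2026-01", "training"), ("2026-01", "process"),
    ("2026-02", "training"), ("2026-02", "training"),
    ("2026-03", "tooling"),  ("2026-03", "training"),
]

# Count only the training-gap incidents, month by month
training_gaps_by_month = Counter(m for m, cause in incidents if cause == "training")

# A flat or falling count while training spend holds steady is the signal you want
```

The tagging step is the only real work here, and it forces the useful conversation: was this a training gap at all, or a process, tool, or bad-day problem?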

When completion rate does matter

Be fair about this. Completion rate is the right metric in two specific cases.

Legal acknowledgment. If you need a record that someone read a policy by a certain date - GDPR, data protection, code of conduct - completion rate via e-signature acknowledgment is exactly the thing to track.

Initial rollout tracking. When you have just published a new procedure, “has everyone even opened it yet?” is a reasonable first question. Completion rate answers it.

Beyond those two cases, do not let completion be the metric you report up. It tells you nothing you need to know.

What to change tomorrow

If you are the person presenting training metrics, retire “completion rate” from the top-line slide. Replace it with one of the three above, even if the data is rougher. It is better to have an approximate answer to the right question than a precise answer to the wrong one.

If you are the person being presented to, ask: “what does this number tell us about whether the training worked?” Watch the room go quiet.

The hardest habit to break in training operations is the habit of measuring what is easy instead of what matters. But the payoff is real: you stop producing training that makes a metric go up and no difference to the work. That is the whole point.

Related: our take on how to actually measure training ROI and how member-level analytics work inside TrainedTeam.
