A maintenance lead I worked with retired last spring after twenty-two years on the same shop floor. His leaving lunch was well attended. His knowledge transfer document was two pages long. Six weeks later, three separate people were calling his personal mobile because nobody still on the team knew why the calibration on the second line drifted every Tuesday.
Your team's knowledge walks out the door every time an experienced person leaves, retires, transfers, or takes extended leave. The alarming thing is not that this happens. It is that most organisations accept it as inevitable and then act surprised at the cost.
The cost is real. Rework from undocumented workarounds. New hires asking the same question for the fourth time because nobody wrote down the answer. Compliance gaps nobody knew existed until an auditor noticed. The phone call to the retired guy who still, charitably, picks up.
Why traditional knowledge capture fails
Wikis, shared drives, internal handbooks. Every organisation has some version of these. Almost none of them work as knowledge capture for operational work. The reasons are predictable.
Capturing knowledge is a separate exercise from using it. You ask an experienced operator to “document the process” as a Thursday afternoon project. They sit in front of a blank Confluence page. They write three paragraphs, hate them, and go back to actual work. The page stays a stub. The knowledge stays in their head.
Writing is the bottleneck. The people with the most to capture are usually not the people who enjoy or are fast at writing. Asking a twenty-year operator to type out everything they know is asking them to do a slow, awkward version of a task they already do well in the medium of speech and demonstration.
No one ever finds the captured knowledge when they need it. A handbook exists. Someone updated it in 2023. You are on the shop floor at 11pm trying to remember what to do when the line jams. You are not logging into Confluence. You are asking the nearest colleague.
Structure collapses under the weight of exceptions. Every real process has edge cases. Wikis are bad at edge cases. They either bury them in sub-pages or flatten them into the main doc and lose readability. The structure of a good operational procedure is specific - steps, triggers, exceptions, verification - and most knowledge bases impose none of it.
The net effect is that traditional knowledge capture is a tax on the experienced with a payoff so delayed and diffuse that rational people decline to pay it. This is not a motivation problem. It is an economics problem.
What changes when AI does the structural work
The economics change when you can separate two things that used to be bundled: capturing what someone knows, and structuring it into something usable.
Capture, with modern tooling, is cheap. A ten-minute recorded walkthrough on a phone. A thirty-minute conversation with a colleague on Loom. A chat thread where an expert answered four questions this morning. A set of inline edits to an existing document, with a note on why each change was made.
Structuring used to require a human writer. That was the expensive step. AI now does a competent first pass: pulling a transcript, identifying discrete steps, drafting prose, flagging the places where the speaker was clearly hedging or uncertain. The expert still reviews, still corrects, still adds the institutional context the AI cannot know. But the first draft is not theirs to produce.
The effect on capture incentives is significant. Asking someone to “walk me through this while I record” is a ten-minute ask. Asking them to “write a document about how you do this” has historically been an hours-long ask that many experienced people will politely decline.
Three recommendations that actually work
Capture as part of doing, not as an exercise
The strongest pattern I have seen is to build capture into the moment the knowledge is being used, not into a separate documentation sprint. A few examples of what that looks like in practice.
When an experienced operator trains a new starter on a task, the training session itself is recorded, with permission and a clear explanation of why. That recording becomes the source for a structured SOP, not a replacement for one. Same effort as the training itself; you now have an artefact.
When someone solves an awkward problem - a machine fault, a customer escalation, a regulatory question - the resolution is written up briefly before the day ends, tagged against the procedure it relates to. Five minutes, while it is fresh.
When a team lead corrects another team member's work, the correction is logged against the SOP that was supposed to cover the task. Over time, this surfaces the gaps in your documentation more reliably than any audit.
None of this requires heroics. It requires tooling that makes each of these captures take a few seconds, not a detour into a separate app and a newly named file.
Make recording trivially easy
The friction on recording is where most capture efforts die. If getting a usable video requires installing something, finding a quiet room, framing a shot, and uploading to a separate platform, it will not happen at scale. It will happen for the one documentation week in March and then not again.
What works is a phone-first flow. Someone narrates while doing the task. The video goes somewhere your SOP tooling can ingest. No separate hosting, no separate login, no separate upload step. Loom works. Direct uploads work. YouTube links work. The important thing is that the person doing the work does not have to think about the pipeline.
Related: if your recording pattern requires the senior person to talk to camera in a studio setup, you have designed something only a few people will use. Design for the messiest operator talking over machine noise with a gloved hand, and you will get something that scales.
Use AI to do the structural work
The last piece is what happens between a raw recording and a usable SOP. Without tooling, that gap is owned by a writer and it is where most capture efforts stall. With AI, the flow looks different.
Transcript, then proposed step structure, then draft body copy, then embedded clips at the right timestamps, then a human review. The human is still essential. They add the safety language, the organisation-specific context, and the call on where the AI misunderstood. But the blank page problem is gone. You are editing, not writing.
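To make the shape of that pipeline concrete, here is a minimal sketch in Python of the structuring step, the part between a raw transcript and a reviewable draft. It assumes an OpenAI-style chat API and a transcript that already carries timestamps; the model name, the prompt wording, the output fields, and the draft_sop_from_transcript helper are all illustrative, not a reference to any particular product's flow.

```python
# A minimal sketch of the transcript-to-draft step, assuming an
# OpenAI-style chat API. Model name, prompt wording, and the output
# schema are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_sop_from_transcript(transcript: str) -> dict:
    """Turn a timestamped walkthrough transcript into a draft SOP structure."""
    prompt = (
        "Below is a timestamped transcript of someone demonstrating a task.\n"
        "Return JSON with: 'title', and a list of 'steps', each step having "
        "'instruction', 'start_timestamp' (where the embedded clip should "
        "begin), and 'uncertain' (true wherever the speaker hedged or "
        "sounded unsure).\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# The returned draft then goes to the expert for review: they correct
# steps, add safety language, and resolve anything flagged 'uncertain'.
```

The sketch is really about the division of labour: the model proposes structure and flags its own uncertainty, and the expert's time goes into the review at the end rather than the blank page at the start.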
In practice this means a recorded walkthrough that used to sit in a Drive folder until someone found the energy to transcribe it can become a structured, searchable SOP in twenty minutes. The economics of retaining knowledge finally catch up with the economics of losing it.
What to do with the captured knowledge
Capture is not the finish line. Retrieval is. An SOP that exists but is not findable when a new starter needs it at 2pm on a Tuesday has not solved the knowledge walkout problem.
Three habits make captured knowledge actually retrievable. First, title SOPs by the task the reader is trying to do, not by topic. “Reset the packaging line after a jam” beats “Packaging line maintenance.” Second, organise by workflow or role, not by internal department structure. Third, make the tool you capture into the tool the team already opens during the work, not a separate knowledge base.
If a new starter has to open three different systems to find a procedure, they will ask a colleague instead, and your documented knowledge will quietly go back to being tribal.
The honest caveat
AI-generated drafts are not perfect. They miss context, they occasionally rephrase important details in ways that soften them, and they do not know your organisation. The review step is not optional. What has changed is not “AI writes the SOPs” but “the cost of the first draft fell by an order of magnitude, so the experts can spend their time on the part that actually needs them.”
That shift is enough. The reason knowledge walks out the door is not that experienced people are unwilling to share. It is that the ways we have historically asked them to share have been slow, frustrating, and thankless. Change the economics, and the culture follows.
A more concrete walkthrough is over on the AI hub - including the specific video-to-instruction flow that most teams start with.