My recent work has got me thinking.

Over the past year I’ve been embedded in two organizations that couldn’t be more different on the surface. One is a global sporting event watched by hundreds of millions of people, operating across dozens of venues, staffed by a largely freelance workforce assembled specifically for a three-week window. The other is a 3-million-member professional association running its most complex annual gathering — a deliberative body with hundreds of delegates, real-time voting, and decades of procedural history behind every decision.

Different industries. Different scales. Different stakes.

Same problem.

In both cases, the organization was sitting on an enormous amount of operational knowledge — procedures, contacts, approval chains, incident protocols, institutional memory accumulated over years or decades. And in both cases, the people who needed that information most couldn’t reliably access it when it mattered. They asked a colleague. They searched a shared drive. They guessed. Sometimes they got it right.

That’s when I started thinking about this less as a technology problem and more as a risk problem.


The Information Is There. The Access Isn’t.

Let me be specific about what I mean. In a large-scale operational environment, the knowledge exists. It lives in documents, in SOPs, in org charts, in the institutional memory of people who’ve been doing this for years. The problem isn’t that organizations haven’t documented their procedures — most have. The problem is discoverability under pressure.

When a crew member at a venue needs to know who approves a new camera position, they don’t have time to navigate a shared drive, find the right folder, open the right document, and scan for the answer. They need to know now. So they ask someone. And that someone either knows or they don’t.

When a new staff member needs to understand the incident reporting procedure after someone gets hurt, the last thing they should be doing is searching for documentation. They need a clear, immediate answer — because getting it wrong isn’t just inefficient; it’s a liability.

This plays out hundreds of times a day across large operational deployments. Most of the time it works out. But at scale, across a temporary workforce, under time pressure, the compounding effect of small information failures becomes a material operational risk.


A Risk That’s Been Accepted as Unavoidable

Here’s what’s interesting: organizations spend significant resources managing other forms of operational risk. Insurance. Redundancy systems. Contingency planning. Safety protocols. These investments are made because the cost of getting things wrong is understood and taken seriously.

The institutional knowledge risk has largely been accepted as an unavoidable cost of doing business. You hire experienced people, you do your best to onboard them quickly, you hope the institutional memory in the room is enough to cover the gaps. When the operation ends, the knowledge walks out the door, and you start over next time.

This has been the reality not because organizations don’t recognize the problem, but because until recently there wasn’t a practical solution. You couldn’t put a researcher at the elbow of every crew member. You couldn’t make every SOP instantly searchable and conversational. You could build better intranets and SharePoint sites, and organizations did, and the problem persisted anyway.


The AI Overlay: Making Existing Systems Conversational

The shift that’s happening now isn’t that AI replaces existing information infrastructure. It’s that AI makes that infrastructure accessible in a fundamentally different way.

The information already exists. The org charts, the procedures, the contact lists, the incident protocols — they live somewhere in the organization. What AI enables is a conversational layer on top of that existing knowledge. Instead of navigating to it, you ask for it. In plain language. In the moment you need it.

“Who approves a new camera position?”
“What’s the incident reporting procedure if someone gets hurt on site?”
“Who do I call if we lose the signal feed?”

These aren’t complex questions. They’re the kind of questions that get asked dozens of times a day in any large operational deployment — and answered inconsistently, by whoever happens to be nearby, based on whatever that person happens to know.

A well-scoped AI assistant trained on an organization’s actual operational knowledge answers these questions reliably, instantly, and consistently. For every member of the workforce. From day one. Enterprise users report saving 40 to 60 minutes per day through AI-assisted information access, and 75% say they can now complete tasks that previously required escalating to a colleague or supervisor. The productivity case is real. But in high-stakes operational environments, the risk mitigation case may be even stronger.
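To make the idea concrete, here is a minimal sketch of that conversational layer in Python. Everything in it is illustrative: the snippets, the filenames, and the word-overlap scoring are stand-ins for what a real deployment would do with embedding-based retrieval and a language model generating the final phrasing. The point is the shape of the system: a question goes in, and an answer comes back grounded in documents the organization already has.

```python
import re
from dataclasses import dataclass

def words(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

@dataclass
class Snippet:
    source: str  # the SOP or contact sheet the text came from
    text: str

# Stand-in for knowledge the organization has already documented.
# These snippets and filenames are hypothetical examples.
KNOWLEDGE_BASE = [
    Snippet("camera_ops_sop.pdf",
            "New camera positions must be approved by the venue technical manager."),
    Snippet("incident_protocol.pdf",
            "Injuries on site are reported immediately to the safety officer on duty."),
    Snippet("contacts.xlsx",
            "If the signal feed is lost, call the master control room hotline."),
]

def answer(question: str) -> str:
    """Return the best-matching snippet, always citing its source."""
    q = words(question)
    top = max(KNOWLEDGE_BASE, key=lambda s: len(q & words(s.text)))
    return f"{top.text} (source: {top.source})"

print(answer("Who do I call if we lose the signal feed?"))
# If the signal feed is lost, call the master control room hotline. (source: contacts.xlsx)
```

The detail worth noticing is the citation. Every answer points back to the document it came from, which lets a crew member verify it against the actual SOP instead of taking the tool’s word for it.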


The Honest Conversation: What Can Go Wrong

Any serious discussion of AI in operational settings has to address the obvious objection: hallucinations. AI systems can and do produce confident, plausible, wrong answers. In a low-stakes environment, that’s an annoyance. In a high-stakes operational deployment, it’s a genuine concern.

It’s a concern that the industry takes seriously. Research shows that 77% of businesses cite hallucinations as a top barrier to AI deployment, and nearly half of enterprise AI users report making at least one significant decision based on inaccurate AI output. Those numbers shouldn’t be dismissed.

But here’s the context that often gets left out of that conversation: the alternative isn’t accuracy. The alternative is a workforce of people answering questions from incomplete knowledge, under pressure, based on whatever they happen to remember. The baseline isn’t perfect information — it’s managed uncertainty. The question isn’t whether AI introduces risk. It’s whether a well-designed AI system introduces more risk than the status quo it replaces.

The answer depends entirely on how the system is built. A general-purpose AI tool deployed broadly across an organization carries a fundamentally different risk profile than a tightly scoped assistant trained on a curated, controlled knowledge base with clear boundaries on what it will and won’t answer. The former is where most of the horror stories come from. The latter is what good implementation actually looks like.
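What those boundaries can look like is easy to sketch. Extending the toy retriever above, the guardrail is a refusal path: if nothing in the curated knowledge base matches the question well enough, the assistant declines and routes the question to a person rather than generating a plausible guess. The corpus, scoring, and threshold here are all illustrative assumptions, not a recommended calibration.

```python
import re

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

# Hypothetical curated corpus; one entry keeps the sketch short.
DOCS = {
    "incident_protocol.pdf":
        "Injuries on site are reported immediately to the safety officer on duty.",
}

MIN_OVERLAP = 2  # illustrative threshold, not a calibration recommendation

def scoped_answer(question: str) -> str:
    q = words(question)
    source, text = max(DOCS.items(), key=lambda kv: len(q & words(kv[1])))
    if len(q & words(text)) < MIN_OVERLAP:
        # The boundary: refuse rather than produce a plausible guess,
        # and route the question to a person instead.
        return "No documented answer found. Contact the duty supervisor."
    return f"{text} (source: {source})"

print(scoped_answer("What's the procedure if someone gets hurt on site?"))
print(scoped_answer("Can I park behind the broadcast compound?"))
```

A production version of that refusal path is more sophisticated, but the principle is the same: a scoped assistant should prefer admitting it doesn’t know to saying something convincing.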

There’s also a broader failure pattern worth acknowledging honestly. Research from MIT found that 95% of enterprise AI pilots fail to deliver measurable impact — not because the underlying technology is weak, but because organizations force AI into existing workflows without adapting either the tool or the process around it. They treat it as a technology project when it’s actually an operational transformation. The failure rate is high precisely because most implementations skip the hard work of scoping, governance, and change management.

The organizations that get it right do a few things consistently: they start narrow, with a clearly defined problem and a controlled knowledge base; they build human oversight into the workflow rather than treating AI output as authoritative; and they partner with specialists rather than building from scratch. Research consistently shows that specialized partnerships succeed roughly twice as often as internal builds.

None of that is rocket science. It’s just discipline — the same discipline that separates a well-run operational deployment from a chaotic one.


The Pattern Is Bigger Than Any Single Industry

What I’ve come to understand is that the organizations where this matters most share a specific set of characteristics. They’re not defined by industry — they’re defined by structure.

They operate at scale, often with workforces that expand dramatically for a defined period and then contract. They rely heavily on freelance, contract, or temporary staff who bring skill and experience but limited organizational context. They have hard deadlines — the event happens, the assembly convenes, the project delivers — and there’s no room to be learning procedures in the middle of execution. And they carry significant institutional knowledge that currently lives in people rather than systems.

Broadcast operations. Large-scale membership organizations. Healthcare systems. Major construction projects. Disaster response organizations. Any operation where the complexity of the work outpaces the permanence of the workforce faces a version of this same problem.

The technology to address it exists today and is more accessible than ever. Two-thirds of organizations that have deployed AI thoughtfully report measurable gains in efficiency and operational performance. The gap between those organizations and the ones still running on shared drives and institutional memory is widening.

The organizations that frame this as a risk management challenge — rather than a technology adoption question — will be the ones that close it first. That reframe matters beyond strategy: risk has a cost that finance understands. “We’re investing in AI” is a difficult budget conversation. “We’re closing a documented operational and liability gap” is a much easier one. The language you use to describe the problem determines whether you get the resources to solve it.

The knowledge doesn’t have to walk out the door anymore.


Tyler McConvill is the founder of Top Rope Media, a digital marketing and software development agency, and a partner at Closed System Media & Design. He builds AI-enabled operational tools for complex organizations.