May 16, 2026

Before You Add AI, Try Explaining the Job

If you’re thinking about where AI fits into your work, there are two things worth getting clear on before you start. The first is whether you understand the task well enough to hand it to something that has zero context about your business. The second is whether you understand enough about how the tool works to set it up for a useful result.

The Random Person Test

I use a thought exercise with clients when they’re considering integrating AI into their business processes. Imagine you’ve pulled a random person off the street. Someone with no knowledge of your industry, your business, or the task you want performed. Now think about the job you want AI to do.

Could you explain the task clearly to this person? Could you define what a successful outcome looks like? Could you outline the boundaries and constraints? Could you teach them how to interpret the data they’d need to work with? And beyond just explaining it, is any of that written down somewhere you could point them to? Is it in one place or scattered across a dozen documents, wikis, and people’s heads? Is it written down at all?

If a random person couldn’t sit down and figure out how to do this job with what you’ve given them, a model can’t either. It has no context about your business. It doesn’t know your customers, your internal processes, your edge cases, or your definition of good. Everything it needs to do useful work has to come from you, and it has to come in a form that’s explicit enough for something with zero institutional knowledge to act on.
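To make that concrete, here’s a minimal sketch of what “explicit enough” can look like: the whole brief (task, success criteria, constraints, data notes) written down in one place and rendered into a single instruction. The structure, field names, and example values are illustrative assumptions, not a prescribed format.

```python
# A hypothetical "task brief": everything a stranger with zero context
# would need, written down in one place. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class TaskBrief:
    task: str                    # what the job actually is
    success_criteria: list[str]  # what a good outcome looks like
    constraints: list[str]       # boundaries the output must respect
    data_notes: list[str]        # how to interpret the data involved

    def to_prompt(self) -> str:
        """Render the brief as one explicit, self-contained instruction."""
        lines = [f"Task: {self.task}", "Success looks like:"]
        lines += [f"- {c}" for c in self.success_criteria]
        lines.append("Constraints:")
        lines += [f"- {c}" for c in self.constraints]
        lines.append("How to read the data:")
        lines += [f"- {n}" for n in self.data_notes]
        return "\n".join(lines)

brief = TaskBrief(
    task="Categorize inbound support emails by product area.",
    success_criteria=["Every email gets exactly one category",
                      "Ambiguous emails get flagged for a human, not guessed"],
    constraints=["Use only the categories on the approved list",
                 "Never include customer names in the output"],
    data_notes=["'RMA' in a subject line means a hardware return"],
)
print(brief.to_prompt())
```

If you can’t fill in those four fields without hand-waving, the random person test has already given you its answer.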

This is the same problem I’ve written about before with scoping projects. A vague SOW kills a project before it starts because nobody defined what success looks like. AI makes this worse, not better, because the model will always produce something. It won’t tell you that your instructions were unclear. It’ll just give you a confident-looking output that may or may not have anything to do with what you actually needed. The burden of clarity is entirely on you.

The random person test usually reveals more about your own processes than it does about AI. If you can’t articulate the task, the boundaries, and the expected outcome clearly enough for a stranger to attempt it, that’s not an AI problem. That’s a process problem. Fix that first.

The Intern Experiment

The second question is about the tool itself, and most people integrating AI into their work don’t have a clear picture of what’s actually happening under the hood. Not at a research level. Just at the level of understanding what you’re working with and where the limits are.

Here’s another thought experiment. You have a massive book of knowledge that needs to be understood. To accomplish this, you can hire interns who will study the content and be available afterwards for questions.

Option A: hire one intern who reads the entire book. They’ll have complete context, but with that much material to cover, their grasp of any single detail may be shallower.

Option B: hire multiple interns and divide the book into sections. Each intern deeply studies their assigned portion. You capture more specific detail, but you lose the interconnections between sections.

All the interns have identical intelligence and capabilities. There’s no time pressure. The goal is complete comprehension of the content. Which approach leads to better understanding?

There’s no clean answer, and that’s the point. Each intern is a context window. This is the tradeoff at the center of how large language models process information. Every model has a limit on how much information it can process at once, and how you work within that limit involves real tradeoffs. One large context gives you breadth but loses detail. Multiple smaller contexts give you depth but lose the connections between pieces. How you chunk the information, what you include and what you leave out, how you stitch the results back together: these are architecture decisions that directly affect the quality of what comes back.
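If you’d rather see the tradeoff in code than in interns, here’s a minimal sketch of both options. Everything in it is an assumption for illustration: `ask_model` is a stand-in for whatever model call you’d actually use, and the token budget and chunk-by-paragraph logic are made-up placeholders, not recommendations.

```python
# Option A vs. Option B as code. ask_model() is a placeholder for a
# real model call; the budget and the 4-chars-per-token heuristic are
# made up for illustration.

CONTEXT_BUDGET = 8_000  # pretend the model can attend to this many tokens

def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for your actual model call")

def rough_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic: roughly 4 characters per token

def option_a(book: str, question: str) -> str:
    """One intern: the whole book in a single context, if it fits."""
    if rough_tokens(book) > CONTEXT_BUDGET:
        raise ValueError("The book exceeds the window; breadth isn't free.")
    return ask_model(f"{book}\n\nQuestion: {question}")

def option_b(book: str, question: str) -> str:
    """Many interns: chunk the book, answer per chunk, stitch the answers."""
    chunks, current = [], ""
    for paragraph in book.split("\n\n"):
        if current and rough_tokens(current + paragraph) > CONTEXT_BUDGET:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current)
    # Each chunk is read in isolation: more depth, no cross-links.
    partials = [ask_model(f"{chunk}\n\nQuestion: {question}") for chunk in chunks]
    # The stitching step is itself one of those architecture decisions.
    return ask_model("Combine these partial answers into one:\n\n"
                     + "\n---\n".join(partials))
```

Neither version is smarter than the other. They just lose different things, which is the intern question restated.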

Most people don’t think about any of this. They just paste something into a chat window and expect the model to figure it out. When the output is shallow or misses something important, they assume the technology isn’t ready. Sometimes it isn’t. But often the real issue is that they handed the tool a problem without thinking about how the tool actually processes information.

Do The Boring Work First

AI is not magic. It’s a tool that rewards clarity and punishes the absence of it. If you don’t understand your own problem well enough to explain it to a stranger, the model isn’t going to understand it either. If you don’t understand the constraints of the tool, you’re going to hit walls that feel like failures of the technology when they’re really failures of approach.

Before you add AI to anything, try explaining the job to a stranger. If that goes well, think about how a tool with a fixed window of attention would process the information you’re planning to give it. If you can get clear answers to both of those, you’re in a much better position than most.