April 24, 2026

AI Coding Tools: It's Complicated

I’ve seen a few people I genuinely respect write off AI coding tools. The concern I hear most often is one I’ve written about before: offloading the work means you stop developing the judgment that comes from doing the work, and when the tool fails, you don’t have anywhere to stand. That’s real. I’ve fallen into it myself. So I want to be upfront that I take the critics seriously.

I also want to be upfront that I’ve been using these tools heavily, and for me, they work.

Before I get into what that looks like, it’s worth saying something about the baseline. A lot of the critique of AI-generated code compares it to idealized human code. I’ve spent most of my career working on code other humans wrote, and I’ve seen plenty of human-generated code that would have you thinking it was the early days of AI. Bad code is not an AI problem. It’s a pervasive problem that predates AI by several decades. That doesn’t excuse bad AI-generated code, but it does complicate the comparison.

What works

The first thing is that coding isn’t actually my job. My job is to think about systems, make architectural decisions, and lead people. Coding is instrumental to that work, not the substance of it. The tools let me prototype an idea in an hour or two instead of a day, which means I can actually run my normal development loop (build a thing, use the thing, decide I hate it, rewrite it) at a faster cadence. My colleagues used to joke that I go in for a rewrite every six months on any active project. The tools make that cycle tighter.

The second thing is that software is never done. There are always bugs, there’s always architectural work that should be happening, there’s always the next feature. Most of that work doesn’t get done because the team is too busy shipping new things. If the tools can absorb routine bug fixes and basic feature requests, that frees up real attention for the harder problems. How do you structure modules so they stay manageable as scope grows? How do you trace exceptions in a large event-driven app? That kind of work is more interesting than the mechanical part, and it pays off in the long-term health of the codebase.

The third thing is that I’ve always hated the typing portion of coding. I appreciate the craft and I’ve practiced it for a long time, but I have better uses for my time. When I’m working with decent context, owning the architectural decisions and overall direction, and using good tooling around the model, it works really well. I’m not handing off the thinking. I’m handing off the mechanical execution of thinking I’ve already done.

What they don’t fix

The thing AI coding tools don’t fix, and I don’t think they can, is poorly defined work. From what I’ve seen and what a lot of my colleagues have seen, the real bottleneck in most engineering orgs isn’t coding speed. It’s that the work coming into the team lacks basic context about what it’s for and what success looks like. No amount of model capability translates a poorly defined problem into a good solution. You can produce code faster, but you can’t produce the right code faster if no one has been clear about what the right code would do.

That’s an organizational problem and it’s the one I’d push on if someone asked me where to actually invest.

Your mileage will vary

I want to be careful about generalizing from my own experience here. I’ve been building and shipping software for a long time. I have opinions about architecture, I know what bad code looks like when I see it, and I’ve developed a strong internal sense of when a tool’s output is going to hold up and when it isn’t. A lot of what makes these tools work for me is the context I’m bringing to them, not the tools themselves.

One small proof point on that. I used to be on the highest tier of Claude that was available. I’ve gotten efficient enough with my token usage that I’m planning to downgrade. That efficiency came from building patterns around how I use the tools: curating context well, using something like Runbook to handle the determinism I want, and staying in the loop on architectural decisions. None of that was obvious when I started. It took real time in the tool to figure out, which is itself a kind of expertise.

Closer to home, a family member got a copy of Claude Code and used it to build a small web app on GCP to make his work easier. He got it doing what he needed. What he didn’t have was a sense for how the files should be broken out for maintainability, how to track the thing in version control, or how to tell whether the deployment was safe. When he started struggling to add features, he sent the code over. It was a single 5,000-line HTML file with all the JavaScript and CSS inlined, which I read through with some amusement. My experience let me verify the deployment, direct Claude Code through the cleanup, and hand him back something he could keep working on. Same tool he had. I just knew what to ask for.

So when I say the tools work for me, I mean specifically, they work for me, given the way I work and the experience I’m bringing. I’m genuinely not sure how much of that transfers to someone who’s earlier in their career or working in a different mode. If you’re a skeptic, I’m not going to try to talk you out of it. If you’re a believer, I’d encourage you to notice how much of the value you’re getting is coming from what you already know.

The tools are neither a panacea nor useless. They’re a real capability that rewards the context you bring to them, and punishes its absence.