The Question I Ask Every Candidate
I ask every candidate the same opening question. Talk to me about a technical system or solution you were responsible for architecting or building. What was the business problem it was solving? Then walk me through the solution, going as deep into the tech as you can.
It’s not a trick question. There’s no right answer. I’m not running a quiz. I’m asking someone to describe their own work, in their own words, at whatever depth they’re comfortable going to. As they talk, I ask the kinds of follow-ups any architect would ask. Why did you pick that technology over the alternative. How did you know the system was healthy. What did you monitor. What broke, and what did you do about it.
What comes back, more often than I’d like, is shallow. The candidate can usually name the cloud services they used, typically whichever provider they’re aligned with and certified on. AWS, GCP, Azure, the vocabulary is fluent. What’s missing is the layer underneath. Why those services and not the alternatives. How they knew the system was actually working. What broke and what they did about it. Candidates who can recite the reference architecture but can’t describe a single operational decision they made. Technologists who clearly haven’t been close to a running system in a long time, if they ever were.
How Certs Became the Credential
Certifications have been a fixture in tech for a long time. Cisco, CompTIA, Microsoft, Red Hat. The industry has been hiring against credentials, and building muscle memory around screening for them, for decades. None of that was inherently a problem. A network engineer with a CCNA in 2005 had probably touched real gear. The cert lined up with something.
Cloud certifications fit into that existing pattern. AWS launched its certification program in 2013, and the other major providers followed. Bootcamps were already a well-trodden entry point. None of it was new. The difference is that cloud certs are tied to a specific vendor’s product surface, and even the ones that nominally cover architectural patterns are mostly testing whether you can map a pattern to the right product, applied in the prescribed way.
Cloud providers also run partnership programs, and the tier a partner sits in is partly a function of how many certified people they employ. That matters for consulting shops, professional services firms, SIs, anyone whose business depends on being a Gold or Premier or whatever-the-top-tier-is-called partner. So those companies were incentivized to find people who already held certs, or were willing to go get them.
That demand reshaped hiring in the broader market too. Certs had always been one signal among several. You’d see them on a resume next to a real portfolio of work, and they were treated as a useful supplement. Somewhere along the way, the supplement became the credential. Hiring teams started filtering on the cert directly. Recruiters built their pipelines around it. The signal got more and more weight until, for a lot of roles, it was the only thing being checked.
Fast-Tracked Past Building
Then came 2020. Suddenly a much larger group of people wanted into tech, mostly because they wanted to work remotely, and the on-ramp the industry had already standardized on was sitting right there. Cert up and get a job in tech.
The industry, hungry for headcount and pattern-matching on signals it already trusted, fast-tracked a lot of these people into roles that historically required years of practitioner experience. Architect. Engineer. Titles that used to imply you’d built and operated enough systems to have real opinions about why one approach beats another.
The result is a generation of workers in solutioning roles whose entire model of “designing a system” is picking the right boxes from a vendor’s whiteboard template. Ask them what they’d do if the constraint were different, or if the platform weren’t available, and the answer often falls apart.
To Be Fair
The people in these roles aren’t villains. Many of them are doing what the industry rewarded. They optimized for the signals that got people hired and promoted, and those signals were “sounds like an architect” more than “can architect.” If you’d told a smart, ambitious person in 2020 that the way up was to stop building things and start talking about them in the right vocabulary, a lot of them would have done exactly that.
For a stretch, the gap between practitioners and the “thought leader” class didn’t cost the latter much. If your job was to produce decks, set direction, and translate vendor narratives into internal strategy, you didn’t really need to be able to build the thing. There were practitioners under you who would handle that part.
The Title Inflation Got Absurd
For a while this worked, in the sense that nothing visibly broke. Then the titles started outrunning the work by a margin that was hard to ignore.
I’ve met people who went from intern to Architect with nothing in production behind them, just school projects and a cert track. This was especially common at the big cloud providers and the partner ecosystem around them, where the incentive to staff up on certified headcount was strongest. Engineer is the title you earn by building and operating things. Architect is the title you earn by having built and operated enough things to have real opinions about trade-offs.
The industry hit a saturation point. There were a lot of people in solutioning roles who could talk fluently about systems they’d never actually built, run, or fixed. That worked as long as someone underneath them could absorb the gap. The question was always going to be what happens when the cover gets pulled back.
AI Changes the Math
The promise people are reading into AI coding tools is that they reduce the value of practitioner skill, because now anyone can produce code. That’s not what I’ve observed. These tools reward the context you bring to them and punish its absence. The person who already understands the work gets dramatically more out of the tool. The person who doesn’t gets convincing-looking output they can’t evaluate.
I’ve written before about a family member who used Claude Code to build a working app, then hit a wall the moment it needed real structure.
If you ask the AI for something, it will produce something. Whether that something is right is a judgment call, and the judgment lives in the practitioner, not the tool.
What this means, I think, is that the value of being able to actually do the work is going back up. The people who genuinely understand the systems and concepts they’re working with, rather than parroting back the industry standards, are going to matter more.
I don’t think this corrects overnight, but the inertia argument is weaker than it looks. The forces that protected the gap assumed companies could keep carrying that layer at the size they’d built it to. They can’t. The major tech companies are shedding employees by the thousands every six months, and that alone breaks the pattern that let the gap stay hidden. The candidates who do well in my interviews are the ones who actually understand what they built. They can reason about the trade-offs. They can answer the probing questions, or they’re honest enough to say they’re not sure. The ones who don’t really understand what they built fill the space with vocabulary, and the depth never shows up.
The practitioner is coming back.