The Vibe Coding Paradox

My last PR for Nudges was +96 −312, touched 38 files, and was about 90% vibe coded. And I’m confident in it.

While I was gliding through Hyrule, two different AI agents were quietly refactoring Kafka consumers, validating payload shapes, and committing changes—clean, scoped, and production-ready.
When I came back, I scanned the diff, nodded at the function names, double-checked a few critical spots, and told Claude Code to submit a PR.

And here’s the thing: it feels good.

Because in that system, the AI isn’t just guessing—it’s working inside a system I built with intention.
A system where I own the architecture, the message formats, the telemetry, the consequences.
Nudges is mine. So when the code writes itself, it feels like an extension of my will.

But that same experience—AI as co-developer, frictionless and fast—feels completely different in my contracts.

Because there, I don’t own the system.

I don’t even trust the system.
I’m a contractor navigating 300,000 lines of barely-working code, where architectural debt is measured in years and every “improvement” risks a cascade failure.
In that context, AI doesn’t feel like empowerment.
It feels like a shield against cognitive overload.

It still writes the code.
But I can’t always afford to care if it’s the best code.

And that—more than anything—is the paradox of this new era:

AI removes the friction.
But friction was where we used to decide what mattered.

The Copilot Moment

It started with one of those tickets that shouldn’t exist.

A client wanted the UI to throw warnings if a form was in just the right state. One with rules that overlapped with a dozen others.

It was a mess. A logic snarl wrapped in inconsistent naming conventions and years of “we’ll get to it next release” tech debt.

I opened the React component. 1100 lines. Most of it conditionals.

No test coverage. No documentation. Just vibes and nested ternaries.

I pulled up Copilot, hit Win+H, and started talking.

I explained what I was trying to accomplish, why it mattered, and where to find context.

Then I leaned back and rubbed my eyes.

An AI agent wrote the rest.

A fresh set of variables and checks.

Each one memoized. Each one defensive.

Each one carrying just enough semantic weight to let me keep going without refactoring the whole thing.
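For flavor, here is roughly the shape of one of those checks. The names are hypothetical, and a single-slot memoizer stands in for React's useMemo so the sketch runs on its own:

```typescript
// Single-slot memoizer standing in for React's useMemo (hypothetical helper).
function memoizeOne<A extends object, R>(fn: (a: A) => R): (a: A) => R {
  let lastArg: A | undefined;
  let lastResult!: R;
  return (arg: A) => {
    if (arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg);
    }
    return lastResult;
  };
}

// Hypothetical shape of the form state the component was juggling.
interface FormState {
  fields?: Record<string, { value?: string; touched?: boolean }>;
}

// One derived check, in the style the agent produced:
// memoized, and defensive about state that may not exist yet.
const isSubmittable = memoizeOne((state: FormState): boolean => {
  if (!state.fields) return false; // defensive: form not initialized
  return Object.values(state.fields).every(
    (f) => f?.touched === true && (f.value ?? "").length > 0
  );
});
```

Nothing clever. Just guards that assume the state might be half-formed, because in that codebase it usually was.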

And here’s the seduction: it was more precise than what I would have written.
Because I was tired.
Because after thirty years of writing code, I don’t always have it in me to do it the “right” way when the AI’s “extra work” is technically correct.

So I kept it. I checked it. And it worked.
The code read well.
The diff looked professional.

But I knew—deep down—I’d just added another 30 lines to a file that should’ve been 300 on its own.

I was perpetuating bad patterns.

The kicker?

I submitted the PR.
Because it was correct.
Because that was one less thing to worry about.

The Cognitive Calculus

I could’ve extracted it.
Could’ve split the validation logic into a dedicated module, wired up a test harness, and structured it all so “Go to Reference” is all you need to understand it. Hell, I could have just written it myself.

But that would’ve taken two hours of understanding the mess.
And what I needed was ten minutes of momentum.

That’s the math now.

On some contracts, I’m making dozens of these tradeoffs every week:

  • The “right” way: extract, document, test — 2 hours.
  • The “good enough” way: keep the naming clean, make it work, ship — 5 minutes.
  • Multiply that by countless decisions a week.
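To make the shape concrete, here is a back-of-the-envelope version of that math, with illustrative numbers only (thirty tradeoffs standing in for "dozens"):

```typescript
// Illustrative numbers, not measurements.
const rightWayMinutes = 120;  // extract, document, test
const goodEnoughMinutes = 5;  // clean names, make it work, ship
const decisionsPerWeek = 30;  // "dozens" of tradeoffs a week

const rightWayHours = (rightWayMinutes * decisionsPerWeek) / 60;     // 60 hours
const goodEnoughHours = (goodEnoughMinutes * decisionsPerWeek) / 60; // 2.5 hours

// Doing it "right" every time would more than fill the entire week.
console.log(rightWayHours, goodEnoughHours);
```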

You start to see the shape of it.

I don’t write code with AI because I can’t do it myself, or because I’m lazy or don’t care.

I write with AI because I’ve learned where my energy is best utilized.

And this is where the game has changed.

Because now it’s trivially easy to do the “good enough” thing.

It offers clean names, smart guards, reusable patterns.
The five-minute path looks like the two-hour path—until you zoom out.

That’s the danger.

You stop noticing you’re doing triage.
Because the bandages look like real skin.

The Pattern Replicator

To quote Richard Campbell: “Computers are amplifiers.”

So let’s be clear:
AI doesn’t improve your system. It continues it.

If the pattern is clean, it scales clarity.
If the pattern is broken, it scales dysfunction—beautifully formatted, semantically named dysfunction.

That’s the danger.

It replicates everything:

  • Inline functions that should be services
  • Defensive props on components that should be deleted
  • Patterns you meant to fix later, now baked into every new line

And it does it without resistance.
Because the code looks right.

No red flags. No typos.
Just subtle misfits stacking on top of each other until you’re buried in clean, wrong logic.

You used to feel it—used to wrestle with the system.
Now the system slides forward like it’s on rails.
And if you’re not paying attention, it takes you somewhere you never meant to go.

The Contrast

The difference isn’t the tool.
It’s the context.

In Nudges, I own the system.
I care about how it’s built because I’ll still be inside it six months from now.
Every event, every topic name, every module boundary—I built those.

So when AI adds something, I see it instantly: Does this belong here? Does this match the goal of the system I meant to build?

And if it doesn’t? I stop. I fix it. I nudge it back into alignment.

Because I want it to hold.

In that context, AI is an amplifier of care.
It lets me move faster without compromising my standards—because those standards were embedded in the design from the beginning.

But in my contract work?

I don’t own the system.
I inherit it.
I patch it.
And when AI offers a solution, I don’t stop to ask “Is this best practice?”
I ask “Will this work? Will it ship? Will it survive the next merge?”

Because that’s the job.
Because the cost of caring is higher than the cost of being correct.

And the reward is the same either way.

It’s not about skill. I’m not too old. I’m not slower.
What’s changed isn’t what I can do—it’s what I choose to do, and when.

The Ghost in the Machine

I merged that PR for Nudges. The one the agents wrote while I was playing a video game. It was beautiful. It was right. But it was right because I spent two years building the constraints that made it right.

Tomorrow, I’ll tackle another ticket for the client. I’ll open a file that smells like 2018 and regret. And I might ask an AI to patch a hole I should probably fix at the source. It will be ugly. It will be “good enough.” And it will be exactly what the system deserves.

Because if the code writes itself, our hands are off the keyboard. But our fingerprints are everywhere.
