
Observations on AI-Assisted Development

After months of using AI coding tools daily, here's what actually changed about how I work.

I've been using AI coding assistants heavily for the past year. Not as a novelty, not as a party trick to show people, but as a core part of how I write software every day. Cursor, Claude, Copilot - I've tried most of them and settled into a workflow that actually works. Here's what I've noticed, stripped of the hype.

The Honest Productivity Picture

Let me be specific about what changed, because vague claims about "10x productivity" aren't useful to anyone.

I write more tests now. This is probably the biggest concrete change. Before AI tools, writing tests felt like a chore - not the logic, but the boilerplate. Setting up mocks, writing assertion scaffolding, handling edge cases I know about but can't be bothered to type out. Now I write a function and immediately ask for tests. Not because I should, but because the friction is genuinely gone. I describe the edge cases I'm thinking about and get runnable test code in seconds. My test coverage has roughly doubled, and it happened without any conscious effort to "write more tests."
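
As a concrete (and entirely hypothetical) illustration of what that looks like, here's the kind of edge-case test file I used to skip because of the typing cost. The `parseDuration` helper and the use of Vitest are assumptions for the sketch, not code from any real project:

```typescript
// Hypothetical example: a small duration parser and the edge-case tests
// I would previously have skipped writing by hand. Vitest is assumed here;
// neither the function nor the framework comes from a real codebase.
import { describe, it, expect } from "vitest";

// "2h", "30m", "45s" -> milliseconds; anything else is an error.
function parseDuration(input: string): number {
  const match = /^(\d+)(h|m|s)$/.exec(input.trim());
  if (!match) throw new Error(`invalid duration: ${input}`);
  const value = Number(match[1]);
  const unit = { h: 3_600_000, m: 60_000, s: 1_000 }[match[2] as "h" | "m" | "s"];
  return value * unit;
}

describe("parseDuration", () => {
  it("parses hours, minutes, and seconds", () => {
    expect(parseDuration("2h")).toBe(7_200_000);
    expect(parseDuration("30m")).toBe(1_800_000);
    expect(parseDuration("45s")).toBe(45_000);
  });

  it("tolerates surrounding whitespace", () => {
    expect(parseDuration(" 10s ")).toBe(10_000);
  });

  it("rejects the edge cases I'd normally not bother typing out", () => {
    expect(() => parseDuration("")).toThrow();
    expect(() => parseDuration("10")).toThrow(); // missing unit
    expect(() => parseDuration("-5m")).toThrow(); // negative values
    expect(() => parseDuration("5 m")).toThrow(); // internal whitespace
  });
});
```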

I prototype faster and throw away more code. When I'm evaluating whether an approach will work - say, testing whether a particular data structure handles my access patterns, or seeing if a library's API fits my mental model - I can sketch it out in minutes instead of spending an afternoon. This means I prototype more aggressively and discard more experiments. The ratio of code I write to code that ships has gone up significantly, and that's a good thing. Cheap experiments lead to better architecture decisions.

I read more code than I used to. This one's counterintuitive. You'd think AI tools would mean reading less code, but the opposite happened. AI can explain unfamiliar codebases quickly - "what does this module do?", "how does this state machine transition between modes?", "why is this using a WeakMap here?" - so I'm more willing to dive into dependency source code, read framework internals, and actually understand what's happening under the hood. I used to treat libraries as black boxes. Now I read their implementations because the cost of understanding is so low.

I context-switch less while coding. Before: hit a syntax question, open a browser, search Stack Overflow, get distracted by three other tabs. Now: ask the question inline, get an answer, keep coding. The compound effect of not breaking flow state dozens of times per day is significant. It's hard to measure, but I notice it.

What Didn't Change

This part matters just as much.

Architecture decisions are still hard, and AI makes them harder in a subtle way. AI can implement any pattern you ask for. Want a Redux store? Done. Prefer Zustand? Also done. Event sourcing? Sure. The problem is it'll do any of them competently, which means the decision of which pattern is right for your specific constraints - your team size, your scale requirements, your maintenance budget - is still entirely on you. Worse, AI makes it easy to over-engineer because implementing complexity is cheap. You have to actively resist the urge to add abstraction layers just because you can.

Debugging complex issues is still slow. AI helps with the obvious stuff - "this variable is undefined because you're accessing it before the async call resolves." But when you're tracking down a race condition that only manifests under specific timing, or a state bug that requires understanding five components' interaction, or a performance regression buried in a render cycle - you're still doing the detective work yourself. AI can help narrow the search space, but it can't replace the systematic hypothesis-testing that real debugging requires.
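
To make "only manifests under specific timing" concrete, here's a hypothetical version of the classic stale-response race - the kind of bug AI can explain once you've isolated it, but isolating it is still on you:

```typescript
// Hypothetical illustration (not from a real codebase): a search box where a
// slow earlier request can resolve *after* a faster later one and clobber it.
let results: string[] = [];

async function onQueryChange(query: string) {
  const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  // Bug: nothing guarantees this response belongs to the latest query.
  // If the request for "re" returns after the request for "react",
  // the stale results win - but only under that specific timing.
  results = await response.json();
}

// One common fix: tag each request and ignore responses from older ones.
let latestRequest = 0;
async function onQueryChangeFixed(query: string) {
  const requestId = ++latestRequest;
  const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  const data: string[] = await response.json();
  if (requestId === latestRequest) results = data; // drop stale responses
}
```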

Code review matters more, not less. AI-generated code looks correct. The variable names make sense, the logic flows reasonably, the edge cases appear to be handled. It often is correct. But "often" isn't "always," and the bugs are subtle. I've caught off-by-one errors, incorrect boundary conditions, and race conditions in AI-generated code that looked perfectly fine at a glance. You have to actually read what it wrote, line by line, with the same scrutiny you'd apply to a junior developer's pull request. Maybe more, because the code's surface-level quality makes it easier to wave through.
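
For a flavour of what "looked perfectly fine at a glance" means, here's a hypothetical example of the kind of boundary bug I'm describing - sensible names, reasonable flow, wrong on exactly one edge case:

```typescript
// Hypothetical illustration: reads cleanly, reviews easily, and is wrong
// whenever total is an exact multiple of pageSize.
function pageCount(total: number, pageSize: number): number {
  return Math.floor(total / pageSize) + 1; // 100 items, 25 per page -> 5 pages
}

// The correct version. The difference only shows up on the boundary.
function pageCountFixed(total: number, pageSize: number): number {
  return Math.ceil(total / pageSize); // 100 items, 25 per page -> 4 pages
}
```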

Understanding fundamentals still matters. AI can generate a debounce function, but if you don't understand closures and timing, you won't know when the generated version is subtly wrong for your use case. AI can set up a database migration, but if you don't understand indexing strategies, you won't catch the missing index that'll tank your query performance at scale. The tools amplify your existing knowledge. They don't replace it.
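
Staying with the debounce example: here's roughly what a generated version tends to look like, with comments on the details you only notice if you understand the closure and timer semantics - whether the behaviour is trailing-only, what happens to arguments from dropped calls, and that return values are always lost. A sketch, not canonical code:

```typescript
// A typical trailing-edge debounce (sketch). Fine for many cases, but
// knowing closures and timers is what tells you when it isn't:
// - trailing-only: the first call in a burst still waits the full delay
// - every call's return value is discarded
// - only the *last* call's arguments in a burst ever reach fn
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delayMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer); // drop the previously scheduled call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Fine for search-as-you-type; subtly wrong if you needed leading-edge
// behaviour ("act immediately, then suppress repeats") or fn's result.
const onInput = debounce((q: string) => console.log("searching for", q), 300);
onInput("r");
onInput("re");
onInput("react"); // only this call fires, 300ms after it
```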

The Workflow That Actually Works

After a lot of iteration, here's the process I've settled into:

  • Think first, prompt second. I spend a few minutes thinking about what I want before I start typing. What's the interface? What are the edge cases? What's the simplest version that works? The better my mental model, the better the output.
  • Describe intent, not implementation. "Write a function that retries a fetch request with exponential backoff, max 3 attempts, and throws after the final failure" works better than "write a for loop that calls fetch and waits longer each time." The more I describe what I want and why, the better the result. (There's a sketch of what that prompt produces after this list.)
  • Read everything it generates. This is non-negotiable. I read every line. Not skim - read. I'm looking for logic errors, missing edge cases, unnecessary complexity, and style inconsistencies with the rest of the codebase. Maybe 20% of what gets generated needs changes, and this read is where I catch them.
  • Edit by hand when it's faster. Sometimes the AI gets 90% right and the remaining 10% is faster to fix manually than to describe in another prompt. I switch between prompting and typing fluidly. There's no rule about which to use when - it's a feel thing.
  • Test immediately. Since generating tests is fast now, I test every significant piece of generated code before moving on. Catching bugs at this stage is cheap. Catching them in production is not.
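
To make the "describe intent" point concrete, here's roughly what that prompt produces - a sketch, not canonical output, following the spec from the bullet above (exponential backoff, max 3 attempts, throw after the final failure). The details beyond the prompt are assumptions: non-2xx responses count as failures, and the backoff doubles from a 500ms base.

```typescript
// Sketch of the retry helper described in the "describe intent" bullet.
// Assumptions beyond the prompt: non-2xx responses count as failures,
// and delays double from a 500ms base (500ms, then 1s, between attempts).
async function fetchWithRetry(
  url: string,
  init?: RequestInit,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const response = await fetch(url, init);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response;
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: baseDelayMs * 2^attempt before the next try.
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  // All attempts failed: surface the last error, as the prompt specified.
  throw lastError;
}
```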

Where I Think This Is Going

I'm not going to make grand predictions about AI replacing developers. What I will say is this: the tools are getting better fast enough that the workflow I described above will probably look quaint in a year. The gap between "describe what I want" and "get working code" is shrinking, and the definition of "working" is expanding to include more edge cases, better error handling, and more idiomatic patterns.

The developers who benefit most aren't the ones who use AI the most - they're the ones who know enough to use it well. Domain knowledge, architectural taste, and debugging skill become more valuable, not less, because they're the things AI can't provide.

The key insight, if there is one: AI is a force multiplier, not a replacement. It makes good developers faster and lets them take on more ambitious projects. It doesn't make bad decisions into good ones.