
Development speed is not a bottleneck

(pawelbrodzinski.substack.com)
191 points | by flail | 1 comment
marginalia_nu ◴[] No.45139149[source]
I would reconcile the seeming paradox that AI-assisted coding produces more code faster, yet doesn't seem to produce products or features much faster by considering that AI code generation and in particular CoPilot-style code suggestions means the programmer is constantly invalidating and re-building their mental model of the code, which is not only slow but exhausting (and a tired programmer makes more errors in judgement).

It's basically the wetware equivalent of page thrashing.

My experience is that I write better code faster by turning off the AI assistants and configuring the IDE to produce suggestions that are as deterministic and fast as possible, so that they become a rapid shorthand. This makes for a fast way of writing code that doesn't lead to mental-model thrashing, since the model can be updated incrementally as I go.
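As a concrete illustration of the kind of setup described above (the setting names assume VS Code with GitHub Copilot installed; other editors have analogous toggles), a minimal sketch might look like:

```jsonc
// settings.json (VS Code): turn off probabilistic AI completions,
// keep fast deterministic language-server suggestions.
{
  "github.copilot.enable": { "*": false },      // disable Copilot everywhere
  "editor.inlineSuggest.enabled": false,        // no ghost-text completions
  "editor.quickSuggestions": {
    "other": true,                              // keep IntelliSense in code
    "comments": false,
    "strings": false
  },
  "editor.suggestSelection": "first",           // predictable default pick
  "editor.suggest.localityBonus": true          // prefer nearby identifiers
}
```

The point is that every suggestion now comes from the language server's index of your own code, so the same keystrokes reliably produce the same completion.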

The exception is using LLMs to straight up generate a prototype that can be refined. That also works pretty well, and largely avoids the expensive exchanges of information back and forth between human and machine.

replies(2): >>45139921 #>>45140673 #
1. flail ◴[] No.45140673[source]
Whenever a development effort involves a lot of AI-generated code, the nature of the task shifts from typing-heavy to code-review-heavy.

Cognitively, these are very different tasks. In the former, we actively drive technical decisions (architecture, implementation details, even naming). In the latter, all those decisions arrive already made, and we first need to untangle them before we can scrutinize the details.

What's more, often AI-generated code results in bigger PRs, which again adds to the cognitive load.

And some developers fall into the rabbit hole of starting another task while they wait for their agent to produce the code. Adding context switching to an already taxing challenge basically fries brains. There's no way such a code review can consistently catch the issues.

I've seen development teams define healthy routines around working with generated code, especially around limiting context switching, but also around taking some tasks back to be done by hand.