
Development speed is not a bottleneck

(pawelbrodzinski.substack.com)
191 points | flail | 2 comments
thenanyu ◴[] No.45138802[source]
It's completely absurd how wrong this article is. Development speed is 100% the bottleneck.

Just to quote one little bit from the piece regarding Google: "In other words, there have been numerous dead ends that they explored, invalidated, and moved on from. There's no knowing up front."

Every time you change your mind or learn something new and you have to make a course correction, there's latency. That latency is just development velocity. The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise. The bottleneck for that is 100% development speed.

If you can shrink your iteration time, then there are fewer meetings trying to determine prioritization. There are fewer discussions and bargaining sessions you need to do. Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.

If you can shrink your iteration time between versions 2 and 3, between versions 3 and 4, and so on, the advantage compounds over your competitors. You find promising solutions earlier, which lead to new promising solutions earlier. Over an extended period of time, this is how you build a moat.
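To put a rough number on the compounding, here is a minimal Python sketch, assuming a quarter of about 65 working days and made-up cycle lengths (none of these figures come from the comment); the only point is that the number of learn-and-correct loops scales inversely with iteration time:

    # Illustrative only: assumes ~65 working days per quarter and made-up cycle lengths.
    def iterations_per_quarter(cycle_days: float, working_days: int = 65) -> int:
        """How many build-measure-learn loops fit in roughly one quarter."""
        return int(working_days // cycle_days)

    for cycle_days in (10, 5, 2, 1):
        n = iterations_per_quarter(cycle_days)
        print(f"{cycle_days:>2}-day cycle -> {n:>2} iterations per quarter")

    # 10-day cycle ->  6 iterations per quarter
    #  5-day cycle -> 13 iterations per quarter
    #  2-day cycle -> 32 iterations per quarter
    #  1-day cycle -> 65 iterations per quarter

Every extra loop is another chance to invalidate a dead end before a slower team ships its next version, which is where the compounding comes from.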

replies(13): >>45139053 #>>45139060 #>>45139417 #>>45139619 #>>45139814 #>>45139926 #>>45140039 #>>45140332 #>>45140412 #>>45141131 #>>45144376 #>>45147059 #>>45154763 #
trjordan ◴[] No.45139053[source]
This article is right insofar as "development velocity" has been redefined to be "typing speed."

With LLMs, you can type so much faster! So we should be going faster! It feels faster!

(We are not going faster.)

But your definition, the right one, is spot on. The pace of learning and decisions is exactly what drives development velocity. My one quibble is that if you want to learn whether something is worth doing, implementing it isn't always the answer; even within implementation, a prototype and a production-quality build are different things. But yeah, broadly, you need to test and validate as many _ideas_ as possible in order to make as many correct _decisions_ as possible.

That's one place I'm pretty bullish on AI: using it to explore/test ideas, which otherwise would have been too expensive. You can learn a ton by sending the AI off to research stuff (code, web search, your production logs, whatever), which lets you try more stuff. That genuinely tightens the feedback loop, and you go faster.

I wrote a bit more about that here: https://tern.sh/blog/you-have-to-decide/

replies(4): >>45139232 #>>45139283 #>>45139863 #>>45140155 #
add-sub-mul-div ◴[] No.45139232[source]
I think people are largely split on LLMs based on whether they've reached a point of mastery where they can already work close to as fast as they can think, in which case the tech slows them down rather than accelerating them.
replies(2): >>45139589 #>>45145091 #
no_wizard ◴[] No.45139589[source]
The verbose LLM approach that Cursor and some others have taken really annoys me. I would prefer that it simply gave me the results (written out to files, changes to files, or whatever the appropriate medium is) and only let me inspect the verbose steps it took if I ask to.

That’s what slows me down with AI tools, and it's why I ended up sticking with GitHub Copilot, which does not do any of that unless I prompt it to.

replies(3): >>45142018 #>>45142053 #>>45143248 #
daliusd ◴[] No.45143248[source]
So you want Aider, Claude Code, or opencode.ai, it seems. I use opencode.ai a lot nowadays and am really happy and productive with it.
replies(2): >>45145124 #>>45162442 #
1. tharkun__ ◴[] No.45145124{3}[source]
I really wanted to use Aider. But it's impossible. How do people actually use it?

Like, I gave it access to our code base and wanted to try a very simple bug fix. I only told it to look at the one service I knew needed changes, because it says it works better in smaller code bases. It wanted to send so many tokens to Sonnet that I hit the limits before it even started actually doing any coding.

Instant fail.

Then I just ran Claude Code, gave it the same instructions and I had a mostly working fix in a few minutes (never mind the other fails with Claude I've had - see other comment), but Aider was a huge disappointment for me.

replies(1): >>45152381 #
2. daliusd ◴[] No.45152381[source]
I don't know about Aider; I am not using it because it lacks MCP support and has poor GitHub Copilot support (both are important to me). Maybe that will get better in the future, if it is still relevant by then. I usually use opencode.ai with Claude Sonnet 4. Sometimes I try switching to different models, e.g. Gemini 2.5 Pro, but Sonnet is more consistent for me.

It would be good to define what counts as a "smaller code base". Here is what I am working on: a 10-year-old project, full of legacy code, consisting of about 10 services and 10 front-end projects. I have also tried it on a project similar to MUI or Mantine UI, and naturally on many smaller projects. I also tried it on a TypeScript codebase where it failed for me (but it is hard to judge from one attempt). Overall, the question is more about the task than about the code base size: if the task does not involve loading too much context, the code base size might be irrelevant.