Most active commenters
  • thenanyu(16)
  • flail(13)
  • scarface_74(4)
  • franktankbank(3)
  • croes(3)

Development speed is not a bottleneck

(pawelbrodzinski.substack.com)
191 points | by flail | 84 comments
1. thenanyu ◴[] No.45138802[source]
It's completely absurd how wrong this article is. Development speed is 100% the bottleneck.

Just to quote one little bit from the piece regarding Google: "In other words, there have been numerous dead ends that they explored, invalidated, and moved on from. There's no knowing up front."

Every time you change your mind or learn something new and you have to make a course correction, there's latency. That latency is exactly what development velocity determines. The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise. The bottleneck for that is 100% development speed.

If you can shrink your iteration time, then there are fewer meetings trying to determine prioritization. There are fewer discussions and bargaining sessions you need to do. Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.

If you can shrink your iteration time between versions 2 and 3, between versions 3 and 4, and so on, the advantage compounds over your competitors. You find promising solutions earlier, which lead to new promising solutions earlier. Over an extended period of time, this is how you build a moat.
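
To put rough, purely illustrative numbers on the compounding: halving a 4-week cycle to 2 weeks gives you 26 learning cycles a year instead of 13. If each cycle compounds even a 2% improvement, that's 1.02^26 ≈ 1.67x versus 1.02^13 ≈ 1.29x after one year, and the gap keeps widening.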

replies(13): >>45139053 #>>45139060 #>>45139417 #>>45139619 #>>45139814 #>>45139926 #>>45140039 #>>45140332 #>>45140412 #>>45141131 #>>45144376 #>>45147059 #>>45154763 #
2. trjordan ◴[] No.45139053[source]
This article is right insofar as "development velocity" has been redefined to be "typing speed."

With LLMs, you can type so much faster! So we should be going faster! It feels faster!

(We are not going faster.)

But your definition, the right one, is spot on. The pace of learning and decisions is exactly what drives development velocity. My one quibble is that if you want to learn whether something is worth doing, implementing it isn't always the answer. Even within that, prototyping is different from a production-quality implementation. But yeah, broadly, you need to test and validate as many _ideas_ as possible, in order to make as many correct _decisions_ as possible.

That's one place I'm pretty bullish on AI: using it to explore/test ideas, which otherwise would have been too expensive. You can learn a ton by sending the AI off to research stuff (code, web search, your production logs, whatever), which lets you try more stuff. That genuinely tightens the feedback loop, and you go faster.

I wrote a bit more about that here: https://tern.sh/blog/you-have-to-decide/

replies(4): >>45139232 #>>45139283 #>>45139863 #>>45140155 #
3. tristor ◴[] No.45139060[source]
This, so much. As an engineer turned PM, I am usually sympathetic to the idea that doing more discovery up front leads to better outcomes, but the simple reality is that it's hard to try anything, make any bets, or even do sure wins when the average development lifecycle is 12-18 months to get something released in a large organization and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities. Development velocity basically trumps everything, after basic sanity checks on the cost/benefit tradeoffs, because you can just try things and if it doesn't work you try something else.

This is /especially/ true in software in 2025, because most products are SaaS or subscription based, so you have a consistent revenue stream that can cover ongoing development costs which gives you the necessary runway to iterate repeatedly. Development costs then become relatively stable for a given team size and the velocity of that team entirely determines how often you can iterate, which determines how quickly you find an optimal solution and derive more value.

replies(2): >>45139158 #>>45147313 #
4. esseph ◴[] No.45139158[source]
> it's hard to try anything, make any bets, or even do sure wins when the average development lifecycle is 12-18 months to get something released in a large organization and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities.

This has been my experience as well :/

5. add-sub-mul-div ◴[] No.45139232[source]
I think people are largely split on LLMs based on whether they've reached a point of mastery where they can already work close to as fast as they think, so the tech would slow them down rather than accelerate them.
replies(2): >>45139589 #>>45145091 #
6. skydhash ◴[] No.45139283[source]
Naur’s theory of programming has always felt right to me. Once you know everything about the current implementation, planning and decision making can be done really fast, and there’s not much time lost on actually implementing prototypes and dead ends (learning with extra steps).

It’s very rare to not touch up code, even when writing new features. Knowing where to do so in advance (and planning to not have to do that a lot) is where velocity comes from. AI can’t help with that.

replies(2): >>45140154 #>>45145405 #
7. jayd16 ◴[] No.45139417[source]
When they say dev speed they mean the coding the AI can do.

It's agreed that testing, evaluating, learning and course correcting are what takes the time. That's the entire point being made.

replies(1): >>45139677 #
8. no_wizard ◴[] No.45139589{3}[source]
The verbose LLM approach that Cursor and some others have taken really annoys me. I would prefer if it simply gave me the results (written out to files, changes to files or whatever the appropriate medium is) and only let me introspect the verbose steps it took if I request to do so.

That’s what slows me down with AI tools and why I ended up sticking with GitHub Copilot, which does not do any of that unless I prompt it to

replies(3): >>45142018 #>>45142053 #>>45143248 #
9. Aurornis ◴[] No.45139619[source]
> It's completely absurd how wrong this article is. Development speed is 100% the bottleneck.

The current trend in anti-vibe-coding articles is to take whatever the vibe coding maximalists are saying and then stake out the polar opposite position. In this case, vibe coding maximalists are claiming that LLM coding will dramatically accelerate time to market, so the anti-vibe-coding people feel like they need to claim that development speed has no impact at all. Add a dash of clickbait (putting "development speed" in the headline when they mean typing speed) and you get the standard LLM war clickbait article.

Both extremes are wrong, of course. Accelerating development speed is helpful, but it's not the only factor that goes into launching a successful product. If something can accelerate development speed, it will accelerate time to market and turnaround on feature requests.

I also think this mentality appeals to people who have been stuck in slow moving companies where you spend more time in meetings, waiting for blockers from third parties, writing documents, and appeasing stakeholders than you do shipping code. In some companies, you really could reduce development time to 0 and it wouldn't change anything because every feature must go through a gauntlet of meetings, approvals, and waiting for stakeholders to have open slots in their calendars to make progress. For anyone stuck in this environment, coding speed barely matters because the rest of the company moves so slow.

For those of us familiar with faster moving environments that prioritize shipping and discourage excessive process and meetings, development speed is absolutely a bottleneck.

replies(2): >>45140333 #>>45147277 #
10. thenanyu ◴[] No.45139677[source]
Sure, but the actual lag from "I have an idea worth trying" to "here's a working version people can interact with" is one of the larger pieces of latency in that entire process.

You can't test or evaluate something that doesn't work yet.

11. franktankbank ◴[] No.45139814[source]
Feedback from customers takes the longest time.
replies(1): >>45140598 #
12. giancarlostoro ◴[] No.45139863[source]
I can agree with this sentiment. It does not matter how insanely good LLMs become if you cannot assess their output quickly enough. You will ALWAYS want a human to verify, validate, and test the software. There could be a ticking timebomb in there somewhere.

Maybe the real skynet will kill us with ticking time bomb software bugs we blindly accepted.

replies(3): >>45140469 #>>45140953 #>>45143958 #
13. seneca ◴[] No.45139926[source]
Exactly the comment I came to make after reading this article. The article is basically claiming that "trying different things until something works" is what takes time, but the actual act of "trying things" requires development time. I can't see how someone can think about this topic for this long, which the author clearly has, and come to this conclusion.

Perhaps I've just misunderstood the point, but it seems like a nonsensical argument.

replies(2): >>45140379 #>>45140451 #
14. flail ◴[] No.45140039[source]
I would agree if the only way to achieve (digital product) success were to implement as many versions of software as possible. That's not true.

The whole Lean Startup was about figuring out how to validate ideas without actually developing them. And it is as relevant as ever, even with AI (maybe, especially with AI).

In fact, it's enough to look at the appalling rate of product success. We commonly agree that 90% of startups fail. The majority of that cohort have built things that shouldn't have been built at all in the first place. That's utter waste.

If only, instead of focusing on building more, they had stopped and reevaluated whether they were building the right thing in the first place. Yet, most startups are completely immersed in the "development as a bottleneck" principle. I say that from our own experience of 20+ years of helping such companies build their early-stage products. The biggest challenge? Convincing them to build less, validate, learn, and only then go back to further development.

When it comes to existing products, it gets even more complex. The quote from Leah Tharin explicitly mentions waiting weeks or months until they were able to get statistically significant data. What follows is that within that part of experimentation, they were blocked.

Another angle to take a look at it is the fundamental difference in innovation between Edison/Dyson and Tesla.

The first duo was known for "I have not failed. I found 10,000 ways that don't work." They were flailing around with ideas till something eventually clicked.

Tesla, in contrast, would be at the Einstein end of the spectrum with "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about [or in Tesla's case, making] solutions."

While most of the product companies would be somewhere in between, I'd argue that development is a bottleneck only if we are very close to Edison/Dyson's approach.

replies(1): >>45140358 #
15. flail ◴[] No.45140154{3}[source]
I wouldn't dispute that part, although there are definitely limits to how big a chunk of a big product a single brain can really grasp technically. And when the number of people involved in "grasping" grows, so does the coordination/communication tax. I digress, though.

We could go with that perception, however, only if we assume that whatever is in the backlog is actually the right thing to build, i.e., that every feature has value to the customers and (even better) that they are sorted from the most valuable to the least valuable.

In reality, many features have negative value, i.e., they hurt performance, customer satisfaction, or any other key metric a company employs.

The big question: can we check some of these before we actually develop a fully-fledged feature? The answer, very often, is positive. And if we follow up with an inquiry about how to validate such ideas without development, we will find a way more often than not.

Teresa Torres' Continuous Discovery Habits is an entire book about that :)

One of her recurring patterns is the Opportunity Solution Tree, which is a way of navigating across all the possible experiments to focus on the right ones (and ignore, i.e., not develop, all the rest).

16. ajuc ◴[] No.45140155[source]
It's like the speed of light in different mediums. It's not that photons slow down. They just hit more stuff and spend more time getting absorbed and re-emitted.

Better developer wastes less time solving the wrong problem.

17. ◴[] No.45140332[source]
18. flail ◴[] No.45140333[source]
Since I didn't mention the context in the article: it is a small agency whose target customers are early-stage (ideally earliest-stage) product startups.

We have literally one half-hour-long sync meeting a week. The rest is as lightweight as possible, typically averaging below 10 minutes daily with clients (when all the decisions happen on the fly).

I've worked in the corpo world, too, and it is anything but.

We do use vibe coding a lot in prototyping. Depending on the context, we sometimes have a lot of AI-agent-generated code, too.

What's more, because of working on multiple projects, we have a fairly decent pool of data points. And we don't see much of a speed improvement at the level of a whole project (I wrote more on it here: https://brodzinski.com/2025/08/most-underestimated-factor-es...).

However, developers sure report their perception of being more productive. We do discuss how much these perceptions are grounded in reality, though. See this: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... and this: https://substack.com/home/post/p-172538377

So, I don't think I'm biased toward bureaucratic environments, where developers code in MS Word rather than VS Code.

But these are all just one dimension of the discussion. The other is a simple question: are there ways of validating ideas before we turn them into implemented features/products?

The answer has always been a wholehearted "yes".

If development pace were all that counted, the Googles and Amazons of this world would be beating the crap out of every aspiring startup in any niche the big tech cared about, even remotely. And that simply is not happening.

Incumbents are known to be losing ground, and old-school behemoths that still kick butt (such as IBM) do so because they continuously reinvent their businesses.

replies(3): >>45140496 #>>45140889 #>>45141668 #
19. thenanyu ◴[] No.45140358[source]
The whole point of lean startup was to route around the bottleneck of development velocity.
replies(1): >>45141414 #
20. flail ◴[] No.45140379[source]
If only "trying things" always equaled "developing things". There's a whole body of knowledge (under the Lean Startup umbrella) that argues otherwise.

Do we always have to build it before we know that it will work (or, in 9 cases out of 10, that it will not work)?

Even more so, do we have to build a fully-fledged version of it to know?

If yes, then I agree, development is the bottleneck.

replies(1): >>45140609 #
21. epolanski ◴[] No.45140412[source]
I don't buy it.

Prototyping was never the issue.

The lessons you're talking about come from stressing applications and their design, which requires users to stress them.

replies(1): >>45140454 #
22. croes ◴[] No.45140451[source]
> trying different things until something works

That sounds like an awful way to design software. Trial and error isn’t engineering, but it explains the current state of software security.

replies(3): >>45140564 #>>45140617 #>>45141565 #
23. thenanyu ◴[] No.45140454[source]
So give it to users?
replies(1): >>45140513 #
24. thenanyu ◴[] No.45140469{3}[source]
In most scenarios I can tell you if I like or dislike a feature much faster than it takes a developer to build it
replies(1): >>45140922 #
25. thenanyu ◴[] No.45140496{3}[source]
The map is not the territory. Validating against anything other than the actual feature is a lossy proxy. It may be an acceptable tradeoff because building the feature is too costly but that’s the whole discussion at hand.
replies(1): >>45141330 #
26. bob1029 ◴[] No.45140513{3}[source]
There is often a severe opportunity cost associated with experimenting on your customer base.
replies(2): >>45143013 #>>45143497 #
27. seneca ◴[] No.45140564{3}[source]
Sure, and that is more my own clunky paraphrasing than anything the article states. Iterating and testing to find a fit for customers is the business/product side of software. How you execute on those iterations is engineering.
replies(1): >>45140602 #
28. thenanyu ◴[] No.45140598[source]
Get it sooner then! By getting to market faster
replies(1): >>45141304 #
29. croes ◴[] No.45140602{4}[source]
But the business/product side is the shallow side; customers rarely care about what happens behind the curtain. And most customer needs are pretty similar in the backend.
30. thenanyu ◴[] No.45140609{3}[source]
The lean startup offers a lot of lossy proxies for building and releasing things because it presupposes that building things takes a long time
replies(1): >>45141488 #
31. thenanyu ◴[] No.45140617{3}[source]
Trying things and changing if it doesn’t work is the only way I know how to build software.

What would you do? Don’t change?

replies(1): >>45140663 #
32. croes ◴[] No.45140663{4}[source]
The question is, why doesn’t it work? Erroneous code, erroneous algorithm, missing feature in the underlying infrastructure?

The effort it takes to implement a feature makes it more likely that you think twice before you start.

If the effort goes to zero, so does the thinking.

We will turn from programmers to just LLM customers sooner or later.

Because testing if it works can be done by non-programmers.

replies(1): >>45147191 #
33. scarface_74 ◴[] No.45140889{3}[source]
BigTech is “beating startups”. 99% of all startups are just acquisition plays with no real business model.

Check out all of the bullshit “AI” companies that YC is funding.

BigTech is not “losing ground”; all of them are reporting increasing revenues and profits.

replies(1): >>45142126 #
34. k__ ◴[] No.45140922{4}[source]
If it just came down to the "idea guy liking or disliking a feature" things would be quite easy...
replies(1): >>45140980 #
35. ACCount37 ◴[] No.45140953{3}[source]
The threshold of supervision keeps rising - and it's going to keep rising.

GPT-2 was barely capable of writing two lines of code. GPT-3.5 could write a simple code snippet, and be right more often than it was wrong. GPT-4 was a leap over that, enabling things like "vibe coding" for small simple projects, and GPT-5 is yet another advancement in the same direction. Each AI upgrade brings forth more capabilities - with every upgrade, the AI can go further before it needs supervision.

I can totally see the amount of supervision an AI needs collapsing to zero within our lifetimes.

replies(2): >>45141183 #>>45145223 #
36. thenanyu ◴[] No.45140980{5}[source]
why doesn't it? it doesn't have to be you or me personally, it could be a representative sample of our users
replies(1): >>45141955 #
37. chrisweekly ◴[] No.45141131[source]
Yes - and tightening the OODA (Observe, Orient, Decide, Act) loop is essential for organizational velocity.
38. gyrovagueGeist ◴[] No.45141183{4}[source]
In the middle term, I almost feel less productive using modern GPT-5/Claude Sonnet 4 for software dev than prior models, precisely because they are more hands off and less supervised.

They generate so much code that often passes initial tests, looks reasonable, and fails in nonhuman ways, in a pretty opinionated style tbh.

I have less context (and need to spend much more effort and supervision time to get up to speed) to fix, refactor, and integrate the solutions than if I were only trusting short, few-line windows at a time.

replies(1): >>45141293 #
39. warkdarrior ◴[] No.45141293{5}[source]
> I almost feel less productive using modern GPT-5/Claude Sonnet 4 for software dev than prior models, precisely because they are more hands off and less supervised.

That is because you are trained in the old way of writing code: manual crafting of software line by line, slowly, deliberately, thoughtfully. New generations of developers will not use the same workflow as you, just like you do not use the same workflow as folks who programmed punch cards.

replies(1): >>45141398 #
40. franktankbank ◴[] No.45141304{3}[source]
It's one variable in the sum of all the times. You are asserting, without much evidence, that the bottleneck is the dev turnaround time. I think for a lot of people there's evidence that dev is about 10% or less of the back and forth. I've sat on my hands for months while requirements got sorted, and no, this wasn't something I could just jump into, which I'm sure you'd (wrongly) suggest is the right approach. Have you ever been involved in a profitable project?
replies(1): >>45142055 #
41. flail ◴[] No.45141330{4}[source]
Sure. And yet, last time I checked, we've had plenty of applications for maps.

I like this metaphor. Looking at a map, we may get a pretty good understanding of whether it's a place we'd like to spend time, say, on vacation.

We don't physically go to a place to scrutinize it.

And we don't limit ourselves to maps only. We check reviews, ask friends, and what have you. We do cheap validation before committing to a costly decision.

If we planned vacations the way we build software products, we'd just go there (because the map is not the territory), learn that the place sucks, and then we'd complain that finding good vacation spots is costly and time-consuming. Oh, and we'd mention that traveling is a bottleneck in finding good spots.

replies(1): >>45142044 #
42. _se ◴[] No.45141398{6}[source]
No, it's because reading code is slower than writing it.

The only way these tools can possibly be faster for non-trivial work is if you don't care enough about the output to even read it. And if you can do that and still achieve your goal, chances are your goal wasn't that difficult to begin with.

That's why we're now consistently measuring individuals to be slower using these tools even though many of them feel faster.

replies(2): >>45142008 #>>45156360 #
43. flail ◴[] No.45141414{3}[source]
I heard that before. No, Lean Startup is not about working around the cost of software development.

It is about designing good experiments, validating, and learning, so that when we're down to development, we build something that's way more likely to succeed.

The fact that we were advised to build non-technical experiments is but a small part. And with the current AI capabilities, we actually have a new power tool for prototyping that falls neatly into the whole puzzle.

Here's a bit more elaborate argument (sorry for a LinkedIn link): https://www.linkedin.com/posts/pawelbrodzinski_weve-already-...

replies(1): >>45143580 #
44. flail ◴[] No.45141488{4}[source]
I would actually challenge you to read/reread Lean Startup with the following filter:

Disregard parts that explicitly assume that they are relevant only because, in 2013, development was expensive. There are very few parts that you would throw out.

45. flail ◴[] No.45141565{3}[source]
In no part was that suggestion addressed to software design/architecture.

It is telling that, while the article's theme is product management (and its relationship with the pace of development), that context is largely ignored in some comments. It's as if the article's scope was purely what happens within the IDE and/or AI agent of choice.

The whole point is that the perspective should necessarily be broader. Otherwise, we make it a circular argument, really: development is a bottleneck of development.

Well, hard to disagree on that.

46. com2kid ◴[] No.45141668{3}[source]
I needed to make a landing page for an ad campaign to test out an idea for PMF.

Claude crapped out a workable landing page in ~30 seconds of prompting. I updated the copy on the page, total time less than an hour.

The odds of me spending more than an hour just picking a color theme for the page or finding the SVG icons it used are pretty much 100%.

------------

I had a bug in some async code, it hit rarely but often enough it was noticeable. I had narrowed down what file it was in, but after over an hour of staring at the code I wasn't finding it.

Popped into cursor, asked it to look for async bugs in the current file. "You forgot to clean up a resource on this line here."

Bug fixed.
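
For the curious, a reconstructed sketch of that class of bug (not the actual code). A one-shot promise wrapper registers two listeners; "once" auto-removes whichever one fires, but the sibling stays registered unless you remember to remove it yourself:

    import { EventEmitter } from "node:events";

    // Sketch of the bug class, not the original code.
    function nextMessage(bus: EventEmitter): Promise<string> {
      return new Promise((resolve, reject) => {
        const onData = (msg: string) => {
          bus.off("error", onError); // the easy-to-forget cleanup
          resolve(msg);
        };
        const onError = (err: Error) => {
          bus.off("data", onData); // ditto for the failure path
          reject(err);
        };
        bus.once("data", onData); // "once" only removes the listener that fired
        bus.once("error", onError);
      });
    }

Without those two off() calls, every call leaks a handler, the kind of thing that only surfaces rarely and is easy to stare past.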

------------

"Here is my nginx config, what is wrong with the block I just added for this new site I'm throwing up?"

------------

"Write a regex to do nnnnnn"

------------

"This page isn't working on mobile, something is wrong, can you investigate and tell me what the issues may be?"

Oh, that won't go well: all of the models get super confused about CSS at some point and end up in doom spirals, applying incorrect fixes again and again.

> Googles and Amazons of this world would be beating the crap out of every aspiring startup in any niche the big tech cared about, even remotely. And that simply is not happening.

This is already a well explored and understood space, to the extent that big tech cos have at times spun teams off to work independently to gain the advantage of startup-like velocities.

The more infra you have, the more overhead you have. Deploying a company's first service to production is really easy, no infra needed, no dev-ops, just publish.

Deploying the 5th service, eh.

Deploying the 50th service, well, by now you need to have a host of meetings before work even starts to make sure you aren't duplicating effort and that the libraries you use mesh with the department's strategic technical vision. By the time those meetings are done, a startup will have already put 3 things into prod.

The communication overhead within large orgs is also famously non-linear.

I spent 10 years working at Microsoft, then 3 years at HBO Max (lean tech company 200 engineers, amazing dev ops), and now I'm working at startups of various sizes.

At Microsoft, pre-Azure, it could take weeks just to get a machine provisioned to test an idea out on. Actually getting a project up and running in a repo was... hard at times. Build systems were complex, tooling was complex, and you sure as hell weren't getting anything pushed to users without a lot of checks in place. Now, many of those checks were in place for damn good reasons; wrongly drawn lines on a map inside Windows are a literal international incident[1], and we had separate localizations for different variants of English around the world. (And I'd argue that Microsoft's agility at deploying software around the entire world at the same time is unmatched; the people I worked with there were amazing at sorting through the cultural and legal problems!)

Also if Google launches a new service and it goes down from too much traffic, it is embarrassing. Everything they do has to be scalable and load balanced, just to avoid bad press. If a startup hits the front page of HN and their website goes down from being too popular, they get to write a follow up blog post about how their announcement was so damn popular their site crashed! (And if they are lucky, hit the front page of HN again!)

The differences in designing for levels of scale is huge.

At Microsoft it was "expect potentially a billion users." At HBO it was "expect tens of millions of users." At many startups it is "if we hit 10k users we'll turn a profit and we can figure out how to scale out later."

10K DAU is a load balancer and 3 instances of NodeJS (for rolling updates), each running on a potato of a CPU.

> So, I don't think I'm biased toward bureaucratic environments, where developers code in MS Word rather than VS Code.

I've worked in those environments, and the level of engineering quality can be much higher. The number of bugs that can be hammered out and avoided in spec reviews is huge. Technology designs end up being serviceable for years to decades instead of "until the next rewrite". The actual code tends to flow much faster as well, or at least as fast as it can flow in the large, sprawling code bases that exist at big tech companies. At other times, those specs are needed so that one has a path forward while working through messy legacy code bases.

Both styles have their place. Sometimes you need to iterate quickly, get lots of code down, and see what works; other times it is worth thinking through edge cases, usage scenarios, and performance characteristics. Heck, I've done memory bus calculations for different designs. When you are working at that level you don't just "write code and see what works"; you first spend a few days (or a week!) with some other smart engineers and try to narrow down what you should even be trying to do!

[1]https://www.upi.com/Archives/1995/09/09/Microsoft-settles-In...

replies(1): >>45142258 #
47. cestith ◴[] No.45141955{6}[source]
So if you wait to put together a representative sample of users and gather the data long enough for the numbers to matter, you’ve gated further changes. If you’ve gated further changes for a week, why does it matter that the feature change was done in an hour or a day?
replies(1): >>45142084 #
48. mwigdahl ◴[] No.45142008{7}[source]
"Consistently"? Is there more than just the one METR study that's saying this?
replies(1): >>45148718 #
49. cestith ◴[] No.45142018{4}[source]
I want a merge request with a short, meaningful comment and the diffs just like I’d get from a human. Then I want to be able to discuss the changes if they aren’t exactly what’s needed, just like with a human. I don’t want to have to hold its hand and I don’t want to have to pair program everything with a chatbot. It also needs to be able to show a logic diagram, a data flow diagram, and a dependency tree. If an agent can’t give me that, it’s not really ready to work as a developer for me.
50. thenanyu ◴[] No.45142044{5}[source]
The best way to know if you would like a new restaurant or experience is to actually try it. We rely on reviews and maps and directories because trying it is too costly. If trying it wasn't costly, we would just try it instead of relying on proxies.
replies(1): >>45147287 #
51. DenisM ◴[] No.45142053{4}[source]
LLMs might rely on their own verbosity to carry the conversation in a stable direction.
52. thenanyu ◴[] No.45142055{4}[source]
The only reason requirements need to be sorted out is because development effort is perceived to be expensive. If you reduce the development effort significantly, then you can just build it instead of talking about building it.
replies(1): >>45142364 #
53. thenanyu ◴[] No.45142084{7}[source]
Releasing it to users does not take a long time. Randomly select 5% of your user base and give them the feature. If your development process was mature, this would be a button you could push in your deployment env.
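
A minimal sketch of what that button could drive (hypothetical names, no particular feature-flag vendor assumed): hash a stable user id into 100 buckets, so the same ~5% of users see the feature on every request.

    import { createHash } from "node:crypto";

    // Hypothetical helper, not any particular feature-flag library.
    // Same user + feature key always lands in the same bucket, so the
    // rollout cohort stays stable across requests and deploys.
    function inRollout(userId: string, featureKey: string, percent: number): boolean {
      const digest = createHash("sha256").update(`${featureKey}:${userId}`).digest();
      const bucket = digest.readUInt32BE(0) % 100; // 0..99
      return bucket < percent;
    }

    // inRollout("user-42", "new-checkout", 5) -> true for roughly 5% of users
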
54. flail ◴[] No.45142126{4}[source]
Of course, Big Techs have the leverage of their bottomless coffers. What they can't develop, they buy. What was the last successful product idea coming from, say, Facebook?

Or on a smaller scale, what's the last genuine Atlassian success?

Yet, when it comes to product innovation, the momentum is always on the side of the new players. Always has been.

Project management/work organization software? Linear. Async communication? Slack. Social Media? TikTok. One has to be curious how Zoom is doing so well, given that all the big competition actually controls the channels for setting up meetings. Self-publishing? Substack. Even with AI, everyone plays catch-up with Sam Altman, and many of the most prominent companies are newcomers.

We could go on and on.

Yes, Big Techs will survive because they have enough momentum to survive events such as the Ballmer-era MS. But that doesn't mean they lead product innovation.

And it's expected. Conflicting priorities, growing bureaucracies, shareholders' expectations, old business lines (and more), all make them less flexible.

replies(1): >>45142383 #
55. flail ◴[] No.45142258{4}[source]
Big Techs do have ways of rolling out new services step by step.

Paul Buchheit's stories about Gmail and AdSense are good examples. I was an early Gmail user when it was invitation-only and invitations were sparingly distributed (only as fast as the infrastructure could handle).

So, while I understand the difference in PR costs, it's not like they don't have tools to run smaller experiments.

I agree with the huge bureaucracy cost. On the other hand, they really have (relatively) infinite resources if they care to deploy them. And sometimes they do. And they still fail.

They often fail even when they try a Skunk Works-like approach. Google Wave was famously developed as a corporate Lean Startup (before there was Lean Startup). It was a disaster, precisely because they did close to zero validation pre-release.

A side note: huge flop as it was (although Buzz and Google+ were bigger), it didn't hurt them long-term in PR or reputation.

replies(1): >>45142841 #
56. franktankbank ◴[] No.45142364{5}[source]
Sounds like you need a trillion monkeys on typewriters. Easy!
57. scarface_74 ◴[] No.45142383{5}[source]
Again let’s look at YC’s latest batch of companies. How many of them are doing anything “innovative”?

An innovative product is one where customers in aggregate are willing to pay more for it than it costs to create and run. Any idiot can sell a bunch of dollar bills for 95 cents.

Going back to the latest batch of YC companies, their value play can easily be duplicated by any company in their vertical, either by throwing a few engineers on it or by creating a statement of work for the consulting company I work for, and I can pull together a few engineers and knock it out in a few months, and they will already have customers to sell it to.

There was one recent YC company (of course one of the BS AI companies) that was hiring a "founding full stack engineer" for $150K. It looks like they were two non-technical "serial entrepreneurs" without even an MVP that YC threw money at.

You can’t imagine how many times some harebrained, underfunded startup reached out to me to be a “CTO” that paid less than I made as a mid-level employee at BigTech, with the promise of Monopoly money “equity”.

replies(2): >>45142996 #>>45147886 #
58. com2kid ◴[] No.45142841{5}[source]
Google had 3,000 employees when Gmail launched. Now they have over 100,000 employees!

People criticize Microsoft's historical fiefdom model, and it had its issues, but it also allowed orgs to find what worked for them and basically run independently. Of course it also had orgs fighting with each other and killing off good products.

Xbox was also a skunk works project at Microsoft (a few good books have been written about it!) and so was Microsoft Band. Xbox succeeded, Band failed for a number of reasons not related to the product or execution itself. (Politics and some historical corporate karma).

IMHO the only company good at deploying infinite resources quickly is Apple: 1 billion developing the first Apple Watch (Microsoft spent under 50 million on two generations of Band!), and then they kept going after the market, even though the first version was kinda meh. In comparison, Google Wear was on-again, off-again for years until they finally took it seriously recently. I'm sure they spent lots of $, but the end result is nowhere near what Apple pulled off.

replies(1): >>45147998 #
59. thenanyu ◴[] No.45142996{6}[source]
VCs generally expect some small single digit % of their companies to succeed and return the fund

If 90% of the companies fail or are outright fraudulent it doesn’t really matter

replies(1): >>45143296 #
60. thenanyu ◴[] No.45143013{4}[source]
Do it responsibly then?
61. daliusd ◴[] No.45143248{4}[source]
So you want Aider, Claude Code or opencode.ai it seems. I use opencode.ai a lot nowadays and am really happy and productive.
replies(2): >>45145124 #>>45162442 #
62. scarface_74 ◴[] No.45143296{7}[source]
And how many of those “succeed” by creating good products compared to just being acquihires where after acquisition you soon see a blog post about “our amazing journey”?
replies(1): >>45143323 #
63. thenanyu ◴[] No.45143323{8}[source]
Single digit percentage, like I said. Often low single digits
64. estimator7292 ◴[] No.45143497{4}[source]
We've been doing this for fifty years, please catch up with the times.
65. estimator7292 ◴[] No.45143580{4}[source]
Distinction without a difference. Your SWE costs blow up because development velocity is low and labor is a fixed cost. You reduce costs by increasing velocity, which in this case is achieved by aiming your development better.

Move faster and move better (to move faster) are the same thing. You reduce costs by going faster, and with lean you go faster by avoiding time wasters.

66. IanCal ◴[] No.45143958{3}[source]
That doesn’t require developer time though.

Also, that time is needed regardless. Do you think it’s the majority of the time related to releasing a feature?

67. wyum ◴[] No.45144376[source]
I think you and the article actually agree and you are arguing only with their use of the word "development."

The article uses "development" to refer only to the part where code is generated, while you are saying "development" is the process as a whole.

You both agree that latency in the real-world validation feedback loop leads to longer cycles and fewer promising solutions and that is the bottleneck.

68. tharkun__ ◴[] No.45145091{3}[source]
I can't. The LLM (Claude Code really) is just too slow. It is just so slow at doing the things I ask it to do once I'm at the review stage.

Like the initial plan always sounds great and looks great. Then it goes off to actually make the changes and proclaims victory after I've left it alone to do other stuff, because it takes a while. Then I review what it did and what it didn't do, and I inevitably find that it only did half of what it said it would do, and did half of what it did do incorrectly, despite what it told me it would do.

The use case here is a large code base that needs changes. Not new feature development on a green field (or a green corner of an established product). And it's just so unbearably frustrating. It's like giving the task to a Junior on probation. I tell them something, they go off for 10 minutes and tell me they're done and I look and find seven holes I need to tell them to fix. But they aren't the Junior that picks up stuff and gets better and needs less supervision. Instead it seems like the context gets more and more polluted and the Junior gets closer and closer to failing his probation.

Many grey hairs added recently, because yeah, we also "have to be faster by using AI" now ...

69. tharkun__ ◴[] No.45145124{5}[source]
I really wanted to use Aider. But it's impossible. How do people actually use it?

Like, I gave it access to our code base, wanted to try a very simple bug fix. I only told it to look at one service I knew needed changes, because it says it works better in smaller code bases. It wanted to send so many tokens to sonnet that I hit the limits before it even started actually doing any coding.

Instant fail.

Then I just ran Claude Code, gave it the same instructions and I had a mostly working fix in a few minutes (never mind the other fails with Claude I've had - see other comment), but Aider was a huge disappointment for me.

replies(1): >>45152381 #
70. daxfohl ◴[] No.45145223{4}[source]
I could see it happening in a year or two. Especially in backend. There's only so many different architecture patterns we use, and an LLM will have access to every one that has ever been deployed, every document, every gripe, every research paper, etc.

I mean, I think ultimately the state space in designing a feature is way smaller than, say, go (the game). Maybe a few hundred common patterns and maybe a billion reasonable ways to combine them. I think it's only a matter of time before we ask it to design a feature, and it produces five options that are all better than what we'd have come up with.

71. himeexcelanta ◴[] No.45145405{3}[source]
Typing syntax and dealing with language issues takes a lot of mental overhead that AI mostly solves in the right hands. It’s not zero!
72. jiggawatts ◴[] No.45147059[source]
> Because just developing the variations would be faster than all of the debate. So the amount of time you waste in meetings and deliberation goes down as well.

Thank you for articulating something I knew but haven't been able to express as eloquently.

It frustrates me to no end to watch half a dozen non-technical bureaucrats argue for days about something that can be tried (and discarded) in a few hours with zero consequences.

"Let's write a position paper so that everyone involved can agree before we do anything."

Noooo! Just do it! See if it works in practice! Validate the marketing! Kick the tyres! Go for a test drive. Just. Get. Behind. The. Wheel.

73. chii ◴[] No.45147191{5}[source]
> Because testing if it works can be done by none programmers

like testing whether a building is structurally sound can just be done by the inhabitants!

74. didibus ◴[] No.45147277[source]
> so the anti-vibe-coding people feel like they need to claim that development speed has no impact at all

Strange, I'd been more under the impression that this is an argument from pro-vibe-coders. As more data comes in, the "productivity increases" of AI are not showing up as expected. So when people ask how come things are not getting done faster even though you say you are 10x faster at coding, the vibe-coders answer that coding isn't the bottleneck, as opposed to capitulating and admitting that maybe they're not that much faster at coding after all.

75. flail ◴[] No.45147287{6}[source]
OK, let's assume you can get food for free (or close enough). Like if you were super rich, and the cost was absolutely marginal for you.

How many dinners a day can you have?

You would still rely on alternative proxies, like recommendations or reviews.

76. didibus ◴[] No.45147313[source]
> and they're allergic to automation, hiring higher quality engineers, and hiring more engineers to improve velocities

I think there's another issue, though it could relate to your first two statements here. Even to try ideas, to explore the space of solutions, you need to have ideas to try. When entering development, you need clarity on what you're trying. It's very hard to make decisions on even a single attempt. I see engineers spend an entire task simply not sure what the task is really about.

And in a way, the coding agents need even more clarity in what you ask of them to deliver good results.

So even inside of what we consider "development" or "coding", the bottleneck is often: "what am I supposed to do here?" and not so much "I don't know how to do this" or "I have so much to implement".

This becomes obvious once you throw more engineers at the problem and you can't break up the work, because you have no clue what so many people could even all do. Knowing what all the needed tasks even are is hard and a big bottleneck.

77. flail ◴[] No.45147886{6}[source]
The last YC batch was like ~170 companies, correct? Each year, there are like 150 million startups. So let's not take YC's stable for the whole startup ecosystem.

And I'm with you with a critical view on their all-in move toward AI. It's just what all the VCs do, and it's hard to say who's parroting who in this setup (I think that others are parroting YC, but feel free to challenge me on that).

Having said all that, I wouldn't be surprised if a couple of companies from this year's cohort made it big. If you look at YC's biggest successes year by year, you will often (but not always) find a household name.

Was there anyone who predicted these would be the greatest hits? Of course not! That's the whole point of having an investment portfolio. You can be wrong a lot of times if you secure an early investment in a unicorn every other year or so.

Also, "one recent example" of poor investment decision doesn't invalidate 2 decades of rather successful investment portfolios (as a whole, not individually).

In no way is it a YC defense. I'm very critical of the whole startup funding ecosystem, and they are a prominent player. Yet, if they were consistently stupid with their decisions, they wouldn't exist, let alone be the most desired accelerator out there.

Also, if it's that simple to copy what they do and what the companies in their portfolio do, why wouldn't Google et al. take their almost infinite funds and get the competing offers for non-BS ideas up and running in no time?

I bet that if you had an idea that could pay off thousandfold, you'd get enough eager ears to hear you out in any big tech. And still, it's the makeshift mass of startups that come through with new products.

One has to wonder why things like Shopify, Stripe, Zapier, or Figma did not come from the big tech. Each would have an ideal match. Even if you look at the AI landscape, how come Lovable made such a career? After all, they repackage the AI capabilities rented elsewhere. Somehow, with all the ingenuity of building ChatGPT, OpenAI and the rest didn't get it.

replies(1): >>45152579 #
78. flail ◴[] No.45147998{6}[source]
Sure, it was way easier to move at Google in the early 2000s than it is now. Yet, one has to admit they still keep trying. The list of products that they tried and killed doesn't show signs of stagnation: https://killedbygoogle.com/

And that's only the things that they have released. I'd bet that there are lots more that never make it to the public.

And I expect no less from Microsoft, by the way. Microsoft is, in fact, a great case in point of how failed releases don't hurt the company's PR long-term. How many failures have they scored trying to catch up with the missed opportunities of the 2000s? Smartphones & tablets, search, music players, social media.

They were late to move the Office to the cloud, and kept pumping dollars into the Explorer/Edge lost cause, too.

I don't know enough details, but Xbox seems more like an outlier than a norm.

Yet they rebounded with Azure and made some good bets with AI, and are doing better than ever. However, we don't see a stream of new product bets coming from them.

Oh, and on Apple: I wouldn't discount the role of the cult-like following in repeated product success. None of the other big techs has such a relationship with its user base. You don't see many raving fans of Facebook or Google. And you definitely have millions of people who would buy any new Apple product simply because it is a new Apple product.

It's like Joel Spolsky but on a global scale. In the 2000s, whatever Joel Spolsky touched turned into gold. Stack Overflow? Check. Trello? Check. Was there something unique about these products? Details, sure. But the biggest thing was Joel's leverage.

Having run a highly popular blog for developers, he could instantly reach out to his early adopters. Given that many of the readers were actual fans, they'd jump on the opportunity, whatever it was. So the early traction was not a problem (which was especially crucial for the developers' forum).

Scale that up to the big tech context, and you get Steve Jobs.

A side note: I wonder how long it will take Tim Cook to dismantle that. You can already see cracks.

79. _se ◴[] No.45148718{8}[source]
I have measured it myself within my organization, and I know many peers across companies who have done the same. No, I cannot share the data (I wish I could, truly), but I expect that we will begin to see many of these types of studies emerge before long.

The tools are absolutely useful, but they need to be applied in the right places, and they are decidedly not a silver bullet or general-purpose software engineering tool in the manner that they're being billed at present. We still use them despite our findings, but we use them judiciously and where they actually help.

80. daliusd ◴[] No.45152381{6}[source]
I don't know about Aider; I am not using it because of its lack of MCP and poor GitHub Copilot support (both are important to me). Maybe in the future that will get better, if it is still relevant. I am usually using opencode.ai with Claude Sonnet 4. Sometimes I try switching to different models, e.g. Gemini 2.5 Pro, but Sonnet is more consistent for me.

It would be good to define what "smaller code bases" means. Here is what I am working on: a 10-year-old project full of legacy code, consisting of about 10 services and 10 front-end projects. I have also tried it on a project similar to MUI or Mantine UI, and naturally on many smaller projects. I tried it on a TypeScript codebase as well, where it failed for me (but it is hard to judge from one attempt). Overall, the question is more about the task than about code base size. If the task does not involve loading too much context, then code base size might be irrelevant.

81. scarface_74 ◴[] No.45152579{7}[source]
Let’s look at YC’s “successes” as far as the companies that have gone public.

https://medium.com/@kazeemibrahim18/the-post-ipo-performance...

82. j2kun ◴[] No.45154763[source]
> The way to find the right answer isn't to think very hard and miraculously come up with the perfect answer. It's to try every goddamn thing that shows promise.

I have found that spending more time thinking generally reduces the amount of failed attempts. It's amazing what "thinking hard" beforehand can do to eliminate reprioritization scrambling.

83. KronisLV ◴[] No.45156360{7}[source]
> No, it's because reading code is slower than writing it.

This feels wrong to me, unless we qualify the statement with: "...if you want the exact same level of understanding of it."

Otherwise, the bottleneck in development would be pull/merge request review, not writing the code to do something. But almost always it's the other way around: someone works on a feature for 3-5 days, yet the pull/merge request does not spend anywhere near that time in active review. I don't think you need the exact same level of intricate understanding over some code when reviewing it.

It's quite similar with the AI stuff, I often nitpick and want to rework certain bits of code that AI generates (or fix obvious issues with it), but using it for the first version/draft is still easier than trying to approach the issue from zero. Ofc AI won't make you consistently better, but will remove some of the friction and reduce the cognitive load.

84. no_wizard ◴[] No.45162442{5}[source]
At the end of the day I want what my job is willing to pay for, which is a few different flavors of AI tools