
LLM Inevitabilism

(tomrenner.com)
1613 points by SwoopsFromAbove
delichon No.44567913
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
afavour No.44568040
Feels somewhat like a self fulfilling prophecy though. Big tech companies jam “AI” in every product crevice they can find… “see how widely it’s used? It’s inevitable!”

I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.

XenophileJKO No.44568087
We have barely begun to extract the value from the current generation of SOTA models. I would estimate that less than 0.1% of the possible economic benefit has been captured so far, even if the tech effectively stood still.

That is what I find so wild about the current conversation and debate. I have Claude Code toiling away building my personal organization software right now, which uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
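For illustration, the workflow described here (an LLM turning unstructured notes into structured plans and tasks) can be sketched roughly as below. This is a generic sketch, not the commenter's actual software; `call_llm` is a hypothetical stand-in for a real model API call, stubbed with a canned response so the example runs standalone:

```python
import json
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    project: str
    priority: str

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. A real version would hit a model API;
    here it returns a canned JSON response so the sketch is runnable."""
    return json.dumps([
        {"title": "Book dentist appointment", "project": "health", "priority": "high"},
        {"title": "Draft Q3 budget", "project": "finance", "priority": "medium"},
    ])

def extract_tasks(note: str) -> list[Task]:
    # Ask the model for structured output, then parse it into typed records.
    prompt = (
        "Extract tasks from the note below as a JSON array of objects "
        'with keys "title", "project", "priority".\n\n' + note
    )
    raw = json.loads(call_llm(prompt))
    return [Task(**item) for item in raw]

tasks = extract_tasks("need to see the dentist soon, and the Q3 budget is due")
for t in tasks:
    print(f"[{t.priority}] {t.project}: {t.title}")
```

The structured-output-plus-parse step is the core of this kind of tool; everything else (storage, scheduling, UI) is conventional software around it.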

WD-42 No.44568159
I keep hearing this over and over: some LLM toiling away at coding personal side projects and utilities, source code never shared, usually because it's "too specific to my needs". This is the code version of slop.

When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don’t think we’ve seen any evidence of it, in fact we’ve seen the opposite.

XenophileJKO No.44568209
:| I'm an engineer of 30+ years. I think I know good and bad quality. You can't "vibe code" good quality, you have to review the code. However it is like having a team of 20 Junior Engineers working. If you know how to steer a group of engineers, then you can create high quality code by reviewing the code. But sure, bury your head in the sand and don't learn how to use this incredibly powerful tool. I don't care. I just find it surprising that some people have such a myopic perspective.

It really is the same kind of thing... but the model is usually "smarter" than a junior engineer. You can say something like "hmm, I think an event bus makes sense here" and the LLM will do it in 5 seconds. The problem is that there are certain behavioral biases that require active reminding (though I think some MCP integration work might resolve most of them, but this is just based on the current Claude Code and Opus/Sonnet 4 models).
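The "event bus" being asked for here is a standard pattern; a minimal synchronous version, sketched generically (this is not the code any model produced, just what the suggestion amounts to):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal synchronous publish/subscribe event bus."""

    def __init__(self) -> None:
        # Map each topic to the list of handlers subscribed to it.
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Topics with no subscribers are silently ignored.
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("task.created", received.append)
bus.publish("task.created", {"title": "Draft Q3 budget"})
print(received)  # → [{'title': 'Draft Q3 budget'}]
```

The point of the anecdote is that a change like this (decoupling components behind publish/subscribe) is a one-sentence instruction to the model but a nontrivial refactor by hand.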

pron No.44574632
> However it is like having a team of 20 Junior Engineers working. If you know how to steer a group of engineers, then you can create high quality code by reviewing the code.

You cannot effectively employ a team of twenty junior developers if you have to review all of their code (unless you have like seven senior developers, too).

But this isn't a point that needs to be debated. If it is true that LLMs can be as effective as a team of 20 junior developers, then we should be seeing many people quickly producing software that previously required 20 junior devs.

> but the model is "smarter" than a junior engineer usually

And it is also usually worse than an intern in some crucial respects. For example, you cannot trust a model to reliably tell you what you need to know, such as the difficulties it encountered or the important insights it picked up along the way, or to recognize that these are important to communicate at all.