
LLM Inevitabilism

(tomrenner.com)
1616 points | SwoopsFromAbove | 2 comments
delichon
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
afavour
Feels somewhat like a self fulfilling prophecy though. Big tech companies jam “AI” in every product crevice they can find… “see how widely it’s used? It’s inevitable!”

I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.

XenophileJKO
We have barely begun to extract the value from the current generation of SOTA models. I would estimate less than 0.1% of the possible economic benefit is currently extracted, even if the tech effectively stood still.

That is what I find so wild about the current conversation and debate. I have Claude Code toiling away right now building my personal organization software, which uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.

WD-42
I keep hearing this over and over. Some LLM toiling away coding personal side projects and utilities, source code never shared, usually because it’s “too specific to my needs”. This is the code version of slop.

When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don’t think we’ve seen any evidence of it; in fact, we’ve seen the opposite.

PleasureBot
People have much more favorable interactions with coding LLMs when they are using them for greenfield projects that they don't have to maintain (i.e. personal projects). You can get 2 months of work done in a weekend, and then you hit a brick wall because the code is such a gigantic ball of mud that neither you nor the LLM is capable of working on it.

Working with production code is basically jumping straight to the ball-of-mud phase, maybe somewhat less tangled but usually a much, much larger codebase. It's very hard to describe to an LLM what to even do, since you have such a complex web of interactions to consider in most mature production code.

XenophileJKO
Maybe the difference is that I know how to componentize mature code bases, which effectively limits the scope required for a human (or AI) to edit.

I think it is funny how people act like it is a new problem. If the AI is having trouble with a "ball of mud", don't make mud balls (or learn to carve out abstractions). This cognitive load is impacting everyone working on that codebase. Skilled engineers enable less skilled engineers to flourish by creating code bases where change is easy because the code is modular and self-contained.

I think one sad fact is many/most engineers don't have the skills to understand how to refactor mature code to make it modular. This also means they can't communicate to the AI what kind of refactoring they should make.

Without any guidance, Claude will make mud balls because of two tendencies: the tendency to put code where it is consumed, and the tendency to act instead of researching.

There are also some second level tendencies that you also need to understand, like the tendency to do a partial migration when changing patterns.

These tendencies are not even unique to the AI, I'm sure we have worked with people like that.

So to counteract these tendencies, just apply your same skills at reading code and understanding when an abstraction is leaky or a method doesn't align with your component boundary. Then you too can have AI building pretty good componentized code.

For example in my pet current project I have a clear CQRS api, access control proxies, repositories for data access. Clearly defined service boundaries.

It is easy for me to see when the AI makes a mistake like not using the data repository or access control, because it has to add an import statement and dependency that I don't want. All I have to do is nudge it in another direction.
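To make that layering concrete, here is a minimal sketch (hypothetical names, not the commenter's actual code) of a repository as the single point of data access, wrapped by an access-control proxy. Code that bypasses these layers has to pull in an import that doesn't belong, which is exactly the kind of unwanted dependency that stands out during review.

```python
from dataclasses import dataclass


@dataclass
class Task:
    id: int
    owner: str
    title: str


class TaskRepository:
    """Single point of data access: everything storage-related goes here."""

    def __init__(self):
        self._tasks = {}

    def save(self, task: Task) -> None:
        self._tasks[task.id] = task

    def get(self, task_id: int) -> Task:
        return self._tasks[task_id]


class AccessControlledTasks:
    """Proxy that enforces ownership checks before touching the repository.

    Services depend on this class, never on TaskRepository directly, so a
    generated change that skips access control needs a new import -- an
    easy-to-spot red flag.
    """

    def __init__(self, repo: TaskRepository, user: str):
        self._repo = repo
        self._user = user

    def get(self, task_id: int) -> Task:
        task = self._repo.get(task_id)
        if task.owner != self._user:
            raise PermissionError(f"{self._user} may not read task {task_id}")
        return task
```

With boundaries like these, "nudging" the AI mostly means pointing out which layer a change belongs in, rather than untangling its output after the fact.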

amrocha
>For example, in my pet current project

Everything you said is invalidated by this