
LLM Inevitabilism

(tomrenner.com)
1613 points | SwoopsFromAbove | 9 comments
mg ◴[] No.44568158[source]
In the 90s a friend told me about the internet. And that he knows someone who is in a university and has access to it and can show us. An hour later, we were sitting in front of a computer in that university and watched his friend surfing the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
1. hosh ◴[] No.44568320[source]
While the internet and LLMs are both huge turning points — the metaphor that comes to mind is a phase-change threshold, from solid to gas, from gas to solid — there is a crucial difference between the two.

The early internet connected personal computing together. It built on technology that was democratizing.

LLMs appear to be democratizing, but they are not. The enshittification is proceeding much more rapidly. No one wants to be left behind in the land grab. Many of us remember the rise of the world wide web, and perhaps even personal computing that made the internet mainstream.

I am excited to hear about the effort to train the Swiss models, though it is a step behind. I remember people talking about how fine-tuning would accelerate advances out in the open, and how large companies such as Google can’t keep up with that. Perhaps.

I’ve been diving into history. The Industrial Revolution was a time of rapid progress, when engines accelerated access to cheaper fuels, which in turn enabled more powerful engines. That made abundance affordable for a middle class, but there was enshittification then too.

While there is a _propensity_ for enshittification, I for one don’t see it as inevitable, and neither do I think an AI future is inevitable.

replies(2): >>44568447 #>>44568771 #
2. TeMPOraL ◴[] No.44568447[source]
> Many of us remember the rise of the world wide web, and perhaps even personal computing that made the internet mainstream.

I do. The web was the largest and most widespread enshittification process to date, and it started with the first sale made online and the first ad shown on a web page. This quickly turned into a full-blown land grab in the late 90s, and then came dotcom, smartphones, social media, SaaS, and IoT, and here we are today.

The "propensity for enshittification" is just called business, or entrepreneurship. It is orthogonal to AI.

I think comparing the rise of LLMs to the web taking off is quite accurate, on both the good and the bad sides.

replies(1): >>44568603 #
3. hosh ◴[] No.44568603[source]
I have seen people conduct business that doesn’t enshittify. Though rare, it is not a universal trait of conducting business.

The process of creating these AIs requires mobilizing vast amounts of energy, capital, and time. They are a product of capital, built with the expectation of locking down future markets. That is not orthogonal to enshittification.

The small web was still a thing through the 90s and early ’00s. Web servers were never as concentrated as the hardware capable of running AIs is today, let alone the hardware for training them.

replies(1): >>44569177 #
4. Karrot_Kream ◴[] No.44568771[source]
For the internet to be democratizing, it needed PCs first. Before that, computing was where LLMs are now: the mainframe era. You either had access to an institution with a mainframe or you were lucky enough to get a thin client to one (the early time-sharing systems). Even after PCs were invented, mainframes were inarguably better than PCs for decades. Mainframes and thin clients were even some of the earliest computer networks.

I am optimistic that local models will catch up and hit the same Pareto-optimal point. At some point your OS will ship with a local model, your system will have access to some Intelligence APIs, and that's that. Linux and the BSDs will probably ship with an open-weights model. I may be wrong, but this is my hope.

If you're interested in a taste of that future try the Gemma3 class of models. While I haven't tried agentic coding with them yet, I find them more than good enough for day-to-day use.

replies(1): >>44575270 #
5. TeMPOraL ◴[] No.44569177{3}[source]
> I have seen people conduct business that doesn’t enshittify. Though rare, it is not a universal trait of conducting business.

Exception that proves some markets are still inefficient enough to allow people of good conscience to thrive. Doesn't change the overall trajectory.

> The process of creating the AIs require mobilizing vast amount of energy, capital, and time. It is a product of capital with the expectation of locking down future markets.

So are computers themselves. However free and open the web once was, or could've been, hardware was always capital-heavy, and it only got heavier with time. Cheap, ubiquitous computers and TSMC are two sides of the same coin.

> It is not orthogonal to enshittification.

That's, again, because business begets enshittification; it's one of those failure modes that are hard to avoid.

> The small web was still a thing through the 90s and early ’00s. Web servers were never as concentrated as the hardware capable of running AIs is today, let alone the hardware for training them.

You can "run AI" on your own computer if you like. I hear Apple Silicon is good for LLMs this time of year. A consumer-grade GPU is more than enough to satisfy your amateur and professional image generation needs too; grab ComfyUI from GitHub and a Stable Diffusion checkpoint from HuggingFace, and you're in business; hell, you're actually close to bleeding edge and have a shot at contributing to SOTA if you're so inclined.

Of course, your local quantized Llama is not going to be as good as ChatGPT o3 - but that's just economies of scale at play. Much like with the web - most of it is concentrated, but some still find reasons to run servers themselves.

replies(1): >>44575334 #
6. hosh ◴[] No.44575270[source]
I have been keenly watching for locally-run AIs. This includes the price point for running 70b models, such as the one recently announced by Switzerland. I've also been looking at what it would take to run these on much smaller compute, such as microcontrollers.

However, fine-tuning may be run locally -- what are you thinking about in terms of training?

"At some point your OS will ship with a local model, your system will have access to some Intelligence APIs, and that's that."

There's a secondary effect that I had not even discussed in detail here. I don't know how to explain it concisely because it requires reframing a lot of things just to be able to see it, let alone to understand it as a problem.

Let me see how concise I can be:

1. There are non-financial forms of capital, such as social capital, knowledge capital, political capital, natural capital, etc.

2. The propensity is to convert non-financial capital into financial capital at the expense of the other forms. I think this is the core dynamic driving enshittification (beyond how Cory Doctorow described it when he coined the term).

3. While LLMs and AIs can be designed to enhance the human experience, right now, the propensity is to deploy them in a way that does not develop social and knowledge capital for the next generation.

replies(1): >>44579147 #
7. hosh ◴[] No.44575334{4}[source]
"So are computers themselves. However free and open the web once was, or could've been, hardware was always capital-heavy, and it only got heavier with time. Cheap, ubiquitous computers and TSMC are two sides of the same coin."

Ok, I can see that is true.

"Exception that proves some markets are still inefficient enough to allow people of good conscience to thrive. Doesn't change the overall trajectory."

That depends on what you are measuring to determine market efficiency. Social, political, knowledge, and natural capital are excluded from consideration, so of course we optimize towards financial efficiency at the expense of everything else.

Which comes back to: business does not have to beget enshittification, and when it doesn't, it isn't because of market inefficiencies.

I think we're going to have to agree to disagree on some of these points.

replies(1): >>44579879 #
8. Karrot_Kream ◴[] No.44579147{3}[source]
Interesting points.

1. Expanding in more detail, my feeling (admittedly unproven) is that we'll find a Pareto-optimal point of intelligence. At that point, my feeling, again unproven, is that fine-tuning open-weights models with available hardware would not be too difficult.

Training from scratch? I'm really not sure that's something an individual can do. But at some Pareto-optimal point I do think small organizations will be able to train models.

I'm okay with that threshold. The world I envision is like today's open software, which runs on closed hardware or interacts with closed systems, like networks that use proprietary hardware.

2. I'm not sure the tendency to convert everything into financial capital is as all-subsuming as folks like Doctorow make it out to be. The sentiment certainly drives engagement among a certain demographic, usually a college-educated progressive one, but while I see evidence of its truth in some spaces, I don't in many others. PCs are a good example. PCs continue to offer more functionality to the end user. Despite endless avenues to lock down and monetize the idea of a PC, it doesn't happen. Firms like Apple and Lenovo remain fairly committed to offering strong consumer experiences.

I suspect this sort of financialization is more prevalent for services or goods that are difficult to meter. Search is a great example.

9. TeMPOraL ◴[] No.44579879{5}[source]
I don't think we need to agree to disagree just yet. I want to remark on:

> That depends on what you are measuring to determine market efficiency. Social, political, knowledge, and natural capital are excluded from consideration, so of course we optimize towards financial efficiency at the expense of everything else.

I'm taking a loose but classical definition of it, so obviously in financial terms. But there isn't really much choice to be made here: the market, as a system borne of aggregate human behavior at scale, is optimizing along a specific dimension for structural reasons. "Social, political, knowledge and natural capital" aren't excluded at all - on the contrary, they're converted into dollars and become part of the optimization, competing with other things that are also converted into monetary units.

It's just that, it turns out, those other forms of capital you mention tend to not have that much value, so they get optimized away, especially under strong competitive pressure.