LLM Inevitabilism

(tomrenner.com)
1612 points SwoopsFromAbove | 31 comments
mg ◴[] No.44568158[source]
In the 90s a friend told me about the internet, and that he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer in that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite it using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
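
To make the shape of that concrete, here is a minimal sketch of such a script (purely illustrative: the file names, prompt wording, model id, and use of the OpenAI Python client are my stand-ins, not a specific recommended setup):

    # one_prompt_one_file.py - hypothetical sketch of a "one prompt, one
    # file, no code edits" workflow: paste everything in, get one file out.
    from pathlib import Path

    from openai import OpenAI  # assumes the official openai package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    library = Path("library.py").read_text()  # the whole library source
    program = Path("program.py").read_text()  # the whole current program

    prompt = (
        "Rewrite the following program so that it uses the library below.\n"
        "Return the complete rewritten program as a single file.\n\n"
        f"--- LIBRARY ---\n{library}\n\n--- PROGRAM ---\n{program}"
    )

    resp = client.chat.completions.create(
        model="gpt-4.1",  # the model named above
        messages=[{"role": "user", "content": prompt}],
    )

    # One output file; review the diff by hand instead of iterating in chat.
    Path("program_rewritten.py").write_text(resp.choices[0].message.content)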

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
1. bambax ◴[] No.44568844[source]
The problem with LLMs is when they're used for creativity or for thinking.

Just because LLMs are indeed useful in some (even many!) contexts, including coding, esp. to either get something started or, as in your example, to transcode an existing code base to another platform, doesn't mean they will change everything.

It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

More like AI is the new VBA. Same promise: everyone can code! Comparable excitement -- although the hype machine is orders of magnitude more efficient today than it was then.

replies(5): >>44568939 #>>44568982 #>>44569154 #>>44569340 #>>44569371 #
2. eru ◴[] No.44568939[source]
I don't know about VBA, but spreadsheets actually delivered (to a large extent) on the promise that 'everyone can write simple programs'. So much so that people don't see creating a spreadsheet as coding.

Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.
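
(As a made-up illustration of how small the gap already is - the same conditional logic, once as a spreadsheet formula and once as Python:)

    # What a spreadsheet user types into cell C2:
    #
    #   =IF(B2>1000, B2*0.9, B2)
    #
    # is the same program a developer would write as:
    def discounted(total: float) -> float:
        """Apply a 10% discount to orders over 1000."""
        return total * 0.9 if total > 1000 else total

    print(discounted(1500.0))  # 1350.0
    print(discounted(800.0))   # 800.0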

replies(3): >>44568999 #>>44569008 #>>44569702 #
3. TeMPOraL ◴[] No.44568982[source]
> It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

I personally agree with Andrew Ng here (and I've literally arrived at the exact same formulation before becoming aware of Ng's words).

I take "new electricity" to mean, it'll touch everything people do, become part of every endeavor in some shape of form. Much like electricity. That doesn't mean taking over literally everything; there's plenty of things we don't use electricity for, because alternatives - usually much older alternatives - are still better.

There are still plenty of internal combustion engines on the ground, in the seas, and in the skies, and many of them (mostly at the extremely light and extremely heavy ends of the spectrum) are not going to be replaced by electric motors any time soon. Plenty of manufacturing and construction is still done by means of hydraulic and pneumatic power. We also sometimes sidestep electricity for heating purposes by going straight from sunlight to heat. Etc.

But even there, electricity-based technology is present in some form. The engine may be a humongous diesel-burning colossus, built with heat, metal, and a lot of pneumatics, positioned and held in place by hydraulics - but all the sensors on it are electric, where in the past some would have been hydraulic and the rest wouldn't even have existed; it's controlled and operated by an electricity-based computing network; it was designed on computers, and so on.

In this sense, I think "AI is the new electricity" is believable. It's a qualitatively new approach to computing that's directly or indirectly applicable everywhere, and that people already try to apply to literally everything[0]. And, much like with electricity, time and economics will tell which of those applications make sense, which were dead ends, and which were plain dumb in retrospect.

--

[0] - And they really did try to stuff electricity into everything back when it was the new hot thing. Same with nuclear energy a few decades later. We still laugh at how people 100 years ago imagined the future would look... in between crying that we got short-changed by reality.

replies(1): >>44569181 #
4. TeMPOraL ◴[] No.44568999[source]
Right. Spreadsheets already delivered on their promise (and then some) decades ago, and the irony is, many people - especially software engineers - still don't see it.

> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

That is still the refrain of corporate IT. I see plenty of comments, both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code or asking their software department to make a tool for them.

I guess those who do get it end up working on SaaS products targeting the "shadow IT" market :).

replies(2): >>44569204 #>>44569741 #
5. bambax ◴[] No.44569008[source]
True, Excel is in the same category, yes.
6. ben_w ◴[] No.44569154[source]
While I'd agree with your first line:

> The problem with LLM is when they're used for creativity or for thinking.

And while I also agree that it's currently closer to "AI is the new VBA", because of the domains in which consumer AI* is currently most useful.

Despite that, I'd also aver that being useful in simply "many" contexts will make AI "the new electricity". Electricity itself is (or recently was) only about 15% of global primary power, about 3 TW out of about 20 TW: https://en.wikipedia.org/wiki/World_energy_supply_and_consum...

Are LLMs 15% of all labour? Not just coding, but overall? No. The economic impact would be directly noticeable if it was that much.

Currently though, I agree. New VBA. Or new smartphone, in that we ~all have and use them, while society as a whole simultaneously cringes a bit at this.

* Narrower AI such as AlphaFold etc. would, in this analogy, be more like a Steam Age factory which had a massive custom steam engine in the middle distributing motive power to the equipment directly: it's fine at what it does, but you have to make it specifically for your goal and can't easily adapt it for something else later.

7. camillomiller ◴[] No.44569181[source]
AI is not a fundamental physical element. AI is mostly closed and controlled by people who will inevitably use it to further their power and centralize wealth and control. We acted with this in mind to make electricity a publicly controlled service. There is absolutely no intention nor political strength around to do this with AI in the West.
replies(2): >>44569205 #>>44569279 #
8. ben_w ◴[] No.44569204{3}[source]
>> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

> That is still the refrain of corporate IT. I see plenty of comments both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code / asking your software department to make a tool for you.

In retrospect, this is also a great description of why two of my employers ran low on investors' interest.

9. TeMPOraL ◴[] No.44569205{3}[source]
Electricity here is meant as a technology (or a set of technologies) exploiting a particular physical phenomenon - not the phenomenon itself.

(If it were the latter, then you could argue everything uses electricity if it relies in any way on matter being solid, because AFAIK the furthest we got on the question of "why I don't fall through the chair I'm sitting on" is... "electromagnetism".)

replies(1): >>44569282 #
10. ben_w ◴[] No.44569279{3}[source]
There are a few levels to this:

• That it is software means that any given model can easily be nationalised by order, or whatever.

• Everyone quickly copying OpenAI, and more recently DeepSeek, showed that once people know what kind of things actually work, it's not too hard to replicate them.

• We've only got a handful of ideas about how to align* AI with any specific goal or value, and a lot of ways it does go wrong. So even if every model was put into public ownership, it's not going to help, not yet.

That said, if the goal is to give everyone access to an AI that demands 375 W/capita 24/7, that means the new servers would double the global demand for electricity, with all that entails.

* Last I heard (a while back now so may have changed): if you have two models, there isn't even a way to rank them as more-or-less aligned vs. anything. Despite all the active research in this area, we're all just vibing alignment, corporate interests included.
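
Back-of-envelope for that 375 W figure (the ~8 billion population rounding is mine; the ~3 TW electricity figure is from my comment above):

    # Rough check: per-capita AI power draw vs. current global electricity.
    population = 8e9             # people, my rounding
    per_capita_draw = 375.0      # W per person, continuous (24/7)
    electricity_now = 3e12       # W, roughly today's global electricity (~3 TW)

    ai_demand = population * per_capita_draw
    print(ai_demand / 1e12)                                 # 3.0 -> ~3 TW of new load
    print((electricity_now + ai_demand) / electricity_now)  # 2.0 -> demand doubles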

replies(1): >>44574764 #
11. camillomiller ◴[] No.44569282{4}[source]
Either way, it still feels like a stretched and inappropriate comparison at best, or a disingenuous and asinine one at worst.
12. mettamage ◴[] No.44569340[source]
> everyone can code!

I work directly with marketers, and even if you give them something like n8n, they find it hard to be precise. Programming teaches you a "precise mindset" that you don't have when you aren't really thinking about tech professionally.

I wonder if seasoned UX designers can code now. They do think professionally about software. I wonder if it's at a deep enough granularity such that they can simply use natural language to get something to work.

replies(2): >>44569445 #>>44574164 #
13. informal007 ◴[] No.44569371[source]
LLMs are helpful for creativity and thinking when you run out of your own ideas.
replies(1): >>44569416 #
14. andybak ◴[] No.44569416[source]
I sometimes feel that a lot of people bringing up the topic of creativity have never spent much time thinking, studying, and self-reflecting on what "creativity" actually is. It's a complex topic, and one that's mixed up with many other complex topics ("originality", "intellectual property", "aesthetic value", "art vs engineering", etc.).

You see a lot of Motte and Bailey arguments in this discussion as people shift (often subconsciously) between different definitions of key terms and different historical perspectives.

I'd recommend someone tries to gain at least a passing familiarity with art history and the social history of art/design etc. Reading a bit of Edward de Bono and Douglas Hofstadter isn't a bad shout either (although it's been many years since I've read the former, so I can't guarantee it will stand up as well as my teenage self thought it did).

15. petra ◴[] No.44569445[source]
Can an LLM detect a lack of precision and point it out to you?
replies(3): >>44569512 #>>44569524 #>>44569554 #
16. staunton ◴[] No.44569512{3}[source]
An LLM can even ignore a lack of precision and just guess what you wanted, usually correctly, unless what you want is very unusual.
17. TeMPOraL ◴[] No.44569524{3}[source]
It can! Though you might need to ask for it; otherwise it may take what it thinks you mean and run off with it, at which point you'll discover the lack of precision only later, when the LLM gets confused or the result is nothing like what you actually expected.
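
Something like this, as a sketch (the prompt wording, model id, and use of the OpenAI Python client are just my illustration):

    # Sketch: ask the model to flag ambiguities *before* it writes any code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    task = "Build a script that cleans up the user data and emails a report."

    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{
            "role": "user",
            "content": "Before writing any code, list every ambiguity or "
                       "missing detail in this task and ask me about each "
                       "one:\n\n" + task,
        }],
    )
    print(resp.choices[0].message.content)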
18. TheOtherHobbes ◴[] No.44569554{3}[source]
Sometimes, yes. Reliably, no.

LLMs don't have enough of a model of the world to understand anything. There was a paper floating around recently about how someone trained an ML system on orbital dynamics. The result was a system that could calculate orbits correctly, but it completely failed to extract the underlying - simple - math. Instead it basically frankensteined together its own system of epicycles which solved a very narrow range of problems but lacked any generality.

Any coding has the same problems. Sometimes you get lucky, sometimes you don't. And if you strap on an emulator and test rig and allow the machine to flail around inside it, sometimes working code falls out.

But there's no abstracted model of software development as a process in there, either in theory or in practice. And no understanding of vague goals with constraints and requirements that can be inferred creatively from outside the training data.

replies(1): >>44575715 #
19. 6510 ◴[] No.44569702[source]
People know which ingredients to use, the ratios, how long to bake and cook them but the design of the kitchen prevents them from cooking the meal? Professional cooks debate which gas tube to use with which adapter and how to organize all the adapters according to ISO standards while the various tubes lay on the floor all over the building. The stove switches off if you try to use the wrong brand of pots. The cupboard has a retina scanner. Eventually people go to the back of the garden and make a campfire. There is no fridge there and no way to wash dishes. They are even using the wrong utensils. The horror!
20. rwmj ◴[] No.44569741{3}[source]
Software engineers definitely do understand that spreadsheets are widely used and useful. It's just that we also see the awful downsides of them - like no version control, being proprietary, and having to type obscure incantations into tiny cells - and realise that actual coding is just better.

To bring this back on topic, software engineers see AI being a better search tool or a code suggestion tool on the one hand, but also having downsides (hallucinating, used by people to generate large amounts of slop that humans then have to sift through).

replies(1): >>44569786 #
21. TeMPOraL ◴[] No.44569786{4}[source]
> It's just that we also see the awful downsides of them - like no version control, being proprietary, and having to type obscure incantations into tiny cells

Right. But this also tends to make us forget sometimes that those things aren't always a big deal. It's the distinction between solving an immediate problem vs. building a proper solution.

(That such one-off solution tends to become a permanent fixture in an organization - or household - is unfortunately an unsolved problem of human coordination.)

> and realise that actual coding is just better.

It is, if you already know how to do it. But then we overcompensate in the opposite direction, and suddenly 90% of the "actual coding" turns into dealing with build tools and platform bullshit, at which point some of us (like myself) look back at spreadsheets in envy, or start using LLMs to solve sub-problems directly.

It's actually unfortunate, IMO, that LLMs are so over-trained on React and all kinds of modern webshit - this makes them almost unable to give you simple solutions for anything involving the web, unless you specifically prompt them to go full vanilla and KISS.

replies(2): >>44569940 #>>44572252 #
22. rwmj ◴[] No.44569940{5}[source]
I'm constantly surprised that no one has mainstreamed version control. I see so many cases where it could be applied: document creation and editing, web site updates, spreadsheets ... even the way that laws are amended in Parliament [1]

[1] https://www.gov.uk/guidance/legislative-process-taking-a-bil... https://www.gov.uk/government/publications/amending-bills-st...

replies(1): >>44578372 #
23. gedy ◴[] No.44572252{5}[source]
> But this also tends to make us forget sometimes that those things aren't always a big deal. It's the distinction between solving an immediate problem vs. building a proper solution.

I agree about "code quality" not being a huge issue for some use cases, however having worked at places with entrenched spreadsheet workflows (like currently), I think that non-engineers still need help seeing that they don't need a faster horse - e.g., that the task should be automated away entirely. Many, many times a "spreadsheet" is ironically used for a very inefficient manual task.

replies(1): >>44574293 #
24. MattSayar ◴[] No.44574164[source]
Our UX designers have been prototyping things in Windsurf that they started in Figma. They seem pretty happy with it. Of course there's a big step in getting it production-ready, but it really smooths the conversation with engineering.
replies(1): >>44580936 #
25. TeMPOraL ◴[] No.44574293{6}[source]
> Many, many times a "spreadsheet" is ironically used for a very inefficient manual task.

Right. But spreadsheets and "shadow IT" aren't really about technology - they're about autonomy, about how the organization is structured internally. No one is choosing a bad process from the start - spreadsheets are the easiest (and often the only possible) way to solve an immediate problem, and even as they turn into IT horror stories, there usually is no point at which the people using it could make things better on their own. The "quality solutions", conversely, are usually top-down and don't give users much control over the process - instead of adoption, this just breeds resistance.

26. ijk ◴[] No.44574764{4}[source]
Public control over AI models is a distinct thing from everyone having access to an AI server (not that national AI would need a 1:1 ratio of servers to people, either).

It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there's too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open weight models.

replies(1): >>44575162 #
27. ben_w ◴[] No.44575162{5}[source]
> It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there's too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open weight models.

More the opposite, despite the obvious investment incentive to do as you say in order to have any hope of a return on investment. OpenAI *tried* to make that a trend with GPT-2, on the grounds that it's irresponsible to give out a power tool in the absence of any idea of what "safety tests" even mean in that context, but lots of people mocked them for it, and it looks like only they and Anthropic take such risks seriously. Or possibly just Anthropic, depending on how cynical you are about Altman.

28. antonvs ◴[] No.44575715{4}[source]
> LLMs don't have enough of a model of the world to understand anything.

This is binary thinking, and it's fallacious.

For your orbital mechanics example, sure, it's difficult for LLMs to develop good models of the physical world, in large part because they aren't able to interact with the world directly and have to rely on human texts to describe it to them.

For your software development example, you're making a similar mistake: the fact that their strongest suit is not producing fully working systems doesn't mean that they have no world model, or that their successes are as random as you seem to think ("Sometimes you get lucky, sometimes you don't," "sometimes working code falls out.")

But if you try, for example, asking an LLM to identify a bug in a program, or ask it questions about how a program works, you'll find that from a functional perspective, they exhibit excellent understanding that strongly implies a good world model. You may be taking your own thought processes for granted too much to realize how good they are at this. The idea that "there's no abstracted model of software development as a process in there" is hard to reconcile with the often superhuman responses they're capable of, when you use them in the scenarios they're most effective at.

29. eru ◴[] No.44578372{6}[source]
Google Docs has some mild forms of version control. You have infinite undo, and you can have people make suggestions (instead of direct edits).

I don't know about Microsoft's offerings, but I imagine it's probably fairly similar?

Wikipedia also has mild forms of version control, and it is (or at least used to be) fairly mainstream to edit it.

replies(1): >>44579927 #
30. rwmj ◴[] No.44579927{7}[source]
They're the most basic form of version control possible (little more than undo/redo). The Google one has a really awful interface as well. I haven't used any MS products in a long time, but Word's version control circa late 2000s was bad as well. Actual version control allows branching, merge strategies, offline editing, and all the other crazy stuff you can do with git.

Expressing all of that in a way that makes sense to end users is the real challenge, and I guess why this hasn't been solved yet.
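
To make that concrete, a toy sketch (my own illustration; it just drives git from Python, with made-up paths and messages) of what branching and merging give a plain-text document, versus undo/redo:

    # Toy sketch: a document under real version control, not just undo/redo.
    import pathlib
    import subprocess
    import tempfile

    repo = pathlib.Path(tempfile.mkdtemp())
    doc = repo / "report.txt"

    def git(*args):
        subprocess.run(["git", *args], cwd=repo, check=True)

    git("init")
    git("config", "user.email", "editor@example.com")  # local config for the demo
    git("config", "user.name", "Editor")

    doc.write_text("Q3 report: draft\n")
    git("add", "report.txt")
    git("commit", "-m", "first draft")

    git("checkout", "-b", "alice-edits")   # a branch: parallel, offline edits
    doc.write_text("Q3 report: draft\nAlice's section\n")
    git("commit", "-am", "add Alice's section")

    git("checkout", "-")                   # back to the original branch
    git("merge", "alice-edits")            # a real merge, not a lost undo stack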

31. mettamage ◴[] No.44580936{3}[source]
Ah, so while they can't make fully fledged products, they deepen their skills in making high-fidelity prototypes more quickly.

That's cool!