
LLM Inevitabilism

(tomrenner.com)
1619 points SwoopsFromAbove | 46 comments
1. Workaccount2 ◴[] No.44570646[source]
People like communicating in natural language.

LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them.

That is where the inevitabilism comes from. No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to.

*I'm rounding off the <5% who deeply love computers.

replies(15): >>44570755 #>>44570832 #>>44570838 #>>44571025 #>>44571126 #>>44571238 #>>44571322 #>>44571750 #>>44572127 #>>44572396 #>>44572611 #>>44573565 #>>44573713 #>>44574762 #>>44576068 #
2. usrbinbash ◴[] No.44570755[source]
> No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to.

No, we don't.

Part of the reason why I enjoy programming, is because it is a mental exercise allowing me to give precise, unambiguous instructions that either work exactly as advertised or they do not.

replies(1): >>44570949 #
3. aksosoakbab ◴[] No.44570832[source]
Spoken language is a miserable medium for programming. It's one of the major drawbacks of LLMs.

Programming languages have a level of specification orders of magnitude greater than human communication ones.

replies(2): >>44570883 #>>44571235 #
4. spopejoy ◴[] No.44570838[source]
> People like communicating in natural language

It does puzzle me a little that there isn't more widespread acclaim of this: achieving a natural-language UI has been a failed dream of CS for decades, and now we can just take it for granted.

LLMs may or may not be the greatest thing for coding, writing, researching, or whatever, but this UX is a keeper. Being able to really use language to express a problem, have recourse to abbreviations, slang, and tone, and have it all get through is amazing, and amazingly useful.

5. noosphr ◴[] No.44570883[source]
It absolutely is, but 99% of the programs the average person wants to write for their job are some variation of: sort these files, filter between value A and B, search inside for string xyz, change string to abc.

LLMs are good enough for that. Just like how spreadsheets are good enough for 99% of numerical office work.
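
That class of everyday task fits in a few lines of a high-level language. A minimal sketch (the folder layout, size bounds, and strings are invented for illustration):

```python
from pathlib import Path

def filter_and_replace(folder, min_size, max_size, old, new):
    """Among the .txt files in `folder`, take those whose size falls
    between min_size and max_size bytes, search each for `old`, and
    replace it with `new`."""
    for path in sorted(Path(folder).glob("*.txt")):   # "sort these files"
        size = path.stat().st_size
        if min_size <= size <= max_size:              # "filter between value A and B"
            text = path.read_text()
            if old in text:                           # "search inside for string xyz"
                path.write_text(text.replace(old, new))  # "change string to abc"
```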

6. jowea ◴[] No.44570949[source]
Exactly, we are in the *, the 5% (and I think that's an overestimate) who actually like it. Seems tech is at least partly moving on.
replies(1): >>44571257 #
7. layer8 ◴[] No.44571025[source]
People also like reliable and deterministic behavior: when they press a specific button it does the same thing 99.9% of the time, not slightly different things 90% of the time and something rather off the mark 10% of the time (give or take some percentage points). It's not clear that LLMs will get us to the former.
replies(4): >>44572101 #>>44572139 #>>44576951 #>>44579923 #
8. andai ◴[] No.44571126[source]
Many people love games, and some of those even love making games, but few truly love to code.

I'm designing a simple game engine now and thinking, I shall have to integrate AI programming right into it, because the average user won't know how to code, and they'll try to use AI to code, and then the AI will frantically google for docs, and/or hallucinate, so I might as well set it up properly on my end.

In other words, I might as well design it so it's intuitive for the AI to use. And -- though I kind of hate to say this -- based on how the probabilistic LLMs work, the most reliable way to do that is to let the LLM design it itself. (With the temperature set to zero.)

i.e. design it so the system already matches how the LLM thinks such a system works. This minimizes the amount of prompting required to "correct" its behavior.

The passionate human programmer remains a primary target, and it's absolutely crucial that it remains pleasant for humans to code. It's just that most of them won't be in that category, they'll be using it "through" this new thing.

replies(1): >>44571397 #
9. jaza ◴[] No.44571235[source]
Computer scientists in the ~1970s said that procedural languages are a miserable medium for programming, compared to assembly languages.

And they said in the ~1960s that assembly languages are a miserable medium for programming, compared to machine languages.

(Ditto for every other language paradigm under the sun since then, particularly object-oriented languages and interpreted languages).

I agree that natural languages are a miserable medium for programming, compared to procedural / object-oriented / functional / declarative languages. But maybe I only agree because I'm a computer scientist from the ~2010s!

replies(2): >>44571361 #>>44572415 #
10. globular-toast ◴[] No.44571238[source]
LLMs are nowhere near the first step. This is Python, an almost 35 year old language:

    for apple in sorted(bag):
        snake.eat(apple)
The whole point of high-level programming languages is we can write code that is close enough to natural language while still being 100% precise and unambiguous.
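
Filled out as a self-contained sketch (the `Snake` class and the contents of `bag` are invented for illustration), the loop reads almost like English yet its behavior is fully specified:

```python
class Snake:
    """Toy stand-in for the `snake` object in the snippet above."""
    def __init__(self):
        self.eaten = []

    def eat(self, apple):
        self.eaten.append(apple)

bag = ["granny smith", "fuji", "braeburn"]
snake = Snake()
for apple in sorted(bag):   # deterministic: always alphabetical order
    snake.eat(apple)
```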
replies(2): >>44572010 #>>44574845 #
11. xpe ◴[] No.44571257{3}[source]
> seems like tech is at least partly moving on

This framing risks getting it backwards and disempowering people, doesn’t it? Technology does not make its own choices (at least not yet).

Or does it? To problematize my own claims… If you are a materialist, “choice” is an illusion that only exists once you draw a system boundary. In other words, “choice” is only an abstraction that makes sense if one defines an “agent”. We have long-running agents, so…

replies(1): >>44575897 #
12. xpe ◴[] No.44571322[source]
> LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them.

Even if one accepts the framing (I don’t), LLMs are far from the first step.

The article is about questioning “inevitabilism”! To do that, we need to find other anchors and try not to assume the status quo. Think broader: there are possible future scenarios where people embrace unambiguous methods for designing computer programs, even business processes, social protocols, governments.

replies(1): >>44574050 #
13. abagee ◴[] No.44571361{3}[source]
I don't think that's the only difference - every "leap" in languages you mentioned was an increase in the level of abstraction, but no change in the fact that the medium was still deterministic.

Programming in natural languages breaks that mold by adding nondeterminism and multiple interpretations into the mix.

Not saying it will never happen - just saying that I don't think it's "only" because you're a computer scientist from the 2010s that you find natural languages to be a poor medium for programming.

replies(1): >>44572136 #
14. deltaburnt ◴[] No.44571397[source]
I'm not sure I see the logic in what you're describing. By the time you run into this "users using AI on my engine" problem, the models will be different from the ones you used to make the design. Design how you like, I would just be surprised if that choice actually ended up mattering 5 years from now.
replies(1): >>44576459 #
15. hinkley ◴[] No.44571750[source]
If there was a way to explain contracts in natural language, don’t you think lawyers would have figured it out by now? How much GDP do we waste on one party thinking the contract says they paid for one thing but they got something else?
replies(2): >>44571915 #>>44572284 #
16. 827a ◴[] No.44571915[source]
There's a potentially interesting idea in this space: the cryptobros went really deep into trying to describe everything Up To And Including The World in computer code, with things like Ethereum contracts, tokenization of corporate voting power, etc. That's all dead now, but you have to have some respect for the very techno-utopian idea that we can extend the power and predictability of Computer Code into everything; and it's interesting how LLMs, the next techno-trend, totally reversed it. Now it's: computer code doesn't matter, only natural language matters, describe everything in natural language, including computer code.
17. 827a ◴[] No.44572010[source]
I really appreciate this take.

High level programming languages should be able to do much that LLMs can do when it comes to natural language expression of ideas into computing behavior, but with the extreme advantage of 100% predictable execution. LLM queries, system prompts, and context, of sufficient complexity, required to get reasonably good results out of the LLM, begin to look like computer code and require skills similar to software engineering; but still without the predictable conformance. Why not just write computer code?

Our industry developed some insanely high productivity languages, frameworks, and ways of thinking about systems development, in the mid-2000s. Rails is the best example of this; Wordpress, Django, certainly a few others. Then, for some reason, around the early 2010s, we just forgot about that direction of abstraction. Javascript, Go, and Rust took over, React hit in the mid-2010s, microservices and kubernetes, and it feels like we forgot about something that we shouldn't have ever forgotten about.

18. hnfong ◴[] No.44572101[source]
You can set the temperature of LLMs to 0 and that will make them deterministic.

Not necessarily reliable though, and you could get different results if you typed an extra whitespace or punctuation.

replies(3): >>44572488 #>>44572528 #>>44580073 #
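
The effect of the temperature knob is easy to see in a toy next-token sampler (a pure-Python sketch with made-up logits, not any real inference stack). At temperature 0 sampling collapses to argmax, so the pick is repeatable; but a tiny perturbation of the logits, such as an extra whitespace might cause in the prompt, can flip which token wins:

```python
import math, random

def sample(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature).
    temperature == 0 degenerates to greedy argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.9, 0.5]          # two near-tied candidates
rng = random.Random(42)
greedy = [sample(logits, 0, rng) for _ in range(5)]   # always index 0
```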
19. pera ◴[] No.44572127[source]
> Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say.

- Bertrand Russell, The Scientific Outlook (1931)

There is a reason we don't use natural language for mathematics anymore: It's overly verbose and extremely imprecise.

replies(1): >>44573171 #
20. hnfong ◴[] No.44572136{4}[source]
> the medium was still deterministic

Well, you should participate more in the discussions on Undefined Behavior in C/C++....

21. erikerikson ◴[] No.44572139[source]
That is a parameter that can be changed, often called temperature. Setting the temperature to 0 can be done and you will get repeatability. Whether you would be happy with that is another matter.
22. cootsnuck ◴[] No.44572284[source]
> If there was a way to explain contracts in natural language, don’t you think lawyers would have figured it out by now?

Uh...I mean...you do know they charge by the hour, right?

Half joking, but seriously, the concept of "job security" still exists even for a $400 billion industry. Especially when that industry commands substantial power across essentially all consequential areas of society.

LLMs literally do explain contracts in natural language. They also allow you to create contracts with just natural language. (With all the same caveats as using LLMs for programming or anything else.)

I would say law is quietly one of the industries that LLMs have had a larger than expected impact on. Not in terms of job loss (but idk, would be curious to see any numbers on this). But more just like evident efficacy (similar to how programming became a clear viable use case for LLMs).

All of that being said, big law, the type of law that dominates the industry, does not continue to exist because of "contract disputes". It exists to create and reinforce legal machinations that advance the interests of their clients and entrench their power. And the practice of doing that is inherently deeply human. As in, the names of the firm and lawyers involved are part of the efficacy of the output. It's deeply relational in many ways.

(I'd bet anything though that smart lawyers up and down the industry are already figuring out ways to make use of LLMs to allow them to do more work.)

replies(1): >>44573644 #
23. deadbabe ◴[] No.44572396[source]
Let’s reframe your world view:

No one wants to communicate with a computer. Computers are annoying, vile things. They just want things to work easily and magically.

Therefore, for these people, being able to communicate in a natural language isn't going to be any more appealing than a nice graphical user interface. Using a search engine to find stuff you want already requires no logic; the LLM does the same but just gives you better results.

Thus the world of LLMs is going to look much like the world of today: just with lazier people who want to do even less thinking than they do now.

It is inevitable.

24. ◴[] No.44572415{3}[source]
25. jihadjihad ◴[] No.44572488{3}[source]
> You can set the temperature of LLMs to 0 and that will make them deterministic.

It will make them more deterministic, but it will not make them fully deterministic. This is a crucial distinction.

replies(2): >>44572612 #>>44572653 #
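
One concrete reason greedy decoding still isn't fully deterministic in practice: floating-point addition is not associative, so summing the same numbers in a different order (as parallel GPU reductions may do from run to run) can change the last bits of a result and flip a near-tied argmax. A minimal illustration with IEEE-754 doubles:

```python
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
# Identical operands, different grouping, different result.
print(left == right)  # False
```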
26. sealeck ◴[] No.44572528{3}[source]
Even then, this isn't actually what you want. When people say deterministic, at one level they mean "this thing should be a function" (so input x always returns the same output y). Some people also use determinism to mean they want a certain level of "smoothness" so that the function behaves predictably (and they can understand it). That is "make me a sandwich" should not return radically different results to "make me a cucumber sandwich".

As you note, your scheme significantly solves the first problem (which is a pretty weak condition) but fails to solve the second problem.

27. e3bc54b2 ◴[] No.44572611[source]
> LLMs are the first step in the movement away from (...) the logic based language

This dumb thing again... The logic based language was and remains a major improvement [0] in being able to build abstractions, because it allows the underlying implementations to be 'deterministic'. Natural language misses that mark by such a wide margin that it is impossible to explain in nicer language. And if one wants to make the argument that people achieve that anyway, perhaps reading through one [1] will put that thought to rest :)

[0] www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

[1] https://www.congress.gov/bill/119th-congress/house-bill/1/te...

replies(1): >>44574408 #
28. falcor84 ◴[] No.44572612{4}[source]
Google is significantly less deterministic than AltaVista was.
29. cookingrobot ◴[] No.44572653{4}[source]
That’s an implementation choice. All the math involved is deterministic if you want it to be.
replies(1): >>44574191 #
30. calvinmorrison ◴[] No.44573171[source]
which is why every NoCode platform, or iPaas or whatever always falls back to implementing DSLs. programming languages are the most succinct deterministic way to instruct a computer, or even a person to do something.
31. yoyohello13 ◴[] No.44573565[source]
Logic based languages are useful because they are unambiguous. Natural language is far less efficient for communicating hard requirements. Why do you think mathematical notation exists? It’s not just because the ivory tower elites want to sound smart. It’s a more efficient medium for communicating mathematical ideas.
32. dmoy ◴[] No.44573644{3}[source]
> LLMs literally do explain contracts in natural language. They also allow you to create contracts with just natural language. (With all the same caveats as using LLMs for programming or anything else.)

I can't generalize, but the last time I tried to use an LLM for looking at a legal document (month or two ago), it got a section completely wrong. And then when that was pointed out, it dug in its heels and insisted it was right, even though it was very wrong.

Interestingly there was a typo, which was obvious to any human, and would have been accepted as intended in a court, but the LLM insisted on using a strict interpretation accepting the typo as truth.

It was weird, because it felt like on the one hand the LLM was trained to handle legal documents with a more strict interpretation of what's written, but then couldn't cope with the reality of how a simple typo would be handled in courts or real legal proceedings.

So.... I dunno. LLMs can explain contracts, but they may explain them in a very wrong way, which could lead to bad outcomes if you rely on it.

33. davesque ◴[] No.44573713[source]
The thing is, people also dislike natural language for its ambiguity. That's why we invented things like legalese and computers; to get more reliable results. There will always be a need for that.
34. xpe ◴[] No.44574050[source]
belated edits: … find other anchors … and try not to assume the status quo will persist, much less be part of a pattern or movement (which may only be clear in retrospect)
35. Jaxan ◴[] No.44574191{5}[source]
It will still be nondeterministic in this context. Prompts like “Can you do X?” and “Please do X” might result in very different outcomes, even when it’s “technically deterministic”. For the human operating with natural language it’s nondeterministic.
36. JustBreath ◴[] No.44574408[source]
Very true, the whole point of logic and programming is that language is itself subjective and vague.

A deterministic program given the same inputs will always give the same outputs.

We can debate about what is cool, cold or freezing but a thermometer will present the same numeric value to everyone.

37. techpineapple ◴[] No.44574762[source]
“People like communicating in natural language”

I would actually want to see some research on this. Maybe? But I’d think there would be a lot of exceptions. At its most basic, I’d rather flick my thumb than constantly say “scroll down”. And I think that you’d want to extrapolate that out.

38. Workaccount2 ◴[] No.44574845[source]
My 65yr old mother will never use python.

What she wants is to tell her phone to switch its background to the picture she took last night of the family.

That is the inevitabilism.

Forget about the tiny tech bubble for a moment and see the whole world.

replies(1): >>44582584 #
39. eddythompson80 ◴[] No.44575897{4}[source]
> This framing risks getting it backwards and disempowering people, doesn’t it? Technology does not make its own choices (at least not yet).

It doesn't but we rarely chase technology for its own sake. Some do, and I envy them.

However, most of us are being paid to solve specific problems usually using a specific set of technologies. It doesn't matter how much I love the Commodore or BASIC, it'll be very hard to convince someone to pay me to develop a solution for their problem based on it. The choice to use nodejs and react to solve their problem was.... my choice.

Will there be a future where I can't really justify paying you to write code by hand, and can only justify paying you to debug LLM-generated code or to debug a prompt? Like, could we have companies selling products and services with fundamentally no one at the helm of writing code? The entire thing is built through prompting, and every now and then you hire someone to take the hammer and keep beating a part until it sorta behaves the way it sorta should, and they add a few ALL CAPS INSTRUCTIONS TO THE AGENT NOT TO TOUCH THIS!!!!!

40. quantumHazer ◴[] No.44576068[source]
Dijkstra would like to have a word here

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

41. andai ◴[] No.44576459{3}[source]
>by the time you run into this problem

I'm describing the present day. My friend, who doesn't know anything about programming, made three games in an afternoon with Gemini.

42. ryankrage77 ◴[] No.44576951[source]
To a user, many modern UIs are unpredictable and unreliable anyway. "I've always done it this way, but it's not working...".
replies(1): >>44577113 #
43. layer8 ◴[] No.44577113{3}[source]
I agree, but UIs don't need to be that way. Non-smart light switches, thermostats, household appliances, etc. generally aren't that way, and that’s why many people prefer them, and expect UIs to work similarly — which they overall typically still do.
44. solarkraft ◴[] No.44579923[source]
With the declining quality of consumer products (due to "just ship it" culture), this unreliability is already commonplace.

I hate that, but this society has brought it upon itself through consumer choices.

People are really quick to depend on and trust technology that has shown itself to be useful. This can already be observed for LLMs.

45. globular-toast ◴[] No.44580073{3}[source]
So then people will just learn the language of the LLM, e.g. if a particular LLM always interprets "set my alarm for 8" as setting your alarm for 8am people will learn to just say that if they wanted 8am or specify pm (or use a 24 hour clock) if they want 8pm.

I can see this having odd effects with natural language. Natural language users are forever in a state of negotiation with each other. If you say something to someone and they don't understand they can ask for clarification (or, more likely, just look confused) but, equally, you can take that feedback and adjust your own language model. This happens all day, every day. If most people understand you but a few don't, it's on the few to adjust their models, but if more misunderstand than understand then it's on you to adjust yours.

With current LLMs it's one way. Only you, the human, are malleable. Of course, theoretically the LLM could continuously incorporate input into its model, but we're a long way off that being practical as far as I know.

We'll have to see how it pans out, but I can see it either ending up in a weird feedback loop where people just capitulate and use the language of the LLM, or they continue to use human language with humans and a special LLM language with LLMs. Both options seem pretty bad.

46. globular-toast ◴[] No.44582584{3}[source]
> What she wants is to tell her phone to switch it's background to the picture she took last night of the family.

Which is kind of absurd. If you were asking a friend to hang a picture would you verbally describe the picture to them or just show them which one to hang?

There is a lot of "if all you have is a hammer everything looks like a nail" going on.