    LLM Inevitabilism

    (tomrenner.com)
    1612 points SwoopsFromAbove | 12 comments
    Workaccount2 ◴[] No.44570646[source]
    People like communicating in natural language.

    LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them.

    That is where the inevitabilism comes from. No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to.

    *I'm rounding off the <5% who deeply love computers.

    replies(15): >>44570755 #>>44570832 #>>44570838 #>>44571025 #>>44571126 #>>44571238 #>>44571322 #>>44571750 #>>44572127 #>>44572396 #>>44572611 #>>44573565 #>>44573713 #>>44574762 #>>44576068 #
    1. layer8 ◴[] No.44571025[source]
    People also like reliable and deterministic behavior: when they press a specific button, it does the same thing 99.9% of the time, not slightly different things 90% of the time and something rather off the mark 10% of the time (give or take some percentage points). It's not clear that LLMs will get us to the former.
    replies(4): >>44572101 #>>44572139 #>>44576951 #>>44579923 #
    2. hnfong ◴[] No.44572101[source]
    You can set the temperature of LLMs to 0 and that will make them deterministic.

    Not necessarily reliable though, and you could get different results if you typed an extra whitespace or punctuation.

    replies(3): >>44572488 #>>44572528 #>>44580073 #
    3. erikerikson ◴[] No.44572139[source]
    That is a parameter that can be changed, usually called temperature. Setting it to 0 can be done and you will get repeatability. Whether you would be happy with the result is another matter.
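    A minimal sketch of what the temperature parameter controls during decoding (this is a toy sampler, not any particular LLM's implementation): at temperature 0 the sampler degenerates to argmax (greedy decoding), which always returns the same token for the same logits; at higher temperatures it samples from a softmax distribution.

    ```python
    import math
    import random

    def sample_token(logits, temperature, rng=None):
        """Pick the next token index from raw logits.

        With temperature == 0 this reduces to argmax (greedy decoding),
        which is repeatable for identical logits. With temperature > 0
        it samples from the temperature-scaled softmax distribution.
        """
        if temperature == 0:
            # Greedy: deterministic given identical logits.
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature scaling (max-subtracted for stability).
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        rng = rng or random.Random()
        return rng.choices(range(len(logits)), weights=probs)[0]

    logits = [1.0, 3.5, 0.2, 3.4]
    # Temperature 0: the same token every time.
    assert all(sample_token(logits, 0) == 1 for _ in range(100))
    ```

    Note that even greedy decoding is only repeatable if the logits themselves are bit-identical between runs, which the comments below get into.
    
    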
    4. jihadjihad ◴[] No.44572488[source]
    > You can set the temperature of LLMs to 0 and that will make them deterministic.

    It will make them more deterministic, but it will not make them fully deterministic. This is a crucial distinction.

    replies(2): >>44572612 #>>44572653 #
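    One common source of the residual nondeterminism mentioned above: on parallel hardware, sums over many floating-point terms can be reduced in different orders from run to run, and floating-point addition is not associative, so the computed logits can differ in the last bits. A minimal illustration of the non-associativity:

    ```python
    # Floating-point addition is not associative, so the order in which a
    # parallel reduction combines terms can change the result slightly.
    a, b, c = 0.1, 0.2, 0.3
    left = (a + b) + c   # one reduction order
    right = a + (b + c)  # another reduction order
    assert left != right  # 0.6000000000000001 vs 0.6
    ```

    When such tiny differences flip an argmax between two near-tied tokens, the outputs diverge from that point on, even at temperature 0.
    
    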
    5. sealeck ◴[] No.44572528[source]
    Even then, this isn't actually what you want. When people say deterministic, at one level they mean "this thing should be a function" (so input x always returns the same output y). Some people also use determinism to mean they want a certain level of "smoothness", so that the function behaves predictably (and they can understand it). That is, "make me a sandwich" should not return radically different results from "make me a cucumber sandwich".

    As you note, your scheme largely solves the first problem (which is a pretty weak condition) but fails to solve the second.

    6. falcor84 ◴[] No.44572612{3}[source]
    Google is significantly less deterministic than AltaVista was.
    7. cookingrobot ◴[] No.44572653{3}[source]
    That’s an implementation choice. All the math involved is deterministic if you want it to be.
    replies(1): >>44574191 #
    8. Jaxan ◴[] No.44574191{4}[source]
    It will still be nondeterministic in this context. Prompts like “Can you do X?” and “Please do X” might result in very different outcomes, even when the system is “technically deterministic”. For a human operating in natural language, it is effectively nondeterministic.
    9. ryankrage77 ◴[] No.44576951[source]
    To a user, many modern UIs are unpredictable and unreliable anyway. "I've always done it this way, but it's not working...".
    replies(1): >>44577113 #
    10. layer8 ◴[] No.44577113[source]
    I agree, but UIs don't need to be that way. Non-smart light switches, thermostats, household appliances, etc. generally aren't that way, and that’s why many people prefer them, and expect UIs to work similarly — which they overall typically still do.
    11. solarkraft ◴[] No.44579923[source]
    With the declining quality of consumer products (due to "just ship it" culture), this unreliability is already commonplace.

    I hate that, but this society has brought it upon itself through consumer choices.

    People are really quick to depend on and trust technology that has shown itself to be useful. This can already be observed for LLMs.

    12. globular-toast ◴[] No.44580073[source]
    So then people will just learn the language of the LLM. E.g., if a particular LLM always interprets "set my alarm for 8" as setting the alarm for 8am, people will learn to just say that if they want 8am, and to specify pm (or use a 24-hour clock) if they want 8pm.

    I can see this having odd effects with natural language. Natural language users are forever in a state of negotiation with each other. If you say something to someone and they don't understand they can ask for clarification (or, more likely, just look confused) but, equally, you can take that feedback and adjust your own language model. This happens all day, every day. If most people understand you but a few don't, it's on the few to adjust their models, but if more misunderstand than understand then it's on you to adjust yours.

    With current LLMs it's one-way. Only you, the human, are malleable. Of course, theoretically the LLM could continuously incorporate input into its model, but we're a long way from that being practical as far as I know.

    We'll have to see how it pans out, but I can see it either ending up in a weird feedback loop where people just capitulate and use the language of the LLM, or people continuing to use human language with humans and a special LLM language with LLMs. Both options seem pretty bad.