
416 points floverfelt | 1 comment | source
ares623 ◴[] No.45056350[source]
> Other forms of engineering have to take into account the variability of the world.

> Maybe LLMs mark the point where we join our engineering peers in a world of non-determinism.

Those other forms of engineering have no choice due to the nature of what they are engineering.

Software engineers already have a way to introduce determinism into the systems they build! We’re going backwards!

replies(6): >>45056412 #>>45056449 #>>45056511 #>>45056669 #>>45056797 #>>45059375 #
tptacek ◴[] No.45056669[source]
'potatolicious says we're going forwards: https://news.ycombinator.com/item?id=44978319
replies(3): >>45056747 #>>45056812 #>>45057894 #
ants_everywhere ◴[] No.45056747[source]
adding to this, software deals with non-determinism all the time.

For example, web requests are non-deterministic. They depend, among other things, on the state of the network. They also depend on the load of the machine serving the request.

One way to think about this is: how easy is it for you to produce byte-for-byte deterministic builds of the software you're working on? If it's not trivial there's more non-determinism than is obvious.
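That probe can be sketched directly (a hedged illustration; `build_cmd` and `artifact` are placeholders for whatever your own build setup looks like):

```python
import hashlib
import subprocess


def build_digest(build_cmd: list[str], artifact: str) -> str:
    """Run the build command, then hash the resulting artifact."""
    subprocess.run(build_cmd, check=True)
    with open(artifact, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def is_reproducible(build_cmd: list[str], artifact: str) -> bool:
    # Build twice; any embedded timestamp, absolute path, or
    # nondeterministic ordering shows up as a digest mismatch.
    return build_digest(build_cmd, artifact) == build_digest(build_cmd, artifact)
```

If the two digests ever differ, some hidden non-determinism (timestamps, paths, parallelism) leaked into your artifact.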

replies(2): >>45056820 #>>45057213 #
skydhash ◴[] No.45056820[source]
Mostly, the engineering part of software is about dealing with non-determinism, either by avoiding it or by enforcing determinism. Take something like TCP: it's all about guaranteeing a deterministic outcome, that a message is either delivered or definitively not. And we have a lot of algorithms that try to guarantee consistency of information between the elements of a system.
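That delivered-or-not contract can be sketched as a bounded retry loop (a simplified illustration, not real TCP; `send_once` here stands in for one lossy transmission attempt that you supply):

```python
def send_with_retries(send_once, attempts: int = 3) -> bool:
    """After at most `attempts` tries, the caller knows definitively
    whether the message was acknowledged -- delivered or not, with no
    indeterminate third state."""
    for _ in range(attempts):
        if send_once():  # one transmission; True means an ack came back
            return True
    return False  # definitive failure once the retry budget is spent
```

The non-determinism (packet loss) stays contained inside `send_once`; the caller only ever observes one of two outcomes.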
replies(2): >>45056858 #>>45056960 #
ares623 ◴[] No.45056960[source]
But there is an underlying deterministic property in the TCP example. A message is either received within a timeout or not.

How can that be extrapolated to LLMs? How does a system independently know that it's arrived at a correct answer within a timeout or not? Has the halting problem been solved?

replies(2): >>45057228 #>>45059115 #
skydhash ◴[] No.45059115[source]
> How can that be extrapolated to LLMs? How does a system independently know that it's arrived at a correct answer within a timeout or not?

That's the catch-22 with LLMs. You're supposed to be both the asker and the verifier, which in practice doesn't work that well. LLMs will just find snippets of code that match somehow and act on them (it's the "I'm Feeling Lucky" button with extra steps)
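One way the verifier half is sometimes mechanized (a sketch under stated assumptions: `generate` is your model call and `verify` is an independent check such as a test suite, both hypothetical placeholders you supply):

```python
def generate_and_verify(generate, verify, max_tries: int = 3):
    """Propose-then-check loop: the model proposes a candidate and an
    independent verifier accepts or rejects it. The try budget plays
    the role of the timeout in the TCP comparison."""
    for _ in range(max_tries):
        candidate = generate()
        if verify(candidate):
            return candidate
    return None  # ran out of budget without a verified answer
```

Note this only pushes the problem around: the loop is deterministic in shape, but the guarantee is only as good as the verifier you can write, which is the asker/verifier tension above.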

In traditional programming, coding is a notation tool more than anything. You're supposed to have a solution before coding, but because of how the human brain works, it's more like a blackboard, i.e. a helper for thinking. You write what you think is correct, verify your assumptions, then store the result and forget about the details once they hold. Once in a while, you revisit the design and make it more elegant (at least you hope you're allowed to).

When LLM programming first started, it was more about direct English-to-finished-code translation. Now hopes have scaled down and it's more about precise specs yielding diff proposals. Which frankly does not improve productivity: either you could have had a generator that's faster and more precise (less costly, too), or you need to read as much documentation to verify everything as you would have needed to code the thing in the first place (the 80% of coding time that is spent reading).

So, no determinism with LLMs. The input has no formal semantics, and the output is randomly sampled. And the domain is very large. It's like trying to find a specific grain of sand on a beach while not being fully sure it's there. I suspect most people are doing the equivalent of grabbing a handful of sand and declaring it's what they wanted all along.

replies(2): >>45059274 #>>45059449 #
ants_everywhere ◴[] No.45059449{6}[source]
> LLMs will just find the snippets of code that matches somehow

This suggests a huge gap in your understanding of LLMs if we are to take this literally.

> LLM programming, when first started, was more about a direct english to finished code translation

There is no direct English-to-finished-code translation. A prompt like "write me a todo app" maps to infinitely many programs with different tradeoffs that appeal to different people. Even if LLMs never made any coding mistakes, there is no function that maps a statement like that to a specific piece of code unless you make completely arbitrary choices, axiom-of-choice style.

So we're left with the fact that we have to specify what we want. And at that, LLMs do exceptionally well.