
54 points tudorizer | 3 comments
oytis No.44367106
I don't get his argument, and if it weren't Martin Fowler I would just dismiss it. He admits himself that it's not an abstraction over the previous activity, as it was with high-level languages (HLLs), but rather a new activity altogether: prompting LLMs for non-deterministic outputs.

Even if we assume there is value in it, why should it replace (even if only in part) the previous activity of reliably making computers do exactly what we want?

replies(2): >>44403162, >>44403847
dist-epoch No.44403162
Because unreliably solving a harder problem with LLMs is much more valuable than reliably solving an easier problem without them.
replies(4): >>44403214, >>44403346, >>44404165, >>44407471
1. oytis No.44403346
OK, so we have two classes of problems here: ones worth solving unreliably, and ones that are better solved without LLMs. That doesn't sound like a next level of abstraction to me.
replies(2): >>44403871, >>44404015
2. dist-epoch No.44403871
I was thinking more along these lines: you can unreliably solve 100% of the problem with LLMs, or reliably solve only 80% of it.

So you trade reliability to get to that extra 20% of hard cases.

3. pydry No.44404015
The story of programming is largely not one of humans striving to be more reliable when programming, but one of putting up better defenses against our own inherent unreliability.

When I watch juniors struggle, they seem to think it's because they don't think hard enough, whereas it's usually because they didn't build enough infrastructure to prevent them from needing to think so hard.

As it happens, when it comes to programming, LLM unreliabilities seem to align quite closely with ours, so the same guardrails that protect against human programmers' tendencies to fuck up (mostly tests and types) work pretty well for LLMs too.
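
A minimal sketch of what "tests and types" as guardrails can look like in practice, using hypothetical names (parse_port is not from the thread): the type signature and the small test below reject a broken implementation regardless of whether a human or an LLM wrote the function body.

    # Hypothetical guardrail sketch (not from the thread): a typed
    # function plus a small test. The test encodes the contract, so it
    # catches mistakes no matter who, or what, wrote the implementation.

    def parse_port(value: str) -> int:
        """Parse a TCP port from a string, rejecting out-of-range values."""
        port = int(value)  # raises ValueError for non-numeric input
        if not 0 < port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    def test_parse_port() -> None:
        assert parse_port("8080") == 8080
        for bad in ("0", "70000", "http"):
            try:
                parse_port(bad)
            except ValueError:
                pass  # rejection is the expected behaviour
            else:
                raise AssertionError(f"expected {bad!r} to be rejected")

    if __name__ == "__main__":
        test_parse_port()
        print("guardrails hold")

Run it with pytest or execute the file directly; a generated body that drops the range check fails this test in exactly the way a careless human edit would.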