
54 points tudorizer | 1 comment
oytis No.44367106
I don't get his argument, and if it weren't Martin Fowler I would just dismiss it. He admits himself that it's not an abstraction over the previous activity, as it was with HLLs, but a new activity altogether: prompting LLMs for non-deterministic outputs.

Even if we assume there is value in it, why should it replace, even in part, the previous activity of reliably making computers do exactly what we want?

replies(2): >>44403162 #>>44403847 #
dist-epoch No.44403162
Because unreliably solving a harder problem with LLMs is much more valuable than reliably solving an easier problem without them.
replies(4): >>44403214 #>>44403346 #>>44404165 #>>44407471 #
oytis No.44403346
OK, so we have two classes of problems here: ones worth solving unreliably, and ones that are better solved without LLMs. That doesn't sound like a next level of abstraction to me.
replies(2): >>44403871 #>>44404015 #
pydry No.44404015
The story of programming is largely not one of humans striving to be more reliable when we program, but one of putting up better defenses against our own inherent unreliability.

When I watch juniors struggle, they seem to think it's because they don't think hard enough, whereas it's usually because they didn't build enough infrastructure to keep themselves from having to think that hard.

As it happens, when it comes to programming, LLM unreliabilities seem to align quite closely with ours, so the same guardrails that protect against human programmers' tendencies to fuck up (mostly tests and types) work pretty well for LLMs too.
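
A minimal sketch of those guardrails, assuming a Python codebase with type hints and a plain assert-based test (the names and numbers are made up for illustration, not taken from the thread):

    # Illustrative guardrails: a type hint narrows the inputs, a small test
    # pins the behaviour, and either one catches a careless rewrite whether
    # it came from a human or an LLM. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Money:
        cents: int  # whole cents as an int, to avoid float rounding errors

    def apply_discount(price: Money, percent: int) -> Money:
        """Reduce price by `percent`, rounding down to whole cents."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return Money(price.cents * (100 - percent) // 100)

    def test_apply_discount() -> None:
        assert apply_discount(Money(1000), 25) == Money(750)
        assert apply_discount(Money(999), 10) == Money(899)   # rounds down
        assert apply_discount(Money(1000), 0) == Money(1000)  # no-op

    if __name__ == "__main__":
        test_apply_discount()
        print("guardrails hold")

Whether apply_discount is rewritten by a junior or regenerated by an LLM, the type hints constrain what it may accept and the test pins the behaviour it must preserve, which is the sense in which the same guardrails serve both.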