
54 points by tudorizer | 1 comment
oytis No.44367106
I don't get his argument, and if it weren't Martin Fowler I would just dismiss it. He himself admits that it's not an abstraction over the previous activity, as HLLs (high-level languages) were, but rather a new activity altogether: prompting LLMs for non-deterministic outputs.

Even if we assume there is value in it, why should it replace (even in part) the previous activity of reliably making computers do exactly what we want?
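
To make the contrast concrete, here is a minimal Python sketch; prompt_llm is a hypothetical stub standing in for a real LLM API (no actual library is assumed), while add behaves like the deterministic programs being contrasted with it.

    import random

    def add(a: int, b: int) -> int:
        # Conventional code: same inputs, same output, every run.
        return a + b

    def prompt_llm(prompt: str) -> str:
        # Hypothetical stand-in for an LLM call (not a real API):
        # the same prompt can produce a different output on each run.
        candidates = [
            "def add(a, b): return a + b",
            "def add(x, y):\n    return x + y",
            "add = lambda a, b: a + b",
        ]
        return random.choice(candidates)

    print(add(2, 3))                                  # always 5
    print(prompt_llm("Write a Python add function"))  # varies between runs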

replies(2): >>44403162 #>>44403847 #
dist-epoch No.44403162
Because unreliably solving a harder problem with LLMs is much more valuable than reliably solving an easier problem without them.
replies(4): >>44403214 #>>44403346 #>>44404165 #>>44407471 #