
321 points by laserduck | 2 comments
klabb3 ◴[] No.42157457[source]
I don’t mind LLMs in the ideation and learning phases, which aren’t reproducible anyway. But I still find it hard to believe engineers of all people are eager to put a slow, expensive, non-deterministic black box right at the core of extremely complex systems that need to be reliable, inspectable, understandable…
replies(6): >>42157615 #>>42157652 #>>42158074 #>>42162081 #>>42166294 #>>42167109 #
brookst ◴[] No.42157652[source]
You find it hard to believe that non-deterministic black boxes at the core of complex systems are eager to put non-deterministic black boxes at the core of complex systems?
replies(7): >>42157709 #>>42157955 #>>42158073 #>>42159585 #>>42159656 #>>42171900 #>>42172228 #
klabb3 ◴[] No.42157955[source]
Yes I do! Is that some sort of gotcha? If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”, I’m going to pick the script. Who wouldn’t? Until machines can reliably understand, operate and self-correct independently, I’d rather not give up debuggability and understandability.
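
[Editor's note: a minimal sketch of the kind of deterministic report script being contrasted with "Dave in marketing" here. The SQLite database path and the `orders` table schema are assumptions for illustration, not details from the thread; the point is that every step is inspectable and re-runnable.]

    # Hypothetical example of a deterministic report script.
    # The "sales.db" path and the `orders` table are assumed for illustration.
    import sqlite3

    def monthly_report(db_path: str = "sales.db") -> list[tuple[str, float]]:
        """Return (month, total_revenue) rows; same input always yields the same output."""
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(
                """
                SELECT strftime('%Y-%m', order_date) AS month,
                       SUM(amount)                   AS revenue
                FROM orders
                GROUP BY month
                ORDER BY month
                """
            ).fetchall()
        return rows

    if __name__ == "__main__":
        for month, revenue in monthly_report():
            print(f"{month}\t{revenue:.2f}")
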
replies(2): >>42158254 #>>42158579 #
og_kalu ◴[] No.42158254[source]
>If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”

If you could that would be nice wouldn't it? And if you couldn't?

If people were saying, "let's replace Casio Calculators with interfaces to GPT", then that would be crazy and I would wholly agree with you. But by and large, the processes people are scrambling to put LLMs into are ones that typical machines struggle with or fail at and humans excel at or do decently (and that LLMs are making some headway in).

You're making the wrong distinction here. It's not Dave vs your nifty script. It's Dave or nothing at all.

There's no point comparing LLM performance to some hypothetical perfect understanding machine that doesn't exist.

You compare it to the thing it's meant to replace: humans. How well can the LLM do this compared to Dave?

replies(1): >>42158614 #
kuhewa ◴[] No.42158614[source]
> by and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle or fail

I'm pretty sure they are scrambling to put them absolutely anywhere it might save or make a buck (or convince an investor that it could)

replies(2): >>42165625 #>>42170639 #
og_kalu ◴[] No.42170639[source]
If your task was already being solved well by a deterministic script/algorithm, you are not going to save money by porting it to LLMs, even if you use open-source models.
replies(1): >>42170696 #
kuhewa ◴[] No.42170696[source]
'could' is doing a whole lot of work in that sentence; I'm being charitable. The reality is that LLMs are being crammed into places where they aren't very sensible, under thin justifications, just like the last few big ideas were (cf. blockchain).
replies(1): >>42172376 #
og_kalu ◴[] No.42172376[source]
If it can't be solved by a script, then what's the problem with seeing if you can use LLMs?

I guess I just don't see your point. So a few purported applications are not very sensible. So what? That has been true of every breakthrough ever.