
317 points laserduck | 1 comment
klabb3 ◴[] No.42157457[source]
I don’t mind LLMs in the ideation and learning phases, which aren’t reproducible anyway. But I still find it hard to believe that engineers, of all people, are eager to put a slow, expensive, non-deterministic black box right at the core of extremely complex systems that need to be reliable, inspectable, and understandable.
replies(6): >>42157615 #>>42157652 #>>42158074 #>>42162081 #>>42166294 #>>42167109 #
1. wslh ◴[] No.42158074[source]
100% agree. While I can’t find all the sources right now, [1] and its references could be a good starting point for further exploration. I recall there being a proof or conjecture suggesting that it’s impossible to build an "LLM firewall" capable of protecting against all possible prompts, though my memory might be failing me.

[1] https://arxiv.org/abs/2410.07283