←back to thread

145 points jakozaur | 1 comments | | HN request time: 0.259s | source
1. pton_xd ◴[] No.45674000[source]
The "lethal trifecta" sounds catchy but I don't believe it accurately characterizes the risks of LLMs.

In theory any two of the trifecta is fine, but practically speaking I think you only need "ability to communicate with the outside," or maybe not even that. Business logic is not really private data anymore. Most devs are likely one `npm update` away from their LLM getting a new command from some transitive dependency.

The LLM itself is also a giant blackbox of unverifiable untrusted data, so I guess you just have to cross your fingers on that one. Maybe your small startup doesn't need to be worried about models being seeded with adversarial training data, but if I were say Coinbase I'd think twice before allowing LLM access to anything.