146 points by jakozaur | 2 comments
1. gok No.45673322
> While they provide data privacy, our research shows their weaker reasoning and alignment capabilities make them easier targets for sabotage.

If you are using any LLM's reasoning ability as a security boundary, something is deeply, deeply wrong.
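
A minimal sketch of the point (hypothetical names and policy, not from the article): keep authorization in deterministic code that gates every tool call, so nothing depends on the model's reasoning or willingness to refuse.

    # Hypothetical sketch: the policy table, not the LLM, is the security boundary.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        roles: set[str]

    # Deterministic policy: which roles may invoke which tools.
    TOOL_POLICY = {
        "read_inbox": {"user", "admin"},
        "delete_all_mail": {"admin"},
    }

    def execute_tool(user: User, tool_name: str) -> str:
        # Even if the model "decides" (or is prompt-injected into deciding)
        # that a call is fine, this check is what actually governs access.
        if not TOOL_POLICY.get(tool_name, set()) & user.roles:
            return f"denied: {user.name} may not run {tool_name}"
        return f"executed {tool_name} for {user.name}"

    if __name__ == "__main__":
        alice = User("alice", {"user"})
        print(execute_tool(alice, "read_inbox"))       # executed
        print(execute_tool(alice, "delete_all_mail"))  # denied

The model can propose whatever it likes; a check like this decides.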

replies(1): >>45680503
2. liqilin1567 No.45680503
This reminds me of the claim Stalwart makes for its spam filter feature: "LLM-driven spam filtering and message analysis." :D

https://github.com/stalwartlabs/stalwart