323 points by steerlabs | 1 comment
jqpabc123 No.46153440
We are trying to fix probability with more probability. That is a losing game.

Thanks for pointing out the elephant in the room with LLMs.

The basic design is non-deterministic. Trying to extract "facts" or "truth" or "accuracy" is an exercise in futility.

jkubicek No.46194503
The author's solution feels like adding even more probability on top of the original problem.

> The next time the agent runs, that rule is injected into its context.

Which the agent may or may not choose to ignore.
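To make that concrete, here is a minimal sketch (the function and field names are hypothetical, not from the article) of what "injected into its context" amounts to: the learned rule is just more text prepended to the prompt, so following it is still left to the model on every run.

    # Hypothetical sketch: the rule is advisory text, nothing enforces it.
    def build_context(task: str, learned_rules: list[str]) -> list[dict]:
        """Assemble the agent's messages, with past-failure rules as plain text."""
        system = "You are a coding agent.\n\nRules learned from previous runs:\n"
        system += "\n".join(f"- {rule}" for rule in learned_rules)
        return [
            {"role": "system", "content": system},  # the model may still ignore this
            {"role": "user", "content": task},
        ]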

Any rule an LLM is expected to follow must be enforced in an API layer, where it can be checked deterministically. Anything else is just asking for bugs or security holes.
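As a rough sketch of that alternative (names and the tool interface are made up for illustration), the rule lives in ordinary code that sits between the model and the side effect, so it cannot be "ignored" no matter what ends up in the prompt:

    # Minimal sketch: enforce the rule outside the model, deterministically.
    FORBIDDEN_PREFIXES = ("/etc/", "/root/")

    def execute_tool_call(name: str, args: dict) -> str:
        """Run a tool call proposed by the agent, after a hard policy check."""
        if name == "write_file":
            path = args["path"]
            if any(path.startswith(p) for p in FORBIDDEN_PREFIXES):
                # Hard failure, independent of whatever the context said.
                raise PermissionError(f"write to {path} blocked by policy")
            with open(path, "w") as f:
                f.write(args["content"])
            return f"wrote {path}"
        raise ValueError(f"unknown tool: {name}")

The check runs on every call, so the failure mode is a refused action rather than a probabilistically followed instruction.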