
423 points by serjester | 1 comment
1. LeifCarrotson No.43538030
Unfortunately, LLMs, natural language, and human cognition are largely what they are. Mix the three together and you don't get reliability as a result.

It's not like there's a lever in Cursor HQ with "Capability" on one side and "Reliability" on the other, where they could make things better just by tipping it back toward the latter.

You can bias designs and efforts in that direction, getting your tool to output reversible steps or baking sanity checks into blessed actions, but that doesn't change the nature of the problem.
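
For illustration only, here's a minimal Python sketch of what "reversible steps" and "sanity checks on blessed actions" might look like. The BlessedAction/Agent names and structure are hypothetical, not anything Cursor actually ships: each action carries an undo and must pass a cheap check before it runs, and an undo log lets the whole sequence be rolled back.

    # A minimal sketch, not Cursor's actual implementation: every agent
    # action must be registered ("blessed") with an undo and a sanity
    # check, so each step is reversible and vetted before it runs.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class BlessedAction:
        name: str
        run: Callable[[], None]
        undo: Callable[[], None]          # makes the step reversible
        sanity_check: Callable[[], bool]  # cheap precondition test

    @dataclass
    class Agent:
        log: List[BlessedAction] = field(default_factory=list)  # undo log

        def execute(self, action: BlessedAction) -> bool:
            if not action.sanity_check():
                print(f"refusing {action.name}: sanity check failed")
                return False
            action.run()
            self.log.append(action)
            return True

        def rollback(self) -> None:
            # Undo every executed step, newest first.
            while self.log:
                self.log.pop().undo()

    # Hypothetical usage: a reversible counter bump with a guard.
    state = {"count": 0}
    bump = BlessedAction(
        name="bump-count",
        run=lambda: state.update(count=state["count"] + 1),
        undo=lambda: state.update(count=state["count"] - 1),
        sanity_check=lambda: state["count"] < 10,
    )
    agent = Agent()
    agent.execute(bump)   # state -> {'count': 1}
    agent.rollback()      # state -> {'count': 0}

Note the sketch only restates the point above: the checks and undo log raise the floor, but the LLM deciding which actions to take is still as unreliable as before.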