
419 points serjester | 1 comment
1. gcp123 No.43542963
I've spent the last six months building a coding agent at work, and the reliability issues are killing us. Our users don't want 'superhuman' results 10% of the time - they want predictable behavior they can trust.

When we tried the 'full agent' approach (letting it roam freely through our codebase), we ended up with some impressive demos but constant production incidents. We've since pivoted to more constrained workflows with human checkpoints, and while less flashy, user satisfaction has gone way up.
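A constrained workflow with human checkpoints can be sketched roughly like this. This is a minimal illustration, not any real agent framework's API; the function names (`propose_patch`, `apply_patch`) are hypothetical:

```python
# Hypothetical sketch of a human-checkpoint gate: the agent only *proposes*
# changes, and nothing is applied without explicit approval.

def propose_patch(task: str) -> dict:
    """Stand-in for the agent: returns a proposed change, never applies it."""
    return {"file": "app.py", "diff": "- old line\n+ new line", "task": task}

def apply_patch(patch: dict, approved: bool) -> str:
    """Apply only changes a human has explicitly approved at the checkpoint."""
    if not approved:
        return "rejected: " + patch["task"]
    return "applied: " + patch["file"]

patch = propose_patch("rename config loader")
print(patch["diff"])  # the human reviews the diff here, then approves or rejects
result = apply_patch(patch, approved=True)
```

The point of the gate is that the flashy part (generation) is unchanged; only the irreversible part (application) is behind a human decision.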

The Cursor wipeout incident is a perfect example. It's not about blaming users who don't understand git - it's about tools that should know better. When I hand my code to another developer, they understand the implied contract of 'don't delete all my shit without asking.' Why should AI get a pass?
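One cheap way a tool can "know better" is to refuse destructive actions while there is uncommitted work. A sketch, assuming a git checkout; the only external call is the real `git status --porcelain` CLI, while the guard function itself is illustrative:

```python
# Hedged sketch: block destructive agent actions unless the working tree is
# committed, so nothing unrecoverable gets wiped out.
import subprocess

def working_tree_clean(repo: str = ".") -> bool:
    # `git status --porcelain` prints one line per modified/untracked file;
    # empty output means everything is committed.
    out = subprocess.run(
        ["git", "-C", repo, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.strip() == ""

def guard_delete(paths: list[str], repo: str = ".") -> None:
    if not working_tree_clean(repo):
        raise RuntimeError(
            "uncommitted changes present; commit or stash before the agent may delete files"
        )
    # ...only proceed with deletion after this check, ideally plus an
    # explicit user confirmation.
```

This is the mechanical version of the implied contract: deletions are recoverable from history, or they don't happen.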

Reliable > clever. It's the difference between a senior engineer who delivers consistently and a junior who occasionally writes brilliant code but breaks the build every other week.