
419 points serjester | 1 comment
kuil009 No.43540865
It's natural to expect reliability from AI agents — but I don't think Cursor is a fair example. It's a developer tool deeply integrated with git, where every action can have serious consequences, as in any software development context.

Rather than blaming the agent, we should recognize that this behavior is expected. It’s not that AI is uniquely flawed — it's that we're automating a class of human communication problems that already exist.

This is less about broken tools and more about adjusting our expectations. Just like hunters had to learn how to manage gunpowder weapons after using bows, we’re now figuring out how to responsibly wield this new power.

After all, when something works exactly as intended, we already have a word for that: software.

replies(1): >>43540930 #
bigfishrunning No.43540930
Lol, software is a field that pretty severely lacks rigor -- if software is "something that works exactly as intended," then you've had a very different experience in this industry than I have.
replies(1): >>43541052 #
kuil009 No.43541052
Garbage in, garbage out. Like it or not, even someone’s trashy intentions can run exactly as designed — so I guess we’ve had the same experience.