
108 points leoncaet | 2 comments
reactordev No.44537411
Curious how they’ll balance the business need to move fast with AI against quality, because my agents aren’t that good. While it works, I’m often having to clean up afterwards - slowing everything down. I was almost as fast when I had just basic IntelliSense.

Anyway, I’ll watch the twitch stream from across the pond.

replies(8): >>44537450 #>>44537918 #>>44538120 #>>44538124 #>>44538182 #>>44538255 #>>44538459 #>>44538599 #
1. xyzzy123 No.44538124
Yeah it's interesting: unless I lean hard on them, AI coding agents tend to solve problems with a lot of "hedging" - splitting into cases or duplicating code. They are totally fine with an endless number of special cases, and unless you push for it they will solve most problems with special cases rather than generalise or consolidate (Gemini and Claude Code at least both seem to have this behaviour).
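A hypothetical sketch of the pattern being described - the function names and the currency example are invented for illustration, not taken from any real agent output. The "point fix" style tends to add a new near-duplicate function per case:

    # Hypothetical "special case" style: each new requirement gets its
    # own near-duplicate function instead of one general helper.

    def format_price_usd(amount: float) -> str:
        return f"${amount:,.2f}"

    def format_price_eur(amount: float) -> str:
        return f"€{amount:,.2f}"

    def format_price_gbp(amount: float) -> str:
        return f"£{amount:,.2f}"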

I feel like this comes about because it's the optimal strategy for doing robust one-shot "point fixes", but it comes at the cost of long-term codebase health.

I have noticed this bias towards lots of duplication eventually creates a kind of "AI code soup" that you can only really "fix" or keep working on with AI from that point on.

With the right guidance and hints you can get it to refactor and generalise - and it does it well - but the default style definitely trends to "slop" in my experience so far.
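For contrast, a minimal sketch of what the same invented example above looks like once you push for consolidation:

    # Hypothetical consolidated version: one table-driven helper
    # replaces the per-currency duplicates.

    CURRENCY_SYMBOLS = {"usd": "$", "eur": "€", "gbp": "£"}

    def format_price(amount: float, currency: str) -> str:
        symbol = CURRENCY_SYMBOLS[currency.lower()]
        return f"{symbol}{amount:,.2f}"

    print(format_price(1234.5, "eur"))  # -> €1,234.50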

replies(1): >>44538296 #
2. zahlman No.44538296
To be fair, a lot of humans also have this problem.