←back to thread

559 points Gricha | 6 comments | | HN request time: 0s | source | bottom
1. maddmann ◴[] No.46232972[source]
lol 5000 tests. Agentic code tools have a significant bias to add versus remove/condense. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved for by agentic tools.
replies(2): >>46233443 #>>46233663 #
2. oofbey ◴[] No.46233443[source]
Oh I’ve had agents remove tests plenty of times. Or cripple the tests so they pass but are useless - more common and harder to prompt against.
replies(1): >>46235744 #
3. nosianu ◴[] No.46233663[source]
> Agentic code tools have a significant bias to add versus remove/condense.

Your point stands uncontested by me, but I just wanted to mention that humans have that bias too.

Random link (has the Nature study link): https://blog.benchsci.com/this-newly-proven-human-bias-cause...

https://en.wikipedia.org/wiki/Additive_bias

replies(1): >>46235708 #
4. maddmann ◴[] No.46235708[source]
Great point, interesting how agents somehow pick up the same bias.
5. maddmann ◴[] No.46235744[source]
Ah true, that also can happen — in aggregate I think models will tend to expand codebases versus contract. Though, this is anecdotal and probably is something ai labs and coding agent companies are looking at now.
replies(1): >>46237595 #
6. oofbey ◴[] No.46237595{3}[source]
It’s the same bias for action which makes them code up a change when you genuinely are just asking a question about something. They really want to write code.