
75 points throwaway-ai-qs | 2 comments

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests.. I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least eight companies embrace AI for coding, testing, and code reviews. Honestly, the best suggestions I've seen are ones a linter in CI or a spell checker would have caught anyway. Is this what we've come to?

My question for my fellow HNers.. is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

nharada ◴[] No.45278990[source]
My biggest annoyance is that people aren't transparent about when they use AI, so you're forced to review everything through the lens that it may be human-created and therefore deserving of your attention and the benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded on me, the reviewer.

replies(2): >>45279083 #>>45279169 #
kstrauser ◴[] No.45279083[source]
I disagree. Code is code: it speaks for itself. If it's high quality, I don't care whether it came from a human or an AI trained on good code examples. If it sucks, it's not somehow less awful just because someone worked really hard on it. The only thing that changes is how tactfully I word my response; it's a little easier replying to AIs because I don't care about being mean to them. The summary of my review would be the same either way: here are the bad parts I want you to rework before I'll consider this.
replies(10): >>45279121 #>>45279166 #>>45279176 #>>45279188 #>>45279282 #>>45279301 #>>45279327 #>>45279336 #>>45279362 #>>45280127 #
1. roughly ◴[] No.45279188[source]
The problem with AI generated code is there’s no unifying theory behind the change. When a human writes code and one part of the system looks weird or different, there’s usually a reason - by digging in on the underlying system, I can usually figure out what they were going for or why they did something in a particular way. I can only provide useful feedback or alternatives if I can figure out why something was done, though.

LLM-generated code has no unifying theory behind it: every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no consistent thread tying it together and no reason why anything is the way it is. It's like trying to figure out what the fuck is happening in a legacy codebase, except it's brand new. I've wasted hours trying to understand someone's MR, only to realize it's vibe code and there's no reason for any of it.

replies(1): >>45282486 #
2. scarface_74 ◴[] No.45282486[source]
How is that different from long-lived code bases that various developers with varying skill levels have modified?