75 points throwaway-ai-qs | 5 comments

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months alone, I've watched at least 8 companies embrace AI generation for coding, testing, and code reviews. Honestly, the best suggestions I've seen are the ones found by linters in CI and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

nharada ◴[] No.45278990[source]
My biggest annoyance is that people aren't transparent about when they use AI, so you are forced to review everything through the lens that it may be human-created and therefore deserving of your attention and the benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded onto me, the reviewer.

replies(2): >>45279083 #>>45279169 #
kstrauser ◴[] No.45279083[source]
I disagree. Code is code: it speaks for itself. If it's high quality, I don't care whether it came from a human or an AI trained on good code examples. If it sucks, it's not somehow less awful just because someone worked really hard on it. What would change for me is how tactful I am in wording my response; it's a little easier replying to AI output because I don't care about being mean to a machine. The summary of my review would be the same either way: here are the bad parts I want you to rework before I consider this.
replies(10): >>45279121 #>>45279166 #>>45279176 #>>45279188 #>>45279282 #>>45279301 #>>45279327 #>>45279336 #>>45279362 #>>45280127 #
1. alansammarone ◴[] No.45279176[source]
I've had a similar discussion with a coworker whom I respect and know to be very experienced, and interestingly we disagreed on this very point. I'm with you: I think AI is just a tool, and people shouldn't be off the hook because they used AI to write code. If they consistently deliver bad code and bad PR descriptions, or fail to explain and articulate their reasoning, I don't see any particular reason to treat that differently now that AI exists. It goes both ways, of course: the reviewer also shouldn't pay less attention when the code did not involve AI help in any form. These two things are completely orthogonal, and I honestly don't see why people conflate them.

The person who created the PR is responsible for it. Period. Nothing changes.

replies(1): >>45279314 #
2. skydhash ◴[] No.45279314[source]
It does change things, because the number of PRs goes up. So instead of reviewing, it’s more like back-and-forth debugging, where you are doing the checking that the author was supposed to do.
replies(1): >>45279372 #
3. alansammarone ◴[] No.45279372[source]
Then the author is not a great programmer/professional. I agree with you that they should have done their homework, tested it, and built a mental model for the why and how. If they haven't, it doesn't seem particularly relevant to me whether that's because they had a concussion or because they used AI.
replies(1): >>45279552 #
4. skydhash ◴[] No.45279552{3}[source]
It’s easy to skimp on quality in code, starting with coding only the happy path and with bad design that hides bugs. Handling errors properly can take a lot of time, and designing to avoid errors takes even longer.

So when you have a tool that can produce things that fit the happy path easily, don’t be surprised that the number of PRs goes up. Before, by the time you could write the happy path that easily, experience had taught you all the error cases you would otherwise have skipped.

replies(1): >>45282467 #
5. JustExAWS ◴[] No.45282467{4}[source]
I have been developing for 40 years: 10 as a hobbyist and 30 as a professional. I always started with the happy path, made sure it worked, and then kept thinking about corner cases. If an LLM can get me through the happy path (and it often generates code to guard against corner cases too), why wouldn’t I use it?