
728 points freetonik | 2 comments
ivanjermakov ◴[] No.44979576[source]
Hot take: if you can't spot any issues in the code review it's either good code, code that needs further changes, or review was not done properly. I don't see how "I used LLMs" fit here, because it means nothing to the quality of the code submitted.

If such a mention meant increased reviewer attention, then every code review should include it.

replies(3): >>44979774 #>>44982405 #>>44982900 #
badosu ◴[] No.44982405[source]
I contribute to an open-source game with decades of legacy code that employs many unusual patterns.

Two weeks ago someone asked me to review a PR, which I did, pointing out some architectural concerns. The code was not necessarily bad, but it added boilerplate to the point that it required some restructuring to keep the codebase maintainable. I couldn't care less whether it was written by AI in this case; the change was not necessarily simple, but it was straightforward.

It took considerable effort to conjure a thoughtful and approachable description of the problem and the possible solutions. Keep in mind this is a project with considerable traction, and a large share of contributions come from enthusiasts. So having a clear, maintainable, and approachable codebase is sometimes the most important requirement.

They asked for a second pass, but it took two weeks for me to get around to it; in the meantime they sent 3 different PRs, one closed after another. I found it a bit strange, then put some effort into reviewing the last iteration. It had half-baked solutions: for example, there were state cleanup functions, but the state was never written in the first place, something that would never get through normally. I gave them the benefit of the doubt and pointed it out, though I suspected it was most likely AI-generated.
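A minimal sketch of that kind of half-baked change, with all names invented for illustration (this is not code from the project in question): a cleanup method diligently tears down state that no code path ever populates, so it runs without error and slips past a casual review.

```python
# Hypothetical illustration: a "cleanup" for state that is never written.
# All names here are invented, not taken from the project being discussed.

class UnitSelection:
    def __init__(self):
        # The cleanup below assumes this cache gets populated somewhere...
        self._hover_cache = {}

    def clear_hover_cache(self):
        """Tear down hover state between frames."""
        # ...but nothing in the codebase ever writes to it,
        # so this call is always a no-op.
        self._hover_cache.clear()


sel = UnitSelection()
sel.clear_hover_cache()  # executes "fine", which is exactly the trap
assert sel._hover_cache == {}
```

Because the code compiles and runs, only a reviewer who traces where the state is actually written will notice the function is dead weight.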

Then they showed me another variation of the PR where they implemented a whole different architecture: an incredibly over-engineered fluent interface to resolve a simple problem, with many layers of indirection, reflecting complete indifference to the more nuanced considerations relevant to the domain and project that I had tried to impart. The code might work, but even if it does, it's obvious that the change is a net negative for the project.
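To make the contrast concrete, here is a hypothetical sketch (invented names, not the actual PR): a direct fix next to a fluent-interface version that routes the same arithmetic through a builder layer that adds nothing but surface area.

```python
# Hypothetical contrast, not code from the project being discussed.

# The simple problem: apply a damage modifier to a unit's hit points.
def apply_damage(unit_hp, damage, modifier=1.0):
    return max(0, unit_hp - int(damage * modifier))


# The over-engineered variant: a fluent builder wrapping the same arithmetic
# behind chained setters and a closure, purely for indirection's sake.
class DamageRequestBuilder:
    def __init__(self):
        self._damage = 0
        self._modifier = 1.0

    def with_damage(self, damage):
        self._damage = damage
        return self

    def with_modifier(self, modifier):
        self._modifier = modifier
        return self

    def build(self):
        return lambda hp: max(0, hp - int(self._damage * self._modifier))


# Both compute the same result; only one is approachable for an enthusiast
# contributor reading the codebase for the first time.
direct = apply_damage(100, 30, 1.5)
fluent = DamageRequestBuilder().with_damage(30).with_modifier(1.5).build()(100)
assert direct == fluent == 55
```

The fluent version is not wrong, it just buys no expressive power here while multiplying the code a newcomer has to read, which is the maintainability cost the review was flagging.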

What I suspected was indeed the case, as they finally disclosed the use of AI, but that is not necessarily the problem, as I hope to convey. The problem is that I was unable to gauge the submitter's commitment to the humble job of _understanding_ what I proposed. The proposal, in the end, just became mere tokens for inclusion into a prompt. Disclosure wouldn't necessarily have caused me to not take the PR seriously; instead I would have invested my time in the more productive effort of orienting the submitter on the tractability of achieving their goal.

I would rather have known their intentions, or gauged their capacity, beforehand. It would have been better for both of us: they would have had their initial iteration merged (which was fine; I would just have shelved the refactor for another occasion) and I wouldn't have lost time.

replies(1): >>44982498 #
1. ivanjermakov ◴[] No.44982498[source]
Your story highlights the clash of two mindsets in open source development. On one side there are artisans who treat programming as a craft and care about every line of code. On the other, vibe-coders who might not even have seen the code the LLMs generated and don't care about it as long as the program works.

And of course there is everything in between.

replies(1): >>44982528 #
2. badosu ◴[] No.44982528[source]
Some context: it is a tightly knit community, where I interact with the contributors often in a conversational manner.

I know they are well-meaning and their intent is to push the project forward; I don't care whether they are vibe-coding. I care about knowing that they are vibe-coding, so I can help them vibe-code in a way that actually achieves their goal, or help them realize early that they lack the capacity to contribute (not necessarily through any fault of their own; maybe they just need orientation on how to reason about problems and solutions, or on their use of the tools).