
728 points freetonik | 4 comments | source
hodgehog11 ◴[] No.44976891[source]
How does this not lead to a situation where no honest person can use any AI in their submissions? Surely pull requests that acknowledge AI tooling will be given significantly less attention, on the grounds that no one wants to read work that they know is written by AI.
replies(8): >>44976947 #>>44976989 #>>44977022 #>>44977044 #>>44977416 #>>44977491 #>>44977540 #>>44977886 #
Workaccount2 ◴[] No.44977022[source]
Make a knowledgeable reply and mention you used ChatGPT: comment immediately buried.

Make a knowledgeable reply with no reference to the AI you used: comment is celebrated.

We are already barreling full speed down the "hide your AI use" path.

replies(3): >>44977616 #>>44979921 #>>44984197 #
1. showcaseearth ◴[] No.44977616[source]
I doubt a PR is going to be buried just because of this disclosure if it's useful, well designed, and good code. Articulate how you used AI, and I think you've met the author's intent.

If the PR has issues and requires more than superficial rework to be acceptable, the authors don't want to spend time debugging code spit out by an AI tool. They're more willing to spend a cycle or two if the benefit is that you learn (either generally as a developer or by becoming more familiar with the project). If you can make clear that you created or understand the code end to end, they're more likely to take those extra steps.

Seems pretty straightforward to me and thoughtful by the maintainers here.

replies(1): >>44978224 #
2. rane ◴[] No.44978224[source]
> I doubt a PR is going to be buried if it's useful, well designed, good code, etc, just because of this disclosure

If it were indeed the substance that matters, why would this rule be necessary? Anything AI-generated carries a heavy "slop" stigma right now, even when the content is solid.

It would make for an interesting experiment to submit a PR that is absolute gold, but with the disclaimer that it was generated with the help of ChatGPT. I would almost guarantee it would be met with skepticism and dismissals.

replies(2): >>44980328 #>>44980480 #
3. s-lambert ◴[] No.44980328[source]
The rule is necessary because the maintainers want to build goodwill with contributors: if a contributor makes a bad PR but could still learn from it, the maintainers will put effort into it. The deal is "if you made a good effort, we'll give you a good effort", and using AI tools gives you a very low floor for what counts as "effort".

If you make a PR where you just used AI and it seems to work, but you didn't go further, the maintainers can say "well, I had a look, it looks bad, you didn't put effort in, I'm not going to coach you through this". But if you make a PR where you say "I used AI to learn about X, then tried to implement X myself with AI writing some of it", the maintainers can say "this PR doesn't look good quality, but it looks like you tried; we can give some good feedback but still reject it".

In a world without AI, if they were getting a lot of PRs from people who obviously hadn't spent any time on them, maybe they would have a "tell us how long this change took you" disclosure as well.

4. davidcbc ◴[] No.44980480[source]
The author explains why

> While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

If it's bad code from a person, he'll help them get it fixed. If it's bad code from an AI, why bother?