    728 points freetonik | 15 comments
    1. electric_muse ◴[] No.44976627[source]
    I just submitted my first big open source contribution to the OpenAI agents SDK for JS. Every word of it, except the issue I opened, was written by AI.

    On the flip side, I’m preparing to open source a project I made for a serializable state machine with runtime hooks. But that’s blood, sweat, and tears labor. AI is writing a lot of the unit tests and the code, but it’s entirely by my architectural design.
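    (For context, "serializable state machine with runtime hooks" roughly means the machine definition and current state are plain data that can be persisted, while side effects are attached as hooks at runtime. A minimal TypeScript sketch of that general pattern - hypothetical, not the actual project code:)

        // Hypothetical sketch: the definition and current state are plain,
        // JSON-serializable data; side effects live in hooks that callers
        // re-attach at runtime after rehydration.
        type MachineDef = {
          initial: string;
          states: Record<string, { on?: Record<string, string> }>;
        };
        type TransitionHook = (from: string, event: string, to: string) => void;

        class Machine {
          private hooks: TransitionHook[] = [];
          constructor(private def: MachineDef, private state = def.initial) {}

          onTransition(hook: TransitionHook) {
            this.hooks.push(hook);
          }

          send(event: string) {
            const next = this.def.states[this.state]?.on?.[event];
            if (!next) return; // ignore events with no transition defined
            const prev = this.state;
            this.state = next;
            this.hooks.forEach((h) => h(prev, event, next));
          }

          serialize(): string {
            return JSON.stringify({ def: this.def, state: this.state });
          }

          static deserialize(json: string): Machine {
            const { def, state } = JSON.parse(json);
            return new Machine(def, state); // hooks get registered again by the caller
          }
        }

        // Usage: define, react via a hook, round-trip through JSON.
        const m = new Machine({
          initial: "idle",
          states: { idle: { on: { START: "running" } }, running: {} },
        });
        m.onTransition((from, ev, to) => console.log(`${from} --${ev}--> ${to}`));
        m.send("START"); // logs: idle --START--> running
        const copy = Machine.deserialize(m.serialize()); // same state, hooks not yet attached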

    There’s a continuum here. It’s not binary. How can we communicate what role AI played?

    And does it really matter anymore?

    (Disclaimer: autocorrect corrected my spelling mistakes. Sent from iPhone.)

    replies(6): >>44976688 #>>44976709 #>>44976717 #>>44976767 #>>44977295 #>>44977510 #
    2. kbar13 ◴[] No.44976688[source]
    If you read his note, I think he gives good insight into why he wants PRs to signal AI involvement.

    That being said, I feel like this is an intermediate step. It's really hard to review AI-slop PRs, because it's so easy for those who don't know how to use AI well to create a multi-hundred- or thousand-line diff. But when AI is used well, it really saves time and often produces high-quality work.

    replies(1): >>44976818 #
    3. kg ◴[] No.44976709[source]
    The OP seems to be coming from the perspective of "my time as a PR reviewer is limited and valuable, so I don't want to spend it coaching an AI agent or a thin human interface to an AI agent". From that perspective, it makes perfect sense to want to know how much a human is actually in the loop for a given PR. If the PR is good enough to not need much review then whether AI wrote it is less important.

    An angle not mentioned in the OP is copyright - depending on your jurisdiction, AI-generated text can't be copyrighted, which could call into question whether you can enforce your open source license anymore if the majority of the codebase was AI-generated with little human intervention.

    replies(1): >>44976935 #
    4. ToucanLoucan ◴[] No.44976717[source]
    > And does it really matter anymore?

    Well, if you had read what was linked, you would find these...

    > I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.

    > The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

    > I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code.

    I don't know specifically what PRs this person is seeing. I do know there's been a rumbling around the open source community that inexperienced devs are trying to get PRs accepted on open source projects because they look good on a resume. This predated AI, in fact; it was a commonly cited way to get attention in a competitive recruiting market.

    As always, folks trying to get work have my sympathies. Ultimately, though, these folks are demanding time and work from others, for free, to improve their career prospects, while putting in the absolute bare minimum of effort one could conceivably put in (having Copilot rewrite some part of an open source project and shoving it into a PR with an explanation of what it did). I don't blame maintainers for being annoyed at the number of low-quality submissions.

    I have never once criticized a developer for being inexperienced. It is what it is; we all started somewhere. However, if a dev generated shit code, shoved it into my project, and demanded a headpat for it so he could get work elsewhere, I'd tell him to get bent too.

    5. beckthompson ◴[] No.44976767[source]
    I think it's simple: just don't hide it. I've had multiple contributors try to hide the fact that they used AI (e.g. removing Claude as a code author from the commits - they didn't know how to do it and closed the PR when it first happened; the trailer in question looks roughly like the example after the list below). I don't really care if someone uses AI, but most of the people who do also don't test their changes, which just gives me more work. If someone:

    1.) Didn't try to hide the fact that they used AI

    2.) Tested their changes

    I would not care at all. The main issue is that this is usually not the case: most people submitting PRs that are 90% AI do not bother testing (usually they don't even bother running the automated tests).
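    (For reference, the co-author attribution that Claude Code appends to commit messages typically looks something like the lines below; stripping those lines from the commit is how the involvement gets hidden.)

        🤖 Generated with [Claude Code](https://claude.ai/code)

        Co-Authored-By: Claude <noreply@anthropic.com>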

    replies(1): >>44977089 #
    6. spaceywilly ◴[] No.44976818[source]
    As long as they make it easy to add a “made with AI” tag to the PR, it seems like there’s really no downside. I personally can’t imagine why someone would want to hide the fact they used AI. A contractor would not try to hide that they used an excavator to dig a hole instead of a shovel.
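    (As a hypothetical example of "making it easy": a short disclosure checklist in the repo's pull request template, e.g. a .github/PULL_REQUEST_TEMPLATE.md along these lines, would make the tag a one-click checkbox.)

        ## AI assistance
        - [ ] None
        - [ ] AI-assisted (autocomplete, small suggestions)
        - [ ] Largely AI-generated; I have reviewed and tested the changes myself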
    replies(2): >>44976950 #>>44976988 #
    7. victorbjorklund ◴[] No.44976935[source]
    As long as some of the code is written by humans, it should be enforceable. If we assume AI code has no copyright (not sure that has been tested in court yet), then only the parts written by the AI would be unprotected. So if AI writes 100 lines of code in Ghostty, then I guess yes, someone could "steal" those 100 lines (but no other code in Ghostty). Why would anyone do that? 100 random lines of AI code in isolation aren't really worth anything...
    replies(1): >>44983033 #
    8. victorbjorklund ◴[] No.44976950{3}[source]
    I guess if you write 1000 lines and the only AI involvement was tab-accepting an autocomplete of a variable name, you might not want to say the code was written by AI.
    9. ineedasername ◴[] No.44976988{3}[source]
    >I personally can’t imagine why someone would want to hide the fact they used AI.

    Because of the perception that anything touched by AI must be uncreative slop made without effort. In the case of this article, why else are they asking for disclosure if not to filter and dismiss such contributions?

    replies(1): >>44978170 #
    10. ◴[] No.44977089[source]
    11. Jaxan ◴[] No.44977295[source]
    > How can we communicate what role AI played?

    What about just stating exactly what role AI played? You can say it generated the tests for you, for instance.

    12. KritVutGu ◴[] No.44977510[source]
    > AI is writing a lot of the unit tests

    Are you kidding?

    - For ages now, people have used "broad test coverage" and "CI" as excuses for superficial reviews, as excuses for negligent coding and verification.

    - And now people foist even writing the test suite off on AI.

    Don't you see that this way you have no reasoned examination of the code?

    > ... and the code, but it’s entirely by my architectural design.

    This is fucking bullshit. The devil is in the details, always. The greatest care and the closest supervision must be applied precisely where the rubber meets the road. I wouldn't want to drive a car that you "architecturally designed" and a statistical language model manufactured.

    13. showcaseearth ◴[] No.44978170{4}[source]
    Did you actually read the post? The author describes exactly why. It's not to filter and dismiss, but to deprioritize spending cycles debugging and/or coaching a contributor on code they don't actually understand anyway. If you can articulate how you used AI and demonstrate that you understand the problem and your proposed solution (even if AI helped get you there), then I'm sure the maintainers will be happy to work with you to get a PR merged.

    >I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

    replies(1): >>44979655 #
    14. ineedasername ◴[] No.44979655{5}[source]
    >did you actually read the post?

    Yes.

    >but it's to deprioritize spending cycles debugging and/or coaching a contributor on code they don't

    This is very much in line with my comment about doing it to filter and dismiss. The author didn't say "So I can reach out and see if their clear eagerness to contribute extends to learning to code in more detail".

    15. simoncion ◴[] No.44983033{3}[source]
    You might be interested in reading Part 2 of the US Copyright Office's report on Copyright and Artificial Intelligence: <https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...>