    490 points todsacerdoti | 23 comments
    1. Havoc ◴[] No.44382839[source]
    I wonder whether the motivation is really legal? I get the sense that some projects are just sick of reviewing crap AI submissions
    replies(6): >>44382854 #>>44382954 #>>44383005 #>>44383017 #>>44383164 #>>44383177 #
    2. SchemaLoad ◴[] No.44382854[source]
    This could honestly break open source, with how quickly you can generate bullshit, and how long it takes to review and reject it. I can imagine more projects going the way of Android where you can download the source, but realistically you can't contribute as a random outsider.
    replies(5): >>44382866 #>>44382874 #>>44383174 #>>44383418 #>>44385273 #
    3. api ◴[] No.44382866[source]
    Quality contributions to OSS are rare unless the project is huge.
    replies(1): >>44382922 #
    4. hollerith ◴[] No.44382874[source]
    I've always thought that the possibility of forking the project is the main benefit to open-source licensing, and we know Android can be forked.
    replies(1): >>44382997 #
    5. loeg ◴[] No.44382922{3}[source]
    Historically the opposite of quality contributions has been no contributions, not net-negative contributions (random slop that costs more in review than it provides benefit).
    replies(2): >>44383133 #>>44387502 #
    6. disconcision ◴[] No.44382954[source]
    I mean, they say the policy is open for revision and it's also possible to make exceptions; if it's an excuse, they're going out of their way to let people down easy.
    7. ants_everywhere ◴[] No.44382997{3}[source]
    The primary benefit of open source is freedom.
    replies(1): >>44383055 #
    8. Lerc ◴[] No.44383005[source]
    I'm not sure which way AI would move the dial when it comes to the median submission. Humans can, and do, make some crap code.

    If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.

    Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.

    I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibility over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.

    replies(2): >>44383115 #>>44383122 #
    9. gerdesj ◴[] No.44383017[source]
    The policy is concise and well bounded. It seems to me to assert that you cannot safely assign attribution of authorship of software code that you think was generated algorithmically.

    I use the term "algorithmic" because I think it is stronger than "AI lol". I note they use terms like "AI code generator" in the policy, which might be just as strong but looks to me unlikely to become a useful legal term (it's hardly "a man on the Clapham omnibus").

    They finish with this rather reasonable flourish:

    "The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax."

    No doubt they do get a load of slop, but they seem to want to close the legal angles down first, and attribution seems a fair place to start. This playbook looks way better than curl's.
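
    The mechanical end of attribution is cheap to enforce, too: QEMU already requires Signed-off-by lines on every commit. As a rough sketch (assuming Python and git on the PATH; the script and its CI wiring are my own illustration, not QEMU's actual tooling), a gate like this rejects any commit in a range that lacks one:

      # Rough sketch of a DCO-style gate: fail if any commit in the
      # given range lacks a Signed-off-by line. The rev range
      # (e.g. origin/master..HEAD) would be supplied by CI.
      import subprocess
      import sys

      def commits_missing_signoff(rev_range):
          # %H = full hash, %B = raw body; NUL/SOH bytes as delimiters.
          out = subprocess.run(
              ["git", "log", "--format=%H%x00%B%x01", rev_range],
              capture_output=True, text=True, check=True,
          ).stdout
          missing = []
          for entry in out.split("\x01"):
              entry = entry.strip()
              if not entry:
                  continue
              sha, _, body = entry.partition("\x00")
              if "Signed-off-by:" not in body:
                  missing.append(sha)
          return missing

      if __name__ == "__main__":
          bad = commits_missing_signoff(sys.argv[1])
          for sha in bad:
              print("commit %s has no Signed-off-by line" % sha[:12],
                    file=sys.stderr)
          sys.exit(1 if bad else 0)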

    10. javawizard ◴[] No.44383055{4}[source]
    This is so tautological that I can't really tell what point you're trying to make.
    replies(1): >>44383139 #
    11. catlifeonmars ◴[] No.44383115[source]
    > If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.

    > Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.

    This ignores the fact that many open source projects do not have the resources to review a large number of contributions. A side effect of LLM code generation is probably going to be a lot more code submitted, and I think that volume will be an issue regardless of the overall quality of the code.

    replies(1): >>44384220 #
    12. ehnto ◴[] No.44383122[source]
    Barrier to entry and automated submissions are two aspects I see changing with AI. Previously, you at least had to be able to code before submitting bad code.

    With AI you're going to get job hunters automating PRs for big name projects so they can stick the contributions in their resume.

    13. lmm ◴[] No.44383133{4}[source]
    No it hasn't? Net-negative contributions to open source have been extremely common for years, it's not like you need an LLM to make them.
    replies(1): >>44383393 #
    14. ants_everywhere ◴[] No.44383139{5}[source]
    How can it possibly be tautological? The comment just above mine said something entirely different: that the primary benefit of open source is forking.
    15. bobmcnamara ◴[] No.44383164[source]
    Have you seen how Monsanto enforces their seed rights?
    16. b00ty4breakfast ◴[] No.44383174[source]
    I have an online acquaintance that maintains a very small and not widely used open-source project and the amount of (what we assume to be) automated AI submissions* they have to wade through is kinda wild given the very small number of contributors and users the thing has. It's gotta be clogging up these big projects like a DDoS attack.

    *"Automated" as in bots and "AI submissions" as in ai-generated code

    replies(1): >>44387317 #
    17. esjeon ◴[] No.44383177[source]
    Possibly, but QEMU is such a critical piece of software in our industry. Its applications stretch from one end to the other: desktop VMs, cloud/remote instances, build servers, security sandboxes, cross-platform environments, etc. Even a small legal risk can hurt the industry pretty badly.
    18. loeg ◴[] No.44383393{5}[source]
    I guess we've had very different experiences!
    19. zahlman ◴[] No.44383418[source]
    For many projects you realistically can't contribute as a random outsider anyway, simply because of the effort involved in grokking enough of the existing architecture to figure out where to make changes.
    20. Lerc ◴[] No.44384220{3}[source]
    I thought that this could be an opportunity for volunteers who can't dedicate the time to learn a codebase thoroughly enough to be a regular committer. They just have to evaluate a patch to see if it meets a threshold of quality where they can pass it on to someone who does know the codebase well.

    The barrier to making a first commit on any project is usually quite high; there are plenty of people who would like to contribute to projects but cannot dedicate the time and effort to pass that initial threshold. This might give people a way to contribute at a lower level while gently introducing them to the codebase, so that perhaps they become regular contributors in the future.

    21. graemep ◴[] No.44385273[source]
    I think it is yet another reason (potentially malicious contributors are another) that open source projects are going to have to verify contributors.
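
    One concrete form that could take (just a sketch, under the assumption that contributors GPG-sign their commits and the project maintains a trusted keyring; this isn't any project's actual policy):

      # Sketch: flag commits whose signatures are missing or do not
      # verify. git verify-commit exits non-zero in either case.
      import subprocess
      import sys

      def unsigned_commits(rev_range):
          shas = subprocess.run(
              ["git", "rev-list", rev_range],
              capture_output=True, text=True, check=True,
          ).stdout.split()
          return [s for s in shas
                  if subprocess.run(["git", "verify-commit", s],
                                    capture_output=True).returncode != 0]

      if __name__ == "__main__":
          sys.exit(1 if unsigned_commits(sys.argv[1]) else 0)
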
    22. guappa ◴[] No.44387317{3}[source]
    I find that by being on Codeberg instead of GitHub I tune out a lot of the noise.
    23. LtWorf ◴[] No.44387502{4}[source]
    Nah. I've had a lot of bad contributions. One PR deleted and re-added every line in the project, and the entire test suite was failing.

    The person got upset at me for saying I could not accept such a thing.

    There are other examples.