
728 points by freetonik | 1 comment
Waterluvian No.44976790
I’m not a big AI fan, but I do see it as just another tool in the toolbox. I wouldn’t really care how someone arrived at the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

cvoss No.44976945
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust inevitably enters the equation. You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn’t matter, the Linux kernel team wouldn’t have needed to ban the University of Minnesota for intentionally trying to smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you or your PRs can’t be trusted, they shouldn’t even be admitted to the review process.

koolba No.44977169
> You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

No, you don’t. You can’t outsource trust determinations, especially not to the people you claim not to trust!

You make the judgment call by looking at the code and at your history with the contributor.

Nobody cares if contributors use an LLM or a magnetic needle to generate code. They care if bad code gets introduced or bad patches waste reviewers’ time.

eschaton No.44978479
You’re completely incorrect. People care a lot about where code came from. They need to be able to trust that the code you’re contributing was not copied from a project under AGPLv3, if the project you’re contributing to is under a different license.

Stop trying to equate LLM-generated code with indexing-based autocomplete. They’re not the same thing at all: LLM-generated code is equivalent to code copied off Stack Overflow, which is also something you’d better not be attempting to fraudulently pass off as your own work.

koolba No.44979814
I’m not equating any type of code generation. I’m saying that, as a maintainer, you have to evaluate any submission on its merits, not on a series of yes/no questions provided by the submitter. And your own judgment is influenced by what you know about the submitter.
eschaton No.44981192
And I’m saying that, as a maintainer, you have to do both, and you already do both, even if you don’t think you do.

For example, you either make your contributors attest that their changes are original or that they have the right to contribute them, or else you assume this of them and consider it implicit in their submission.
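
A concrete instance of the explicit route is the Linux kernel’s Developer Certificate of Origin: every patch must carry a Signed-off-by trailer, which is the contributor’s attestation that they wrote the change or otherwise have the right to submit it under the project’s license. A minimal sketch (the commit message, name, and email here are placeholders):

  # -s / --signoff appends the attestation trailer to the commit message
  git commit -s -m "parser: fix off-by-one in token scan"

  # resulting trailer in the commit message:
  # Signed-off-by: Jane Doe <jane@example.com>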

What you (probably) don’t do is welcome contributions that the contributors do not have the right to make.