But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”
If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for intentionally attempting to smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you or your PRs can't be trusted, they shouldn't even be admitted to the review process.
That's a pretty nice offer from one of the most famous and accomplished free software maintainers in the world. He's promising not to take a shortcut reviewing your PR in exchange for you not taking a shortcut writing it in the first place.
This “shortcut” framing suggests that the quality of a submission will be objectively worse by virtue of its provenance.
Yet can one reliably distinguish working, tested code written by a person from the same written by a machine? We're well past passing Turing tests at this point.
IMO, when people declare that LLMs "pass" at a particular skill, it's a sign that they don't have the taste or experience to judge that skill themselves. Or, when it's CEOs, that they have an interest in devaluing it.
So yes: if you're trying to fool an experienced open source maintainer with unrefined LLM-generated code, good luck, especially when that maintainer has said outright that he doesn't want it.