
728 points | freetonik | 3 comments
Waterluvian No.44976790
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

replies(16): >>44976860 #>>44976869 #>>44976945 #>>44977015 #>>44977025 #>>44977121 #>>44977142 #>>44977241 #>>44977503 #>>44978050 #>>44978116 #>>44978159 #>>44978240 #>>44978311 #>>44978533 #>>44979437 #
cvoss No.44976945
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust inevitably enters the equation. You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for intentionally trying to smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you or your PRs can't be trusted, they shouldn't even be admitted to the review process.

replies(4): >>44977169 #>>44977263 #>>44978862 #>>44979553 #
otterley No.44978862
If it comes with good documentation and appropriate tests, does that help?
replies(2): >>44979254 #>>44979456 #
mattbee No.44979254
The observation that inspired this policy is that if you used AI, you likely don't know whether the code, the documentation, or the tests are good or appropriate.
replies(1): >>44979322 #
otterley No.44979322
What if you started with good documentation that you personally wrote, you gave that to the agent, and you verified the tests were appropriate and passed?
replies(1): >>44979539 #
mattbee No.44979539
I'd extrapolate that the OP's view would be: you've still put in less effort, so your PR is less worthy of his attention than one from someone who'd done the same work without using LLMs.

That's a pretty nice offer from one of the most famous and accomplished free software maintainers in the world. He's promising not to take a short-cut reviewing your PR, in exchange for you not taking a short-cut writing it in the first place.

replies(1): >>44979731 #
1. otterley No.44979731
> in exchange for you not taking a short-cut writing it in the first place.

This “short-cut” language suggests that the quality of the submission is going to be objectively worse by way of its provenance.

Yet can one reliably distinguish working, tested code written by a person from code generated by a machine? We’re well past passing Turing tests at this point.

replies(1): >>44980065 #
2. mattbee No.44980065
LLMs can't count letters, their writing is boring, and you can trick them into talking gibberish. That is a long way from passing the Turing test, even if we were fooled for a couple of weeks in 2022.

IMO when people declare that LLMs "pass" at a particular skill, it's a sign that they don't have the taste or experience to judge that skill themselves. Or - when it's CEOs - they have an interest in devaluing it.

So yes, if you're trying to fool an experienced open source maintainer (especially one who's said he doesn't want that) with unrefined LLM-generated code, good luck.

replies(1): >>44984592 #
3. otterley No.44984592
We’re talking about code here, not prose.

Would you like to take the Pepsi challenge? Happy to put random code snippets in front of you and see whether you can accurately tell which were written by a human and which by an LLM.