728 points | freetonik | 1 comment

hombre_fatal ◴[] No.44980547[source]
I see two things here.

1. The world has fundamentally changed due to LLMs. You don't know where a code submission falls on the spectrum between "written thoroughly with eternal vigilance" and "completely vibe-coded", since it's now trivial to generate the latter. There's no going back. And a lot of comments here seem stuck on this point.

2. The maintainer naively or stubbornly imagines that he can get everyone to pre-sort their code into those two buckets through self-reporting.

But that's futile.

It's like asking a date whether they're a good person because you don't want to waste your time on bad people. Unfortunately, that shortcut doesn't exist.

Now, maybe going forward we will be forced to come up with real solutions to the general problem of vetting people. But TFA feels like more of a stunt than a serious pitch.

replies(3): >>44980703 #>>44981628 #>>44981639 #
potsandpans ◴[] No.44980703[source]
In a non-dismissive way, I see things like this (the GH issue) as part of the reactionary movement / counterculture of our time.

People want to feel agency and will react to mainstream pressures, and make up whatever excuses along the way to justify what they're feeling.

replies(2): >>44981512 #>>44983004 #
simoncion ◴[] No.44983004[source]
It's not about "feeling agency" or fabricating a justification. As the PR says:

"AI tooling must be disclosed for contributions

I think, at this stage of AI, it is a common courtesy to disclose this.

In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in many cases it's generating slop. I say this despite being a fan of and using them successfully myself (with heavy supervision)! I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.

The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code."