
728 points by freetonik | 5 comments
hombre_fatal No.44980547
I see two things here.

1. The world has fundamentally changed due to LLMs. You don't know where a code submission falls between "written thoroughly with eternal vigilance" and "completely vibe-coded", since it's now trivial to generate the latter. There's no going back. And a lot of comments here seem stuck on this point.

2. The maintainer naively or stubbornly imagines that he can get everyone to pre-sort their code between the two buckets through self-reporting.

But that's futile.

It's like asking someone if they're a good person on a date because you don't want to waste your time with bad people. Unfortunately, that shortcut doesn't exist.

Now, maybe going forward we will be forced to come up with real solutions to the general problem of vetting people. But TFA feels like more of a stunt than a serious pitch.

replies(3): >>44980703 >>44981628 >>44981639
1. potsandpans No.44980703
In a non-dismissive way, I see things like this (the GitHub issue) as part of the reactionary movement / counterculture of our time.

People want to feel agency and will react to mainstream pressures, making up whatever excuses along the way to justify what they're feeling.

replies(2): >>44981512 >>44983004
2. nullc No.44981512
I don't think it has anything to do with a reactionary movement or counterculture. If it were, I would expect, among other things, that it would prohibit the use of AI outright rather than just require disclosure.

The background is that many higher-profile open source projects are getting deluged by low-quality AI slop "contributions": not just crappy code, but, when you ask questions about it, sometimes an argumentative chatbot lying to you about what the PR does.

And this latest turn has happened on top of other trends in 'social' open source development that already had many developers considering adopting far less inclusive practices. RETURN TO CATHEDRAL, if you will.

The problem isn't limited to open source, it's also inundating discussion forums.

replies(2): >>44984218 >>44986078
3. simoncion No.44983004
It's not about "feeling agency" or fabricating a justification. As the PR says:

"AI tooling must be disclosed for contributions

I think, at this stage of AI, it is a common courtesy to disclose this.

In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in many cases it's generating slop. I say this despite being a fan of and using them successfully myself (with heavy supervision)! I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.

The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code."

4. potsandpans No.44984218
It's reactionary only in the sense that this is the new world we live in. Right now is the least amount of AI assistance that will ever exist in PRs.
5. hombre_fatal No.44986078
Yeah, but this is once again doing the thing where you read someone replying to situation #2 (in my comment) and then act as if they're denying situation #1.

I think we all grant that LLM slop is inundating everything. But a checkbox that says "I am human" is more of a performative stunt (which, I think, is what they mean when they call it reactionary) than anything practical.

Cloudflare's "I am human" checkbox doesn't just take your word for it, and imagine if it did.
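To make the contrast concrete, here's a minimal TypeScript sketch. The siteverify endpoint is Cloudflare Turnstile's real token-redemption API; the function names and the iAmHuman field are made up for illustration:

    // Naive: trusts whatever the client claims. A bot just sends true.
    // This is what a self-reported "I am human" checkbox amounts to.
    function naiveCheck(body: { iAmHuman: boolean }): boolean {
      return body.iAmHuman;
    }

    // Verified: the server redeems a challenge token with Cloudflare,
    // so the client's word alone proves nothing.
    async function verifiedCheck(token: string, secret: string): Promise<boolean> {
      const res = await fetch(
        "https://challenges.cloudflare.com/turnstile/v0/siteverify",
        {
          method: "POST",
          // fetch sets Content-Type to application/x-www-form-urlencoded
          body: new URLSearchParams({ secret, response: token }),
        },
      );
      const data = (await res.json()) as { success: boolean };
      return data.success;
    }

The point isn't the specific API; it's that verification happens somewhere the submitter can't simply assert it. OP's disclosure checkbox is the naive version.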

---

People who write good, thoughtful code and also use an LLM have no reason to disclose it just because the idea offends someone, just like I don't disclose when I do an amphetamine bender to catch up on work: I don't want to deal with anyone's prejudices, and I know I do good work, which is what matters. I pressed tab so the LLM could autocomplete a unit test for me because it's similar to the other 20 unit tests, and then I vetted the completion. I'm not going to roleplay that I did something bad or dishonest or sloppy there; I know better.

People who write slop they can't even be bothered to read themselves aren't going to bother to read your rules either. Yet that's the group OP pretends he's going to stop. Once you get past the "rah-rah LLM sux0rs amirite fellow HNers?" commentary, there's nothing here.