
728 points by freetonik | 1 comment
hombre_fatal No.44980547
I see two things here.

1. The world has fundamentally changed due to LLMs. You don't know where a code submission falls between "written thoroughly with eternal vigilance" and "completely vibe-coded," since it's now trivial to generate the latter. There's no going back. And a lot of comments here seem stuck on this point.

2. The maintainer naively or stubbornly imagines that he can get everyone to pre-sort their code between the two buckets through self-reporting.

But that's futile.

It's like asking someone if they're a good person on a date because you don't want to waste your time with bad people. Unfortunately, that shortcut doesn't exist.

Now, maybe going forward we will be forced to come up with real solutions to the general problem of vetting people. But TFA feels like more of a stunt than a serious pitch.

replies(3): >>44980703 >>44981628 >>44981639
potsandpans No.44980703
In a nondismissive way, I see things like this (the gh issue) as part of the reactionary movement / counter culture of our time.

People want to feel agency and will react to mainstream pressures, making up whatever excuses along the way to justify what they're feeling.

replies(2): >>44981512 >>44983004
nullc No.44981512
I don't think it has anything to do with being a reactionary movement or counter culture. If it were, I would expect, among other things, that it would prohibit the use of AI entirely rather than just require disclosure.

The background is that many higher-profile open source projects are getting deluged by low-quality AI slop "contributions": not just crappy code, but submissions where asking questions sometimes gets you an argumentative chatbot lying to you about what the PR does.

And this latest turn has happened on top of other trends in 'social' open source development that already had many developers considering far less inclusive practices. RETURN TO CATHEDRAL, if you will.

The problem isn't limited to open source, it's also inundating discussion forums.

replies(2): >>44984218 >>44986078
hombre_fatal No.44986078
Yeah, but this is once again doing the thing where you read someone replying to point #2 (in my comment) and then act like they're denying point #1.

I think we all grant that LLM slop is inundating everything. But a checkbox that says "I am human" is more of a performative stunt (which I think they are referring to when they say it's reactionary) than anything practical.

Cloudflare's "I am human" checkbox doesn't just take your word for it, and imagine if it did.

---

People who write good, thoughtful code and also use an LLM have no reason to disclose it just because the idea offends someone, just like I don't disclose when I do an amphetamine bender to catch up on work; I don't want to deal with any prejudices someone might have, but I know I do good work, and that's what matters. I pressed tab so the LLM could autocomplete the unit test for me because it's similar to the other 20 unit tests, and then I vetted the completion. I'm not going to roleplay with anyone that I did something bad or dishonest or sloppy here; I'm experienced enough to know better.

People who write slop that they can't even bother to read themselves aren't going to bother to read your rules either. Yet that's the group OP is pretending he's going to stop. Once you get past the "rah-rah LLM sux0rs amirite fellow HNers?" commentary, there's nothing here.