
728 points | freetonik | 1 comment | source
Waterluvian ◴[] No.44976790[source]
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

replies(16): >>44976860 #>>44976869 #>>44976945 #>>44977015 #>>44977025 #>>44977121 #>>44977142 #>>44977241 #>>44977503 #>>44978050 #>>44978116 #>>44978159 #>>44978240 #>>44978311 #>>44978533 #>>44979437 #
cvoss ◴[] No.44976945[source]
It does matter how and where a PR comes from, because reviewers are fallible and their time is finite, so trust inevitably enters the equation. You must ask, "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.

replies(4): >>44977169 #>>44977263 #>>44978862 #>>44979553 #
KritVutGu[dead post] ◴[] No.44977263[source]
[flagged]
ToucanLoucan ◴[] No.44977445[source]
The sheer amount of entitlement on display by very pro-AI people genuinely boggles the mind.
replies(1): >>44977972 #
mattgreenrocks ◴[] No.44977972[source]
They genuinely believe their use of chatbots is equivalent to multiple years of production experience in a language. They want to erase that distinction (“democratize”) so they can have the same privileges and status without the work.

Otherwise, what’s the harm in saying AI guides you to the solution if you can attest to it being a good solution?

replies(4): >>44977994 #>>44978056 #>>44978461 #>>45005379 #
macawfish ◴[] No.44978056[source]
That's just not true. I have 20 years of dev experience and also use these tools. I won't commit slop. I'm open to being transparent about my use of AI, but right now there's so much bias and vitriol coming from people afraid of these new tools that I don't trust reviewers to take the time to neutrally determine whether the code is actually slop. I've had manually written, well-thought-through, well-conceived, rough-around-the-edges code get called "AI slop" by a colleague (whom I very much respect and have a good relationship with) who admittedly hadn't yet had a chance to thoroughly understand it.

If I just vibe-coded something and haven't looked at the code myself, that seems like a necessary thing to disclose. But beyond that, if the code is well understood and solid, I feel that I'd be clouding the conversation by unnecessarily bringing the tools I used into it. If I understand the code and feel confident in it, whether I used AI or not seems irrelevant and distracting.

This policy just shoves the real problem under the rug. Generative AI is going to require us to come up with better curation/filtering/selection tooling in general. The heuristic of "whether or not someone self-disclosed using LLMs" just doesn't seem very useful in the long run. Maybe it's a piece of the puzzle, but I'm pretty sure there are more useful ways to sift through PRs than that. Line counts, for example: whether it came from a person with an LLM or a 10x coder without one, a PR that adds 15,000 lines is just not likely to be worth reviewing.
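As a rough illustration of that line-count heuristic (not something proposed elsewhere in this thread; the branch names and the 15,000-line cutoff are assumptions for the sketch), a script along these lines could flag oversized PRs before any argument about how the code was produced:

    # Sketch of the line-count heuristic: flag a PR whose diff adds more lines
    # than some threshold, regardless of who (or what) wrote it.
    import subprocess

    ADDED_LINE_LIMIT = 15000  # illustrative cutoff; tune per project

    def added_lines(base: str, head: str) -> int:
        """Count lines added between two git refs via `git diff --numstat`."""
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, _removed, _path = line.split("\t", 2)
            if added != "-":  # binary files report "-" for line counts
                total += int(added)
        return total

    if __name__ == "__main__":
        n = added_lines("main", "HEAD")  # assumed branch names
        if n > ADDED_LINE_LIMIT:
            print(f"PR adds {n} lines; too large to review meaningfully as one unit.")
        else:
            print(f"PR adds {n} lines.")

Crude, but it's at least a mechanical signal that doesn't depend on anyone self-reporting their tooling.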

replies(3): >>44978392 #>>44978442 #>>44978444 #
gizmo686 ◴[] No.44978442[source]
> I've had manually written, well thought through, well conceived, rough around the edges code get called "AI slop" by a colleague (who I very much respect and have a good relationship with) who admittedly hadn't had a chance to thoroughly understand the code yet.

This is the core problem with AI, and it is what makes so many people upset. In the old days, if you got a substantial submission, you knew a substantial amount of effort went into it. You knew that someone at some point had a mental model of what the submission was; even if they didn't translate that perfectly, you could still try to figure out what they meant and were thinking. You knew the submitter put forth significant effort, and that was a real signal that they would be both willing and able to address the issues you raise going forward.

The existence of AI slop fundamentally breaks these assumptions. That is why we need enforced social norms around disclosure.

replies(2): >>44978592 #>>45005475 #
macawfish ◴[] No.44978592{3}[source]
We need better social norms about disclosure, but maybe those don't need to be about "whether or not you used LLMs" and might have more to do with "how well you understand the code you are opening a PR for" (or are reviewing, for that matter). Normalize draft PRs and sharing big messes of code you're not quite sure about but want to start a conversation about. Normalize admitting that you don't fully understand the code you've written / are tasked with reviewing, and that this is perfectly fine and doesn't reflect poorly on you at all; in fact, it reflects humility and a collaborative spirit.

10x engineers create so many bugs without AI, and vibe coding could multiply that to 100x. But let's not distract from the source of that, which is rewarding the false confidence it takes to pretend we understand stuff that we actually don't.

replies(3): >>44978880 #>>44979072 #>>45005522 #
risyachka ◴[] No.44979072{4}[source]
> but maybe those don't need to be about "whether or not you used LLMs"

The only reason one might not want disclosure is if one can't write anything by themselves; then they would have to label all of their code as AI-generated, and everyone would see their real skill level.