
728 points | freetonik | 1 comment
Waterluvian ◴[] No.44976790[source]
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

replies(16): >>44976860 #>>44976869 #>>44976945 #>>44977015 #>>44977025 #>>44977121 #>>44977142 #>>44977241 #>>44977503 #>>44978050 #>>44978116 #>>44978159 #>>44978240 #>>44978311 #>>44978533 #>>44979437 #
cvoss ◴[] No.44976945[source]
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust inevitably enters the equation. You must ask, "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.

replies(4): >>44977169 #>>44977263 #>>44978862 #>>44979553 #
KritVutGu[dead post] ◴[] No.44977263[source]
[flagged]
ToucanLoucan ◴[] No.44977445[source]
The sheer amount of entitlement on display by very pro-AI people genuinely boggles the mind.
replies(1): >>44977972 #
mattgreenrocks ◴[] No.44977972[source]
They genuinely believe their use of chatbots is equivalent to multiple years of production experience in a language. They want to erase that distinction (“democratize”) so they can have the same privileges and status without the work.

Otherwise, what’s the harm in saying AI guides you to the solution if you can attest to it being a good solution?

replies(4): >>44977994 #>>44978056 #>>44978461 #>>45005379 #
macawfish ◴[] No.44978056[source]
That's just not true. I have 20 years of dev experience and also use these tools. I won't commit slop. I'm open to being transparent about my usage of AI, but tbh right now there's so much bias and vitriol coming from people afraid of these new tools that I don't trust people to actually take the time to neutrally determine whether or not the code is slop. I've had manually written, well-thought-through, well-conceived, rough-around-the-edges code get called "AI slop" by a colleague (whom I very much respect and have a good relationship with) who admittedly hadn't had a chance to thoroughly understand the code yet.

If I just vibe-coded something and haven't looked at the code myself, that seems like a necessary thing to disclose. But beyond that, if the code is well understood and solid, I feel that I'd be clouding the conversation by unnecessarily bringing the tools I used into it. If I understand the code and feel confident in it, whether I used AI or not seems irrelevant and distracting.

This policy just shoves the real problem under the rug. Generative AI is going to require us to come up with better curation/filtering/selection tooling in general. The heuristic of "whether or not someone self-disclosed using LLMs" just doesn't seem very useful in the long run. Maybe it's a piece of the puzzle, but I'm pretty sure there are more useful ways to sift through PRs than that. Line-count differences, for example: whether it came from a person with an LLM or a 10x coder without one, a PR that adds 15,000 lines is just not likely to be reviewable.
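(To make the line-count idea concrete, here's a minimal sketch of that kind of triage check. The function names and the 1000-line threshold are invented for illustration, not from any real project's tooling.)

```python
# Hypothetical sketch of a PR-size triage heuristic: count the added
# lines in a unified diff and flag oversized patches for extra scrutiny,
# regardless of how the code was produced. Names and threshold are
# illustrative assumptions, not an established tool's API.

def added_line_count(unified_diff: str) -> int:
    """Count added lines in a unified diff, skipping '+++' file headers."""
    return sum(
        1
        for line in unified_diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    )

def needs_extra_review(unified_diff: str, threshold: int = 1000) -> bool:
    """Flag a patch whose added-line count exceeds an arbitrary threshold."""
    return added_line_count(unified_diff) > threshold

# Tiny example diff: two added lines, well under the threshold.
diff = "\n".join([
    "--- a/f.py",
    "+++ b/f.py",
    "@@ -0,0 +1,2 @@",
    "+x = 1",
    "+y = 2",
])
print(added_line_count(diff))   # → 2
print(needs_extra_review(diff)) # → False
```

A real version would obviously look at more than raw size (generated files, vendored deps, lockfiles), but the point stands: size is a tool-agnostic signal, unlike self-disclosure.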

replies(3): >>44978392 #>>44978442 #>>44978444 #
eschaton ◴[] No.44978392{6}[source]
You should not just be open to being transparent: you need to understand that there will be times when you will be required to be transparent about the tools you've used and the ultimate origin of your contributions, and that trying to evade that requirement, or even pushing back against it, is a huge red flag that you cannot be trusted to abide by your commitments.

If you’re unwilling to stop using slop tools, then you don’t get to contribute to some projects, and you need to accept that.

replies(2): >>44978632 #>>44978751 #
macawfish ◴[] No.44978632{7}[source]
Your blanket determination that the tools themselves are slop generators is not an attitude I'm interested in collaborating with.
replies(1): >>44978669 #
eschaton[dead post] ◴[] No.44978669{8}[source]
[flagged]
macawfish ◴[] No.44978754{9}[source]
I'd way sooner quit my job / not contribute than deal with someone who projects on me the way you have in this conversation.
replies(1): >>44978790 #
eschaton ◴[] No.44978790{10}[source]
Enjoy being found out for fraudulently passing off work you didn’t do as your own then.
replies(1): >>44978855 #
macawfish ◴[] No.44978855{11}[source]
Ironically, as a practice I'm actually quite transparent about how I use LLMs, and I believe destigmatising open conversation about these tools is really important. I just don't think self-disclosure is a useful heuristic for whether or not some code is slop.