
728 points freetonik | 3 comments
epolanski ◴[] No.44977336[source]
This isn't an AI problem; this is a human one.

Blaming it on the tool, and not the person misusing it to get their name on a big open-source project, is like blaming the new automatic oven in the kitchen, and not the chef, for a raw pizza landing on the table.

replies(1): >>44979635 #
jeremyjh ◴[] No.44979635[source]
OP is not blaming the AI - did you read the post? AI does enable shitty humans to open PRs with code they have no comprehension of, wasting precious time donated by a skilled maintainer. That is a new thing that wasn’t possible without AI.
replies(1): >>44981828 #
1. rane ◴[] No.44981828[source]
Using AI to generate code in a PR does not necessarily mean, however, that the user has not taken the time to understand the changes or is unwilling to learn. There are AI users who generate whole files without understanding the contents, and then there are AI users who generate the exact same files but knew in advance what they wanted, and merely use AI as a tool to save typing.

The intention here seems to be to filter out low-quality submissions whose only purpose is to pad a GitHub resume with contributions to a highly starred repo. Not sure the people doing that will disclose their use of AI anyway.

replies(2): >>44982183 #>>44983517 #
2. badosu ◴[] No.44982183[source]
> The intention here seems to be to filter out low-quality submissions whose only purpose is to pad a GitHub resume with contributions to a highly starred repo. Not sure the people doing that will disclose their use of AI anyway.

That is a fair way to see it, and I agree that it is a losing battle if your battle is enforcing this rule.

However, from a different perspective - if one sees it more as a gentlemen's agreement (which it de facto is) - it fosters an environment where like-minded folks can cooperate better.

The disclosure helps the reviewer spot common issues in AI-generated code, and the specificity of the disclosure even more so.

For example, a submitter sends a PR disclosing that a substantial amount of the code was AI-assisted but all tests were manually written. The disclosure lets the reviewer look at the tests first to gauge how well the submitter understands the requirements and constrained the solution to them, and then look at the solution from a high-level perspective before going into the details. It respects the reviewer's time, not necessarily because the reviewer is above AI usage, but because without disclosure the whole collaborative process falls apart.

Not sure how long this can work, though; it's still easy to distinguish bad code written by a human from AI slop. In the former case, your review and assistance are an investment in the collaborative process; in the latter, they're just some unread text included in the next prompt.

3. jeremyjh ◴[] No.44983517[source]
Sorry, I can't see how your comment is a reply to my comment. I said AI enables shitty humans to do this. I didn't say everyone using AI is a shitty human or whatever it is you took away from that.