Make a knowledgeable reply and give no reference to the AI you used, and the comment is celebrated.
We are already barreling full speed down the "hide your AI use" path.
If the PR has issues and requires more than superficial rework to be acceptable, the maintainers don't want to spend time debugging code spat out by an AI tool. They're more willing to spend a cycle or two if the benefit is you learning (either generally as a dev or becoming more familiar with the project). If you can make clear that you created or understand the code end to end, then they're more likely to be willing to take those extra steps.
Seems pretty straightforward to me and thoughtful by the maintainers here.
Fraud and misrepresentation are always options for contributors; at some point one needs to trust that they're adhering to the rules they agreed to adhere to.
Yes, some companies do want to hire such people, the justification given is something along the lines of "we need devs who are using the latest tools/up to date on the latest trends! They will help bring in those techniques and make all of our current devs more productive!". This isn't a bad set of motivations or assumptions IMO.
Setting aside what companies _want_, they are almost certainly already hiring devs with LLM-edited CVs, whether they want to or not. Such CVs/resumes are more likely to make it through HR filters.
> Do companies want to hire "honest" people whose CVs were written by some LLM?
Unfortunately yes, they very much seem to. Since many companies are using LLMs to assess CVs, candidates who use LLMs to help write their CVs have a measurable advantage.
If it really is the substance that matters, why would this rule be necessary? AI-generated anything carries a heavy slop stigma right now, even if the content is solid.
It would make for an interesting experiment to submit a PR that was absolute gold but carried a disclaimer that it was generated with the help of ChatGPT. I would almost guarantee it would be met with skepticism and dismissal.
If you make a PR where you just used AI and it seems to work, but you didn't go any further, then the maintainers can go "well, I had a look, it looks bad, you didn't put effort in, I'm not going to coach you through this." But if you make a PR where you go "I used AI to learn about X, then tried to implement X myself with AI writing some of it," then the maintainers can go "well, this PR doesn't look good quality, but it looks like you tried; we can give some good feedback but still reject it."
In a world without AI, if they were getting a lot of PRs from people who obviously didn't spend any time on their PRs then maybe they would have a "tell us how long this change took you" disclosure as well.
> While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.
If it's bad code from a person he'll help them get it fixed. If it's bad code from an AI why bother?
What you’re saying is essentially the code equivalent of “I found this image via Google search so of course it’s OK to put into a presentation, it’s on the web so that means I can use it.” This may not be looked at too hard for an investor presentation, but if you’re doing a high profile event like Apple’s WWDC you’ll learn quickly that all assets require clearance and “I found it on the web” won’t cut it—you’ll be made to use a different image or, if you actually present with the unlicensed image, you could be disciplined or outright fired for causing the company liability.
It’s amazing how many people in this industry think it’s OK to just wing this shit and even commit outright fraud just because it’s convenient.
You can talk about how we should act and be all high and mighty all you like, but it’s just burying your head in the sand about the reality of how code is written.
Also, technically, I never said this made it perfectly OK. It's just the reality we live in, and if we got rid of everyone doing it we'd have to fire 99% of programmers.
Look around. Do you see the majority of programmers getting fired for copying a line from stackoverflow or using AI?
You must either work in an ultra high security area or are so removed from the groundwork of most programming jobs that you don’t know how people do anything anymore. I’m not surprised you mentioned 30+ years, because that likely puts you squarely out of the trenches where the development is actually done.
Outside of, like, the military or airplane software, companies really don't care about provenance most of the time; their lack of any process for looking into that is absolute PROOF of it. It's don't-ask-don't-tell out there.
You can be delusional all you like, it doesn’t change the reality of how most development is done.
Again, I didn’t say it’s a good thing, it’s just that it is reality.