But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”
If trust didn't matter, the Linux kernel team wouldn't have needed to ban the University of Minnesota for intentionally smuggling bugs through the PR process as part of an unauthorized social experiment. As it stands, if you or your PRs can't be trusted, those PRs shouldn't even be admitted to the review process.
Otherwise, what’s the harm in saying AI guides you to the solution if you can attest to it being a good solution?
For one: it threatens to make an entire generation of programmers lazy and stupid. They stop exercising their creative muscle. Writing and reviewing are different activities; both need continuous practice.
This is perfectly observable with foreign languages. Stop actively using one after learning it well, and your ability to speak it fades quickly; your ability to understand it fades too, just more slowly.