Respectfully, Mr. Redis, sir, that's what's going on. I don't see any reason to make a video about it. From the PR that's TFA:
"In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in many cases it's generating slop. I say this despite being a fan of and using them successfully myself (with heavy supervision)! I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.
The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.
I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code."
If research continues over the next few decades, these LLMs (and other code-generation robots) may well become able to retrain themselves in real time. Right now, however, retraining is expensive (in many ways) and slow. For the foreseeable future, investing your time in giving an LLM the kind of feedback and coaching meant to develop a human programmer into a better one is a colossal waste of that time.