
728 points | freetonik
antirez No.44977896
I'll cover on my YouTube channel why this is wrong, but TL;DR: you need to evaluate quality, not process. AI can be used in diametrically different ways, and the only reason this policy could be enforced is that it's obvious when code is produced via a solo flight of some AI agent. For the same reason, it's not a policy that will improve anything.
replies(2): >>44981671 #>>44983097 #
simoncion No.44983097
> ...you need to evaluate quality not process.

Respectfully, Mr. Redis, sir, that's what's going on. I don't see any reason to make a video about it. From the PR that's TFA:

"In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in many cases it's generating slop. I say this despite being a fan of and using them successfully myself (with heavy supervision)! I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.

The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code."

replies(1): >>44994319 #
fullautomation No.44994319
> "I don't see any reason to make a video about it."

This sentence is so wrong in its depth that it's difficult to know where to start arguing against it.

> "The disclosure is to help maintainers assess how much attention to give a PR."

By the same reasoning, we should ask contributors how many years they have been writing software, and in the specific language, since those are also correlated with the quality of the produced code.

> "we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code"

Yes: by producing great code and documentation, regardless of the process.
replies(1): >>45020844 #
simoncion No.45020844
There's a world of difference between giving feedback and coaching to a human who might learn from that feedback and use it to do better, and giving feedback and coaching to an LLM that has a human acting as its go-between.

If research continues over the next few decades, these LLMs (and other code-generation robots) may well become able to retrain themselves in real time. Right now, however, retraining is expensive (in many ways) and slow. For the foreseeable future, investing your time in feedback and coaching intended to develop a human programmer into a better human programmer is a colossal waste when the recipient is actually an LLM.