
97 points by jay-baleine | 1 comment
sublinear No.45148898
This may produce some successes, but it's so much more work than just writing the code yourself that it's pointless. This structured way of working with generative AI is so strict that there's no way to scale it up either. It feels like years since this was established to be a waste of time.

If the goal is to start writing code without knowing much, it may be a good way to learn and to build a similar discipline in yourself for tackling projects. I think there's been research suggesting that training wheels don't work either, though. Still, whatever gets people learning to write code for real can't be bad, right?

replies(3): >>45148990 >>45149237 >>45149588
CuriouslyC No.45149588
It's not. In 10 minutes of back-and-forth with ChatGPT, plus some templates and a validation service, I can get a detailed spec in place that will consistently keep an agent working for 3+ hours, with the end result being 85% test coverage, E2E user-story testing, etc., so when I come back to the project I'm only doing acceptance testing.
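The comment doesn't say what the "validation service" checks, so as a hedged sketch: one minimal pre-flight gate would verify that a spec covers the sections an agent needs before any run starts. The section names and the `coverage_target` field below are assumptions for illustration, not anything from the comment.

```python
# Hypothetical spec validator: reject a spec before it reaches the agent
# if it lacks the sections a long unattended run depends on.
# Section names here are illustrative assumptions.
REQUIRED_SECTIONS = ["goal", "constraints", "acceptance_criteria", "test_plan"]

def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems found; an empty list means the spec passes."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if not spec.get(section):
            problems.append(f"missing or empty section: {section}")
    # Without a measurable target, "85% coverage" can't be enforced later.
    plan = spec.get("test_plan")
    if isinstance(plan, dict) and "coverage_target" not in plan:
        problems.append("test_plan has no coverage_target")
    return problems

if __name__ == "__main__":
    spec = {
        "goal": "Add CSV export to the reports page",
        "constraints": ["no new dependencies"],
        "acceptance_criteria": ["user can download the report as CSV"],
        "test_plan": {"coverage_target": 0.85, "e2e": ["export happy path"]},
    }
    print(validate_spec(spec))  # → []
```

The point of a gate like this is that the agent never starts from an underspecified prompt; a failing spec goes back to the human for another round of refinement.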

The velocity you get by taking yourself out of the loop with analytic guardrails is insane; I can't overstate it. The clear plan and guardrails are important, though; otherwise you end up with a pile of slop that doesn't work and is unmaintainable.