
75 points | throwaway-ai-qs | 1 comment

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months alone I've watched at least 8 companies embrace AI tooling for coding, testing, and code reviews. Honestly, the best suggestions I've seen are still the ones found by linters in CI, and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

1. madamelic (No.45279271)
As others have said, LLM generation of code is no excuse for not self-reviewing, testing, and understanding your own code.

It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.

The only caveat is verbosity of code. It drives me up the wall how these models try to one-shot production code and put a lot of cruft in. I now expect to have to go in and pare down overly ambitious code to reduce complexity.

I adopted LLM coding fairly early on (GPT-3), and the difference between then and now is immense. It's still a fast-moving technology, so I don't expect that the model or tool I use today will be the one I use in 3 months.

I have switched modalities and models pretty regularly to try to stay on the cutting edge and get the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind. It's going to be the equivalent, in my opinion, of keeping hardcover reference manuals on your desk versus using a search engine.