383 points meetpateltech | 1 comment | source
prhn ◴[] No.44006680[source]
Is anyone using any of these tools to write non-boilerplate code?

I'm very interested.

In my experience, ChatGPT and Gemini are absolutely terrible at these types of things. They are constantly wrong. I know I'm not saying anything new, but I'm still waiting to personally experience an LLM that does something useful with any of the code I give it.

These tools aren't useless. They're great as search engines and for pointing me in the right direction. They write dumb bash scripts that save me time here and there. That's it.
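
For concreteness, a minimal sketch of the kind of throwaway script I mean. The task (compressing old application logs), the path, and the retention window are all made up for illustration:

    #!/usr/bin/env bash
    # Hypothetical chore: gzip application logs older than 7 days.
    # The directory and the 7-day window are assumptions, not anything real.
    set -euo pipefail

    LOG_DIR="${1:-/var/log/myapp}"

    # -print0 / read -d '' keeps filenames with spaces intact.
    find "$LOG_DIR" -name '*.log' -mtime +7 -print0 |
      while IFS= read -r -d '' f; do
        gzip -9 "$f"   # gzip replaces each file with file.gz
      done

Nothing here is hard, but it's exactly the kind of ten-minute detour an LLM reliably gets right on the first try.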

And it's hilarious to me how the people behind these tools present them. The tool generates a bunch of code, and then you spend all your time auditing and fixing output that is expected to be wrong.

That's not the type of code I'm putting in my company's code base, and I could probably write the damn code correctly in less time than it takes to review the output for those expected errors.

What am I missing?

replies(14): >>44006706 #>>44006751 #>>44006766 #>>44006808 #>>44006858 #>>44006868 #>>44006872 #>>44007014 #>>44007038 #>>44007115 #>>44007288 #>>44007383 #>>44007699 #>>44009108 #
lispisok ◴[] No.44009108[source]
A lot of people are deeply invested in these things being better than they really are, from OpenAI and Google spending hundreds of billions of dollars each developing LLMs, to VC-backed startups promising their "AI agent" can replace entire teams of white-collar employees. That's why your experience matches mine and that of every other developer I personally know, yet you see comments everywhere making much grander claims.
replies(2): >>44009789 #>>44009997 #
1. triMichael ◴[] No.44009789[source]
I agree, but I'd add that it's not just the tech giants who want them to be better than they are, but also non-programmers.

IMO LLMs are actually pretty good at writing small scripts. First, a small script is far more likely to resemble something in the LLM's training data, and second, a bug in one is much easier to find and fix. So the LLM really does let a non-programmer write correct code with minimal effort (for some simple task), and they come away blown away, thinking writing software is a solved problem. But these kinds of people have no idea of the difference between a hundred-line script, where an error is easily found and isn't a big deal, and a million-line codebase, where an error can be invisible and shut everything down.
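
To make the "invisible error" point concrete, here's a classic shell pitfall (the variable name is hypothetical): a line that passes review and works in every test run, right up until the variable is unset or empty.

    #!/usr/bin/env bash
    # Looks harmless, and works whenever SCRATCH_DIR is set...
    rm -rf "$SCRATCH_DIR/"*
    # ...but if SCRATCH_DIR is ever unset or empty, the quoted part
    # expands to "/" and the line becomes `rm -rf /*`.

    # Defensive version: ${VAR:?} aborts with an error when VAR is
    # unset or empty, instead of silently expanding to nothing.
    rm -rf "${SCRATCH_DIR:?}/"*

In a hundred-line script you find out immediately; buried somewhere in a large system's tooling, a line like this can sit dormant for years.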

Worst of all is when the two sides, tech giants and non-programmers, meet. They may sound like opposites, but they really aren't: plenty of non-programmers sit at the C-level and in HR at tech companies. Those people are particularly vulnerable to being wowed by LLMs that appear to do complex tasks which, in their minds, are the same tasks their employees are doing. As a result, they stop hiring new people and tell their current people to "just use LLMs", leading to the current hiring crisis.