Actually, no. When LLMs produce good, working code, that code also tends to be efficient (in terms of line count, etc.).
May vary with language and domain, though.
It may be the size of the changes you're asking for. I tend to micromanage it. I don't know your algorithm, but if it's complex enough, I might have used 4 separate prompts - one for each step.
Let the LLM do the boring stuff, and focus on writing the fun stuff.
Also, setting up logging in Python is never fun.
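(For context, this is roughly the kind of boilerplate I mean - a console plus rotating-file setup I'd rather hand off than type again; the names and format string here are just placeholders, not from any particular project:)

    import logging
    import logging.handlers

    # Placeholder names ("myapp", "app.log") - the usual logging boilerplate.
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.DEBUG)

    formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")

    # Console handler: INFO and above to stderr.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(formatter)
    logger.addHandler(console)

    # Rotating file handler: everything, rolled over at ~1 MB, keep 3 backups.
    file_handler = logging.handlers.RotatingFileHandler(
        "app.log", maxBytes=1_000_000, backupCount=3
    )
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)

    logger.info("logging configured")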
If it's a new, non-trivial algorithm, I enjoy writing it.
Oh, and the chatbot is cheap. I pay for API usage. On average I'm paying less than $5 per month.
> and I don't have to worry about random hallucinations.
For boilerplate code, I don't think I've ever had to fix anything. It's always worked the first time; when it hasn't, the fault was in my prompt.