> if the problem is that I need to be very specific with how I want LLM to fix the issue, like providing it the solution, why wouldn't I just make the change myself?
I type 105 wpm on a bad day. Try gpt-4.1. It types at something like 1000 wpm. If you can describe your problem precisely in English, and the English prompt ends up shorter than the code you'd write yourself, gpt-4.1 will make you faster.
Obviously you have to account for gpt-4.1 being wrong sometimes. Even so, if it takes two or three prompts to get it right, it's still going to be faster.
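Here's the back-of-envelope math I'm relying on. The only real figures are my ~105 wpm and the model's rough ~1000 wpm output; the change size, prompt size, and retry count are made-up but plausible numbers, so swap in your own:

```python
# Rough, illustrative comparison -- all sizes in words so the units match wpm.
WPM_ME = 105      # my typing speed on a bad day
WPM_MODEL = 1000  # approximate speed the model streams code at

code_words = 400        # hypothetical size of the change if I type it myself
prompt_words = 80       # hypothetical English description of that change
followup_words = 20     # hypothetical short correction per extra attempt
attempts = 3            # "two or three prompts to get it right"

typing_it_myself = code_words / WPM_ME
prompting = (prompt_words + (attempts - 1) * followup_words) / WPM_ME \
            + attempts * (code_words / WPM_MODEL)

print(f"typing it myself: {typing_it_myself:.1f} min")       # ~3.8 min
print(f"prompting, {attempts} tries: {prompting:.1f} min")   # ~2.3 min
```

The exact numbers don't matter; the point is that as long as the English is shorter than the code, even a couple of retries stay cheap.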
> I don't even know how you can think that not vibe coding means you lack experience
If you lack experience, you're going to prompt the LLM to do the wrong thing, engineer yourself into a corner, and waste time. Or you won't catch the mistakes it makes. Only experience and "knowing more than the LLM" let you catch its mistakes and fix them. (Which is still faster than writing the code yourself, simply because it types at 1000 wpm.)
> If the model keeps trying to use non-existent language feature or completely made up functions/classes that is a problem and nothing to do with "autism"
You know that you can tell it those functions are made up, paste in the latest documentation, and then it will work, right? That knee-jerk response makes it sound like you're the one with the rigidity problem.
> I personally am not as I have not seen LLMs actually being useful for anything but replacing google searches.
Nothing really of substance here. Just because you don't know how to use this tool doesn't mean no one does.
This is the least convincing point for me, because I come along and say "Hey! This thing has let me ship far more working code than before!" and your response is just "I don't know how to use it." I know that it's made me more productive. You can't say anything to deny that. Do you think I have some need to lie about this? Why would I feel the need to go on the internet and reap a bunch of downvotes while peddling some lie that doesn't stand to get me anything even if I convince people of it?
> I also don't quite get who is going against the tide here, are you going against the tide of the downvotes
Yeah, that's what I'm saying. People will actively shame and harass you for using LLMs. It's mind-boggling that a tool, a technology that works for me and has made me more productive, would be so vehemently criticized. That's why I listed these five reasons, the only ones I've been able to come up with so far.
> Means if there is a catastrophic error, you probably can't fix it yourself.
See my point about lacking experience. If you can't do the surgery yourself every once in a while, you're going to hate these tools.
Really, you've just made a bunch of claims about me that I know are false, so I'm left unconvinced.
I'm trying to have a charitable take. I don't find joy in arguing or in leaving discussions with a bitter taste. I genuinely don't know why people are so mad at me for claiming that a tool has helped me be more productive. Ultimately, they just don't believe me. They all come up with some excuse for why my personal anecdotes can be dismissed and ignored: "even though you have X, we should feel bad for you because Y!" But it's never anything of substance, never anything that has convinced me. Because at the end of the day, I'm shipping faster. My code works. My code has stood the test of time. The insults to my engineering ability are ones I know to be demonstrably false. I hope you can see the light one day. These are extraordinary tools that are only getting better, at least by a little, for the foreseeable future. Why deny it?