
181 points by thunderbong | 1 comment
stavros No.45083136
I've come to view LLMs as a consulting firm where, for each request, I have a 50% chance of getting either an expert or an intern writing my code, and there's no way to tell which.

Sometimes I accept this, and I vibe-code, when I don't care about the result. When I do care about the result, I have to read every line myself. Since reading code is harder than writing it, this takes longer, but LLMs have made me too lazy to write code now, so that's probably the only alternative that works.

I have to say, though, the best thing I've tried is Cursor's autocomplete, which writes 3-4 lines for you. That way, I can easily verify that the code does what I want, while still reaping the benefit of not having to look up all the APIs and function signatures.

kaptainscarlet No.45083197
I've had a similar experience. I have become too lazy since I started vibe-coding. My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing, because the last few years of my life have been a repetition of frontend components and API endpoints, which to me has become too monotonous, so I am happy to have AI take over that grunt work while I supervise.
latexr No.45090684
> My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing

Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

> the last few years of my life have been a repetition of frontend components and API endpoints, which to me has become too monotonous

It surprises me that so many people have this problem/complaint. Why don’t you use a snippet manager?! It’s lightweight, simple, fast, predictable, offline, and it stores the best version of what you learned. We’ve had the technology for many, many years.
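The snippet-manager idea is simple enough to sketch in a few lines: store your best version of each piece of recurring boilerplate once, then expand it with the parts that vary filled in. This is a minimal illustration, not any particular tool; the snippet text and placeholder names are made up for the example.

```python
# Minimal snippet-manager sketch: a store of named templates plus an
# expand() helper. Placeholders use Python's string.Template "$name" syntax.
from string import Template

# Hypothetical stored snippet for a recurring API-endpoint pattern.
SNIPPETS = {
    "list-endpoint": Template(
        '@app.get("/$resource")\n'
        "async def list_$resource():\n"
        '    return await db.fetch_all("$resource")\n'
    ),
}

def expand(name: str, **fields: str) -> str:
    """Look up a stored snippet and fill in its placeholders."""
    return SNIPPETS[name].substitute(**fields)

print(expand("list-endpoint", resource="users"))
```

Real snippet managers (editor snippets, text expanders) add tab stops and cursor placement on top of this, but the core is the same: a keyed lookup into text you already wrote once.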

TuringTest No.45106269
> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

You can run pretty decent coding models locally, such as Qwen3 Coder on an RTX 4090 GPU, through LM Studio or Ollama with Cline.

It's a good idea even if they give slightly worse results on average, because you can avoid spending expensive tokens on trivial grunt work and reserve them for the really hard questions where Claude or ChatGPT 5 will excel.
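Once a local server is running, the split described above is just a routing decision in your tooling: send the prompt to the local endpoint by default, and fall back to a paid API only when needed. The sketch below targets Ollama's documented `/api/generate` endpoint on its default port; the model tag `qwen3-coder` is an assumption and depends on what you have pulled locally.

```python
# Sketch: build and send a request to a locally served model via Ollama's
# /api/generate endpoint (default: http://localhost:11434).
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "qwen3-coder") -> dict:
    """Request body for Ollama's /api/generate; stream=False returns
    the whole completion in one JSON response."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "qwen3-coder") -> str:
    """Send the prompt to the local Ollama server (requires it running)."""
    body = json.dumps(build_payload(prompt, model)).encode()
    req = request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Inspect the request body without needing a server running.
print(build_payload("Write a function that reverses a string."))
```

A fuller setup would wrap `ask_local` with a cost-aware router that escalates to a hosted API when the local answer fails a check, but the plumbing above is all the local half needs.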

latexr No.45115751
Or you could use your brain, which will actually learn and improve.