
181 points | thunderbong | 1 comment
stavros No.45083136
I've come to view LLMs as a consulting firm where, for each request, I have a 50% chance of getting either an expert or an intern writing my code, and there's no way to tell which.

Sometimes I accept this and vibe-code, when I don't care about the result. When I do care, I have to read every line myself. Since reading code is harder than writing it, this takes longer; but LLMs have made me too lazy to write code now, so reviewing is probably the only alternative that works.

I have to say, though, the best thing I've tried is Cursor's autocomplete, which writes 3-4 lines for you. That way, I can easily verify that the code does what I want, while still reaping the benefit of not having to look up all the APIs and function signatures.

replies(7): >>45083197 #>>45083541 #>>45085734 #>>45086548 #>>45087076 #>>45092938 #>>45092950 #
1. _fat_santa No.45092938
> I have a 50% chance of getting either an expert or an intern writing my code

The way I describe it is almost gambling with your time. Every time I want to reach for the Cline extension in VSCode, I ask myself "is this gamble worth it?" and "what are my odds?".

For some things, like simple refactoring, I'm usually getting great odds, so I use AI. But at least 5-6 times last week I thought it over and ended up doing the work by hand because the odds weren't in my favor.

One thing I've picked up using AI over the past few months is a sense of what it can and can't do. For some tasks I'm like "yeah, it can do this no problem," but for others I find myself going "better do this by hand, AI will just fuck it up."