
688 points | dheerajvs
noisy_boy ◴[] No.44523098[source]
It's the 80/20 rule again: it gets you 80% of the way in 20% of the time, and then you spend the other 80% of the time getting the remaining 20% done. And since it always feels like it's almost there, the sunk-cost fallacy kicks in as well and you just don't want to give up.

An approach I tried recently is to use it as a friction remover instead of a solution provider. I still do the programming, but use it to clear away pebbles, like that small bit of syntax I forgot, basically to keep up the velocity. I don't take the wholesale code it offers, though. Keeping the active thinking cap on results in code I actually understand while avoiding skill atrophy.

replies(9): >>44523200 #>>44523227 #>>44523342 #>>44523381 #>>44523532 #>>44523832 #>>44525241 #>>44528585 #>>44532723 #
emodendroket ◴[] No.44523227[source]
I think it's most useful when you basically need Stack Overflow on steroids: I know what I want to do, but I'm not sure how to achieve it in this particular environment. It can also be helpful for debugging and for rubber ducking generally.
replies(4): >>44523343 #>>44523436 #>>44523560 #>>44523787 #
skydhash ◴[] No.44523560[source]
The issue is that it is slow and verbose, at least in its default configuration. The amount of reading is non-trivial. There's a reason most references are dense.
replies(2): >>44523644 #>>44527160 #
lukan ◴[] No.44523644[source]
You can partly solve those issues by changing the prompt to tell it to be concise and not to explain its code.

But nothing will make them stick to the one API version I use.
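
Roughly what I mean, as a minimal sketch (this assumes the OpenAI Python SDK with an API key configured; the model name, pinned versions, and prompt wording are just illustrative):

    # Minimal sketch: a system prompt that asks for concise, code-only
    # answers pinned to specific versions. Model name and versions are
    # placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "Be concise. Return only code, no explanations. "
        "Target Python 3.11 and requests 2.31; do not use any other API version."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "How do I POST JSON with a 5 second timeout?"},
        ],
    )
    print(resp.choices[0].message.content)

In my experience the version-pinning part of the prompt is exactly what it tends to ignore.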

replies(2): >>44523854 #>>44526575 #
malfist ◴[] No.44526575[source]
The less verbosity you allow, the dumber the LLM gets. It thinks in tokens, and if you keep it from using tokens, it's lobotomized.
replies(1): >>44529492 #
lukan ◴[] No.44529492{3}[source]
It can think as much as it wants and still return just code in the end.
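
One way to get that in practice, as a rough sketch (the helper below is hypothetical, not any particular library's API): let the model reason freely, then keep only the final fenced code block on the client side.

    # Rough sketch: let the model reason as verbosely as it likes, then
    # keep only the last fenced code block from its reply. `reply` is a
    # stand-in for whatever text your LLM client returns.
    import re

    def last_code_block(reply: str) -> str:
        # Return the contents of the last ``` ... ``` block, or the whole
        # reply (stripped) if no fenced block is found.
        blocks = re.findall(r"```(?:\w+)?\n(.*?)```", reply, flags=re.DOTALL)
        return blocks[-1].strip() if blocks else reply.strip()

    reply = (
        "Let me think step by step about the edge cases...\n"
        "```python\nprint('hello')\n```\n"
    )
    print(last_code_block(reply))  # -> print('hello')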