
688 points dheerajvs | 9 comments
noisy_boy ◴[] No.44523098[source]
It's the 80/20 rule again: it gets you 80% of the way in 20% of the time, and then you spend the remaining 80% of the time getting the last 20% done. And since it always feels like it's almost there, the sunk-cost fallacy kicks in as well and you just don't want to give up.

An approach I tried recently is to use it as a friction remover instead of a solution provider. I do the programming myself but use it to remove pebbles, such as that small bit of syntax I forgot, basically to keep up the velocity. However, I don't look at the wholesale code it offers. I think keeping the active thinking cap on results in code I actually understand while avoiding skill atrophy.

replies(9): >>44523200 #>>44523227 #>>44523342 #>>44523381 #>>44523532 #>>44523832 #>>44525241 #>>44528585 #>>44532723 #
emodendroket ◴[] No.44523227[source]
I think it’s most useful when you basically need Stack Overflow on steroids: I basically know what I want to do but I’m not sure how to achieve it using this environment. It can also be helpful for debugging and rubber ducking generally.
replies(4): >>44523343 #>>44523436 #>>44523560 #>>44523787 #
1. skydhash ◴[] No.44523560[source]
The issue is that it is slow and verbose, at least in its default configuration. The amount of reading is non trivial. There’s a reason most references are dense.
replies(2): >>44523644 #>>44527160 #
2. lukan ◴[] No.44523644[source]
Those issues you can partly solve by changing the prompt to tell it to be concise and not explain its code.
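You can also enforce this on your side rather than trusting the prompt. A minimal sketch in Python (the function and the sample reply are hypothetical, not any particular tool's API) that keeps only the fenced code blocks from a verbose reply and discards the surrounding explanation:

```python
import re

def extract_code(reply: str) -> str:
    """Return only the fenced code blocks from an LLM reply,
    dropping the prose around them."""
    blocks = re.findall(r"```[^\n]*\n(.*?)```", reply, flags=re.DOTALL)
    return "\n".join(block.rstrip() for block in blocks)

reply = "Sure! Here is the code:\n```python\nprint('hi')\n```\nHope that helps!"
print(extract_code(reply))  # -> print('hi')
```

This sidesteps the verbosity problem entirely: the model can explain as much as it wants, but only the code reaches you.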

But nothing will make them stick to the one API version I use.

replies(2): >>44523854 #>>44526575 #
3. diggan ◴[] No.44523854[source]
> But nothing will make them stick to the one API version I use.

Models trained for tool use can do that. When I use Codex for some Rust stuff for example, it can grep from source files in the directory dependencies are stored, so looking up the current APIs is trivial for them. Same works for JavaScript and a bunch of other languages too, as long as it's accessible somewhere via the tools they have available.
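The tool the model calls for this can be very simple. A hypothetical sketch in Python of such a grep-style lookup over a vendored dependency tree (the name `grep_sources` and its signature are illustrative, not Codex's actual tool):

```python
from pathlib import Path

def grep_sources(root: str, needle: str, exts=(".rs",)) -> list[str]:
    """Naive grep over a dependency source tree: return every line
    containing `needle`, tagged with its file and line number."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix in exts and path.is_file():
            for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), 1
            ):
                if needle in line:
                    hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

Because the model searches the exact sources that will be compiled, the API version question answers itself: whatever is in the tree is what it sees.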

replies(1): >>44524084 #
4. lukan ◴[] No.44524084{3}[source]
Hm, I never tried Codex so far, but I have tried quite a few other tools and models, and none could help me in a consistent way. I am sceptical, because even if I tell them explicitly to only use one specific version, they might or might not comply, depending on their training corpus and temperature, I assume.
5. malfist ◴[] No.44526575[source]
The less verbosity you allow the dumber the LLM is. It thinks in tokens and if you keep it from using tokens it's lobotomized.
replies(1): >>44529492 #
6. emodendroket ◴[] No.44527160[source]
Well, compared to what method? What would be faster for answering that kind of question?
replies(1): >>44530471 #
7. lukan ◴[] No.44529492{3}[source]
It can think as much as it wants and still return just code in the end.
8. skydhash ◴[] No.44530471[source]
Learning the thing. It's not like I have to use every library in the world at my job. You can really fly through a reference document if you're familiar with the domain.
replies(1): >>44533299 #
9. emodendroket ◴[] No.44533299{3}[source]
If your job only ever calls on you to use the same handful of libraries, of course becoming deeply familiar with them is better, but that's obviously not realistic if you're jumping from this thing to that thing. Nobody would use resources like Stack Overflow either if it were that easy and practical to just "learn the thing."