688 points dheerajvs | 18 comments
noisy_boy ◴[] No.44523098[source]
It is 80/20 again - it gets you 80% of the way in 20% of the time, and then you spend 80% of the time to get the remaining 20% done. And since it always feels like it is almost there, the sunk-cost fallacy comes into play as well and you just don't want to give up.

An approach I tried recently is to use it as a friction remover instead of a solution provider. I do the programming, but use it to remove pebbles such as that small bit of syntax I forgot, basically to keep up the velocity. However, I don't take the wholesale code it offers. I think keeping the active thinking cap on results in code I actually understand while avoiding skill atrophy.
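
To make that concrete with a made-up Java example of such a pebble: I know I want a word-frequency map and have just forgotten the exact Collectors incantation, so I ask for that one line rather than a whole solution:

    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    class WordCounts {
        // The "pebble": the groupingBy/counting combination below is
        // exactly the kind of syntax I'd ask the model to recall for me.
        static Map<String, Long> count(List<String> words) {
            return words.stream()
                    .collect(Collectors.groupingBy(
                            Function.identity(),
                            Collectors.counting()));
        }
    }

Everything around it - the decision to use a map, the shape of the function - stays mine.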

replies(9): >>44523200 #>>44523227 #>>44523342 #>>44523381 #>>44523532 #>>44523832 #>>44525241 #>>44528585 #>>44532723 #
1. emodendroket ◴[] No.44523227[source]
I think it’s most useful when you basically need Stack Overflow on steroids: I know what I want to do, but I’m not sure how to achieve it in this environment. It can also be helpful for debugging and for rubber ducking generally.
replies(4): >>44523343 #>>44523436 #>>44523560 #>>44523787 #
2. threetonesun ◴[] No.44523343[source]
Absolutely this. For a while I was working with a language I was only partially familiar with, and I'd say "here's how I would do this in [primary language], rewrite it in [new language]" and I'd get a decent piece of code back. A little searching in the project to make sure it was stylistically correct and then done.
replies(1): >>44527167 #
3. some-guy ◴[] No.44523436[source]
All those things are true, but it's such a small part of my workflow at this point that the savings, while nice, aren't nearly as life-changing for my job as my CEO is forcing us to think they are.

Until AI can actually untangle our 14-year-old codebase full of hodge-podge code, and read every commit message, JIRA ticket, and Slack conversation related to the changes in full context, it's not going to solve a lot of the hard problems at my job.

replies(1): >>44527088 #
4. skydhash ◴[] No.44523560[source]
The issue is that it is slow and verbose, at least in its default configuration. The amount of reading is non-trivial. There’s a reason most references are dense.
replies(2): >>44523644 #>>44527160 #
5. lukan ◴[] No.44523644[source]
You can partly solve those issues by changing the prompt to tell it to be concise and not explain its code.

But nothing will make them stick to the one API version I use.

replies(2): >>44523854 #>>44526575 #
6. GuinansEyebrows ◴[] No.44523787[source]
> rubber ducking

I don't mean to pick on your usage of this specifically, but I think it's noteworthy that the colloquial definition of "rubber ducking" seems to have expanded to include "using a software tool to generate advice/confirm hunches". I always understood the term to mean a personal process of talking through a problem out loud in order to methodically and explicitly understand a theoretical plan or process and expose gaps.

Based on a lot of articles/studies I've seen (admittedly I haven't dug into them too deeply), it seems like using chatbots for this type of task actually has negative cognitive impacts on some groups of users - the opposite of the personal value I thought rubber ducking was supposed to provide.

replies(3): >>44526810 #>>44526955 #>>44527095 #
7. diggan ◴[] No.44523854{3}[source]
> But nothing will make them stick to the one API version I use.

Models trained for tool use can do that. When I use Codex for some Rust stuff, for example, it can grep the source files in the directory where dependencies are stored, so looking up the current APIs is trivial for them. The same works for JavaScript and a bunch of other languages, as long as the source is accessible somewhere via the tools they have available.
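
As a rough sketch of what such a grep-style tool call amounts to (written here in Java for illustration; the ~/.cargo/registry/src path is my assumption about where cargo keeps crate sources, and the API name searched for is invented):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    class GrepDeps {
        public static void main(String[] args) throws IOException {
            // Assumed location of the vendored crate sources.
            Path depsDir = Path.of(System.getProperty("user.home"),
                    ".cargo", "registry", "src");
            String pattern = "fn into_response"; // invented API to look up
            try (Stream<Path> files = Files.walk(depsDir)) {
                files.filter(p -> p.toString().endsWith(".rs"))
                     .forEach(p -> printMatches(p, pattern));
            }
        }

        static void printMatches(Path file, String pattern) {
            try (Stream<String> lines = Files.lines(file)) {
                lines.filter(l -> l.contains(pattern))
                     .forEach(l -> System.out.println(file + ": " + l.trim()));
            } catch (IOException | UncheckedIOException e) {
                // Skip files that can't be read or decoded.
            }
        }
    }

The model runs the equivalent of this against the on-disk dependency sources and reads the matching lines back, so its answer reflects the version you actually have installed rather than whatever was in its training data.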

replies(1): >>44524084 #
8. lukan ◴[] No.44524084{4}[source]
Hm, I haven't tried Codex so far, but I have tried quite a few other tools and models, and none could help me in a consistent way. I remain sceptical, because even if I tell them explicitly to only use one specific version, they may or may not comply, depending on their training corpus and temperature, I assume.
9. malfist ◴[] No.44526575{3}[source]
The less verbosity you allow, the dumber the LLM is. It thinks in tokens, and if you keep it from using tokens, it's lobotomized.
replies(1): >>44529492 #
10. jonathanlydall ◴[] No.44526810[source]
There is something that happens to our thought processes when we verbalise or write down our thoughts.

I like to think of it this way: instead of having a seemingly endless number of half-thoughts spinning around inside your head, you make an idea or thought more “fully formed” when you express it verbally or with written (or typed) words.

I believe this is part of why therapy can work, by actually expressing our thoughts, we’re kind of forced to face realities and after doing so it’s often much easier to reflect on it. Therapists often recommend personal journals as they can also work for this.

I believe rubber ducking works because having to explain the problem forces you to actually gather your thoughts into something usable, which you can then reflect on more effectively.

I see no reason why doing the same thing except in writing to an LLM couldn’t be equally effective.

11. danparsonson ◴[] No.44526955[source]
Indeed the duck is supposed to sit there in silence while the speaker does the thinking ^^

This is what human language does though, isn't it? Evolves over time, in often weird ways; like how many people "could care less" about something they couldn't care less about.

12. emodendroket ◴[] No.44527088[source]
Some of the “explain what it does” functionality is better than you might think, but to be honest I find myself called on to work with unfamiliar tools all the time, so I find plenty of value.
13. emodendroket ◴[] No.44527095[source]
Well OK, sure. But I’m still having a “conversation” with nobody. I’m surprised how often the AI gives me a totally wrong answer, yet a combination of formulating the question and something in the answer makes me think of the right thing after all.
14. emodendroket ◴[] No.44527160[source]
Well, compared to what? Which method would be faster at answering that kind of question?
replies(1): >>44530471 #
15. emodendroket ◴[] No.44527167[source]
Those kinds of tasks are good for it, yeah. “Here’s some JSON. Please generate a Java class I can deserialize it into” is a similar one.
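
For instance, something like this round trip (a made-up sketch: the JSON shape is invented, and I'm assuming Jackson's ObjectMapper, i.e. jackson-databind on the classpath):

    // Input JSON: {"id": 42, "name": "widget", "tags": ["a", "b"]}
    import java.util.List;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class Widget {
        public long id;          // generated from the numeric "id" field
        public String name;
        public List<String> tags;

        public static void main(String[] args) throws Exception {
            String json = "{\"id\": 42, \"name\": \"widget\", \"tags\": [\"a\", \"b\"]}";
            Widget w = new ObjectMapper().readValue(json, Widget.class);
            System.out.println(w.name + " has " + w.tags.size() + " tags");
        }
    }

Tedious to write by hand, trivial to check by eye - exactly the sweet spot.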
16. lukan ◴[] No.44529492{4}[source]
It can think as much as it wants and still return just code in the end.
17. skydhash ◴[] No.44530471{3}[source]
Learning the thing. It’s not like I have to use every library in the world at my job. You can really fly through reference documentation if you’re familiar with the domain.
replies(1): >>44533299 #
18. emodendroket ◴[] No.44533299{4}[source]
If your job only ever calls on you to use the same handful of libraries, then of course becoming deeply familiar with them is better, but that’s obviously not realistic if you’re jumping from this thing to that thing. Nobody would use resources like Stack Overflow either if it were that easy and practical to just “learn the thing.”