20 points OnionBlender | 8 comments
1. duxup ◴[] No.44521785[source]
I certainly stop to explore some topics that I might not have in the past, but that does lead to better code sometimes too.

>“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,”

Being aware of this and investigating just sounds like responsible use of AI.

2. mertleee ◴[] No.44521883[source]
Idk, I'm at a point where I've forgotten some shorthand needed to pass mildly more complex tech screens.

Can't tell if maybe I'm just an idiot or if these tools are actually really useful.

3. jtc-hn ◴[] No.44521963[source]
This report tallies with my own experience.

It's especially an issue with type-ahead tools that hallucinate function names or introduce subtle bugs: you lose time and get bumped out of the flow as you evaluate the AI's proposal. (Or fix it, if you were unfortunate enough to accidentally hit tab while indenting a line and the AI slipped in a change.)
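
A made-up example of the kind of completion I mean (everything here is invented except the real API: requests does have a .json() method, but nothing called .json_body()):

    import requests

    resp = requests.get("https://api.example.com/items")  # hypothetical endpoint

    # What the type-ahead offered -- looks plausible, doesn't exist:
    # items = resp.json_body()

    # What was actually needed:
    items = resp.json()  # parse the response body as JSON

It compiles in your head, so you burn time checking whether you misremember the library or the tool does.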

The agentic tools do better, but they don't yet have enough context to know that making this change _here_ will break that thing over _there_, so they require a lot of management, and that seems to engage a different part of the brain than coding does.
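
A toy illustration of the here/there problem, with all names invented: the agent tidies up a return type in one file without seeing a caller that only exists in another.

    from dataclasses import dataclass

    # config.py -- the agent "improves" the return type here...
    @dataclass
    class Config:
        timeout: int = 30

    def load_config(path: str) -> Config:
        # the toy ignores the path; this used to be: return {"timeout": 30}
        return Config()

    # worker.py -- ...without knowing this caller exists over here
    timeout = load_config("app.toml")["timeout"]
    # TypeError: 'Config' object is not subscriptable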

replies(1): >>44522141 #
4. reverendsteveii ◴[] No.44522141[source]
For me the issue is that the non-AI predictive typeahead and the AI suggestions seem to compete: I'll see one suggestion and go to accept it, only for it to be overwritten by a different one from the other suggestion engine.
5. SkyRocknRoll ◴[] No.44522362[source]
This is true.

When I'm troubleshooting a production environment, AI slows me down most of the time. It's better if I think and debug myself than ask the AI.

6. tucson-josh ◴[] No.44522730[source]
I feel like what still needs to be studied, but which is significantly more difficult to quantify, is the long-term impact of AI-assisted or AI-generated code when it comes time to debug a production problem. Going from observed symptoms to finding a subtle bug is a task that is made much more tractable by intimate experience with the code in question.
7. comebhack ◴[] No.44526558[source]
> Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

This in particular is very interesting to me. I haven't read the study yet but this makes me consider my own use of AI - I often feel like it is speeding me up, but is it really? Can I measure it in a better way?
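
The crudest measurement I can think of is logging my own estimate against the actual wall-clock time for each task, tagged by whether I used AI, and comparing after a few weeks. A rough sketch (the filename and task name are just placeholders):

    import csv
    import time
    from datetime import date

    LOG = "task_times.csv"  # made-up log file

    def timed_task(name: str, used_ai: bool, estimate_min: float) -> None:
        """Record estimated vs. actual minutes for one task."""
        start = time.monotonic()
        input(f"{name}: press Enter when done... ")
        actual_min = (time.monotonic() - start) / 60
        with open(LOG, "a", newline="") as f:
            csv.writer(f).writerow(
                [date.today(), name, used_ai, estimate_min, round(actual_min, 1)]
            )

    timed_task("fix flaky test", used_ai=True, estimate_min=30)

If my estimate-to-actual ratio splits between the AI and non-AI rows the way it did for the study's developers, that should show up pretty quickly.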