
688 points by dheerajvs | 1 comment
pera ◴[] No.44524261[source]
Wow, these are extremely interesting results, especially this part:

> This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

I wonder what could explain such a large difference between estimation/experience and reality. Any ideas?

Maybe our brains are measuring mental effort and distorting our experience of time?

replies(7): >>44524872 #>>44524974 #>>44525239 #>>44525349 #>>44528508 #>>44528626 #>>44530564 #
evanelias ◴[] No.44525349[source]
Here's a scary thought, which I'm admittedly basing on absolutely nothing scientific:

What if agentic coding sessions are triggering a dopamine feedback loop similar to social media apps? Obviously not to the same degree, since coding for work is still "work"... but maybe there's some similarity in getting iterative solutions from the agent, each one triggering something in your brain, yes?

If that was the case, wouldn't we expect developers to have an overly positive perception of AI because they're literally becoming addicted to it?

replies(5): >>44525418 #>>44525471 #>>44526779 #>>44528433 #>>44532628 #
EarthLaunch ◴[] No.44525418[source]
> The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

https://softwarecrisis.dev/letters/llmentalist/

Plus there's a gambling mechanic: Push the button, sometimes get things for free.

replies(1): >>44526719 #
1. lll-o-lll ◴[] No.44526719[source]
This is very interesting and disturbing. We are outsourcing our decision making to an algorithmic "Mentalist" and will reap a terrible reward. I need to wean myself off the comforting teat of the chatbot psychic.