
688 points | dheerajvs | 1 comment
pera | No.44524261
Wow, these are extremely interesting results, especially this part:

> This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

I wonder what could explain such a large difference between estimation/experience and reality. Any ideas?

Maybe our brains are measuring mental effort and distorting our experience of time?

rsynnott | No.44530564
> I wonder what could explain such a large difference between estimation/experience and reality. Any ideas?

I wasn't at all surprised by this bit, because it's _very common_. People doing a [magic thing] they believe in often claim it is improving things even where it empirically isn't; this shows up again and again with fad diets and exercise regimens, say. You really can't trust subjects' claims about the efficacy of something being tested on them, or that they're testing on themselves.

And particularly for LLM tools, there is a strong sense amongst many fans that they are The Future, that anyone who doesn't get on board is being Left Behind, and so forth. I'd assume a lot of users aren't thinking particularly rationally about them.