
688 points dheerajvs | 3 comments
1. thesz ◴[] No.44525492[source]
What is interesting here is that all predictions were positive, but the results were negative.

This suggests that everyone in the study (economic experts, ML experts, and even the developers themselves, even after gaining experience) is a novice when viewed from the perspective of the Dunning–Kruger effect [1].

[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

"The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities."

replies(1): >>44536471 #
2. 59nadir ◴[] No.44536471[source]
> "The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities."

No, for the most part they underestimated their own abilities: their estimates for the AI-disallowed tasks all undershot the real implementation time.

What they overestimated was the ability of LLMs to provide real productivity gains on a given task.

replies(1): >>44537302 #
3. thesz ◴[] No.44537302[source]

> What they overestimated was the ability of LLMs to provide real productivity gains on a given task.

This is exactly my point.

This is not about developers overestimating the ability of the LLM itself; it is about the ability of a developer interacting with an LLM being overestimated by the developers themselves, by economic experts, and by ML experts.

LLMs are not "able" per se; they are "prompted" to be "able." They are not agents, but they behave as agents on someone's behalf, and no one has a clue whether the use of LLMs is positive or detrimental, with the prevailing bias being that "LLM use is net positive."

This across-the-board overestimation of LLM-assisted ability is exactly the pattern the Dunning–Kruger effect describes.