
51 points ForHackernews | 5 comments
frankc ◴[] No.45161945[source]
I don't know exactly where AI is going to go, but the fact that I keep seeing this one programmer-productivity study, with only 16 participants who had limited experience with Cursor, uncritically quoted over and over assures me that the anti-AI fervor is at least as big a bubble as the AI bubble itself.
replies(4): >>45161983 #>>45162080 #>>45162269 #>>45166274 #
1. ath3nd ◴[] No.45162269[source]
> uncritically quoted over and over assures me that the anti-AI fervor is at least as big of a bubble as the AI bubble itself.

Counterpoint: the AI fanboys and AI companies, for all their insane funding, couldn't come up with a better study with a bigger sample size, because LLMs simply don't help experienced developers.

What follows is that the billion-dollar companies just couldn't produce a better study: either they tried and didn't like the productivity numbers not being in their favor (very likely), or they are so sloppy and vibe-driven that they don't know how to run a proper study (I wouldn't be surprised; see ChatGPT's latest feature, "study mode", which got its own blog post, and you know the bar there is not very high).

Again, until there is a better study, the best available evidence says LLMs are a 19% productivity drain for experienced developers, and if they help certain developers, then those developers are most likely not experienced.

How's that for an interpretation?

replies(1): >>45163946 #
2. frankc ◴[] No.45163946[source]
I never tell anyone they have to use AI tools. You do you. In a few years we will see who is better off.
replies(3): >>45167333 #>>45168262 #>>45172070 #
3. goalieca ◴[] No.45167333[source]
It has already been a couple of years. What time period should we revisit? And how would we measure success?
4. rsynnott ◴[] No.45168262[source]
I mean, surely, if and when they demonstrably work, the sceptic can just adopt them, having lost nothing? There seems to be a new one every month anyway, so it's not as if experience with the one from three years ago is going to be particularly helpful.

There seems to be an attitude, or at least a pretended attitude, amongst the true believers that the heretics are dooming themselves, to be left behind in a glorious AI future. But the AI coding tools du jour are completely different from the ones a year ago! And in six months they'll be different again!

5. ath3nd ◴[] No.45172070[source]
LLMs have existed in some shape or form for 5-6 years already. How long do I have to wait for Claude to actually do something, and for me to start seeing it in OSS?

- Because currently what we see in OSS is LLM trash: https://www.reddit.com/r/webdev/comments/1kh72zf/open_source...

- And a large majority of users don't want that Copilot trash in their default GitHub experience: https://www.techradar.com/pro/angry-github-users-want-to-dit...

At what point will that trash become gold? Five more years? And if it doesn't, at what point does trash stay trash?

- When there is a study showing that the trash is actually sapping 19% of your performance? https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

- When multiple studies show that using it makes you dumber? https://tech.co/news/another-study-ai-making-us-dumb

Because I am pretty sure NFTs still have people who swear by them and say "just give it time". At what point can we confidently declare that NFTs are useless without the cultist fanbase going hurr durr? What about LLMs?