
214 points by Brajeshwar | 1 comment | source
Rochus ◴[] No.45090991[source]
The article claims that senior developers with over 10 years of experience are more than twice as likely to rely heavily on AI tools as their junior counterparts. Neither The Register article nor Fastly's original blog post reports p-values or statistical significance tests.

I have over 30 years of experience and recently used Claude Opus 4.1 (via browser and claude.ai) to generate an ECMA-335 and an LLVM code generator for a compiler, and a Qt adapter for the Mono soft debugging protocol. Each task resulted in 2-3 kLOC of C++.

The Claude experience was mixed: there is a high probability that the system doesn't respond, or it briefly shows an overloaded message and does nothing. When it does generate code, I quickly run into some output limitation and have to manually press "continue", and then the result often gets scrambled (i.e., the order of the generated code fragments gets mixed up), which requires another round with Claude to fix.

After this process, the resulting code compiled immediately, which impressed me. But it is full of omissions and logical errors; I am still testing and correcting. All in all, I can't say at this point that Claude has really taken any work off my hands. To understand the code and assess the correctness of the intermediate results, I need to know exactly how I would implement the solution myself. And you have to test everything in detail and do a lot of redesigning and correcting. Some implementations are just stubs, and even after several attempts there was still no implementation.

In my opinion, what is currently available (via my $20 subscription) is impressive, but it neither replaces experience nor does it really save time.

So yes, I'm now one of the 30% of seniors who have used AI tools, but I didn't really benefit from them in these specific tasks. Not surprisingly, the original blog also states that nearly 30% of senior developers report "editing AI output enough to offset most of the time savings". So not really a success so far. But all in all, I'm still impressed.

replies(5): >>45091343 #>>45091757 #>>45092344 #>>45092985 #>>45099223 #
1. furyofantares ◴[] No.45092985[source]
I'm just shy of 30 years experience. I think I've spent more time learning how to use these tools than any other technology I've learned, and I still don't know the best way to use them.

They certainly weren't a time-saver right away, but they became one after some time spent giving them a real shot. I tested their limits and my own on small projects: working out how to get them to do a whole project, figuring out when they stop working and why, which technologies they work best with, what the right size of problem to hand them is, how to recognize when I'm asking for something they can't do well so I can ask for something different instead, and how to avoid guiding them into creating code they can't actually continue to be successful with.

I started last December in Cursor's agentic mode and have been in Claude Code ever since, probably since March or April. It's definitely been a huge boost all year for side projects - but only in the last couple of months have I been having success in a large codebase.

Even with all this experience, I don't know that I'd really be able to get much value out of the plain chat interface. They need to be proposing changes I can just hit accept or reject on (this is how both Claude Code and Cursor work, btw - you don't have to let them write to any file or execute any command you don't want).