
214 points by Brajeshwar | 9 comments
Rochus ◴[] No.45090991[source]
The article claims that senior developers with over 10 years of experience are more than twice as likely to heavily rely on AI tools as their junior counterparts. No p-values or statistical significance tests are reported in either The Register article or Fastly's original blog post.

I have over 30 years of experience and recently used Claude Opus 4.1 (via browser and claude.ai) to generate an ECMA-335 and an LLVM code generator for a compiler, and a Qt adapter for the Mono soft debugging protocol. Each task resulted in 2-3kLOC of C++.

The Claude experience was mixed; there is a high probability that the system doesn't respond, or just briefly shows an overloaded message and does nothing. When it does generate code, I quickly run into some output limitation and have to manually press "continue", and then the result often gets scrambled (i.e. the order of the generated code fragments gets mixed up, which requires another round with Claude to fix).

After this process, the resulting code compiled immediately, which impressed me. But it is full of omissions and logical errors. I am still testing and correcting. All in all, I can't say at this point that Claude has really taken any work off my hands. To understand the code and assess the correctness of the intermediate results, I need to know exactly how to implement the problem myself. And you have to test everything in detail and do a lot of redesigning and correcting. Some implementations are just stubs, and even after several attempts there was still no implementation.

In my opinion, what is currently available (via my $20 subscription) is impressive, but it neither replaces experience nor does it really save time.

So yes, now I'm one of the 30% of seniors who used AI tools, but I didn't really benefit from them in these specific tasks. Not surprisingly, the original blog post also states that nearly 30% of senior developers report "editing AI output enough to offset most of the time savings". So not really a success so far. But all in all I'm still impressed.

replies(5): >>45091343 #>>45091757 #>>45092344 #>>45092985 #>>45099223 #
1. kelnos ◴[] No.45091757[source]
I hate to play the "you're holding it wrong" card, but when I started, I had more or less the same experience. Eventually you start to learn how to better talk to it in order to get better results.

Something I've found useful with Claude Code is that it works a lot better if I give it many small tasks to perform to eventually get the big thing done, rather than just dumping the big thing in its lap. You can do this interactively (prompt, output, prompt, output, prompt, output...) or by writing a big markdown file with the steps to build it laid out.
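A minimal sketch of what such a step file might look like (the file name, struct, and flag names here are all hypothetical, just to show the granularity):

```markdown
<!-- PLAN.md: hand this to Claude Code and ask it to work through the steps in order -->
1. Add a `Config` struct that holds the values of the existing CLI flags; don't change behavior yet.
2. Move the flag-parsing code out of `main()` into `Config::from_args()`.
3. Add unit tests for `Config::from_args()` covering the defaults.
4. Only then: add the new `--output-format` flag, with tests.
```

The point is that each step is small enough to review on its own before moving to the next.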

replies(5): >>45091812 #>>45091899 #>>45091954 #>>45091998 #>>45092320 #
2. Rochus ◴[] No.45091812[source]
I just tried to use it for something where I'd expect it to provide the most benefit (in my case). Being able to fully delegate a complicated (and boring) part to a machine would give me more time for the things I'm really interested in. I think we are on the right track in this regard, but we still have a long way to go.
3. chillingeffect ◴[] No.45091899[source]
Similar here. AI works much better as a consultant than as a developer. I ask it all kinds of things I have suspicions and intuitions about and it provides clarity and examples. It's great for subroutines. Trying to make full programs is just too large of a space. It's difficult to communicate all the implicit requirements.
replies(1): >>45092788 #
4. JeanMarcS ◴[] No.45091954[source]
This. For me (senior as in I've been in the field since last century), that's how I use it: "I want to do that, with this data, to obtain that."

I still do the part of my job that I have experience in, analyzing the need, and use the AI like an assistant to write small libraries or pieces of code. That way, errors have less chance to appear. Then I glue it all together.

For me, that's the best use of my time. If I have to describe the whole thing, I'm not far from doing it myself, so there's no point.

Important: I work alone, not in a team, so maybe that colors my view.

5. deadbabe ◴[] No.45091998[source]
It’d be nice if we could “pipe” prompts directly, similar to how we pipe multiple Unix commands together to eventually get what we really want.

Then we can give someone that entire string of prompts as a repeatable recipe.

replies(1): >>45092034 #
6. kasey_junk ◴[] No.45092034[source]
You can send prompts on the command line to Claude; I typically save prompts in the repo. But note it won’t produce deterministic output.
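A sketch of that "repeatable recipe" idea, assuming Claude Code's CLI, where `claude -p "..."` runs a single non-interactive prompt in print mode. The `prompts/` layout and file naming are hypothetical, and as noted above, the output is not deterministic between runs:

```shell
# Replay a series of saved prompt files, one step at a time,
# stopping the chain as soon as one step fails.
run_prompt_chain() {
  command -v claude >/dev/null 2>&1 || { echo "claude CLI not found"; return 1; }
  for step in prompts/step-*.md; do
    [ -e "$step" ] || { echo "no prompt files in prompts/"; return 1; }
    claude -p "$(cat "$step")" || return 1  # abort on the first failed step
  done
}
# usage: run_prompt_chain
```

Checking the prompt files into the repo at least makes the recipe reviewable and re-runnable, even if each run can produce different code.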
7. JeremyNT ◴[] No.45092320[source]
While this matches my experience, it's worth mentioning that the act of breaking a task up into the right chunk size and describing it in English is itself a nontrivial task, which can be more time-consuming than simply writing the actual code.

The fact that it works is amazing, but I'm less convinced that it's enhancing my productivity.

(I think the real productivity boost for me is when I still write the code and have the assistant write test coverage based on diffs, which is trivial to prompt for good results.)
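A sketch of that diff-based workflow, assuming Claude Code's print mode (`claude -p`), which accepts piped stdin; the prompt wording is illustrative, not a documented recipe:

```shell
# Pipe the working-tree diff to the assistant and ask for test coverage.
tests_for_diff() {
  command -v claude >/dev/null 2>&1 || { echo "claude CLI not found"; return 1; }
  git diff | claude -p "Write unit tests that cover the changes in this diff"
}
# usage: tests_for_diff > suggested_tests.txt
```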

replies(1): >>45092652 #
8. kristianbrigman ◴[] No.45092652[source]
And one that a lot of people skip, so that forcing function might make for better code, even if it isn’t faster.
9. jennyholzer ◴[] No.45092788[source]
People who consistently consult LLMs for product direction or software feature design overwhelmingly appear to me as willfully ignorant dullards.

I mean, it goes even further than willful ignorance. It's delight in one's own ignorance.