
214 points | Brajeshwar | 6 comments
Rochus No.45090991
The article claims that senior developers with over 10 years of experience are more than twice as likely to rely heavily on AI tools as their junior counterparts. No p-values or statistical significance tests are reported in either The Register article or Fastly's original blog post.

I have over 30 years of experience and recently used Claude Opus 4.1 (via browser and claude.ai) to generate an ECMA-335 and an LLVM code generator for a compiler, and a Qt adapter for the Mono soft debugging protocol. Each task resulted in 2-3kLOC of C++.
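
For a sense of the protocol side: if I recall the framing correctly, the Mono soft debugging protocol uses a JDWP-style big-endian 11-byte header (length, id, flags, command set, command), so the Qt adapter is mostly packet framing. A minimal sketch with hypothetical names, not my actual code:

    // Hypothetical sketch of a packet reader for the Mono soft debugging
    // protocol; assumes the JDWP-style 11-byte big-endian header.
    #include <QTcpSocket>
    #include <QtEndian>

    struct SdbPacket {
        quint32 id;
        quint8  flags;      // 0x80 marks a reply packet
        quint8  commandSet;
        quint8  command;
        QByteArray payload;
    };

    // Blocks until one complete packet has been read from the socket.
    static bool readPacket(QTcpSocket& sock, SdbPacket& out)
    {
        while (sock.bytesAvailable() < 11)
            if (!sock.waitForReadyRead(3000)) return false;
        const QByteArray header = sock.read(11);
        const quint32 len = qFromBigEndian<quint32>(header.constData());
        if (len < 11) return false; // malformed packet
        out.id         = qFromBigEndian<quint32>(header.constData() + 4);
        out.flags      = quint8(header[8]);
        out.commandSet = quint8(header[9]);
        out.command    = quint8(header[10]);
        qint64 remaining = qint64(len) - 11; // length includes the header
        while (sock.bytesAvailable() < remaining)
            if (!sock.waitForReadyRead(3000)) return false;
        out.payload = sock.read(remaining);
        return true;
    }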

The Claude experience was mixed; there is a high probability that the system doesn't respond or just quickly shows an overloaded message and does nothing. If it generates code, I quickly run into some output limitation and have to manually press "continue", and then often the result gets scrambled (i.e. the order of the generated code fragments gets mixed up, which requires another round with Claude to fix).

After this process, the resulting code compiled immediately, which impressed me. But it is full of omissions and logical errors, and I am still testing and correcting. All in all, I can't say at this point that Claude has really taken any work off my hands. To understand the code and assess the correctness of the intermediate results, I need to know exactly how I would implement the problem myself. And you have to test everything in detail and do a lot of redesigning and correcting. Some implementations are just stubs, and even after several attempts there was still no implementation.

In my opinion, what is currently available (via my $20 subscription) is impressive, but it neither replaces experience nor does it really save time.

So yes, now I'm one of the 30% of seniors who use AI tools, but I didn't really benefit from them in these specific tasks. Not surprisingly, the original blog also states that nearly 30% of senior developers report "editing AI output enough to offset most of the time savings". So not really a success so far. But all in all I'm still impressed.

oliwary No.45091343
Hey! I would encourage you to try out Claude Code instead, which is also part of your subscription. It's a CLI that takes care of many of the issues you encountered, as it works directly on the code files in a directory. No more copy-pasting or unscrambling results. Likewise, it can run commands itself to e.g. compile or even test the code.
Rochus No.45091480
I'm working on old hardware with not-so-recent Linux and compiler versions, and I have no confidence yet in allowing AI direct (write) access to my repositories.

Instead, I provided Claude with the source code of a transpiler to C (one file) which is known to work and uses the same IR that the new code generators were supposed to use.

This is a controlled experiment with clear, complete input and clear expectations and specifications for the output. I don't think I would be able to cleanly isolate Claude's contributions and assess its performance if it had access to arbitrary parts of the source code.
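
To make the setup concrete, here is roughly the shape of the experiment (names are illustrative, not my actual code): all backends implement the same interface over the shared IR, so the working C transpiler fully specifies what the new backends have to do.

    // Illustrative only. The C transpiler, the ECMA-335 backend and the
    // LLVM backend are siblings over one shared IR, so one known-good
    // backend serves as a complete specification for the others.
    struct IrModule;  // a compiled translation unit
    struct IrProc;    // procedures, basic blocks, typed operands, ...

    class CodeGen {
    public:
        virtual ~CodeGen() = default;
        virtual void emitModule(const IrModule&) = 0;
        virtual void emitProc(const IrProc&) = 0;
    };

    class CTranspiler : public CodeGen { /* known good, given to Claude */ };
    class EcmaGen     : public CodeGen { /* to be generated */ };
    class LlvmGen     : public CodeGen { /* to be generated */ };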

stavros No.45091902
I use Claude Code with the Max plan, and the experience isn't far off from what you describe. You still need to understand the system and review the implementation, because it makes many mistakes.

That's not where it saves me time; it saves me time on looking up documentation. Otherwise, it might even be slower, because the larger the code change, the more time I need to spend reviewing, and past a point I just can't be bothered.

The best way I've found is to have it write small functions, and then I tell it to compose them together. That way, I know exactly what's happening in the code, and I can trust that it works correctly. Cursor is probably a better way to do that than Claude Code, though.
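
A toy illustration of the pattern (not real code from my projects): the LLM writes the small, easily reviewed pieces, and I write the composition, so the control flow stays mine.

    // Toy example: two small LLM-written helpers, composed by hand.
    #include <sstream>
    #include <string>
    #include <vector>

    static std::vector<std::string> splitLines(const std::string& s) {
        std::vector<std::string> out;
        std::istringstream in(s);
        for (std::string line; std::getline(in, line); )
            out.push_back(line);
        return out;
    }

    static std::string trim(const std::string& s) {
        const auto b = s.find_first_not_of(" \t\r");
        const auto e = s.find_last_not_of(" \t\r");
        return b == std::string::npos ? "" : s.substr(b, e - b + 1);
    }

    // My composition: every step is a function I already reviewed.
    std::string normalize(const std::string& text) {
        std::string out;
        for (const auto& line : splitLines(text))
            out += trim(line) + '\n';
        return out;
    }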

1. t_mahmood No.45092326
So I'm paying $20 for a glorified code generator, which may or may not be correct, to write a small function that I could write for free and be confident about its correctness, provided I haven't been too lazy to write a test for it.
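
To be concrete about the scale I mean (hypothetical example), the test costs almost nothing to write by hand, and it, not the tool that typed the function, is what gives the confidence:

    // Hypothetical example: a function this small is cheap to write and
    // cheap to verify with a handwritten test.
    #include <cassert>

    int clampPercent(int v) {
        return v < 0 ? 0 : (v > 100 ? 100 : v);
    }

    int main() {
        assert(clampPercent(-5)  == 0);
        assert(clampPercent(42)  == 42);
        assert(clampPercent(250) == 100);
        return 0;
    }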

You might point out that, with tests, it's the same with any AI tool available, but to get to that result I have to keep prompting it until it gives me the desired output, whereas I could do it myself in 2-3 iterations.

Reading documentation always leaves me a little more knowledgeable than before, while prompting an LLM gives me no knowledge at all.

And I also have to decide which LLM would be good for the task at hand, and most of them are not free (unless I use a local model, but that will also use the GPU and add an energy cost).

I may be nitpicking, but I see too many holes in this approach.

2. stavros No.45092749
The biggest hole you don't see is that it's worth the $20 to make me overcome my laziness, because I don't like writing code, but I like making stuff, and this way I can make stuff while fooling my brain into thinking I'm not writing code.
3. t_mahmood No.45092972
Sure, that can be a point: it helps you overcome a personal barrier. But that could be anything.

That is not what you were vouching for in the original comment. That was about saving time.

4. weard_beard No.45094216
Not only that, but the process described is how you train a junior dev.

There, at least, the wasted time results in the training of a human being who can become sophisticated enough to be a trusted independent implementer in a relatively short time.

5. turtlebits No.45094609
Your time isn't free, and I'd certainly value mine at more than $20/month.

I find it extremely useful as a smarter autocomplete, especially for tedious work: changing function definitions, updating queries when the DB schema changes, and writing HTTP requests/API calls from vendor/library documentation.
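
For the last category, this is the sort of boilerplate I mean, as a hedged sketch against a hypothetical endpoint (plain libcurl; URL, token and body are made up):

    // Sketch only: a documented POST endpoint, mechanical to write,
    // trivial to review. Endpoint and credentials are placeholders.
    #include <curl/curl.h>
    #include <cstdio>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* h = curl_easy_init();
        if (!h) return 1;
        struct curl_slist* hdrs = nullptr;
        hdrs = curl_slist_append(hdrs, "Authorization: Bearer <token>");
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        curl_easy_setopt(h, CURLOPT_URL, "https://api.example.com/v1/items");
        curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(h, CURLOPT_POSTFIELDS, "{\"name\":\"demo\"}");
        CURLcode rc = curl_easy_perform(h);
        if (rc != CURLE_OK)
            std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(h);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }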

6. t_mahmood No.45096894
Certainly. So I use an IDE, IntelliJ Ultimate to be precise.

None of the use cases you mention requires an LLM; they're all available as IDE functionality.

IntelliJ has LLM-based autocomplete, which I'm okay with, but it's still wrong too many times. It works extremely well with Rust, though. Their non-LLM autocomplete is also superb; it uses ML to suggest the closest relevant match, IIRC.

It also makes refactoring a breeze; I know exactly what it's going to do.

It can also handle database refactoring to a certain extent! And for that it doesn't require an LLM, so there's no nondeterministic behavior.

The IDE also has its own way of doing HTTP requests, and it's really nice! And I can use its live templates to autocomplete any boilerplate code; they only need setting up once, with no need to fiddle with prompts.