theptip ◴[] No.45163517[source]
A good case study. I have found these two to be good categories of win:

> Use these tools as a massive force multiplier of your own skills.

Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.

> Use these tools for rapid onboarding onto new frameworks.

I’m also more productive here; this enables me to explore new areas, and it’s also a boon at big tech companies, where there are just lots of tech stacks and frameworks in use.

I feel there is an interesting split forming in the ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0 you likely don’t have an accurate picture of its capabilities.

“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.

marcus_holmes ◴[] No.45164186[source]
>> Use these tools for rapid onboarding onto new frameworks.

Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.

[0] acceptable-level as defined by the rest of the team - they're checking my PRs.
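
For illustration (a made-up snippet, not from our actual codebase), this is the kind of Ruby the LLM produces - readable even if you've never written the language:

    # Hypothetical example: collect the emails of customers
    # whose orders total more than 100.
    big_spenders = orders
      .select { |order| order.total > 100 }
      .map(&:customer_email)
      .uniq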

AdieuToLogic ◴[] No.45164437[source]
>>> Use these tools for rapid onboarding onto new frameworks.

> Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.

If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?

> ... but I don't need to learn Ruby to write acceptable-level code [0].

Since the team you work with uses Ruby, why do you not need to learn it?

> [0] acceptable-level as defined by the rest of the team - they're checking my PRs.

Ah. Now I get it.

Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PRs will not obviously fail.

Here's a thought - has it crossed your mind that team members needing to determine whether your PRs are acceptable is "a bad thing", in that it may indicate a lack of trust in the changes you have been introducing?

Furthermore, does this situation qualify as "immediately productive" for the team or only yourself?

EDIT:

If you are not a software engineer by trade and instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PRs.

This would enable you to:

- codify functional requirements such that their satisfaction is provable
- assist the engineering team's understanding of what must be done in the context of existing behavior
- identify conflicting system requirements (if any) before engineering effort is expended
- provide a suite of functional regression tests
- serve as executable documentation for team members
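
For example, a feature spec might look something like this (the model, route helper, and page text here are hypothetical, just to show the shape):

    # Sketch of an RSpec/Capybara feature spec; names are illustrative.
    require "rails_helper"

    RSpec.feature "User signs in" do
      scenario "with valid credentials" do
        user = User.create!(email: "a@example.com", password: "secret123")

        visit new_session_path
        fill_in "Email", with: user.email
        fill_in "Password", with: "secret123"
        click_button "Sign in"

        expect(page).to have_text("Welcome back")
      end
    end

A stakeholder can read (or write) the scenario steps almost as plain English, and the suite fails until engineering makes the behavior real.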

0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...

nchmy ◴[] No.45164821[source]
Are you advocating for not having code reviews...? Just straight force-push to main?
cyphar ◴[] No.45166688[source]
Code reviews (especially internal ones) generally assume that the person who wrote the original code has an idea of what they are doing; they are designed to catch the kinds of mistakes that humans make. Just because they work to improve codebases with human submissions doesn't mean they are a good enough filter for LLM-generated code that the submitter doesn't sufficiently understand and has submitted without reviewing it themselves. The same goes for CI and testing.

This reminds me of some of the comments made by reviewers during the infamous Schön scientific fraud case. Scientific peer review is designed to catch mistakes and honest flaws in research; it is not designed to catch fraud, and the evidence shows it is bad at catching it.

Another applicable example would be the bad patches fiasco with the Linux kernel. (And there is going to be a session at the upcoming maintainers' summit about LLM-generated kernel patches.)