If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non-engineers?) obviously do.
I'd be curious whether any OpenAI engineers can share a rough estimate of the day-to-day composition of their code: human-generated vs. AI-generated.
[0] https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what...
For some products.
If it were 95% of anything useful, Anthropic would not still have >1000 employees, the rest of the economy would be collapsing, and governments would be taking some kind of action.
Yet none of that appears to be happening. Why?
https://www.anthropic.com/candidate-ai-guidance
> During take-home assessments: Complete these without Claude unless we indicate otherwise. We’d like to assess your unique skills and strengths. We'll be clear when AI is allowed (example: "You may use Claude for this coding challenge").
> During live interviews: This is all you; no AI assistance unless we indicate otherwise. We’re curious to see how you think through problems in real time. If you require any accommodations for your interviews, please let your recruiter know early in the process.
He'd have to ask, yet didn't ask? The CPO of an AI company?
So let's take it at face value and say 95% is written by AI. When you free one bottleneck, you expose the next. You still need developers to review the code to make sure it's doing the right thing. You still need developers who can translate the business context into instructions that produce the right product. You have to engage with the product. You still need to architect the system: context window limits mean the tasks can't simply be handed off to AI.
So the role of the programmer changes: you still need technical competence, but in service of the judgement call of "what is right for the product?" Perhaps there's a world where development and product management merge, but I think we will still need the people.
The other tactic is saying two unrelated things in one sentence and hoping you read them as causal, rather than as a fuck-up and some marketing at the same time.
I don't think firing people follows as a logical conclusion from 95% of code being written by Claude Code. There is a big difference between (1) AI autonomously writing code and (2) developers simply finding it easier to prompt changes than to type them manually.
In case (1), you have an automated software engineer and may be able to reduce your headcount. In case (2), developers may just be slightly more productive, or may simply enjoy writing code with AI more, but the coding is still very much driven by the developers themselves. Right now, I think Claude Code shows signs of (1) for simple cases but mostly falls into the (2) bucket.
When you drill in, you find that the real claims distill into something like: "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."
If they don't produce data, a study, or other compelling evidence, don't believe the claims; it's just marketing, and marketing can never be trusted because it is inherently manipulative.
If we assume it isn't a lie, then given current AI capabilities we should assume that AI isn't being used in a maximally efficient way.
However, developer efficiency isn't the only metric a company like Anthropic would care about; after all, they're trying to build the best coding assistant with Claude Code. For them, understanding the failure cases, and the prompting needed to recover from those failures, is likely more important than the raw lines of code their developers produce per hour.
So my guess (assuming the claim is true) is that Anthropic are forcing their employees to use Claude Code to write as much code as possible to collect data on how to improve it.