If AI were a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non-engineers?) obviously do.
I'd be curious whether any OpenAI engineers can share a rough estimate of the day-to-day split between human-written and AI-generated code in their work; a crude way to estimate that split from commit history is sketched below.
[0] https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what...
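
A minimal sketch of one way to estimate that split, assuming the AI tool tags its commits with a "Co-Authored-By" trailer (Claude Code adds one by default). It tallies added lines per commit from git history, so it will miss any AI assistance that never lands in a tagged commit:

    #!/usr/bin/env python3
    # Rough estimate of AI-assisted vs. other lines from git history.
    # Assumes AI-made commits carry a "Co-Authored-By: Claude" trailer;
    # anything without the trailer counts as human, so untagged AI help
    # is invisible to this measurement.
    import subprocess

    SEP = "\x1e"  # record separator between commits

    # One record per commit: hash, full message body, then --numstat lines.
    log = subprocess.run(
        ["git", "log", f"--format={SEP}%H%n%B", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout

    ai = human = 0
    for record in log.split(SEP)[1:]:
        is_ai = "Co-Authored-By: Claude" in record
        for line in record.splitlines():
            cols = line.split("\t")
            # numstat lines look like: added<TAB>deleted<TAB>path
            if len(cols) == 3 and cols[0].isdigit():
                if is_ai:
                    ai += int(cols[0])
                else:
                    human += int(cols[0])

    total = (ai + human) or 1
    print(f"AI-assisted: {ai} added lines ({100 * ai / total:.1f}%), human: {human}")

Counting added lines per commit is crude (it ignores later edits to AI-written lines), but it's about the best you can do from history alone.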
When you drill in, you find that the real claims distill into something like "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."
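
A toy illustration with made-up numbers (nothing from the article) of how that kind of conditional claim deflates a headline figure:

    # Hypothetical numbers, purely to show how a headline shrinks
    # once you condition on "some of the projects":
    ai_share_in_showcase = 0.90  # "90% AI-written" in the showcased projects
    showcase_fraction = 0.10     # those projects are 10% of the codebase
    print(f"overall AI-written share: {ai_share_in_showcase * showcase_fraction:.0%}")
    # -> 9%, even though "up to 90% in some projects" is technically true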
If they don't produce data, a study, or other compelling evidence, don't believe the claims; it's just marketing, and marketing can never be trusted because it is inherently manipulative.
If we assume the claim isn't a lie, then given current AI capabilities, we should conclude that AI isn't being used in a maximally efficient way.
However, developer efficiency isn't the only metric a company like Anthropic would care about; after all, they're trying to build the best coding assistant with Claude Code. For them, understanding the failure cases, and the prompting needed to recover from those failures, is likely more important than the raw lines of code their developers produce per hour.
So my guess (assuming the claim is true) is that Anthropic is forcing its employees to use Claude Code for as much code as possible in order to collect data on how to improve it.
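
To make that concrete, here's a toy sketch with an invented session-log schema (not anything Anthropic has published) of the kind of failure-recovery signal a tool vendor might track instead of raw output:

    # Toy sketch, made-up schema: each session is a list of
    # (prompt, outcome) pairs. The interesting signal for a tool vendor
    # is how often generated code is rejected, and how often a single
    # follow-up prompt is enough to recover.
    sessions = [
        [("write the parser", "rejected"), ("fix the off-by-one", "accepted")],
        [("add retry logic", "accepted")],
    ]

    rejections = sum(outcome == "rejected" for s in sessions for _, outcome in s)
    recoveries = sum(
        prev_outcome == "rejected" and outcome == "accepted"
        for s in sessions
        for (_, prev_outcome), (_, outcome) in zip(s, s[1:])
    )
    print(f"{rejections} rejection(s), {recoveries} recovered on the next prompt")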