
67 points by growbell_social | 1 comment

Amid the nascent concerns about AI replacing software engineers, one proxy for that trend might be the amount of code written at OpenAI by the various models they have.

If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non-engineers?) obviously do.

I'd be curious whether any OpenAI engineers can share a rough estimate of the day-to-day composition of human-generated vs. AI-generated code.

notfried No.44554230
Not OpenAI, but Anthropic CPO Mike Krieger, asked how much of Claude Code is written by Claude Code, said: "At this point, I would be shocked if it wasn't 95% plus. I'd have to ask Boris and the other tech leads on there."

[0] https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what...

asadotzler No.44555126
Weasel words. No different than Nadella claiming 50%.

When you drill in you find out the real claims distill into something like "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."

If they don't produce data, a study, or other compelling evidence, don't believe the claims; it's just marketing, and marketing can never be trusted because marketing is inherently manipulative.

kypro No.44555981
It could be true; the primary issue is that it's the wrong metric. I mean, you could write 100% of your code with AI if you were basically telling it exactly what to write...
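To see why "% of code written by AI" is slippery: a common way to get such a number is to count commits (or lines) carrying an AI co-author trailer, which says nothing about who actually decided what to write. A minimal sketch, assuming a hypothetical repo where AI-assisted commits are tagged with a `Co-Authored-By` trailer (the marker string and helper name here are my own illustration, not anyone's published methodology):

```python
def ai_commit_share(log_entries, marker="Co-Authored-By: Claude"):
    """Fraction of commits whose full message contains the AI trailer.

    log_entries: list of full commit-message strings (e.g. from
    `git log --format=%B`). A crude proxy: a tagged commit may be
    99% human-dictated, an untagged one may be mostly pasted AI output.
    """
    if not log_entries:
        return 0.0
    tagged = sum(1 for msg in log_entries if marker in msg)
    return tagged / len(log_entries)

sample = [
    "Fix flaky test\n\nCo-Authored-By: Claude <noreply@anthropic.com>",
    "Refactor config loader",
    "Add retry logic\n\nCo-Authored-By: Claude <noreply@anthropic.com>",
]
print(ai_commit_share(sample))  # 2 of 3 commits tagged
```

The same repo yields very different numbers depending on whether you count commits, lines touched, or lines surviving in HEAD, which is exactly why a bare "95%" claim is hard to interpret.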

If we assume it isn't a lie, then given current AI capabilities we should assume that AI isn't being used in a maximally efficient way.

However, developer efficiency isn't the only metric a company like Anthropic would care about; after all, they're trying to build the best coding assistant with Claude Code. So for them, understanding the failure cases, and the prompting needed to recover from those failures, is likely more important than the raw lines of code their developers produce per hour.

So my guess (assuming the claim is true) is that Anthropic is pushing its employees to use Claude Code to write as much code as possible in order to collect data on how to improve it.