
growbell_social | 67 points | 2 comments

Amid the nascent concerns about AI replacing software engineers, a proxy for that might be how much of the code written at OpenAI comes from the various models they have.

If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non-engineers?) obviously do.

I'd be curious whether any OpenAI engineers can share a rough estimate of the day-to-day split between human-generated and AI-generated code.

ivraatiems:
I absolutely believe that a large proportion of new code written is at least in part AI-generated, but that doesn't mean a large proportion of new code is 100% soup-to-nuts/pull-request-to-merge the result of decisions made by an agent and not a human. I doubt that very much.

I think the difference between situations where AI-driven development works and doesn't is going to be largely down to the quality of the engineers who are supervising and prompting to generate that code, and the degree to which they manually evaluate it before moving it forward. I think you'll find that good engineers who understand what they're telling an agent to do are still extremely valuable, and are unlikely to go anywhere in the short to mid term. AI tools are not yet at the point where they are reliable on their own, even for systems they helped build, and it's unclear whether they will be any time soon purely through model scaling (though it's possible).

I think you can see the realities of AI tooling in the fact that the major AI companies are hiring lots and lots of engineers, not just for AI-related positions, but for all sorts of general engineering positions. For example, here's a post for a backend engineer at OpenAI: https://openai.com/careers/backend-software-engineer-leverag... - and one from Anthropic: https://job-boards.greenhouse.io/anthropic/jobs/4561280008.

Note that neither of these requires direct experience with using AI coding agents, just an interest in the topic! Contrast that with the many companies that now demand engineers explain how they are using AI-driven workflows. When they are being serious about getting people to do the work that will make them money, rather than engaging in marketing hype, AI companies are honest: AI agents are tools, just like IDEs, version control systems, etc. It's up to the wise engineer to use them in a valuable way.

Is it possible they're just hiring these folks to try to make their models better so they can later replace those people? It's possible. But I'm not sure when, if ever, they'll reach the point where that becomes viable.

asadotzler:
A large proportion of code written a quarter century ago was also in part AI-generated. IntelliSense is AI, and it's been around since the '90s.
mdaniel:
I would argue IntelliSense is far closer to a select statement than "AI" anything. What could come after myString.s is effectively <<select method_name from all_methods where type_name = 'String' and method_name like '%s%'>>, where some IDEs prefer the prefix style <<like 's%'>> and others the <<contains>> style.
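
As a rough sketch of that idea (hypothetical Java, using reflection as a stand-in for an IDE's symbol index, not how any real IDE is implemented), the completion candidates are just a filtered listing of the known members of the receiver's type:

    import java.util.Arrays;

    public class CompletionSketch {
        public static void main(String[] args) {
            String fragment = "s"; // what the user has typed after "myString."

            // Enumerate the candidate methods the way an IDE's symbol index would,
            // then filter them like the select statement above.
            Arrays.stream(String.class.getMethods())
                  .map(java.lang.reflect.Method::getName)
                  .distinct()
                  .filter(name -> name.contains(fragment))      // the "contains" style
                  // .filter(name -> name.startsWith(fragment)) // the "like 's%'" prefix style
                  .sorted()
                  .forEach(System.out::println);
        }
    }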

IJ does some truly stellar introspection to offer sane defaults in the current completion context, such as offering only variables of the correct type for parameters, but I think of that as discipline and not AI. Plus, IJ never once made up an API that didn't exist.
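
A similarly hedged sketch of that type-aware filtering (again hypothetical Java, not anything IJ actually runs): offer only the in-scope variables whose static type fits the parameter slot being completed.

    import java.util.Map;

    public class ParameterSuggestionSketch {
        public static void main(String[] args) {
            // Hypothetical variables in scope at the completion point: name -> static type.
            Map<String, Class<?>> scope = Map.of(
                    "count", Integer.class,
                    "label", String.class,
                    "buffer", StringBuilder.class);

            // The parameter being completed expects a CharSequence.
            Class<?> expected = CharSequence.class;

            // Suggest only variables assignable to the expected type:
            // "label" and "buffer" qualify, "count" does not.
            scope.entrySet().stream()
                 .filter(e -> expected.isAssignableFrom(e.getValue()))
                 .map(Map.Entry::getKey)
                 .sorted()
                 .forEach(System.out::println);
        }
    }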