
14 points johnwheeler | 2 comments

On Hacker News and Twitter, the consensus view is that no one is afraid. People concede that junior engineers and grad students might be the most affected, but they still treat their own situations as sustainable. My question is: is this just wishful thinking and human nature, trying to combat the inevitable? I ask because I seriously don't see a future with a large number of programmers anymore. I see mass unemployment for programmers. People are in denial, and all of the claims that AI can't write code without making mistakes stop being valid the moment a model is released, potentially overnight, that writes flawless code. Claude 4.5 is a good example of the trajectory. I just don't see any valid argument that the technology won't get to a point where it makes the job irrelevant, or if not irrelevant, then with completely changed economics.
uberman ◴[] No.46339809[source]
I use Claude 4.5 almost every day. It makes mistakes every day. The worst mistakes are the ones that are not obvious; only by careful review do you see the flaws. At the moment, even the best AI can't be relied on even for a modest refactoring. What AI does right now is make senior developers worth more and junior developers worth less. I am not at all worried about my own job.
replies(1): >>46339949 #
johnwheeler ◴[] No.46339949[source]
Thank you for your response. This is exactly the type of commentary I'm talking about. The key phrase is "at the moment." It's not that developers will be replaced, but that there will be far less need for them, I think.

I think the flaws are going to be solved, and if that happens, what do you think? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.

I believe this is denial. The claim that the best AI can't reliably do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? The naysayers say, "Well, the scaling laws don't apply," but a lot of people think they do.

replies(2): >>46340093 #>>46341497 #
ThrowawayR2 ◴[] No.46340093[source]
If anybody who disagrees with your assessment is "in denial (sic)", why should people bother responding to your question seriously?
replies(1): >>46340450 #
johnwheeler ◴[] No.46340450[source]
It's not about people disagreeing with my assessment. It's that people keep saying, "I'm not afraid of AI because it makes mistakes." That's the main argument I've heard. I don't know if those people are ignorant, arrogant, or in denial. Or maybe they're right. I don't know. But I don't think they are. Human nature leads me to believe they're in denial, or ignorant. There's not necessarily any shame in either. They don't know or see what I see.

I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people, but I doubt that.

The two things I hear are:

1. You'll always need a human in the loop

2. AI isn't any good at writing code

The first one sounds more plausible, but it means fewer programmers over time.

replies(1): >>46341557 #
uberman ◴[] No.46341557[source]
Claude regularly loses its mind when refactoring or generating code. I'm talking about failures to the point where files are unrecoverable except by falling back to the head of main. I see this even with Opus 4.5 every day. I can't imagine anyone "not writing code anymore" still being able to pass a code review if they are just committing what Claude vibe coded. If you feel good enough about the code Claude wrote for you that you're going to commit it with your name on it, power to you. But if you worked for me, the result was a failed deployment, and you could not justify it beyond "I committed what Claude wrote," I would simply fire you.
replies(1): >>46342846 #
johnwheeler ◴[] No.46342846[source]
I've been doing this for 25 years, though, so maybe that's why. But the bigger point is that, again, you're not giving me anything more than "it makes mistakes." Sure it does, but it makes fewer of them now, and it will make fewer in the future. Also, the Anthropic guys are in the same boat: they don't write _much_ code anymore. They just use Claude Code. So I'm not the only one.
replies(1): >>46346542 #
uberman ◴[] No.46346542[source]
I've been doing this for more than 35 years. Honestly, why make this a superiority contest?

My perspective is that Claude (the best at the moment, in my opinion) can't make unsupervised changes. If you want to allow it to, power to you. While I love what Anthropic have done, of course they're going to say they dogfood it and it's great.

The part that I find personally important is that senior people can be relied on to ask the right questions, give the right prompts, and spot confident AI bullshit. As such, AI amplifies experience; it does not replace it. I don't see that changing.