
14 points johnwheeler | 3 comments

On Hacker News and Twitter, the consensus view is that no one is afraid. People concede that junior engineers and grad students might be the most affected, but they still seem to regard their own situations as sustainable. My question is: is this just wishful thinking and human nature, trying to combat the inevitable? I ask because I seriously don't see a future where there are lots of programmers anymore; I see mass unemployment for programmers. People are in denial, and all the claims that AI can't write code without making mistakes stop being valid the moment a model that writes flawless code is released, potentially overnight. Claude 4.5 is a good example of the trajectory. I just don't see any valid argument that the technology won't reach the point where it makes the job irrelevant, or not irrelevant exactly, but completely changes the economics.
1. charlie-83 No.46340652
None of the models currently are able to make competent changes to the codebases I work on. This isn't about them "making mistakes" which I have to fix. They completely fail to the point where I cannot use any of their output except in the simplest of cases (even then it's faster to code it myself).

So no, I'm not worried.

replies(1): >>46340802
2. johnwheeler No.46340802
I'm not trying to be snarky at all, but maybe you're less experienced at prompting than I am, or you're working on some really gnarly code. Is that a possibility? Yours is the one argument I can't speak to from my own experience.

With the codebases I work on, I can delegate more and more to AI as time goes on; there's no doubt about that. Admittedly, they're not big, unwieldy codebases with lots of technical debt, but maybe those get quickly replaced over time.

I just don't see the argument that AI will always make mistakes holding up over time. I would love to be proven wrong, though, with a counterargument that jibes with what I'm seeing.

replies(1): >>46341101
3. charlie-83 No.46341101
I have tried plenty of times to get it to work. My colleagues have done the same. It just doesn't work for the kind of coding we are doing, I guess.

Maybe I am terrible at prompting. But I use AI all the time in the form of chat (rather than having it code directly) and find it very useful for that. I have also used code generation in other contexts (building websites and apps) and been able to produce lots of great stuff quickly. I've compared my style of prompting to that of people who claim it writes all their code for them, and I don't see much difference. So while it's possible my prompts aren't perfect in my scenarios at work, it doesn't seem likely that they are so bad that improving them would take the output from literally useless to taking my job.

To give some context, the core of my job that I am referring to here is basically taking documentation about Bluetooth and other wireless protocols and implementing code that parses all of that and shows it in a GUI that runs on desktop and Android.
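
To make that concrete, the "easy" end of the work looks roughly like the sketch below: a BLE advertising payload is a sequence of length/type/value structures, and a parser walks them one at a time. This is just a toy illustration for context, not code from my actual codebase, and the names are made up.

    #include <cstdint>
    #include <vector>

    // Toy sketch: BLE advertising data is a series of [length][AD type][data]
    // structures, where the length byte covers the AD type plus the data.
    // Names here are illustrative, not taken from any real codebase.
    struct AdStructure {
        uint8_t type;
        std::vector<uint8_t> data;
    };

    std::vector<AdStructure> parseAdvertisingData(const std::vector<uint8_t>& payload) {
        std::vector<AdStructure> out;
        size_t i = 0;
        while (i < payload.size()) {
            uint8_t len = payload[i];
            if (len == 0 || i + 1 + len > payload.size()) break;  // padding or truncated
            out.push_back({payload[i + 1],
                           {payload.begin() + i + 2, payload.begin() + i + 1 + len}});
            i += 1 + len;
        }
        return out;
    }

Writing something like that isn't the hard part; the hard part is working out from the spec what the types and fields even mean.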

There are a lot of immediate barriers for gen AI. Half of my debugging involves stuff like repeatedly pairing Bluetooth speakers to my phone or physically plugging things in, and AI just can't do that.

Second, the documentation about how these protocols work is gnarly. Giving AI the raw PDFs and asking basic questions about them yields very poor results; I assume there aren't enough of these kinds of documents in the training data. A lot of the information is also not text-based but contained in diagrams, which I don't think the AI parses at all. And this all assumes there is an actual document to work with: for the latest Bluetooth features, what you actually have is a bunch of Word documents with people arguing, half in the tracked changes and the other half in the email chain.

Maybe I could take all that information and condense it into a form the AI can parse? Not really. The information is already very dense and specific, and I don't see how I could explain it to an AI in a way that would be any less ambiguous than just writing the code myself. That also assumes I actually understand the specification, which I never do until I've written the code.

Maybe I could just pick one specific little feature I need and get AI to do it? That works, but the feature was probably only 5 lines of code anyway, so I spent more time writing the prompt than I would have writing the code. It was the 2 hours of reading the spec beforehand that would actually have been useful to automate. Maybe AI could have written all the code in my 1k-line PR, but when the PR took 4 months and involved me literally flying to another country to test against other hardware, writing the code is not the bottleneck.

Maybe the AI models will get better and be able to do all this. But that isn't just a case of AI models continuing to get the kinds of incremental improvements we have been seeing; they would need a leap forward to something people might call AGI to be able to do all this. Maybe that will happen tomorrow, but it seems just as likely to happen in 5 years or 5 decades, and I don't see anyone right now with an idea of how to get there.

And if I'm wrong and AI can do all this by next year? I'll just switch to writing FPGA code, which my company desperately needs more people to do and which AI is another order of magnitude more useless at.