This seems like a very flawed assumption to me. My take is that people look at hallucinations and say "wow, if it can't even get the easiest things consistently right, no way am I going to trust it with harder things".
These code AIs are just going to get better and better. Fixing this "tsunami of bad code" will consist of passing it through better AIs that will easily fix most of the problems. I can't help but feel like this will be mostly a non-problem in the end.
At this point in time there's no obvious path to that reality, it's just unfounded optimism and I don't think it's particularly healthy. What happens 5, 10, or 20 years down the line when this magical solution doesn't arrive?
You can claim that continued progression is speculative, and some aspects are, but it's hardly "an article of faith", unlike "we've suddenly hit a surprising wall we can't surmount".
Except that's not how it's actually gone. It's more like this: improvements happen in erratic jumps as new methods are discovered, then improvements slow or stall out once the limits of those methods are reached.
https://hai.stanford.edu/news/ais-ostensible-emergent-abilit...
And really, there was a version of what I'm talking about in the shorter timespan with LLMs - OpenAI's GPT models existed for several years before someone got the idea to put one behind a chat interface, and the popularity / apparent capability exploded a few years ago.
That's exactly what I said in the post you responded to: there weren't erratic jumps; there was steady progress over decades.
* Granted, we don't know for sure it'll be short this time, but the hints are that we're starting to hit that wall, with improvements slowing down.