I believe that is so far off the mark for a couple of reasons:
1) It's possible to work around hallucinations in a more cost-effective way than relying on humans to always be correct.
2) There are many use cases where hallucinations aren't such a bad thing (or are even a good thing), and we've never really had a system as powerful as LLMs to build for them.
There are absolutely very large use cases for LLMs, and they will be pretty disruptive. But they will also create net new value that wasn't possible before.
I say that as someone who thinks we have enough technology as it is and don't need any more.
I kind of like the Chipotle approach: if I have a problem with my order, it just refunds me instantly and sometimes gives me an add-on for free.
Honestly, I only use LLMs for one thing: I give one a set of TS definitions and some user input, ask it to fit the input to those schemas if it can, and tell it not to force a match if it isn't 100% confident.
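For what it's worth, here's a minimal sketch of what that pattern can look like. The Invoice/Complaint types, the callLLM helper, and the prompt wording are all hypothetical stand-ins for whatever schemas and completion API you actually use; the point is the explicit "no_match" escape hatch so the model never has to force a fit.

    // Hypothetical schemas; swap in your own TS definitions.
    type Invoice = { kind: "invoice"; vendor: string; amountCents: number };
    type Complaint = { kind: "complaint"; product: string; summary: string };
    type Extraction = Invoice | Complaint | { kind: "no_match" };

    // The same definitions as text, so they can be pasted into the prompt.
    const TS_DEFS = `
    type Invoice = { kind: "invoice"; vendor: string; amountCents: number };
    type Complaint = { kind: "complaint"; product: string; summary: string };
    `;

    async function fitSchema(
      userInput: string,
      // Stand-in for whatever LLM client you use: prompt in, raw text out.
      callLLM: (prompt: string) => Promise<string>,
    ): Promise<Extraction> {
      const prompt = [
        "Fit the user input to exactly one of these TypeScript types.",
        "Reply with JSON only. If you are not fully confident, reply with",
        `{"kind": "no_match"} instead of forcing a fit.`,
        TS_DEFS,
        `Input: ${userInput}`,
      ].join("\n");

      try {
        return JSON.parse(await callLLM(prompt)) as Extraction;
      } catch {
        // Unparseable output is treated the same as low confidence.
        return { kind: "no_match" };
      }
    }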
I know some people whose whole company is built around using AI to send emails and messages, and in reality they're logged into their terminals in real time, fixing errors before the emails actually go out. They're basically mechanical Turks, and they even say they're looking at labor in India or Africa they can pay peanuts to handle these fixes.