I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
I believe that is far off the mark for a couple of reasons:
1) It's possible to work around hallucinations in a more cost-effective way than relying on humans to always be correct.
2) There are many use cases where hallucinations aren't such a bad thing (or are even a good thing), and we've never had a system as powerful as LLMs to build for them.
There are absolutely very large use cases for LLMs, and they will be pretty disruptive. But they will also create net new value that wasn't possible before.
I say that as someone who thinks we have enough technology as it is and don't need any more.