> I'm assuming we will solve the hallucination problem

It's unclear what this would even mean, since "hallucination" carries a surprising number of different definitions, and commentators are rarely precise about what they mean by it.
But, color me skeptical. We will never solve the problem of a token prediction engine being able to generate a sequence of tokens that the vast majority of humans would interpret as not corresponding to a true statement. Perhaps in very particular and constrained domains we can build systems that, through a variety of mechanisms, provide trustworthy automation despite the ever-present risk of hallucination. Machine-checked mathematical proofs are an obvious case: the model can hallucinate freely because the overall system can gatekeep truth. Doing this in any other domain will, of course, be more difficult.
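To make the gatekeeping idea concrete, here is a minimal sketch of a generate-and-verify loop. It assumes a hypothetical `draft_proof` callable wrapping some LLM, and a `lean` executable on the PATH that type-checks a file and exits nonzero when the proof doesn't go through; neither is prescribed by anything above, they're just stand-ins.

```python
import subprocess
import tempfile
from typing import Callable, Optional


def passes_checker(candidate: str) -> bool:
    """Run an external proof checker over the model's output.

    Assumes a `lean` executable that type-checks a .lean file and
    exits nonzero if the proof does not go through.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
        f.write(candidate)
        path = f.name
    return subprocess.run(["lean", path], capture_output=True).returncode == 0


def prove(statement: str,
          draft_proof: Callable[[str], str],
          attempts: int = 5) -> Optional[str]:
    """Generate-and-verify loop: the model may hallucinate freely, but
    only output the checker accepts ever leaves the system."""
    for _ in range(attempts):
        candidate = draft_proof(statement)  # hypothetical LLM call
        if passes_checker(candidate):
            return candidate
    return None  # fail closed: no verified proof, no answer
```

The point is that nothing the model says is trusted on its own: the checker, not the model, decides what counts as true, and the system fails closed when verification never succeeds.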
In other words: we may be able to mitigate and systematically manage the risk for particular types of tasks, but the problem of generating untrue statements is fundamental to the technology and will always require effort to manage and mitigate. In that sense, the whole conversation around hallucination is reminiscent of the frame problem.