Nice.
But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that would help it follow logic, avoid contradictions, and distinguish a well-established fact from a shaky conjecture. Then we might start trusting the AI.
To me this is the most bizarre part. Have we ever had a technology deployed at this scale without a true understanding of its inner workings?
My fear is that the general public's perception of AI will be damaged, since for most people LLMs = AI.
The idea that we don't is tabloid journalism. It stems from the fact that the output is (usually) randomised, which is taken to mean, by those who lack the technical chops, that programmers "don't know how it works" because the output is non-deterministic.
This is notwithstanding that we absolutely can make the output repeatable by turning the randomisation off (temperature 0).
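To make that concrete, here is a minimal sketch of temperature sampling; the function name and toy logits are illustrative, not any particular library's API. At temperature 0 the sampler degenerates to argmax (greedy decoding), so given the same logits it returns the same token every time:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token id from raw logits (hypothetical helper).

    temperature > 0: scale logits, then sample from the softmax
    distribution (non-deterministic).
    temperature == 0: take the argmax (greedy decoding), so
    repeated runs produce identical output.
    """
    if temperature == 0:
        # Greedy decoding: no randomness, fully repeatable.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Scale logits by temperature, then softmax
    # (subtract the max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sample one token id according to the distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
print(sample_next_token(logits, temperature=1.0))  # varies run to run
print(sample_next_token(logits, temperature=0))    # always index 0
```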