Amazing that Musk did it first. (Though the idea was suggested to him in an interview a month before release.)
These systems are very good at finding obscure references that were overlooked by mere mortals.
Is it though?
LLMs are great at answering questions based on information you make available to them, especially if you have the instincts and skill to spot when they are likely to make mistakes and to fact-check key details yourself.
That doesn't mean using them to build the knowledge base itself is a good idea! We need reliable, verified knowledge bases that LLMs can make use of.
https://en.wikipedia.org/wiki/Charlie_Kirk#Assassination
https://grokipedia.com/page/Charlie_Kirk : Assassination Details and Investigation
This is an active case that has not gone to trial, and the alleged text and Discord messages have not had their forensics cross-examined. Yet Grokipedia is already citing them as fact, not allegation. (What is considered the correct, neutral way to report on alleged facts in active cases?)