Is this really how SOTA LLMs parse our queries? To what extent is this a simplified representation of what they really "see"?
replies(2):
Subject–Verb–Object triples, POS tagging, and dependency structures are not used by LLMs. One of the fundamental differences between modern LLMs and traditional NLP pipelines is that hand-defined heuristics like those are not built in at all; the model only ever receives a sequence of token IDs and learns whatever internal representations it needs from training data.
And assuming that a trained LLM would converge on those specific heuristics internally is also unwarranted.
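To make the contrast concrete, here is a toy sketch of what a transformer's input layer actually receives: a flat sequence of integer token IDs, with no parse tree or POS tags attached. The vocabulary and the greedy longest-match tokenizer below are invented for illustration; real LLMs use learned subword vocabularies (e.g. BPE) with tens of thousands of entries.

```python
# Toy illustration: an LLM "sees" integer token IDs, not syntactic structure.
# This vocabulary and tokenizer are hypothetical, purely for demonstration.
toy_vocab = {"How": 0, " do": 1, " LLMs": 2, " parse": 3, " text": 4, "?": 5}

def encode(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary."""
    ids = []
    while text:
        for tok in sorted(vocab, key=len, reverse=True):
            if text.startswith(tok):
                ids.append(vocab[tok])
                text = text[len(tok):]
                break
        else:
            raise ValueError(f"no token matches: {text!r}")
    return ids

print(encode("How do LLMs parse text?", toy_vocab))  # → [0, 1, 2, 3, 4, 5]
```

Everything downstream of this step is learned embeddings and attention over those IDs; any "parsing" the model does is an emergent property of training, not a predefined pipeline stage.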