114 points jdspiral | 1 comment
georgewsinger ◴[] No.43767392[source]
Is this really how SOTA LLMs parse our queries? To what extent is this a simplified representation of what they really "see"?
replies(2): >>43768037 #>>43768958 #
1. helloplanets ◴[] No.43768958[source]
This is partly misleading and partly an oversimplification, when it comes to SOTA LLMs.

Subject–Verb–Object triples, POS tagging, and dependency structures are not used by LLMs. One of the fundamental differences between modern LLMs and traditional NLP pipelines is that hand-crafted heuristics like these are never explicitly defined: the model receives only a sequence of subword token IDs and learns its own internal representations end-to-end.
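To make the contrast concrete, here is a minimal sketch (a toy greedy subword tokenizer, not any real model's vocabulary) of what an LLM's input pipeline actually produces: a flat list of integer token IDs, with no POS tags, dependency arcs, or SVO triples anywhere in sight.

```python
# Toy illustration: an LLM's input is just token IDs from a learned
# subword vocabulary. No linguistic structure is computed up front.
# The vocabulary below is invented for the example.

def toy_tokenize(text, vocab):
    """Greedy longest-match subword tokenization over a toy vocabulary."""
    ids = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                ids.append(vocab[text[i:j]])
                i = j
                break
        else:
            # No match: fall back to an unknown-token ID.
            ids.append(vocab["<unk>"])
            i += 1
    return ids

vocab = {"<unk>": 0, "The": 1, " cat": 2, " sat": 3,
         " on": 4, " the": 5, " mat": 6, ".": 7}
print(toy_tokenize("The cat sat on the mat.", vocab))  # -> [1, 2, 3, 4, 5, 6, 7]
```

Everything the model "knows" about subjects, verbs, and objects is whatever structure emerges inside the network from training on such ID sequences, not anything specified by the pipeline.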

And the assumption that these particular heuristics are the ones an LLM would converge on after training is also incorrect.