Meaning Machine – Visualize how LLMs break down and simulate meaning
(meaning-machine.streamlit.app)
114 points | jdspiral | 2 comments | 22 Apr 25 22:55 UTC
georgewsinger | 22 Apr 25 23:53 UTC | No. 43767392
>>43767058 (OP)
Is this really how SOTA LLMs parse our queries? To what extent is this a simplified representation of what they really "see"?
replies(2): >>43768037 >>43768958
jdspiral | 23 Apr 25 02:04 UTC | No. 43768037
>>43767392
Yes, tokenization and embeddings are exactly how LLMs process input—they break text into tokens and map them to vectors. POS tags and SVOs aren't part of the model pipeline but help visualize structures the models learn implicitly.
replies(1): >>43769334
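
A minimal sketch of the distinction jdspiral draws, assuming the Hugging Face transformers and spaCy libraries rather than the Meaning Machine app's actual code: the tokenizer and embedding lookup are the real first steps of the model pipeline, while the POS/dependency pass is a separate tool used only to visualize structure.

    # Minimal sketch (not the Meaning Machine app's code).
    # Assumes: transformers with the GPT-2 checkpoint, spaCy with en_core_web_sm.
    import spacy
    import torch
    from transformers import AutoModel, AutoTokenizer

    text = "The cat chased the mouse."

    # 1) What the model pipeline actually does: split text into subword tokens
    #    and map each token id to an embedding vector.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    print(tokenizer.convert_ids_to_tokens(ids[0].tolist()))
    # e.g. ['The', 'Ġcat', 'Ġchased', 'Ġthe', 'Ġmouse', '.']

    with torch.no_grad():
        vectors = model.get_input_embeddings()(ids)
    print(vectors.shape)  # (1, number_of_tokens, 768) for GPT-2

    # 2) Not part of the model pipeline: POS tags and dependency arcs, from which
    #    subject-verb-object triples can be read off (nsubj / ROOT verb / dobj).
    #    Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    print([(tok.text, tok.pos_, tok.dep_, tok.head.text) for tok in doc])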
ID: GO | 23 Apr 25 06:50 UTC | No. 43769334
>>43768037 (TP)