
178 points themgt | 1 comment
sunir ◴[] No.45767774[source]
Even if its introspection within a single inference step is limited, an agent that loops over a core set of documents it treats as its identity can observe changes in its output and analyze those changes to deduce facts about its internal state.
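
A toy sketch of that loop, for concreteness (hypothetical names throughout; generate() stands in for whatever model call the agent actually makes, and the identity documents are just strings):

    import difflib

    def generate(identity_docs, history):
        # Stand-in for a model call; hypothetical, not a real API.
        return (f"I am described by {len(identity_docs)} documents "
                f"and {len(history)} prior artifacts.")

    def introspection_loop(identity_docs, rounds=3):
        history, previous = [], None
        for _ in range(rounds):
            current = generate(identity_docs, history)
            if previous is not None:
                # The diff between successive outputs is the agent's only
                # evidence about its own state: the deduction happens over
                # artifacts, not inside the model.
                drift = difflib.unified_diff(previous.splitlines(),
                                             current.splitlines(),
                                             lineterm="")
                history.append("observed change:\n" + "\n".join(drift))
            history.append(current)
            previous = current
        return history

    print(*introspection_loop(["SOUL.md", "GOALS.md"]), sep="\n---\n")

Nothing in the loop peers inside generate(); everything the agent "learns about itself" is read back off the accumulating history.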

You may have experienced this when an LLM gets hopelessly confused and you ask it what happened. The LLM reads the chat transcript and gives an answer as consistent with the text as it can.

The model isn’t the active part of the mind. The artifacts are.

This is the same as Searle's Chinese room. The intelligence isn't in the clerk but in the book. However, the thinking is on the paper.

The Turing machine equivalent: the state table (book, model), the read/write/move head (clerk, inference), and the tape (paper, artifact).
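
The mapping is easy to make literal. A minimal sketch (the table and example are made up for illustration): the fixed TABLE plays the model's role, step() plays inference, and the tape holds every trace of the computation, like the chat transcript.

    # (state, symbol) -> (write, move, next_state). Never mutated,
    # just as the model's weights never change during inference.
    TABLE = {
        ("inc", 1): (0, -1, "inc"),   # carry: flip 1 -> 0, keep moving left
        ("inc", 0): (1, -1, "halt"),  # absorb carry: flip 0 -> 1, done
    }

    def run(tape, head, state):
        while state != "halt":
            write, move, state = TABLE[(state, tape[head])]
            tape[head] = write  # all mutable state lives on the tape
            head += move
        return tape

    # Increment binary 011 (3) -> 100 (4); only the tape changed.
    print(run([0, 1, 1], head=2, state="inc"))

Ask this machine to "introspect" and all it can do is read the tape back, which is exactly the sense in which the transcript, not the table, carries the thinking.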

Thus it isn't mystical that AIs can introspect. In my estimation it's routine and frequently observed.

replies(2): >>45770660 #>>45793812 #
1. conscion ◴[] No.45793812[source]
> This is the same as Searle's Chinese room. The intelligence isn't in the clerk but in the book. However, the thinking is on the paper.

This feels like a misrepresentation of the "Chinese Room" thought experiment: the "thinking" is in neither the clerk nor the book; it's in the entire room, taken as a system.