174 points Philpax | 1 comment
andrewstuart ◴[] No.43719877[source]
LLMs are basically a library that can talk.

That’s not artificial intelligence.

replies(3): >>43719994 #>>43720037 #>>43722517 #
futureshock ◴[] No.43720037[source]
There’s increasing evidence that LLMs are more than that. Work by Anthropic in particular has shown how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already-seen information.

A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental-math techniques that were discovered during training. For example, Claude uses a special trick for adding two-digit numbers ending in 6 and 9.
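
Concretely, the mechanism the report describes for sums like 36 + 59 is two parallel paths that converge: a fuzzy estimate of the overall magnitude, and an exact computation of the ones digit. Here's a toy Python sketch of that idea, not Claude's actual circuitry; simulating the fuzzy path with bounded ±4 noise is an assumption made purely for illustration:

    import random

    # Toy sketch of the two parallel "mental math" paths the Anthropic
    # report describes for sums like 36 + 59 -- NOT Claude's actual
    # circuitry. The fuzzy path is simulated with bounded noise (the
    # +/-4 accuracy is an assumption made for illustration).

    def fuzzy_magnitude(a: int, b: int) -> int:
        """Approximate path: a rough estimate of the sum, off by at most 4."""
        return a + b + random.randint(-4, 4)

    def exact_ones_digit(a: int, b: int) -> int:
        """Exact path: the ones digit of the sum, e.g. 6 + 9 -> 5."""
        return (a % 10 + b % 10) % 10

    def combine(estimate: int, digit: int) -> int:
        """The unique number with the right ones digit within 4 of the estimate."""
        return next(n for n in range(estimate - 4, estimate + 5) if n % 10 == digit)

    a, b = 36, 59
    print(combine(fuzzy_magnitude(a, b), exact_ones_digit(a, b)))  # 95

The point is that a window of nine consecutive integers contains each ones digit at most once, so an imprecise magnitude plus an exact last digit pins down the sum uniquely.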

Many more examples are in this recent research report, including evidence of planning ahead while writing rhyming poetry.

https://www.anthropic.com/research/tracing-thoughts-language...

replies(4): >>43720298 #>>43721540 #>>43722641 #>>43735729 #
1. ahamilton454 ◴[] No.43721540[source]
I don’t think that is the core of this paper. If anything, the paper shows that LLMs have no internal reasoning for math at all. The example it demonstrates is that the same internal features fire for totally unrelated numbers. They kind of just “vibe” their way to a solution.