
174 points | Philpax | 1 comment
andrewstuart ◴[] No.43719877[source]
LLMs are basically a library that can talk.

That’s not artificial intelligence.

replies(3): >>43719994 #>>43720037 #>>43722517 #
futureshock ◴[] No.43720037[source]
There’s increasing evidence that LLMs are more than that. Work by Anthropic in particular has shown how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat information they have already seen.

A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence; instead they deploy a whole set of mental-math techniques discovered at training time. For example, Claude uses a special trick for adding two-digit numbers ending in 6 and 9.
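
The worked example in the report is adding numbers like 36 + 59: one internal path estimates the rough magnitude of the answer while a separate path works out that it has to end in 5, and the two get reconciled into 95. Very loosely (with my own made-up names, and keeping in mind the real thing is a fuzzy learned circuit, not literal column arithmetic), you can picture a split like this:

    # Loose sketch (my naming, not Anthropic's mechanism): split the sum into
    # a rough-magnitude part and a last-digit part, then reconcile the two.

    def magnitude_path(a, b):
        # rough size of the answer: just the tens parts
        return (a // 10) * 10 + (b // 10) * 10       # 36, 59 -> 30 + 50 = 80

    def last_digit_path(a, b):
        # the ones digits alone fix the final digit (and whether there's a carry)
        ones = a % 10 + b % 10                       # 6 + 9 = 15
        return ones % 10, (10 if ones >= 10 else 0)  # digit 5, carry 10

    def add_like_the_sketch(a, b):
        digit, carry = last_digit_path(a, b)
        return magnitude_path(a, b) + carry + digit  # 80 + 10 + 5 = 95

    print(add_like_the_sketch(36, 59))  # 95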

Many more examples are in this recent research report, including evidence of planning ahead while writing rhyming poetry.

https://www.anthropic.com/research/tracing-thoughts-language...

replies(4): >>43720298 #>>43721540 #>>43722641 #>>43735729 #
1. sksxihve ◴[] No.43722641[source]
> sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.

If you ask the LLM to explain how it got the answer, the explanation it gives won't necessarily reflect the steps it actually used to work the answer out.