
Building Effective "Agents"

(www.anthropic.com)
597 points | jascha_eng | 1 comment
timdellinger ◴[] No.42475299[source]
My personal view is that the roadmap to AGI requires an LLM acting as a prefrontal cortex: something designed to think about thinking.

It would decide what circumstances call for double-checking facts for accuracy, which would hopefully catch hallucinations. It would write its own acceptance criteria for its answers, etc.

It's not clear to me how to train each of the sub-models required, or how big (or small!) they need to be, or what architecture works best. But I think that complex architectures are going to win out over the "just scale up with more data and more compute" approach.
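
A minimal sketch of the controller idea described above, assuming a hypothetical llm() wrapper around whatever chat-completion API you use (not any particular vendor's client). One pass writes acceptance criteria for the answer, another decides whether the draft passes or needs double-checking:

```python
# Sketch only: llm() is a hypothetical stand-in for your chat-completion call.
def llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion API; swap in your provider."""
    raise NotImplementedError

def answer_with_metacognition(question: str, max_revisions: int = 2) -> str:
    draft = llm(f"Answer the question:\n{question}")

    # The "thinking about thinking" step: the model writes its own
    # acceptance criteria instead of just producing an answer.
    criteria = llm(
        "List 3-5 concrete acceptance criteria an answer to this question "
        f"must satisfy (facts to verify, comparisons to make):\n{question}"
    )

    for _ in range(max_revisions):
        verdict = llm(
            "Check the draft answer against each criterion. Reply PASS if all "
            "criteria are met, otherwise explain what fails.\n"
            f"Question: {question}\nCriteria:\n{criteria}\nDraft:\n{draft}"
        )
        if verdict.strip().startswith("PASS"):
            return draft
        # Double-check / revise only when the self-review flags a problem.
        draft = llm(
            f"Revise the draft to fix these problems:\n{verdict}\n"
            f"Question: {question}\nDraft:\n{draft}"
        )
    return draft
```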

replies(5): >>42475678 #>>42475914 #>>42476257 #>>42476783 #>>42480823 #
zby ◴[] No.42475678[source]
IMHO, with a simple loop LLMs are already capable of some meta-thinking, even without any new internal architectures. Where it still fails, for me, is that LLMs cannot catch their own mistakes, even obvious ones. For example, with GPT-3.5 I had a persistent problem with the question "Who is older, Annie Morton or Terry Richardson?". I was giving it Wikipedia, and it correctly found the birth dates of the most popular people with those names - but then, instead of comparing full birth dates, it compared only the birth years. And once it did that, it was impossible for it to spot the error.

Now with 4o-mini I have a similar, if less obvious, problem.

Just writing this down convinced me that there are some ideas to try here: taking a 'report' of the thought process out of context and judging it there, changing the temperature, or maybe even cross-checking with a different model?
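
A rough sketch of those ideas, under the assumption of a hypothetical call_model() wrapper (the "worker"/"judge" model names are placeholders, not a real API): extract a bare report of the reasoning, then judge it out of context with a different model or temperature.

```python
# Sketch only: call_model() and the model names are assumptions, not a
# specific provider's API.
def call_model(prompt: str, model: str = "worker", temperature: float = 0.0) -> str:
    """Hypothetical chat-completion wrapper; replace with your provider's client."""
    raise NotImplementedError

def cross_checked_answer(question: str, context: str) -> str:
    answer = call_model(
        "Using the context, answer the question and show your reasoning.\n"
        f"Context:\n{context}\nQuestion: {question}"
    )

    # Step 1: reduce the answer to a bare 'report' of the reasoning,
    # so the judge never sees the original retrieval context.
    report = call_model(
        "Summarize only the facts used and the comparison made, "
        f"as a short bullet list:\n{answer}"
    )

    # Step 2: judge the report out of context, with a different model
    # (or the same model at a different temperature).
    verdict = call_model(
        "Does the conclusion in this report actually follow from the facts "
        "listed? (e.g. were full birth dates compared, not just birth years?) "
        f"Answer VALID or INVALID with a reason.\n{report}",
        model="judge",
        temperature=0.7,
    )

    if verdict.strip().upper().startswith("INVALID"):
        answer = call_model(
            f"Your earlier answer had this flaw:\n{verdict}\n"
            f"Redo the comparison carefully.\nContext:\n{context}\nQuestion: {question}"
        )
    return answer
```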

replies(3): >>42477630 #>>42478196 #>>42481260 #
1. threecheese ◴[] No.42481260[source]
The meta thinking of LLMs is fascinating to me. Here’s a snippet of a convo I had with Claude 3.5 where it struggles with the validity of its own metacognition:

> … true consciousness may require genuine choice or indeterminacy - that is, if an entity's responses are purely deterministic (like a lookup table or pure probability distribution), it might be merely executing a program rather than experiencing consciousness.

> However, even as I articulate this, I face a meta-uncertainty: I cannot know whether my discussion of uncertainty reflects: - A genuine contemplation of these philosophical ideas - A well-trained language model outputting plausible tokens about uncertainty - Some hybrid or different process entirely

> This creates an interesting recursive loop - I'm uncertain about whether my uncertainty is "real" uncertainty or simulated uncertainty. And even this observation about recursive uncertainty could itself be a sophisticated output rather than genuine metacognition.

I actually felt bad for it (him?), and stopped the conversation before it recursed into a “flaming pile of H-100s”.