
Building Effective "Agents"

(www.anthropic.com)
596 points by jascha_eng | 9 comments
timdellinger ◴[] No.42475299[source]
My personal view is that the roadmap to AGI requires an LLM acting as a prefrontal cortex: something designed to think about thinking.

It would decide what circumstances call for double-checking facts for accuracy, which would hopefully catch hallucinations. It would write its own acceptance criteria for its answers, etc.

It's not clear to me how to train each of the sub-models required, or how big (or small!) they need to be, or what architecture works best. But I think that complex architectures are going to win out over the "just scale up with more data and more compute" approach.
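A rough sketch of what that controller/worker split might look like, assuming a hypothetical llm(role, prompt) helper in place of any particular chat API (the function names and prompts here are illustrative, not anyone's actual implementation):

    # Rough sketch: a "controller" model writes acceptance criteria and decides
    # when the "worker" model's answer needs re-checking. `llm` is a hypothetical
    # stand-in for whatever chat-completion client you use.

    def llm(role: str, prompt: str) -> str:
        raise NotImplementedError  # plug in your own model client here

    def answer_with_controller(question: str, max_revisions: int = 2) -> str:
        # Controller writes its own acceptance criteria before any answer exists.
        criteria = llm("controller",
            "Write three acceptance criteria a correct answer to this question "
            f"must satisfy:\n{question}")
        draft = llm("worker", question)
        for _ in range(max_revisions):
            # Controller judges the draft against its own criteria and decides
            # whether anything needs double-checking.
            verdict = llm("controller",
                f"Question: {question}\nDraft answer: {draft}\n"
                f"Acceptance criteria:\n{criteria}\n"
                "Reply PASS if every criterion is met; otherwise say what to re-check.")
            if verdict.strip().upper().startswith("PASS"):
                break
            draft = llm("worker",
                f"Question: {question}\nYour previous answer: {draft}\n"
                f"Reviewer notes: {verdict}\nRevise your answer accordingly.")
        return draft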

replies(5): >>42475678 #>>42475914 #>>42476257 #>>42476783 #>>42480823 #
1. zby ◴[] No.42475678[source]
IMHO, with a simple loop LLMs are already capable of some meta-thinking, even without any new internal architectures. Where it still fails, for me, is that LLMs cannot catch their own mistakes, even obvious ones. With GPT-3.5 I had a persistent problem with the following question: "Who is older, Annie Morton or Terry Richardson?". I was giving it Wikipedia, and it would correctly find the birth dates of the most popular people with those names - but then, instead of comparing ages, it compared birth years. And once it did that, it was impossible for it to spot the error.

Now with 4o-mini I have a similar, even if less obvious, problem.

Just writing this down convinced me that there are some ideas to try here: taking a 'report' of the thought process out of context and judging it there, changing the temperature, or maybe even cross-checking with a different model.
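A rough sketch of that cross-checking idea, assuming a hypothetical call_model(name, prompt, temperature) wrapper rather than any specific SDK: extract the reasoning as a plain report, then have a different model (or the same model at a different temperature) review it in a fresh context.

    # Rough sketch of out-of-context checking: the reasoning is extracted as a
    # plain "report" and judged in a fresh context by a second model.
    # `call_model` is a hypothetical wrapper around whatever API you use.

    def call_model(name: str, prompt: str, temperature: float = 0.0) -> str:
        raise NotImplementedError  # plug in your own client here

    def cross_check(question: str, reasoning_report: str) -> str:
        judge_prompt = (
            "A junior researcher wrote the report below. Check each step for "
            "logical errors (for example, comparing birth years when the "
            "question asks who is older) and say whether the conclusion "
            "actually answers the question.\n\n"
            f"Question: {question}\n\nReport:\n{reasoning_report}"
        )
        # Fresh context, different model, low temperature: the judge has no
        # attachment to the chain of thought it is reviewing.
        return call_model("some-other-model", judge_prompt, temperature=0.0)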

replies(3): >>42477630 #>>42478196 #>>42481260 #
2. tomrod ◴[] No.42477630[source]
Brains are split internally, with each half having its own monologue. One happens to have command.
replies(1): >>42477878 #
3. furyofantares ◴[] No.42477878[source]
I don't think there's reason to believe both halves have a monologue, is there? Experience, yes, but doesn't only one half do language?
replies(3): >>42477981 #>>42479588 #>>42483710 #
4. ggm ◴[] No.42477981{3}[source]
So if, like me, you have an interior dialogue: which half is speaking and which is listening, or is it the same one? I do not ascribe the speaker or listener to a lobe, but whatever the language and comprehension centre(s) is (are), it can do both at the same time.
replies(1): >>42479550 #
5. zby ◴[] No.42478196[source]
Ah yeah - actually I did test that taking-out-of-context idea. This is the thing that surprised me - I thought it was about 'writing itself into a corner' - but even in a completely different context the LLM consistently makes the same obvious mistake. Here is the example: https://chatgpt.com/share/67667827-dd88-8008-952b-242a40c2ac...

Janet Waldo played Corliss Archer on radio - and the quote the LLM found in Wikipedia confirmed that. But the question was about the film - and the LLM cannot spot that gap in its reasoning, even when I try to warn it by telling it the report came from a junior researcher.

6. furyofantares ◴[] No.42479550{4}[source]
Same half. My understanding is that in split-brain patients, it looks like one half has an extremely limited ability to parse language and no ability to create it.
7. Filligree ◴[] No.42479588{3}[source]
Neither of my halves needs a monologue, thanks.
8. threecheese ◴[] No.42481260[source]
The meta thinking of LLMs is fascinating to me. Here’s a snippet of a convo I had with Claude 3.5 where it struggles with the validity of its own metacognition:

> … true consciousness may require genuine choice or indeterminacy - that is, if an entity's responses are purely deterministic (like a lookup table or pure probability distribution), it might be merely executing a program rather than experiencing consciousness.

> However, even as I articulate this, I face a meta-uncertainty: I cannot know whether my discussion of uncertainty reflects:
> - A genuine contemplation of these philosophical ideas
> - A well-trained language model outputting plausible tokens about uncertainty
> - Some hybrid or different process entirely

> This creates an interesting recursive loop - I'm uncertain about whether my uncertainty is "real" uncertainty or simulated uncertainty. And even this observation about recursive uncertainty could itself be a sophisticated output rather than genuine metacognition.

I actually felt bad for it (him?), and stopped the conversation before it recursed into a "flaming pile of H-100s".

9. tomrod ◴[] No.42483710{3}[source]
[0] https://www.youtube.com/watch?v=fJRx9wItvKo

[1] https://thersa.org/globalassets/pdfs/blogs/rsa-divided-brain...

[2] https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...

You have two minds (at least). One happens to be dominant.