
Building Effective "Agents"

(www.anthropic.com)
597 points by jascha_eng | 5 comments
timdellinger No.42475299
My personal view is that the roadmap to AGI requires an LLM acting as a prefrontal cortex: something designed to think about thinking.

It would decide what circumstances call for double-checking facts for accuracy, which would hopefully catch hallucinations. It would write its own acceptance criteria for its answers, etc.

It's not clear to me how to train each of the sub-models required, or how big (or small!) they need to be, or what architecture works best. But I think that complex architectures are going to win out over the "just scale up with more data and more compute" approach.
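
Roughly the control flow I mean, sketched in Python. call_llm is a stand-in for whatever model (or separate sub-model) handles each step; the prompts and retry budget are placeholders, not anything from the article:

    def call_llm(prompt: str) -> str:
        """Stand-in for a real model call (API client, local model, ...)."""
        raise NotImplementedError

    def answer_with_oversight(question: str, max_retries: int = 2) -> str:
        # The "prefrontal" pass writes acceptance criteria before answering.
        criteria = call_llm(
            "List three acceptance criteria a correct answer to this "
            f"question must meet:\n{question}"
        )
        answer = call_llm(question)
        for _ in range(max_retries):
            verdict = call_llm(
                f"Question: {question}\nAnswer: {answer}\n"
                f"Criteria:\n{criteria}\n"
                "Does the answer meet every criterion? "
                "Reply PASS, or FAIL plus the failing criterion."
            )
            if verdict.strip().upper().startswith("PASS"):
                break
            # Feed the critique back in and revise.
            answer = call_llm(
                f"{question}\nYour previous answer failed review: {verdict}\n"
                "Write a corrected answer."
            )
        return answer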

replies(5): >>42475678 #>>42475914 #>>42476257 #>>42476783 #>>42480823 #
zby No.42475678
IMHO with a simple loop LLMs are already capable of some meta-thinking, even without any new internal architectures. Where they still fail, for me, is that LLMs cannot catch their own mistakes, even obvious ones. With GPT-3.5 I had a persistent problem with the question "Who is older, Annie Morton or Terry Richardson?". I was giving it Wikipedia, and it would correctly find the birth dates of the most popular people with those names - but then, instead of comparing ages, it compared birth years. And once it did that, it was impossible for it to spot the error.
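
The fix is mechanical once you compare full birth dates - the earlier date is the older person, and a bigger birth year means younger, not older. A minimal version (the dates here are from memory, so treat them as illustrative and check Wikipedia):

    from datetime import date

    # Illustrative values - verify against Wikipedia before trusting them.
    birthdays = {
        "Annie Morton": date(1970, 10, 8),
        "Terry Richardson": date(1965, 8, 14),
    }

    # Earliest birth date = oldest person. Comparing bare years as numbers
    # invites exactly the inversion the model made (bigger year != older).
    oldest = min(birthdays, key=birthdays.get)
    print(oldest, "is older")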

Now with 4o-mini I have a similar, though less obvious, problem.

Just writing this down convinced me that there are some ideas to try here: taking a 'report' of the thought process out of context and judging it there, changing the temperature, or maybe even cross-checking with a different model.
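
For the cross-check, something like this: pull the reasoning out of the conversation and hand it to a second model as a bare report. call_model is a placeholder, and "some-other-model" just means any model other than the one that produced the answer:

    def call_model(model: str, prompt: str) -> str:
        """Stand-in for a call to a named model; swap in a real client."""
        raise NotImplementedError

    def cross_check(question: str, reasoning: str, answer: str) -> bool:
        # The judge sees only this bare report, none of the original chat,
        # so it has no stake in defending the earlier mistake.
        report = (
            f"Task: {question}\n"
            f"Reasoning: {reasoning}\n"
            f"Conclusion: {answer}\n"
            "Is the conclusion actually supported by the reasoning? "
            "Answer YES or NO, then explain."
        )
        verdict = call_model("some-other-model", report)
        return verdict.strip().upper().startswith("YES")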

replies(3): >>42477630 #>>42478196 #>>42481260 #
tomrod No.42477630
Brains are split internally, with each half having its own monologue. One happens to have command.
replies(1): >>42477878 #
1. furyofantares No.42477878
I don't think there's reason to believe both halves have a monologue, is there? Experience, yes, but doesn't only one half do language?
replies(3): >>42477981 #>>42479588 #>>42483710 #
2. ggm No.42477981
So if, like me, you have an interior dialogue: which half is speaking and which is listening, or is it the same one? I do not ascribe the speaker or listener to a lobe, but whatever the language and comprehension centre(s) is (are), it can do both at the same time.
replies(1): >>42479550 #
3. furyofantares No.42479550
Same half. My understanding is that in split-brain patients, one half has an extremely limited ability to parse language and no ability to produce it.
4. Filligree No.42479588
Neither of my halves need a monologue, thanks.
5. tomrod No.42483710
[0] https://www.youtube.com/watch?v=fJRx9wItvKo

[1] https://thersa.org/globalassets/pdfs/blogs/rsa-divided-brain...

[2] https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...

You have two minds (at least). One happens to be dominant.