625 points lukebennett | 18 comments
1. nerdypirate ◴[] No.42139075[source]
"We will have better and better models," wrote OpenAI CEO Sam Altman in a recent Reddit AMA. "But I think the thing that will feel like the next giant breakthrough will be agents."

Is this certain? Are Agents the right direction to AGI?

replies(7): >>42139134 #>>42139151 #>>42139155 #>>42139574 #>>42139637 #>>42139896 #>>42144173 #
2. nprateem ◴[] No.42139134[source]
They have nothing to do with AGI. They're there to get people using their LLMs more.
3. xanderlewis ◴[] No.42139151[source]
If by agents you mean systems composed of individual (perhaps LLM-powered) agents interacting with each other, probably not. I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.
replies(2): >>42139320 #>>42139568 #
4. SirMaster ◴[] No.42139155[source]
All I can think of when I hear Agents is the Matrix lol.

Goodbye, Mr. Anderson...

5. j_maffe ◴[] No.42139320[source]
> I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.

Emergence happens when many elements interact in a system. Brains are literally a bunch of neurons in a complex network. Also, research is already showing promising results for the performance of agent systems.
replies(2): >>42139456 #>>42141876 #
6. tartoran ◴[] No.42139456{3}[source]
That's wishful thinking at best. Throw it all in a bucket and it will get infected with being and life.
replies(1): >>42141177 #
7. falcor84 ◴[] No.42139568[source]
> It’s like chaining up perceptrons hoping to get more expressive power for free.

Isn't that literally the cause of the success of deep learning? It's not quite "free", but as I understand it, the big breakthrough of AlexNet (and much of what came after) was that running a larger CNN on a larger dataset allowed the model to be so much more effective without any big changes in architecture.

replies(1): >>42139912 #
8. esafak ◴[] No.42139574[source]
I think he means you won't be impressed by GPT5 because it will be more of the same, whereas agents will represent a new direction.
9. falcor84 ◴[] No.42139637[source]
Nothing is certain, but my $0.02 is that setting up LLM-based agents with long-running tasks and giving them a way of interacting with the world, via computer use (e.g. Anthropic's recent release) and via actual robotic bodies (e.g. figure.ai), is the way forward to AGI. At the very least, this approach allows the gathering of unlimited ground truth data that can be used to train subsequent models (or even allow for actual "hive mind" online machine learning).
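To make the long-running-agent idea concrete, here is a minimal sketch of such a loop (not from the thread; all function names are hypothetical stubs standing in for an LLM call and a computer-use or robotics API, and the trajectory log stands in for the "ground truth data" mentioned above):

```python
import random

def propose_action(observation):
    # Hypothetical stub: a real system would ask an LLM to pick an action.
    return {"move": random.choice(["left", "right"]), "seen": observation}

def execute(action):
    # Hypothetical stub: a real system would act on the world via a
    # computer-use API or a robot, then return what actually happened.
    return {"outcome": "ok", "action": action}

def agent_loop(steps):
    trajectory = []  # (observation, action, outcome) records: ground truth
    observation = {"t": 0}
    for t in range(steps):
        action = propose_action(observation)
        outcome = execute(action)
        trajectory.append((observation, action, outcome))
        observation = {"t": t + 1, "last": outcome}
    # The logged trajectory could later be used to train subsequent models.
    return trajectory
```

The point of the sketch is the data flow: every step yields an (observation, action, outcome) triple whose outcome comes from the world itself, not from a static training corpus.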
10. rapjr9 ◴[] No.42139896[source]
I've worked on agents of various kinds (mobile agents, calendar agents, robotic agents, sensing agents), and what is different about agents is that they have the ability to not just mess up your data or computing; they have the ability to directly mess up reality. Any problems with agents have a direct impact on your reality: you miss appointments, get lost, can't find stuff, lose your friends, lose your business relationships. This is a big liability issue. Chatbots are like an advice column that sometimes gives bad advice; agents are like a bulldozer sometimes leveling the wrong house.
11. david2ndaccount ◴[] No.42139912{3}[source]
Without a non-linear activation function, chaining perceptrons together is equivalent to one large perceptron.
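A quick sketch of why (not from the thread; weights are made up for illustration): composing two linear layers is algebraically identical to a single layer whose weight matrix is the product of the two.

```python
def matmul(A, B):
    # Plain matrix multiply over lists of lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # hypothetical first-layer weights
W2 = [[0.5, -1.0], [2.0, 0.0]]  # hypothetical second-layer weights
x = [[1.0], [2.0]]              # input as a column vector

# Chaining two linear layers (no activation in between): W2 @ (W1 @ x)
chained = matmul(W2, matmul(W1, x))

# One collapsed layer with weights W2 @ W1 gives the same output.
collapsed = matmul(matmul(W2, W1), x)

assert chained == collapsed  # identical: no extra expressive power
```

With a nonlinearity applied between the two multiplies, this collapse no longer holds, which is what the rest of the subthread turns on.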
replies(1): >>42141849 #
12. handfuloflight ◴[] No.42141177{4}[source]
I don't see where your parent comment said or implied that the point was for being and life to emerge.
replies(1): >>42145965 #
13. xanderlewis ◴[] No.42141849{4}[source]
Yep. falcor84: you’re thinking of the so-called ‘multilayer perceptron’ which is basically an archaic name for a (densely connected?) neural network. I was referring to traditional perceptrons.
replies(1): >>42142074 #
14. xanderlewis ◴[] No.42141876{3}[source]
That’s the inspiration behind the idea, but it doesn’t seem to be working in practice.

It’s not true that any element, when duplicated and linked together, will exhibit anything emergent. Neural networks (in a certain sense, though not their usual implementation) are already built out of individual units linked together, so simply having more of these groups of units might not add anything important.

> research is already showing promising results of the performance of agent systems.

…in which case, please show us! I’d be interested.

15. falcor84 ◴[] No.42142074{5}[source]
While ReLU is relatively new, AI researchers have been aware of the need for nonlinear activation functions and building multilayer perceptrons with them since the late 1960s, so I had assumed that's what you meant.
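To illustrate what the nonlinearity buys (a sketch, not from the thread; weights hand-picked for clarity): with a step activation between layers, a two-layer network can compute XOR, which a single perceptron famously cannot represent.

```python
def step(z):
    # Heaviside step: the classic perceptron activation.
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hand-wired two-layer network. Hidden units compute OR and AND;
    # the output unit computes "OR and not AND", i.e. XOR.
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    return step(h_or - h_and - 0.5)
```

Without the `step` between the layers, the whole thing would reduce to a single linear threshold unit, and XOR would be out of reach again.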
replies(1): >>42142428 #
16. xanderlewis ◴[] No.42142428{6}[source]
It was a deliberately historical example.
17. eichi ◴[] No.42144173[source]
It's marketing using buzzword rhetoric. It's better to learn OOP if he truly thinks that. I also think OpenAI's PMF was to push the LLM application toward being a better argument machine.
18. hatefulmoron ◴[] No.42145965{5}[source]
I think their point is that having complex interactions between simple things doesn't necessarily result in any great emergent behavior. You can't just throw gloopy masses of cells into a bucket, shake it about, and get a cat.