
625 points lukebennett | 11 comments
nerdypirate ◴[] No.42139075[source]
"We will have better and better models," wrote OpenAI CEO Sam Altman in a recent Reddit AMA. "But I think the thing that will feel like the next giant breakthrough will be agents."

Is this certain? Are Agents the right direction to AGI?

replies(7): >>42139134 #>>42139151 #>>42139155 #>>42139574 #>>42139637 #>>42139896 #>>42144173 #
1. xanderlewis ◴[] No.42139151[source]
If by agents you mean systems composed of individual (perhaps LLM-powered) agents interacting with each other, probably not. I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.
replies(2): >>42139320 #>>42139568 #
2. j_maffe ◴[] No.42139320[source]
> I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.

Emergence happens when many elements interact in a system. Brains are literally a bunch of neurons in a complex network. Also, research is already showing promising results for the performance of agent systems.
replies(2): >>42139456 #>>42141876 #
3. tartoran ◴[] No.42139456[source]
That's wishful thinking at best. Throw it all in a bucket and it will get infected with being and life.
replies(1): >>42141177 #
4. falcor84 ◴[] No.42139568[source]
> It’s like chaining up perceptrons hoping to get more expressive power for free.

Isn't that literally the cause of the success of deep learning? It's not quite "free", but as I understand it, the big breakthrough of AlexNet (and much of what came after) was that running a larger CNN on a larger dataset allowed the model to be so much more effective without any big changes in architecture.

replies(1): >>42139912 #
5. david2ndaccount ◴[] No.42139912[source]
Without a non-linear activation function, chaining perceptrons together is equivalent to one large perceptron.
replies(1): >>42141849 #
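david2ndaccount's point is easy to check directly. Below is a minimal pure-Python sketch (the weight matrices are arbitrary, made up for illustration): composing two linear "layers" gives exactly the same map as the single layer whose weight matrix is their product.

```python
def matmul(A, B):
    # Naive matrix product: (m x n) @ (n x p) -> (m x p).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    # Matrix-vector product: one linear layer with no activation function.
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

# Two linear layers with no nonlinearity (weights chosen arbitrarily).
W1 = [[1.0, 2.0], [3.0, 4.0], [0.5, -1.0]]   # 2 inputs -> 3 units
W2 = [[1.0, 0.0, 2.0], [-1.0, 1.0, 0.5]]     # 3 units  -> 2 outputs
x = [2.0, -3.0]

chained = matvec(W2, matvec(W1, x))      # apply layer 1, then layer 2
collapsed = matvec(matmul(W2, W1), x)    # single layer with weights W2 @ W1

assert chained == collapsed  # stacking linear layers adds no expressive power
```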
6. handfuloflight ◴[] No.42141177{3}[source]
Don't see where your parent comment said or implied that the point was for being and life to emerge.
replies(1): >>42145965 #
7. xanderlewis ◴[] No.42141849{3}[source]
Yep. falcor84: you’re thinking of the so-called ‘multilayer perceptron’ which is basically an archaic name for a (densely connected?) neural network. I was referring to traditional perceptrons.
replies(1): >>42142074 #
8. xanderlewis ◴[] No.42141876[source]
That’s the inspiration behind the idea, but it doesn’t seem to be working in practice.

It’s not true that any element, when duplicated and linked together, will exhibit anything emergent. Neural networks (in a certain sense, though not their usual implementation) are already built out of individual units linked together, so simply having more of these groups of units might not add anything important.

> research is already showing promising results of the performance of agent systems.

…in which case, please show us! I’d be interested.

9. falcor84 ◴[] No.42142074{4}[source]
While ReLU is relatively new, AI researchers have been aware of the need for nonlinear activation functions and building multilayer perceptrons with them since the late 1960s, so I had assumed that's what you meant.
replies(1): >>42142428 #
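The role of the nonlinearity can be made concrete with XOR, the classic function a single linear-threshold unit cannot compute but a two-unit hidden layer can. The sketch below uses hand-picked (not learned) weights, and ReLU rather than the step function a 1960s perceptron would have used, purely for simplicity:

```python
def relu(z):
    # The nonlinearity; without it the two hidden units would
    # collapse into a single linear map and XOR would be impossible.
    return max(0.0, z)

def xor_mlp(x1, x2):
    # Hand-wired weights, chosen by construction for illustration.
    h1 = relu(x1 + x2)          # fires when at least one input is on
    h2 = relu(x1 + x2 - 1.0)    # fires only when both inputs are on
    return h1 - 2.0 * h2        # cancels out the "both on" case

results = {(a, b): xor_mlp(a, b) for a in (0, 1) for b in (0, 1)}
# results maps each input pair to its XOR value
```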
10. xanderlewis ◴[] No.42142428{5}[source]
It was a deliberately historical example.
11. hatefulmoron ◴[] No.42145965{4}[source]
I think their point is that having complex interactions between simple things doesn't necessarily result in any great emergent behavior. You can't just throw gloopy masses of cells into a bucket, shake it about, and get a cat.