
625 points lukebennett | 3 comments
irrational ◴[] No.42139106[source]
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware, like a human. An LLM hardly seems like something that will lead to self-awareness.

replies(18): >>42139138 #>>42139186 #>>42139243 #>>42139257 #>>42139286 #>>42139294 #>>42139338 #>>42139534 #>>42139569 #>>42139633 #>>42139782 #>>42139855 #>>42139950 #>>42139969 #>>42140128 #>>42140234 #>>42142661 #>>42157364 #
vundercind ◴[] No.42139782[source]
I thought maybe they were on the right track until I read Attention Is All You Need.

Nah, at best we found a way to make one part of a collection of systems that will, together, do something like thinking. Thinking isn’t part of what this current approach does.
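(For anyone who hasn't read the paper: the operation it is named for is small enough to sketch in a few lines. Roughly, in numpy, leaving out the learned projections, multiple heads, masking, and the rest of the stack; the toy shapes below are made up purely for illustration:)

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Each query scores every key; the scores are scaled, softmaxed into
        # weights, and the output is a weighted average of the value vectors.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Toy example: 3 token positions, 4-dimensional vectors.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 4))
    K = rng.normal(size=(3, 4))
    V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V))

Every output row is just a mixing of the value vectors, weighted by how well each query matches each key; stack enough of these layers, with learned projections, and you get the architecture the paper describes.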

What’s most surprising about modern LLMs is how much information turns out to be statistically encoded in the structure of our writing. Using only that structural information, we can build a fancy Plinko machine whose output not only mimics recognizable grammar rules but also sometimes seems to make actual sense. The system doesn’t need to think or actually “understand” anything for us to usefully query information that was always there in our corpus of literature: not in the plain meaning of the words, but in the structure of the writing.
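You can see a crude version of that with something far simpler than a transformer. A toy word-level Markov chain (the little corpus below is made up purely for illustration) records nothing but which word tends to follow which, and its output already starts to mimic the shape of its training text:

    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat and the dog sat on the rug "
              "and the cat saw the dog and the dog saw the cat").split()

    # Record which words follow which: pure structure, no meaning attached.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start="the", length=12, seed=1):
        random.seed(seed)
        word, out = start, [start]
        for _ in range(length - 1):
            word = random.choice(transitions.get(word, corpus))
            out.append(word)
        return " ".join(out)

    print(generate())  # grammar-shaped babble, zero understanding

A transformer captures enormously richer structure than a bigram table, of course, but the direction of the point is the same: the fluency falls out of regularities that were already sitting in the text.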

replies(5): >>42139883 #>>42139888 #>>42139993 #>>42140508 #>>42140521 #
hackinthebochs ◴[] No.42139888[source]
I see takes like this all the time and it's so confusing. Why does knowing how things work under the hood make you think it's not on the path towards AGI? What was lacking in the Attention paper that tells you AGI won't be built on LLMs? If it's the supposed statistical nature of LLMs (itself a questionable claim), why does statistics seem so deflating to you?
replies(4): >>42140161 #>>42141243 #>>42142441 #>>42145571 #
vundercind ◴[] No.42140161[source]
> Why does knowing how things work under the hood make you think it's not on the path towards AGI?

Because I had no idea how these were built until I read the paper, so I couldn’t really tell what sort of tree they’re barking up. The failure modes of LLMs and the ways prompts affect output made a ton more sense after I updated my mental model with that information.

replies(2): >>42141442 #>>42141443 #
1. hackinthebochs ◴[] No.42141443[source]
Right, but its behavior didn't change after you learned more about it. Why should that cause you to update in the negative? Why does learning how it works not update you in the direction of "so that's how thinking works!" rather than "clearly it's not doing any thinking"? Why do you have a preconception of how thinking works such that learning about the internals of LLMs updates you against it thinking?
replies(1): >>42142386 #
2. vundercind ◴[] No.42142386[source]
If you didn’t know what an airplane was, and saw one for the first time, you might wonder why it doesn’t flap its wings. Is it just not very good at being a bird yet? Is it trying to flap, but cannot? Why, there’s a guy over there with a company called OpenBird and he is saying all kinds of stuff about how bird-like they are. Where’s the flapping? I don’t see any pecking at seed, either. Maybe the engineers just haven’t finished making the flapping and pecking parts yet?

Then on learning how it works, you might realize flapping just isn’t something they’re built to do, and it wouldn’t make much sense if they did flap their wings, given how they work instead.

And yet—damn, they fly fast! That’s impressive, and without a single flap! Amazing. Useful!

At no point did their behavior change, but your ability to understand how and why they do what they do, and why they fail the ways they fail instead of the ways birds fail, got better. No more surprises from expecting them to be more bird-like than they are supposed to, or able to be!

And now you can better handle that guy over there talking about how powerful and scary these “metal eagles” (his words) are, how he’s working so hard to make sure they don’t eat us with their beaks (… beaks? Where?), they’re so powerful, imagine these huge metal raptors ruling the sky, roaming and eating people as they please, while also… trying to sell you airplanes? Actively seeking further investment in making them more capable? Huh. One begins to suspect the framing of these things as scary birds that (spooky voice) EVEN THEIR CREATORS FEAR FOR THEIR BIRD-LIKE QUALITIES (/spooky voice) was part of a marketing gimmick.

replies(1): >>42142564 #
3. hackinthebochs ◴[] No.42142564[source]
The problem with this analogy is that we know what birds are and what they're constituted by. But we don't know what thinking is or what it's constituted by. If we wanted to learn about birds by examining airplanes, we would be barking up the wrong tree. On the other hand, if we wanted to learn about flight, we might reasonably look at airplanes and birds, then work out what their mechanisms for defying gravity have in common. It would be a mistake to say "planes aren't flapping their wings, therefore they aren't flying." But that's exactly what people do when they dismiss the possibility that LLMs are now, or could ever be, capable of thinking simply because they're made of statistics, matrix multiplication, etc.
The problem with this analogy is that we know what birds are and what they're constituted by. But we don't know what thinking is or what it is constituted by. If we wanted to learn about birds by examining airplanes, we would be barking up the wrong tree. On the other hand, if we wanted to learn about flight, we might reasonably look at airplanes and birds, then determine what the commonality is between their mechanisms of defying gravity. It would be a mistake to say "planes aren't flapping their wings, therefore they aren't flying". But that's exactly what people do when they dismiss LLMs being presently or in the future capable of thinking because they are made up of statistics, matrix multiplication, etc.