
789 points MindBreaker2605 | 3 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
xuancanh ◴[] No.45897885[source]
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
replies(13): >>45897942 #>>45898142 #>>45898331 #>>45898661 #>>45898893 #>>45899157 #>>45899354 #>>45900094 #>>45900130 #>>45900230 #>>45901443 #>>45901631 #>>45902275 #
hbarka ◴[] No.45900130[source]
LeCun truly believes the future is in world models. He’s not alone. Good for him to now be in the position he’s always wanted and hopefully prove out what he constantly talks about.
replies(1): >>45901728 #
astrange ◴[] No.45901728[source]
He seems stuck in the GOFAI development philosophy, where they decide humans have something called a "world model" simply because they say so, and then assume that if they build some arbitrary thing and call it a "world model" it will produce intelligence, because it has the same name as the thing they made up.

And of course it doesn't work. Humans don't have world models. There's no such thing as a world model!

replies(1): >>45905064 #
HarHarVeryFunny ◴[] No.45905064[source]
I don't think the focus is really on world models per se, so much as on animal intelligence built around predicting the real world; but to predict the world you need to model it in some sense.
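To make "model it in some sense" concrete, here's a toy sketch (my own illustration, nothing to do with LeCun's actual proposals; the names are made up): a next-observation predictor learned purely from experienced transitions already counts as a tiny, implicit world model, even though nobody hand-designed a symbolic model of the environment.

    # Toy illustration only: a count-based next-state predictor.
    # Learning P(next | current) from experience is already an implicit
    # "world model", however minimal.
    def train_predictor(transitions):
        counts = {}
        for s, s_next in transitions:
            counts.setdefault(s, {}).setdefault(s_next, 0)
            counts[s][s_next] += 1
        return counts

    def predict(counts, s):
        # Most likely next state seen after s, or None if s was never seen.
        nexts = counts.get(s)
        return max(nexts, key=nexts.get) if nexts else None

    model = train_predictor([("clouds", "rain"), ("clouds", "rain"), ("sun", "dry")])
    print(predict(model, "clouds"))  # -> "rain"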
replies(1): >>45905460 #
1. astrange ◴[] No.45905460[source]
IMO the issue is that animals can't have a specific "world model" system: if you build a model ahead of time, you mostly waste energy, since most of the model never gets used.

And animals' main concern is energy conservation, so they must be doing something else.

replies(1): >>45906280 #
2. HarHarVeryFunny ◴[] No.45906280[source]
There are many factors playing into "survival of the fittest", and energy conservation is only one of them. Animals build mental models to predict the world because this superpower of seeing into the future is critical to survival: predict where the water is in a drought, where the food is, how to catch it, and so on.

The animal learns as it encounters learning signals - prediction failures - which is the only way to do it. Of course you need to learn/remember something before you can use it in the future, so in that sense it's "ahead of time", but the reason it's done that way is that evolution has found that learned patterns ultimately prove beneficial.
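For what it's worth, here's a toy sketch of the "learn on prediction failure" idea as I read it (purely illustrative Python, not a claim about how real brains implement it; the names are invented):

    # Toy illustration: update memory only when the prediction was wrong.
    def learn_on_surprise(memory, situation, outcome):
        predicted = memory.get(situation)
        surprised = predicted != outcome     # prediction failure = learning signal
        if surprised:
            memory[situation] = outcome      # store the corrected expectation
        return surprised

    memory = {}
    print(learn_on_surprise(memory, "rustle in grass", "predator"))  # True: surprised, so learn
    print(learn_on_surprise(memory, "rustle in grass", "predator"))  # False: predicted correctly, no update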

replies(1): >>45907734 #
3. astrange ◴[] No.45907734[source]
An animal doesn't necessarily need to model the world to learn how to perform actions, though. That was the topic of this old GOFAI research:

https://aaai.org/papers/00268-aaai87-048-pengi-an-implementa...

It instead works by "doing the thing that worked last time".

As an example, you don't usually need to know what is in your garbage in order to take out the trash.
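
A toy sketch of that reactive "do what worked last time" policy (my gloss, not Pengi's actual indexical-functional machinery; the names are invented): the agent just caches the last action that succeeded in a given situation, with no model of what it is acting on.

    # Toy illustration: a reactive habit cache instead of a world model.
    def choose_action(habits, situation, default_action):
        # Reuse whatever action worked in this situation before.
        return habits.get(situation, default_action)

    def record_outcome(habits, situation, action, succeeded):
        # Keep the action only if it worked; no model of why it worked.
        if succeeded:
            habits[situation] = action

    habits = {}
    action = choose_action(habits, "bin is full", default_action="take out the trash")
    record_outcome(habits, "bin is full", action, succeeded=True)
    print(choose_action(habits, "bin is full", "do nothing"))  # -> "take out the trash"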