760 points MindBreaker2605 | 32 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
1. ACCount37 ◴[] No.45897970[source]
That was obviously him getting sidelined. And it's easy to see why.

LLMs get results. None of Yann LeCun's pet projects do. He had ample time to prove that his approach is promising, and he didn't.

replies(3): >>45898088 #>>45898122 #>>45898749 #
2. dude250711 ◴[] No.45898088[source]
There is someone else at Facebook whose pet projects do not get results...
replies(3): >>45898144 #>>45898195 #>>45898427 #
3. camillomiller ◴[] No.45898122[source]
"LLMs get results" is quite the bold statement. If they got results, they would be getting adopted, and they would be making money. Instead, this is all built on hazy promises: if you had marketable results, you wouldn't have to hide $20+ billion of debt financing in an obscure SPV.

LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic, hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, that they can do their business at 75% quality, just with less human overhead. That's quite the thing to convince people of, and that's why it needs the spending it's getting.

A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably just got their companies addicted to the tech equivalent of crack cocaine. A reckoning is coming.
replies(3): >>45898203 #>>45898220 #>>45898398 #
4. jb1991 ◴[] No.45898144[source]
Who are you referring to?
replies(1): >>45898176 #
5. nolok ◴[] No.45898176{3}[source]
I think he means Zuckerberg himself; the metaverse isn't exactly a major success. But it's a false equivalency: the way he organized the company, only his vote matters, so he does what he wants.
6. ergocoder ◴[] No.45898195[source]
If you hire a house cleaner to clean your house, and the cleaner didn't do well, would you eject yourself out of the house? You would not. You would change to a new cleaner.
replies(1): >>45900278 #
7. ◴[] No.45898203[source]
8. ACCount37 ◴[] No.45898220[source]
LLMs get results, yes. They are getting adopted, and they are making money.

Frontier models are all profitable. Inference is sold at a damn good margin, and the amount of inference AI companies sell keeps rising. That necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, which necessitates even more spending.

A mistake I see people make over and over again is tracking the spending while overlooking the revenue altogether. Which sure is weird: you don't get from $0B to $12B in revenue in a few years by not having a product anyone wants to buy.

And I find all the talk of a "non-deterministic hallucinatory nature" overblown, because humans suffer from all of that too, just less severely - on top of a number of other issues that current AIs don't suffer from.

Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.

replies(3): >>45898909 #>>45899125 #>>45901121 #
9. miohtama ◴[] No.45898398[source]
OpenAI and Anthropic are making north of $4B/year in revenue, so some companies have figured out the money-making part. ChatGPT has some 800M users, according to some estimates. Whether it's enough money today, or enough money tomorrow, is of course a question, but there is a lot of money. Users would not use these products at this scale if they did not solve their problems.
replies(2): >>45898847 #>>45898896 #
10. ACCount37 ◴[] No.45898427[source]
Sure, but that "someone else" is the man writing the checks. If the roles were reversed, he'd be the one being fired now.
11. chaoz_ ◴[] No.45898749[source]
I agree. I never understood LeCun's argument that we need to pivot toward the visual side of things because the bitrate of text is low while the bitrate of visual input through the eye is high.

Text and language contain structured information and encode (or at least "model") a lot of real-world complexity.

Not saying we won't pivot to visual data or world simulations, but he was clearly not the type of person to compete with other LLM research labs, nor did he propose any alternative that could be used to create something interesting for end users.

replies(3): >>45898776 #>>45900490 #>>45901977 #
12. ACCount37 ◴[] No.45898776[source]
If LeCun's research had made Meta a powerhouse of video generation or general-purpose robotics - the two promising directions that benefit from visual I/O and world modeling as LeCun sees them - it could have been a justified detour.

But that sure didn't happen.

13. panja ◴[] No.45898847{3}[source]
OpenAI lost $12bn last quarter
14. Hendrikto ◴[] No.45898896{3}[source]
It’s easy to make $1 billion by spending $10 billion. That’s not “making money” though, it’s lighting it on fire.
replies(1): >>45900338 #
15. camillomiller ◴[] No.45898909{3}[source]
In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure. I won't even get into the comparison of LLMs to humans, because I choose not to engage with anyone who lacks the human decency, humanistic compass, or basic philosophical understanding to see that putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.
replies(1): >>45899050 #
16. ACCount37 ◴[] No.45899050{4}[source]
You should go and work in a call center for a year, on the first line.

Then come back and tell me how replacing human labor with AI is "deranged and morally bankrupt".

replies(1): >>45901410 #
17. ripe ◴[] No.45899125{3}[source]
> Frontier models are all profitable.

This is an extraordinary claim and needs extraordinary proof.

LLMs are raising lots of investor money, but that's a completely different thing from being profitable.

replies(2): >>45899369 #>>45902241 #
18. ACCount37 ◴[] No.45899369{4}[source]
You don't even need insider info - it lines up with external estimates.

We have estimates ranging from 30% to 70% gross margin on API LLM inference prices at major labs - call it 50% as a middle road - and 10% to 80% gross margin on user-facing subscription services, with massively inflated error bars. We also have many reports that inference compute has come to outmatch training-run compute for frontier models by a factor of 10x or more over the lifetime of a model.

The only source of uncertainty is: how much inference do the free tier users consume? Which is something that the AI companies themselves control: they are in charge of which models they make available to the free users, and what the exact usage caps for free users are.

Adding that up? Frontier models are profitable.

This goes against the popular opinion, which is where the disbelief is coming from.

Note that I'm talking about LLMs rather than things like image or video generation models, which may have vastly different economics.
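
To make that arithmetic concrete, here's a back-of-envelope sketch in Python - numbers are normalized to the cost of one training run and use the 50% margin and 10x inference multiple estimated above, not any lab's real figures:

    # Back-of-envelope: is a frontier LLM profitable over its lifetime?
    # All inputs are the rough external estimates above, not insider data.
    training_cost = 1.0     # normalize: training-run compute = 1 unit
    inference_cost = 10.0   # reported ~10x training compute over the model's lifetime
    gross_margin = 0.50     # middle-road estimate for API inference pricing

    # Back out revenue from cost and margin: margin = (revenue - cost) / revenue
    inference_revenue = inference_cost / (1 - gross_margin)
    lifetime_profit = inference_revenue - inference_cost - training_cost

    print(f"revenue={inference_revenue:.0f}, total cost={inference_cost + training_cost:.0f}, "
          f"profit={lifetime_profit:.0f}")
    # -> revenue=20, total cost=11, profit=9 (in units of one training run)

On those assumptions, inference margin alone pays back the training run several times over; free-tier usage is the unknown that eats into it.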

replies(1): >>45901400 #
19. psychoslave ◴[] No.45900278{3}[source]
But if we hire someone to do R&D on fully automating the house-cleaning process, we wouldn't necessarily expect the researchers themselves to keep the office clean every time we enter the room.
20. aryonoco ◴[] No.45900338{4}[source]
People used to say this about Amazon all the time. Remember how Amazon basically didn’t turn any real profits for two decades? The joke was that Amazon was a charitable organisation being funded by Wall Street for the benefit of humankind.

That didn’t last. People in the know knew that once you have a billion users and insane revenue and market power and have basically bought or driven out of business most of your competitors (Diapers.com, Jet.com, etc) you can eventually slow down your physical expansion, tighten the screws on your suppliers, increase efficiencies, and start printing money.

The VCs who are funding these companies are hoping that they have found the next Amazon. Many will probably go out of business, but some might join the ranks of trillion dollar companies.

replies(2): >>45901003 #>>45901425 #
21. tarsinge ◴[] No.45900490[source]
Text and language contain only approximate information, filtered through human eyes and brains. Also, animals don't have language and can show quite advanced capabilities compared to what we can currently do in robotics. And if you do enough mindfulness you can dissociate cognition/consciousness from language. I think we are lured by how important language is for us humans, but intuitively it's obvious to me that language (and LLMs) are only a subcomponent - or even irrelevant for, say, self-driving or robotics.
replies(1): >>45901229 #
22. ambicapter ◴[] No.45901003{5}[source]
So every company that doesn't turn any profits is actually Amazon in disguise?
replies(1): >>45906173 #
23. echelon ◴[] No.45901121{3}[source]
> Frontier models are all profitable.

They generate revenue, but most companies are in the hole for the research capital outlay.

If open source models from China become popular, then the only thing that matters is distribution / moat.

Can these companies build distribution advantage and moats?

24. ◴[] No.45901229{3}[source]
25. hitarpetar ◴[] No.45901400{5}[source]
what about training?
replies(1): >>45902145 #
26. hitarpetar ◴[] No.45901410{5}[source]
red herring. just because some jobs are bad (maybe shouldn't exist like that in the first place) doesn't make this movement humanistic
27. hitarpetar ◴[] No.45901425{5}[source]
this gets brought up a lot, and the reality is that the scale of Amazon's losses is completely dwarfed by what's going on now
28. KaiserPro ◴[] No.45901977[source]
That's where the research is leading.

The issue is context. Trying to make an AI assistant with text-only inputs is doable but limiting. You need to know the _context_ of all the data, and without visual input most of it is useless.

For example, "where is the other half of this?" is almost impossible to answer unless you have an idea of what "this" is.

But to do that you need cameras, and to use cameras you need position, object, and people tracking. That is a hard problem that's not solved.

The hypothesis is that "world models" solve this with an implicit understanding of the world and the objects in context.

29. ACCount37 ◴[] No.45902145{6}[source]
I literally mentioned that:

> We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.

30. jonas21 ◴[] No.45902241{4}[source]
Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue over its lifetime than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.

Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But you do need to believe that either (a) what he is saying is roughly true or (b) he is making the sort of fraudulent statements that could get you sent to prison.

[1] https://www.youtube.com/watch?v=GcqQ1ebBqkc&t=1014s
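
A toy version of that accounting in Python - the numbers are invented purely to show the shape of the argument, not Anthropic's actuals:

    # Toy cohort accounting: each model generation is profitable over its
    # lifetime, yet the company posts a loss every year. Numbers are made up.
    models = [
        # name, training cost, lifetime revenue, lifetime inference cost
        ("gen1",  1.0,   6.0,  3.0),
        ("gen2", 10.0,  60.0, 30.0),
    ]

    for name, train, revenue, inference in models:
        print(f"{name}: lifetime profit = {revenue - train - inference:+.0f}")
        # -> gen1: +2, gen2: +20 (each model is a profitable "business")

    # But in the year gen1 is earning, gen2 is already being trained at 10x the cost:
    year_pnl = (6.0 - 3.0) - 10.0
    print(f"company P&L that year = {year_pnl:+.0f}")  # -> -7, an annual loss

Same books, two readings: per-model cohorts look profitable while the annual statement shows a loss, which is exactly the distinction Amodei is drawing.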

replies(1): >>45902337 #
31. f33d5173 ◴[] No.45902337{5}[source]
He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.
32. aryonoco ◴[] No.45906173{6}[source]
If you’ve got nearly a billion users and are multiplying your revenue year over year, then yes: you’re effectively showing that you’re on a hypergrowth trajectory.

Hypergrowth is expensive because it’s usually capital-intensive. The trick is, once that growth phase is over, can you then start milking your customers while keeping a lid on costs? Not everyone can, but Amazon did, and most investors think OpenAI and Anthropic can as well.