
765 points | MindBreaker2605 | 2 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
ACCount37 ◴[] No.45897970[source]
That was obviously him getting sidelined. And it's easy to see why.

LLMs get results. None of Yann LeCun's pet projects do. He had ample time to prove that his approach was promising, and he didn't.

replies(3): >>45898088 #>>45898122 #>>45898749 #
camillomiller ◴[] No.45898122[source]
"LLMs get results" is quite the bold statement. If they got results, they would be getting adopted, and they would be making money. This is all built on hazy promises. If you had marketable results, you wouldn't have to hide $20+ billion of debt financing in an obscure SPV.

LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic, hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, and that they can do their business at 75% quality, just with less human overhead. That's quite the thing to convince people of, and that's why it requires the spending it does.

A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably they just got their companies addicted to the tech equivalent of crack cocaine. A reckoning is coming.
replies(3): >>45898203 #>>45898220 #>>45898398 #
ACCount37 ◴[] No.45898220[source]
LLMs get results, yes. They are getting adopted, and they are making money.

Frontier models are all profitable. Inference is sold at a damn good margin, and the amount of inference AI companies sell keeps rising. That necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, which drives even more spending.

A mistake I see people make over and over again is keeping track of the spending while overlooking the revenue altogether. Which sure is weird: you don't go from $0B to $12B in revenue in a few years without a product people want to buy.

And I find all the talk of the "non-deterministic hallucinatory nature" overblown. Humans suffer from all of that too, just less severely, on top of a number of other issues current AIs don't suffer from.

Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.

replies(3): >>45898909 #>>45899125 #>>45901121 #
camillomiller ◴[] No.45898909[source]
In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure. I won't even get into the point of comparing LLMs to humans, because I choose not to engage with anyone who lacks the human decency, humanistic compass, or basic philosophical understanding to see that putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.
replies(1): >>45899050 #
ACCount37 ◴[] No.45899050[source]
You should go and work in a call center for a year, on the first line.

Then come back and tell me how replacing human labor with AI is "deranged and morally bankrupt".

replies(1): >>45901410 #
hitarpetar ◴[] No.45901410[source]
Red herring. Just because some jobs are bad (and maybe shouldn't exist like that in the first place) doesn't make this movement humanistic.