
492 points by Lionga | 1 comment
Fanofilm (No.45673287):
I think this is because older AI doesn't do what LLM-based AI does. Older AI = conventionally trained models: neural networks (without transformers), support vector machines, etc. For that reason, they are letting those teams go. They don't see revenue coming from that work, and they don't see new product lines (like generative image/video) emerging from it. AI may go through this every five years: a breakthrough moves the technology into an entirely new area, and then the older teams have to retrain, or have a harder time.
nickpsecurity (No.45674661):
I really doubt that. Most of the profit-generating AI in most industries (decision support, spotting connections, recommendations, filtering, etc.) runs on old-school techniques. They're cheaper to train, cheaper to run, and more explainable.

The last survey I saw said regression was still the most-used technique, with SVMs used more often than LLMs. I figured combining those types of tools with LLM tech, especially for specifying or training them, is a better investment than replacing them. There are people doing that.
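The "cheaper to train, more explainable" claim about regression can be made concrete with a toy sketch. This is my own illustration, not anything from the survey: a closed-form simple linear regression fits in microseconds in pure Python, and its output is just two interpretable numbers (slope and intercept). The "ad spend vs. conversions" framing and all data values are invented.

```python
def fit_simple_ols(xs, ys):
    """Closed-form simple linear regression: y ~ slope*x + intercept.
    No gradient descent, no GPU -- just means, covariance, and variance."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data where conversions = 2 * spend + 1 exactly (illustrative only).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
slope, intercept = fit_simple_ols(xs, ys)
print(slope, intercept)  # -> 2.0 1.0; the model's "explanation" IS its weights
```

Contrast with an LLM: here the entire trained model is two floats you can read off and audit, which is roughly what "more explainable" means in practice.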

Now, I could see Facebook itself thinking LLMs are the most important if they're writing all the code, tests, and diagnostics, and doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.

That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.