
492 points Lionga | 2 comments
Fanofilm ◴[] No.45673287[source]
I think this is because older AI can't do what LLM-based AI does. Older AI = conventionally trained models, neural networks (without transformers), support vector machines, etc. That's why they're letting those teams go: they don't see revenue coming from that work, and they don't see new product lines (like generative image/video) emerging from it. AI may go through this every 5 years or so. A breakthrough moves the technology into an entirely new area, and then the older teams have to retrain, or have a harder time.
replies(7): >>45673374 #>>45673437 #>>45673454 #>>45673503 #>>45673506 #>>45674576 #>>45674661 #
1. fidotron ◴[] No.45673437[source]
There has always been a stunning amount of inertia from the old big data/ML/"AI" guard toward actually deploying anything more sophisticated than linear regression.
replies(1): >>45678454 #
2. scheme271 ◴[] No.45678454[source]
There are a lot of areas where you need to be able to explain the decisions your AI models make, and that's extremely hard to do unless you're using something like linear regression. E.g. you're a bank and your model, for some reason, appears to be accepting applications from white people and rejecting applications from African Americans or Latinos. How are you going to show in court that the model isn't discriminating based on race, or on some proxy for race?
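
A minimal sketch of the explainability point (not from the thread; the feature names, data, and scikit-learn workflow are my own assumptions): with a linear/logistic model, each feature's contribution to a given decision is just coefficient times value, so you can lay out exactly what drove an approval or rejection.

    # Hypothetical example: explaining a single loan decision with a linear model.
    # Feature names and data are made up for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["income", "debt_ratio", "credit_history_years"]

    # Synthetic applicants: 500 rows, 3 features, plus approve/deny labels.
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Explain one applicant's decision: per-feature contributions to the logit.
    applicant = X[0]
    contributions = model.coef_[0] * applicant
    logit = contributions.sum() + model.intercept_[0]

    for name, c in zip(features, contributions):
        print(f"{name:>22}: {c:+.3f}")
    print(f"{'intercept':>22}: {model.intercept_[0]:+.3f}")
    decision = "approve" if logit > 0 else "deny"
    print(f"logit = {logit:+.3f} -> {decision}")

With a deep model or large ensemble there is no equally direct decomposition, which is why regulated settings (credit, insurance, hiring) tend to stick with models whose decisions can be audited feature by feature.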