
486 points | dbreunig | 1 comment
jsheard No.41863390
These NPUs are tying up a substantial amount of silicon area, so it would be a real shame if they end up not being used for much. I can't find a die analysis of the Snapdragon X that isolates the NPU specifically, but AMD's equivalent with the same ~50 TOPS performance target can be seen here, and it takes up about as much area as three high-performance CPU cores:

https://www.techpowerup.com/325035/amd-strix-point-silicon-p...
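
For a sense of what a "~50 TOPS" rating actually measures, here's a back-of-the-envelope sketch of how vendors typically derive it (MAC array size × 2 ops per MAC per cycle × clock). The unit count and clock below are illustrative guesses, not the actual Strix Point or Snapdragon X configuration:

    # Rough sketch of how a "~50 TOPS" NPU rating is usually derived:
    # TOPS = MAC_units * 2 ops/MAC (multiply + accumulate) * clock_Hz / 1e12.
    # The unit count and clock here are illustrative assumptions, not the
    # real Strix Point / Snapdragon X configuration.

    mac_units = 16_384   # hypothetical INT8 MAC array size
    ops_per_mac = 2      # one multiply + one accumulate per cycle
    clock_hz = 1.5e9     # hypothetical NPU clock: 1.5 GHz

    tops = mac_units * ops_per_mac * clock_hz / 1e12
    print(f"Peak throughput: {tops:.1f} TOPS")  # ~49.2 TOPS

Peak numbers like this assume every MAC is busy every cycle, which real workloads rarely achieve.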

replies(4): >>41863880 #>>41863905 #>>41864412 #>>41865466 #
JohnFen No.41864412
> These NPUs are tying up a substantial amount of silicon area so it would be a real shame if they end up not being used for much.

This has been my thinking. Today you have to go out of your way to buy a system with an NPU, so I don't have any. But tomorrow, will they just be included by default? That seems like a waste for those of us who aren't going to be running models. I wonder what other uses they could be put to?

replies(6): >>41864427 #>>41864488 #>>41864879 #>>41865208 #>>41865384 #>>41870713 #
heavyset_go No.41865208
The idea is that your OS and apps will integrate ML models, so you will be running models whether you know it or not.
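
For a sketch of what that integration tends to look like in practice: an app loads a small model and asks the runtime for an NPU backend, silently falling back to the CPU when none is present. Here's roughly how that looks with ONNX Runtime; the model path and input shape are placeholders, and QNNExecutionProvider is the provider ONNX Runtime offers for Qualcomm NPUs:

    # Sketch: an app transparently targeting an NPU via ONNX Runtime.
    # "model.onnx" and the input shape are placeholders. If the NPU
    # provider isn't available, execution silently falls back to CPU,
    # which is why the user may never notice either way.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",
        providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    )
    print("Running on:", session.get_providers()[0])

    x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
    outputs = session.run(None, {session.get_inputs()[0].name: x})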
replies(1): >>41866421 #
JohnFen No.41866421
I'm confident that I'll be able to know and control whether or not my Linux and BSD machines will be using ML models.
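
On Linux that's plausible, at least in the sense that NPU use is visible: recent kernels expose NPUs through the accel subsystem as /dev/accel/accelN, loaded by drivers like intel_vpu (Intel) and amdxdna (AMD). A minimal check, assuming those upstream driver names:

    # Minimal check for NPU support on a Linux box: look for the
    # upstream NPU kernel modules and for accel-subsystem device nodes.
    # Driver names are the current upstream ones; adjust for your hardware.
    from pathlib import Path

    npu_drivers = {"intel_vpu", "amdxdna"}  # Intel / AMD NPU drivers
    loaded = {line.split()[0]
              for line in Path("/proc/modules").read_text().splitlines()}
    print("NPU drivers loaded:", sorted(npu_drivers & loaded) or "none")

    accel = Path("/dev/accel")  # kernel accel subsystem device nodes
    devices = sorted(p.name for p in accel.glob("accel*")) if accel.exists() else []
    print("accel devices:", devices or "none")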
replies(2): >>41866482 #>>41875239 #
heavyset_go No.41875239
I agree with the premise as a Linux user myself, but if you're using any JetBrains products, or Zoom, you're already running models on the client side. I suspect small models will continue to creep into apps. Even Firefox ships ML models in the browser.