
486 points dbreunig | 1 comment
cjbgkagh | No.41865626
> We've tried to avoid that by making both the input matrices more square, so that tiling and reuse should be possible.

While it might be possible, it would not surprise me if a number of possible optimizations had not made it into ONNX. It appears that Qualcomm does not give direct access to the NPU and expects users to go through frameworks to convert models over to it, and in my experience conversion tools generally suck and leave a lot of optimizations on the table. It could be less that NPUs suck and more that the conversion tools suck. I'll wait until I get direct access - I don't trust conversion tools.
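For what it's worth, the path Qualcomm seems to steer people toward looks roughly like the sketch below. This is only my assumption of the flow - onnxruntime's QNN execution provider with its backend_path option, with anything the provider can't map to the NPU silently falling back to CPU - and that opaque conversion step is exactly where I'd expect optimizations to get lost:

    # Rough sketch, not verified on Elite X hardware: export a small model to
    # ONNX and hand it to onnxruntime's QNN execution provider, with CPU
    # fallback for anything the provider can't place on the NPU.
    import torch
    import onnxruntime as ort

    model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU())
    torch.onnx.export(model, torch.randn(1, 256), "tiny.onnx", opset_version=17)

    sess = ort.InferenceSession(
        "tiny.onnx",
        providers=["QNNExecutionProvider", "CPUExecutionProvider"],
        provider_options=[{"backend_path": "QnnHtp.dll"}, {}],  # assumed Windows HTP backend
    )
    x = torch.randn(1, 256).numpy()
    out = sess.run(None, {sess.get_inputs()[0].name: x})

The point is that everything interesting - operator placement, tiling, quantization - happens inside that conversion step, not in anything you control directly.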

My view of NPUs is that they're great for tiny ML models and very fast function approximations, which is my intended use case. While LLMs are the new hotness, there is a huge number of specialized tasks that small models are really useful for.
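As a concrete (made-up) example of what I mean by a fast function approximation - a tiny MLP standing in for an expensive scalar function, which is the size of model these accelerators seem best suited for:

    # Minimal sketch of the "tiny model as function approximator" idea;
    # the target function and layer sizes are invented for illustration.
    import torch

    def f(x):
        return torch.sin(3 * x) * torch.exp(-x ** 2)  # stand-in for something costly

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        x = torch.rand(256, 1) * 4 - 2               # sample the domain [-2, 2]
        loss = torch.nn.functional.mse_loss(net(x), f(x))
        opt.zero_grad(); loss.backward(); opt.step()

A net like this is a few thousand parameters, so the question is whether the NPU toolchain gets out of the way enough to make running it millions of times a second worthwhile.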

replies(2): >>41865847 >>41868939
1. jaygreco | No.41865847
I came here to say this. I haven't worked with the Elite X, but on the past-gen stuff I've used (the 865, mostly) the accelerators - the compute DSP and a much smaller NPU - required _very_ specific setup, compilation with a bespoke toolchain, and communication via RPC, to name a few things.

I would hope the NPU on the Elite X is easier to get to, considering the whole Copilot+ thing, but I bring this up mainly to make the point that I doubt it's as easy as "run a general-purpose model and expect it to magically teleport onto the NPU".