
486 points dbreunig | 4 comments
protastus ◴[] No.41863883[source]
Deploying a model on an NPU requires significant profile-based optimization. Taking a model that works fine on the CPU but hasn't been optimized for the NPU usually leads to disappointing results.
replies(2): >>41864613 #>>41864649 #
1. CAP_NET_ADMIN ◴[] No.41864649[source]
Beauty of CPUs - they'll chew through whatever bs code you throw at them at a reasonable speed.
replies(1): >>41869110 #
2. marginalia_nu ◴[] No.41869110[source]
I don't think this is correct. The difference between well-optimized and unoptimized code on the CPU is frequently at least an order of magnitude in performance.

The reason it doesn't seem that way is that the CPU is so fast we often bottleneck on I/O first. However, for compute-bound workloads like inference, it really does matter.
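
As a rough sketch of that gap (a toy example of mine, not taken from any inference library): the same matrix multiply with only the loop order changed, so the inner loop reads B contiguously instead of with a stride of N. On typical hardware the second version is often several times faster, sometimes more, purely from the memory access pattern.

    /* Toy comparison: identical arithmetic, different loop order.
       matmul_ijk walks B column-by-column (stride N), which is far less
       cache-friendly than matmul_ikj, which walks B row-by-row.
       C is assumed to be zero-initialized for the ikj version. */
    #define N 1024

    void matmul_ijk(const float *A, const float *B, float *C) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                float sum = 0.0f;
                for (int k = 0; k < N; k++)
                    sum += A[i * N + k] * B[k * N + j];  /* strided reads of B */
                C[i * N + j] = sum;
            }
    }

    void matmul_ikj(const float *A, const float *B, float *C) {
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++) {
                float a = A[i * N + k];
                for (int j = 0; j < N; j++)
                    C[i * N + j] += a * B[k * N + j];    /* contiguous reads of B */
            }
    }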

replies(1): >>41870758 #
3. consteval ◴[] No.41870758[source]
While this is true, the most effective optimizations aren't ones you do yourself; the compiler or runtime does them. They get the low-hanging fruit. You can optimize further yourself, but unless your design is fundamentally bad, you're going to be micro-optimizing.

The jump from gcc -O0 to -O2 is a HUGE performance gain. We don't really have anything to auto-magically do this for models yet. Compilers are intimately familiar with x86.
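
If anyone wants to see it firsthand, a tiny self-contained kernel like this (mine, nothing special) makes the gap obvious: compile the same file with gcc -O0 and with gcc -O2 and time both; the -O2 build is typically several times faster, though exact numbers vary by machine.

    /* Minimal benchmark: build the same file twice and compare, e.g.
           gcc -O0 dot.c -o dot_O0
           gcc -O2 dot.c -o dot_O2
       The absolute timings don't matter, only the relative gap. */
    #include <stdio.h>
    #include <time.h>

    #define N (1 << 22)

    static float a[N], b[N];

    int main(void) {
        for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        clock_t t0 = clock();
        double sum = 0.0;
        for (int r = 0; r < 100; r++)        /* repeat to get measurable time */
            for (int i = 0; i < N; i++)
                sum += a[i] * b[i];          /* simple dot product */
        clock_t t1 = clock();

        /* Printing sum keeps the loop from being optimized away entirely. */
        printf("sum=%f time=%.3fs\n", sum, (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }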

replies(1): >>41872913 #
4. marginalia_nu ◴[] No.41872913{3}[source]
While the compiler is decent at producing code that saturates the instruction pipeline, there are many things it simply can't help you with.

Having cache-friendly memory access patterns is perhaps the biggest one. Automatic vectorization is also still not quite there, so where there's a severe bottleneck and the workload is vectorizable, doing it manually may still improve performance considerably.
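
To make the vectorization half concrete, this is roughly what "doing it manually" can look like, sketched with SSE intrinsics (assumes an x86 CPU with SSE). The scalar version is what you'd hope the autovectorizer turns into something like the second one, but it often won't, e.g. when it can't rule out aliasing.

    /* Hand-vectorized saxpy (y += a*x) with SSE intrinsics. A sketch,
       not production code: processes 4 floats per iteration, with a
       scalar tail loop for the leftovers. */
    #include <immintrin.h>
    #include <stddef.h>

    void saxpy_scalar(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    void saxpy_sse(size_t n, float a, const float *x, float *y) {
        __m128 va = _mm_set1_ps(a);            /* broadcast a to all 4 lanes */
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 vx = _mm_loadu_ps(x + i);   /* load 4 floats of x */
            __m128 vy = _mm_loadu_ps(y + i);   /* load 4 floats of y */
            vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));
            _mm_storeu_ps(y + i, vy);          /* store 4 results */
        }
        for (; i < n; i++)                     /* scalar tail */
            y[i] += a * x[i];
    }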