
172 points by marban | 2 comments
Aissen:
A quick search shows that support for this Ryzen AI NPU isn't integrated into upstream inference frameworks yet, so right now it's just useless silicon area you pay for :-/
dhruvdh:
There is a VitisAI execution provider for ONNX Runtime, and any inference framework with an ONNX backend can use it. More info here: https://ryzenai.docs.amd.com/en/latest/
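
For reference, a minimal sketch of what using that EP through onnxruntime looks like. The model path and input shape are placeholders, and it assumes AMD's Ryzen AI build of onnxruntime, which ships the VitisAI EP:

    import numpy as np
    import onnxruntime as ort

    # Request the VitisAI EP first; ops it can't handle fall back to CPU.
    session = ort.InferenceSession(
        "model.onnx",  # placeholder: a quantized ONNX model
        providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    )

    inp = session.get_inputs()[0]
    # Placeholder input; the real shape/dtype depend on the model.
    feed = {inp.name: np.zeros((1, 3, 224, 224), dtype=np.float32)}
    outputs = session.run(None, feed)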

But regardless, 16 TOPS is no good for LLMs. There is, though, a Ryzen AI demo that shows Llama 7B running on these at 8 tokens/sec. A sub-par experience for a sub-par LLM.
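
Back-of-envelope, assuming 4-bit quantized weights and ~2 ops per weight per token (my assumptions, not numbers from the demo), that 8 tok/s figure looks memory-bandwidth-bound rather than compute-bound:

    # Rough sketch of why 16 TOPS isn't the ceiling for a 7B LLM at 8 tok/s.
    params = 7e9
    bytes_per_weight = 0.5                      # assumed 4-bit quantization
    tok_per_s = 8
    weight_bytes = params * bytes_per_weight    # ~3.5 GB streamed per token
    bandwidth = weight_bytes * tok_per_s / 1e9  # ~28 GB/s of weight traffic
    ops = 2 * params * tok_per_s / 1e12         # ~0.112 TOPS of compute
    print(f"{bandwidth:.0f} GB/s needed, {ops:.3f} TOPS used of 16")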

Aissen:
Thanks, I was looking for information on this. It seems slower than pure-CPU inference on an M2, and probably much worse than a ROCm GPU-based solution?
p_l:
Because the NPU isn't meant for high-end inferencing. It's a relatively small coprocessor that is supposed to handle a bunch of tasks with high TOPS/watt, without engaging the far more power-hungry GPU.

At release time, the Windows driver included, among other things, a few video-processing offloads used by Windows frameworks (MS Teams, for example, uses them for background removal), so that such tasks use less battery on laptops and free up the CPU/GPU for other work on desktops.

For higher-end processing you can use the same AIE-ML coprocessors in various chips previously available from Xilinx and now sold under the AMD brand.

fpgamlirfanboy:
> the same AIE-ML coprocessors

They're not the same: Versal ACAPs (whatever you want to call them) have the AIE1 arch, while Phoenix has the AIE2 arch. There are significant differences between the two arches (local memory, bfloat16, etc.)
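
For context on the bfloat16 point: bfloat16 is float32 with the low 16 mantissa bits dropped, so it keeps float32's exponent range at roughly 3 decimal digits of precision. A quick illustration (truncation shown for simplicity; hardware typically rounds to nearest even):

    import struct

    def f32_to_bf16_bits(x: float) -> int:
        # Keep the top 16 bits of the float32 encoding:
        # sign, 8-bit exponent, top 7 mantissa bits.
        return struct.unpack("<I", struct.pack("<f", x))[0] >> 16

    def bf16_bits_to_f32(b: int) -> float:
        # Re-expand by zero-filling the dropped mantissa bits.
        return struct.unpack("<f", struct.pack("<I", b << 16))[0]

    x = 3.14159
    print(x, "->", bf16_bits_to_f32(f32_to_bf16_bits(x)))  # 3.14159 -> 3.140625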

p_l:
Phoenix has AIE-ML (what you call AIE2); Versal has a choice of AIE (AIE1) and AIE-ML (AIE2), depending on the chip you buy.

Essentially, AMD is making two tile designs optimized for slightly different computations and says it will offer both in Versal, but the NPUs use exclusively the ML-optimized tiles.