
213 points | cnst | 1 comment
jazzyjackson ◴ No.42154393
I sent mine back. I thought the NPU would help with local LLMs, but there's nothing to utilize it yet. LM Studio has it on the roadmap, but it was a bit of a letdown. My M1 MacBook was 30 times faster at generating tokens.

Happy with my gen 11 X1 Carbon (the one before they put the power button on the outside edge like a tablet?!)

pjmlp ◴ No.42155152
It is used transparently by applications when they make use of DirectML.
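The point here is that DirectML-backed runtimes can dispatch to the NPU without application changes. With ONNX Runtime, for example, an app simply lists `DmlExecutionProvider` ahead of the CPU provider and the runtime falls back automatically if DirectML is unavailable. A minimal sketch of that selection logic (the provider names follow ONNX Runtime's conventions; the `choose_providers` helper itself is hypothetical, shown here as plain Python so the idea is clear):

```python
# Hypothetical sketch: prefer the DirectML execution provider (which can
# dispatch work to the NPU/GPU on Windows) when it is available, and fall
# back to the CPU provider otherwise. Provider name strings match ONNX
# Runtime's conventions; this helper is illustrative, not part of any API.
PREFERRED = ["DmlExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available):
    """Return the preferred providers that are actually available, in order."""
    chosen = [p for p in PREFERRED if p in available]
    # Always keep a CPU fallback so inference can still run somewhere.
    return chosen or ["CPUExecutionProvider"]

# In a real app, `available` would come from the runtime, e.g.
# onnxruntime.get_available_providers(), and the chosen list would be
# passed when creating an InferenceSession.
```

On a machine with the DirectML package installed, `choose_providers` would put `DmlExecutionProvider` first, so the session transparently uses the accelerator; everywhere else it degrades to CPU-only inference.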