
486 points dbreunig | 105 comments
1. isusmelj ◴[] No.41863460[source]
I think the results show that, in general, the compute is not used well. That the CPU took 8.4ms and the GPU took 3.2ms shows a very small gap; I'd expect more like a 10x-20x difference here. I'd assume that the onnxruntime might be the issue. I think some hardware vendors just release the compute units without shipping proper support yet. Let's see how fast that will change.

Also, people often mistake the reason for an NPU is "speed". That's not correct. The whole point of the NPU is rather to focus on low power consumption. To focus on speed you'd need to get rid of the memory bottleneck, and then you end up designing your own ASIC with its own memory. The NPUs we see in most devices are part of the SoC around the CPU to offload AI computations. It would be interesting to run this benchmark in an infinite loop for the three devices (CPU, NPU, GPU) and measure power consumption. I'd expect the NPU to be lowest, and also best in terms of "ops/watt".

replies(8): >>41863552 #>>41863639 #>>41864898 #>>41864928 #>>41864933 #>>41866594 #>>41869485 #>>41870575 #
2. AlexandrB ◴[] No.41863552[source]
> Also, people often mistake the reason for an NPU is "speed". That's not correct. The whole point of the NPU is rather to focus on low power consumption.

I have a sneaking suspicion that the real real reason for an NPU is marketing. "Oh look, NVDA is worth $3.3T - let's make sure we stick some AI stuff in our products too."

replies(8): >>41863644 #>>41863654 #>>41865529 #>>41865968 #>>41866150 #>>41866423 #>>41867045 #>>41870116 #
3. kmeisthax ◴[] No.41863639[source]
> I think some hardware vendors just release the compute units without shipping proper support yet

This is Nvidia's moat. Everything has optimized kernels for CUDA, and maybe Apple Accelerate (which is the only way to touch the CPU matrix unit before M4, and the NPU at all). If you want to use anything else, either prepare to upstream patches in your ML framework of choice or prepare to write your own training and inference code.

replies(1): >>41868138 #
4. kmeisthax ◴[] No.41863644[source]
You forget "Because Apple is doing it", too.
replies(1): >>41864659 #
5. itishappy ◴[] No.41863654[source]
I assume you're both right. I'm sure NPUs exist to fill a very real niche, but I'm also sure they're being shoehorned in everywhere regardless of product fit because "AI big right now."
replies(2): >>41864463 #>>41865770 #
6. wtallis ◴[] No.41864463{3}[source]
Looking at it slightly differently: putting low-power NPUs into laptop and phone SoCs is how to get on the AI bandwagon in a way that NVIDIA cannot easily disrupt. There are plenty of systems where a NVIDIA discrete GPU cannot fit into the budget (of $ or Watts). So even if NPUs are still somewhat of a solution in search of a problem (aka a killer app or two), they're not necessarily a sign that these manufacturers are acting entirely without strategy.
7. rjsw ◴[] No.41864659{3}[source]
I think other ARM SoC vendors like Rockchip added NPUs before Apple, or at least around the same time.
replies(2): >>41864882 #>>41865956 #
8. acchow ◴[] No.41864882{4}[source]
I was curious so looked it up. Apple's first chip with an NPU was the A11 Bionic in Sept 2017. Rockchip's was the RK1808 in Sept 2019.
replies(2): >>41865370 #>>41865486 #
9. godelski ◴[] No.41864898[source]
They definitely aren't doing the timing properly, and what you might think of as timing is not what is generally marketed. That said, those marketed versions are often easier to compare. One such example: if you're using a GPU, have you actually considered that there's an asynchronous operation as part of your timing?

If you're naively doing `time.time()` then what happens is this

  start = time.time() # CPU records time
  pred = model(input.cuda()) # pushes the data (and the model, if not already there) to GPU memory and launches the kernels. This is asynchronous and returns immediately
  end = time.time() # CPU records time again, regardless of whether pred actually holds data yet
You probably aren't expecting that if you don't know systems and hardware. But Python (and really any language) is designed to be smart and compile into more optimized things than what you actually wrote. There's no lock, so CPU tasks aren't blocked waiting on the GPU. You might ask why do this? Well, no one knows what you actually want to do. And do you want the timer library checking for accelerators (i.e. GPUs) every time it records a time? That would mess up your timer! (At best you'd have to use a constructor to say "enable locking for this accelerator".) So you gotta do something a bit more nuanced.

If you want to actually time GPU tasks, you should look at CUDA event timers (in PyTorch this is `torch.cuda.Event(enable_timing=True)`; I have another comment with boilerplate).
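A minimal sketch of that kind of boilerplate (assuming PyTorch; the `time_gpu` helper name is mine, and it falls back to a plain wall clock when no CUDA device is available):

```python
import time

try:
    import torch
    HAS_CUDA = torch.cuda.is_available()
except ImportError:
    HAS_CUDA = False

def time_gpu(fn, *args):
    """Time a callable in milliseconds, correctly even when work runs on a GPU."""
    if HAS_CUDA:
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()   # drain any previously queued GPU work first
        start.record()
        fn(*args)
        end.record()
        torch.cuda.synchronize()   # wait until the `end` event has actually fired
        return start.elapsed_time(end)
    # CPU fallback: a monotonic wall clock is fine for synchronous work
    t0 = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - t0) * 1000.0
```

Without the final synchronize, `elapsed_time` would be measuring how fast the CPU could enqueue work, not how long the GPU took to run it.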

Edit:

There are also complicated issues like memory size and shape. They definitely are not being nice to the NPU here on either of those. NPUs (and GPUs!!!) want channels last: they did [1,6,1500,1500] but you'd want [1,1500,1500,6]. There's also the issue of how memory is allocated (and they noted IO being an issue). 1500 is a weird number (as is 6), so they aren't doing the NPU any favors, and I wouldn't be surprised if this is a big hit considering how new these things are.
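A toy numpy illustration of that layout change (spatial dims shrunk to 4x4 for readability; the real tensors were [1,6,1500,1500]):

```python
import numpy as np

# NCHW, the layout the benchmark used: batch, channels, height, width
x_nchw = np.arange(1 * 6 * 4 * 4, dtype=np.float32).reshape(1, 6, 4, 4)

# Channels-last (NHWC): transpose, then force a contiguous copy so the
# channel values for each pixel actually sit next to each other in memory
x_nhwc = np.ascontiguousarray(x_nchw.transpose(0, 2, 3, 1))

assert x_nhwc.shape == (1, 4, 4, 6)
assert x_nhwc.flags["C_CONTIGUOUS"]
assert x_nhwc[0, 2, 3, 5] == x_nchw[0, 5, 2, 3]  # same element, new position
```

The transpose alone only changes strides; it's the contiguous copy that gives the accelerator the memory order it wants.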

And here's my longer comment with more details: https://news.ycombinator.com/item?id=41864828

replies(1): >>41865375 #
10. theresistor ◴[] No.41864928[source]
> Also, people often mistake the reason for an NPU is "speed". That's not correct. The whole point of the NPU is rather to focus on low power consumption.

It's also often about offload. Depending on the use case, the CPU and GPU may be busy with other tasks, so the NPU is free bandwidth that can be used without stealing from the others. Consider AI-powered photo filters: the GPU is probably busy rendering the preview, and the CPU is busy drawing UI and handling user inputs.

replies(1): >>41865137 #
11. spookie ◴[] No.41864933[source]
I've been building an app in pure C using onnxruntime, and it outperforms a comparable one done in Python by a substantial amount. There are many other gains to be made.

(In the end python just calls C, but it's pretty interesting how much performance is lost)

replies(1): >>41867439 #
12. cakoose ◴[] No.41865137[source]
Offload only makes sense if there are other advantages, e.g. speed, power.

Without those, wouldn't it be better to spend the NPU's silicon budget on more CPU?

replies(4): >>41865175 #>>41865703 #>>41865735 #>>41868502 #
13. heavyset_go ◴[] No.41865175{3}[source]
More CPU means siphoning off more of the power budget on mobile devices. The theoretical value of NPUs is power efficiency on a limited budget.
14. GeekyBear ◴[] No.41865370{5}[source]
Face ID was the first tent pole feature that ran on the NPU.
15. artemisart ◴[] No.41865375[source]
Important clarification: the async part is absolutely not Python-specific; it comes from CUDA (indeed, for performance), and you will have to use CUDA events in C++ too to properly time it.

For ONNX, the runtimes I know of are synchronous, since you don't run each operation individually but whole models at once; there is no need for async, so the timings should be correct.

replies(1): >>41865495 #
16. j16sdiz ◴[] No.41865486{5}[source]
The Google TPU was introduced around the same time as Apple's. Basically everybody knew around that time that it could be something; they just didn't know exactly how.
replies(1): >>41866672 #
17. godelski ◴[] No.41865495{3}[source]
Yes, it isn't Python, it is... hardware. Not even CUDA-specific. It is about memory moving around and optimization (remember, even CPUs do speculative execution). I say a little more in the larger comment.

I'm less concerned about the CPU baseline and more concerned about the NPU timing. Especially given the other issues

18. Dalewyn ◴[] No.41865529[source]
There are no nerves in a neural processing unit, so yes: It's 300% bullshit marketing.
replies(2): >>41865574 #>>41865788 #
19. jcgrillo ◴[] No.41865574{3}[source]
Maybe the N secretly stands for NFT.. Like the tesla self driving hardware only smaller and made of silicon.
20. theresistor ◴[] No.41865703{3}[source]
If you know that you need to offload matmuls, then building matmul hardware is more area efficient than adding an entire extra CPU. Various intermediate points exist along that spectrum, e.g. Cell's SPUs.
21. avianlyric ◴[] No.41865735{3}[source]
Not really. To get extra CPU performance that likely means more cores, or some other general compute silicon. That stuff tends to be quite big, simply because it’s so flexible.

NPUs focus on one specific type of computation, matrix multiplication, usually with low-precision integers, because that's all a neural net needs. That vast reduction in flexibility means you can take lots of shortcuts in your design, allowing you to cram more compute into a smaller footprint.

If you look at the M1 chip[1], you can see the entire 16-core Neural Engine has a footprint about the size of 4 performance cores (excluding their caches). It's not a perfect comparison without numbers on what the performance cores can achieve in ops/second vs the Neural Engine, but it seems reasonable to bet that the Neural Engine can handily outperform the performance core complex when doing matmul operations.

[1] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

22. brookst ◴[] No.41865770{3}[source]
The shoehorning only works if there is buyer demand.

As a company, if customers are willing to pay a premium for an NPU, or are unwilling to buy a product without one, it is not your place to say “hey, we don't really believe in the AI hype, so we're going to sell products people don't want to prove a point.”

replies(3): >>41865911 #>>41865951 #>>41866019 #
23. brookst ◴[] No.41865788{3}[source]
Neural is an adjective. Adjectives do not require their associated nouns to be present. See also: digital computers have no fingers at all.
replies(1): >>41865928 #
24. MBCook ◴[] No.41865911{4}[source]
Is there demand? Or do they just assume there is?

If they shove it in every single product and that’s all anyone advertises, whether consumers know it will help them or not, you don’t get a lot of choice.

If you want the latest chip, you’re getting AI stuff. That’s all there is to it.

replies(1): >>41866176 #
25. -mlv ◴[] No.41865928{4}[source]
I always thought 'digital' referred to numbers, not fingers.
replies(1): >>41865964 #
26. bdd8f1df777b ◴[] No.41865951{4}[source]
There are two kinds of buyers: product buyers and stock buyers. The AI hype can certainly convince some of the stock buyers.
27. bdd8f1df777b ◴[] No.41865956{4}[source]
Even if it were true, they wouldn’t have the same influence as Apple has.
28. bdd8f1df777b ◴[] No.41865964{5}[source]
The derived meaning has been used so widely that it has surpassed the original in usage. But that doesn't change the fact that it originally referred to fingers.
29. Spooky23 ◴[] No.41865968[source]
Microsoft needs to throw something in the gap to slow down MacBook attrition.

The M processors changed the game. My teams support 250k users. I went from 50 MacBooks in 2020 to over 10,000 today. I added zero staff - we manage them like iPhones.

replies(3): >>41866126 #>>41866658 #>>41875496 #
30. Spooky23 ◴[] No.41866019{4}[source]
Apple will have a completely AI capable product line in 18 months, with the major platforms basically done.

Microsoft is built around the broken Intel tick/tock model of incremental improvement — they are stuck with OEM shitware that will take years to flush out of the channel. That means for AI, they are stuck with cloud-based OpenAI, where NVIDIA has them by the balls and the hyperscalers are all fighting for GPUs.

Apple will deliver local AI features as software (the hardware is “free”) at a much higher margin - while Office 365 AI is like $400+ a year per user.

You’ll have people getting iPhones to get AI assisted emails or whatever Apple does that is useful.

replies(6): >>41866402 #>>41866405 #>>41866461 #>>41866768 #>>41875505 #>>41885273 #
31. cj ◴[] No.41866126{3}[source]
Rightly so.

The M processor really did completely eliminate all sense of “lag” for basic computing (web browsing, restarting your computer, etc). Everything happens nearly instantly, even on the first generation M1 processor. The experience of “waiting for something to load” went away.

Not to mention these machines easily last 5-10 years.

replies(4): >>41866165 #>>41866450 #>>41866737 #>>41866918 #
32. conradev ◴[] No.41866150[source]
The real consumers of the NPUs are the operating systems themselves. Google’s TPU and Apple’s ANE are used to power OS features like Apple’s Face ID and Google’s image enhancements.

We’re seeing these things in traditional PCs now because Microsoft has demanded it so that Microsoft can use it in Windows 11.

Any use by third party software is a lower priority

33. nxobject ◴[] No.41866165{4}[source]
As a very happy M1 Max user (should've shelled out for 64GB of RAM, though, for local LLMs!), I don't look forward to seeing how the Google Workspace/Notions/etc. of the world somehow reintroduce lag back in.
replies(3): >>41866309 #>>41866670 #>>41866791 #
34. Terr_ ◴[] No.41866176{5}[source]
"The math is clear: 100% of our car sales come from models with our company logo somewhere on the front, which shows incredible customer desire for logos. We should consider offering a new luxury trim level with more of them."

"How many models do we have without logos?"

"Huh? Why would we do that?"

replies(1): >>41866278 #
35. MBCook ◴[] No.41866278{6}[source]
Heh. Yeah more or less.

To some degree I understand it, because as we've all noticed, computers have pretty much plateaued for the average person. They last much longer. You don't need to replace them every two years anymore because the software isn't outstripping them so fast.

AI is the first thing to come along in quite a while that not only needs significant power but is also just something different. It's something they can say your old computer doesn't have that the new one does, other than being 5% faster or whatever.

So even if people don’t need it, and even if they notice they don’t need it, it’s something to market on.

The stuff up thread about it being the hotness that Wall Street loves is absolutely a thing too.

replies(1): >>41866459 #
36. bugbuddy ◴[] No.41866309{5}[source]
The problem for Intel and AMD is they are stuck with an OS that ships with a lag-inducing anti-malware suite. I just did a simple git log and it took 2000% longer than usual because the antivirus was triggered to scan and run a simulation on each machine instruction and byte of data accessed. The commit log window stayed blank long enough for me to complete another tiny project. It always ruins my day.
replies(2): >>41866574 #>>41866628 #
37. justahuman74 ◴[] No.41866402{5}[source]
Who is getting $400/y of value from that?
38. nxobject ◴[] No.41866405{5}[source]
I hope that once they get a baseline level of AI functionality in, they start working with larger LLMs to enable some form of RAG... that might be their next generational shift.
39. pclmulqdq ◴[] No.41866423[source]
The correct way to make a true "NPU" is to 10x your memory bandwidth and feed a regular old multicore CPU with SIMD/vector instructions (and maybe a matrix multiply unit).

Most of these small NPUs are actually made for CNNs and other models where "stream data through weights" applies. They have a huge speedup there. When you stream weights across data (any LLM or other large model), you are almost certain to be bound by memory bandwidth.
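Rough arithmetic for why streaming weights is bandwidth-bound (my own back-of-envelope, not from the comment): each weight is read once and used in a single multiply-add, so the FLOPs-per-byte ratio never grows with model size:

```python
def matvec_intensity(n, bytes_per_weight=2):
    """FLOPs per byte for an n x n matrix-vector multiply (fp16 weights by default)."""
    flops = 2 * n * n                       # one multiply + one add per weight
    bytes_moved = n * n * bytes_per_weight  # weight traffic dominates data movement
    return flops / bytes_moved

# 1 FLOP per byte: a 100 GB/s memory system caps matvec at ~100 GFLOP/s,
# regardless of how many TOPS the NPU datasheet advertises
print(matvec_intensity(4096))  # → 1.0
```

A CNN reuses each weight across every spatial position, which is exactly the "stream data through weights" regime where these small NPUs do see a big speedup.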

replies(2): >>41866896 #>>41871310 #
40. ddingus ◴[] No.41866450{4}[source]
I have a first gen M1 and it holds up very nicely even today. I/O is crazy fast and high compute loads get done efficiently.

One can bury the machine and lose very little basic interactivity. That part users really like.

Frankly the only downside of the MacBook Air is the tiny storage. The 8GB RAM is actually enough most of the time. But general system storage with only 1/4 TB is cramped consistently.

Been thinking about sending the machine out to one of those upgrade shops...

replies(1): >>41866621 #
41. ddingus ◴[] No.41866459{7}[source]
That was all true nearly 10 years ago. And it has only improved. Almost any computer one finds these days is capable of the basics.
42. hakfoo ◴[] No.41866461{5}[source]
We're still looking for "that is useful".

The stuff they've been trying to sell AI to the public with is increasingly looking as absurd as every 1978 "you'll store your recipes on the home computer" argument.

AI text became a Human Centipede story: Start with a coherent 10-word sentence, let AI balloon it into five pages of flowery nonsense, send it to someone else, who has their AI smash it back down to 10 meaningful words.

Coding assistance, even as spicy autocorrect, is often a net negative as you have to plow through hallucinations and weird guesses as to what you want but lack the tools to explain to it.

Image generation is already heading rapidly into cringe territory, in part due to some very public social media operations. I can imagine your kids' kids in 2040 finding out they generated AI images in the 2020s and looking at them with the same embarrassment you'd see if they dug out your high-school emo fursona.

There might well be some more "closed-loop" AI applications that make sense. But are they going to be running on every desktop in the world? Or are they going to be mostly used in datacentres and purpose-built embedded devices?

I also wonder how well some of the models and techniques scale down. I know Microsoft pushed a minimum spec to promote a machine as Copilot-ready, but that seems like it's going to be "Vista Basic Ready" redux as people try to run tools designed for datacentres full of Quadro cards, or at least high-end GPUs, on their $299 HP laptop.

replies(3): >>41866517 #>>41869224 #>>41870131 #
43. jjmarr ◴[] No.41866517{6}[source]
Cringe emo girls are trendy now because the nostalgia cycle is hitting the early 2000s. Your kid would be impressed if you told them you were a goth gf. It's not hard to imagine the same will happen with primitive AIs in the 40s.
replies(1): >>41866595 #
44. zdw ◴[] No.41866574{6}[source]
This is most likely due to corporate malware.

Even modern macs can be brought to their knees by something that rhymes with FrowdStrike Calcon and interrupts all IO.

45. defrost ◴[] No.41866595{7}[source]
Early 2000's ??

"Bela Lugosi's Dead" came out in 1979, and Peter Murphy was onto his next band by 1984.

By 2000 Goth was fully a distant dot in the rear view mirror for the OG's

    In 2002, Murphy released *Dust* with Turkish-Canadian composer and producer Mercan Dede, which utilizes traditional Turkish instrumentation and songwriting, abandoning Murphy's previous pop and rock incarnations, and juxtaposing elements from progressive rock, trance, classical music, and Middle Eastern music, coupled with Dede's trademark atmospheric electronics.
https://www.youtube.com/watch?v=Yy9h2q_dr9k

https://en.wikipedia.org/wiki/Bauhaus_(band)

replies(2): >>41866683 #>>41866897 #
46. lynguist ◴[] No.41866621{5}[source]
Why did you buy a 256GB device for personal use in the first place? Too good of a deal? Or saving these $400 for upgrades for something else?
replies(2): >>41867871 #>>41874616 #
47. alisonatwork ◴[] No.41866628{6}[source]
Pro tip: turn off malware scanning in your git repos[0]. There is also the new Dev Drive feature in Windows 11 that makes it even easier for developers (and IT admins) to set this kind of thing up via policies[1].

In companies where I worked where the IT team rolled out "security" software to the Mac-based developers, their computers were not noticeably faster than Windows PCs at all, especially given the majority of containers are still linux/amd64, reflecting the actual deployment environment. Meanwhile Windows also runs on ARM anyway, so it's not really something useful to generalize about.

[0] https://support.microsoft.com/en-us/topic/how-to-add-a-file-...

[1] https://learn.microsoft.com/en-us/windows/dev-drive/

replies(2): >>41866770 #>>41866861 #
48. pjmlp ◴[] No.41866658{3}[source]
Microsoft does indeed have a problem, however only in countries where people can afford Apple-level prices, and not everyone is a G7 citizen.
replies(1): >>41866904 #
49. djur ◴[] No.41866670{5}[source]
Oh, just work for a company that uses Crowdstrike or similar. You'll get back all the lag you want.
50. Someone ◴[] No.41866672{6}[source]
https://en.wikipedia.org/wiki/Tensor_Processing_Unit#Product... shows the first one is from 2015 (publicly announced in 2016). It also shows they have a TDP of 75+W.

I can’t find TDP for Apple’s Neural Engine (https://en.wikipedia.org/wiki/Neural_Engine), but the first version shipped in the iPhone 8, which has a 7 Wh battery, so these are targeting different markets.

51. djur ◴[] No.41866683{8}[source]
I'm not sure what "gothic music existed in the 1980s" is meant to indicate as a response to "goths existed in the early 2000s as a cultural archetype".
replies(1): >>41866722 #
52. defrost ◴[] No.41866722{9}[source]
That Goths in the 2000s were at best a second-wave nostalgia cycle of the Goths of the 1980s.

That people recalling Goths in that period should beware of thinking that was a source and not an echo.

In 2006 Noel Fielding's Richmond Felicity Avenal was a basement dwelling leftover from many years past.

replies(1): >>41866888 #
53. bzzzt ◴[] No.41866737{4}[source]
Depends on the application as well. Just try to start up Microsoft Teams.
54. im3w1l ◴[] No.41866768{5}[source]
Until AI chips become abundant, and we are not there yet, cloud AI just makes too much sense. Using a chip constantly vs using it 0.1% of the time is just so many orders of magnitude better.

Local inference does have privacy benefits. I think at the moment it might make sense to send most queries to a beefy cloud model, and sensitive queries to a smaller local one.

55. bugbuddy ◴[] No.41866770{7}[source]
Unfortunately, the IT department people think they are literal GODs for knowing how to configure domain policies and lock everything down. They even refuse to help or answer requests when there are false positives on our own software builds that we cannot unmark as false positives. These people are proactively antagonistic to productivity. Management couldn't care less…
replies(2): >>41867349 #>>41869612 #
56. n8cpdx ◴[] No.41866791{5}[source]
Chrome managed it. Not sure how since Edge still works reasonably well and Safari is instant to start (even faster than system settings, which is really an indictment of SwiftUI).
57. xxs ◴[] No.41866861{7}[source]
The short answer is that you can't without the necessary permissions, and even if you do, the next rollout will wipe out your changes.

So the pro part of the tip does not apply.

On my own machines, antivirus is one of the very first things to be removed. Most of the time I'd turn the swap file off entirely, yet Windows doesn't overcommit, and certain applications are notorious for allocating memory without even using it.

58. bee_rider ◴[] No.41866888{10}[source]
True Goth died way before any of that. They totally sold out when they sacked Rome; the gold went to their heads and everything since then has been nostalgia.
replies(1): >>41866910 #
59. bee_rider ◴[] No.41866896{3}[source]
I’m sure we’ll get GPNPU. Low precision matvecs could be fun to play with.
replies(1): >>41868292 #
60. carlob ◴[] No.41866897{8}[source]
There was a submission here a few months ago about the various incarnations of goth starting from the late Roman empire.

https://www.the-hinternet.com/p/the-goths

replies(1): >>41866940 #
61. jocaal ◴[] No.41866904{4}[source]
Microsoft is slowly being squeezed from both sides of the market. Chromebooks have silently become wildly popular on the low end. The only advantages I see Windows having are corporate and gaming. But Valve is slowly chipping away at the gaming advantage as well.
replies(2): >>41866915 #>>41901108 #
62. defrost ◴[] No.41866910{11}[source]
That was just the faux life Westside Visigoths .. what'd you expect?

#Ostrogoth #TwueGoth

63. pjmlp ◴[] No.41866915{5}[source]
Chromebooks are nowhere to be seen outside the US school market.

Coffee shops, trains and airports in Europe? Nope, a rare animal on tables.

European schools? In most countries parents buy their kids a computer, and most often it is a desktop used by the whole family, or a laptop of some kind running Windows, unless we are talking about countries where buying Apple isn't an issue in the monthly expenses.

Popular? In Germany, the few times they get displayed in shopping mall stores, they get routinely discounted, or bundled with something else, until finally the stores get rid of them.

Valve is heavily dependent on game studios producing Windows games.

64. morsch ◴[] No.41866918{4}[source]
It's fine. For basic computing, my M3 doesn't feel much faster than my Linux desktop that's like 8 years old. I think the standard for laptops was just really, really low.
replies(1): >>41868073 #
65. defrost ◴[] No.41866940{9}[source]
Was there? This one: https://news.ycombinator.com/item?id=41232761 ?

Nice: https://www.youtube.com/watch?v=VZvSqgn_Zf4

66. WithinReason ◴[] No.41867045[source]
yeah I'm not sure being 1% utilised helps power consumption
67. lynx23 ◴[] No.41867349{8}[source]
Nobody wants to be responsible for allowing exceptions in security matters. It's far easier to ignore the problems at hand than to risk being wrong just once.
68. dacryn ◴[] No.41867439[source]
Agree there, but then again using ort in Rust is faster again.

You cannot compare Python with an ONNX executor.

I don't know what you used in Python, but if it's PyTorch or similar, those are built with flexibility in mind; for optimal performance you want to export those to ONNX and use whatever executor is optimized for your env. onnxruntime is one of them, but definitely not the only one, and given it's from Microsoft, some prefer to avoid it and choose among the many free alternatives.

replies(1): >>41867667 #
69. rerdavies ◴[] No.41867667{3}[source]
Why would the two not be entirely comparable? PyTorch may be slower at building the models, but once the model is compiled and loaded on the NPU, there's just not a whole lot of Python involved anymore. A few hundred CPU cycles to push the input data using Python; a few hundred CPU cycles to receive the results using Python. And everything in between gets executed on the NPU.
replies(1): >>41868158 #
70. 112233 ◴[] No.41867871{6}[source]
Not OP, but by booting an M1 from an external Thunderbolt NVMe you lose less than 50% of benchmarked disk throughput (3GB/s is still ridiculously fast), can buy an 8TB drive for less than $1k, and can boot it on another M1 Mac if something happens. If there were a "max mem, min disk" model, I'd definitely get that.
replies(1): >>41874656 #
71. thanksgiving ◴[] No.41868073{5}[source]
> I think the standard for laptops was just really, really low.

As someone who used Windows laptops, I was amazed when I saw someone sitting next to me on a subway editing images in Photoshop on her MacBook Pro with just her trackpad. The standard for Windows laptops used to be that low (about ten or twelve years ago) that seeing a MacBook trackpad just work is part of my permanent memory.

replies(1): >>41869537 #
72. noduerme ◴[] No.41868138[source]
I'm not sure why this is a moat. Isn't it just a matter of translation from CUDA to some other instruction set? If AMD or someone else makes cheaper hardware that does the same thing, it doesn't seem like a stretch for them to release a PyTorch patch or whatever.
replies(2): >>41868786 #>>41870624 #
73. noduerme ◴[] No.41868158{4}[source]
I really wish Python wasn't the language controlling all the C code. You need a controller, in a scripting language that's easy to modify, but it's a rather hideous choice. It would be like choosing to build the world's largest social network in PHP or something. lol.
replies(2): >>41868287 #>>41873886 #
74. robertlagrant ◴[] No.41868287{5}[source]
> it's a rather hideous choice

Why?

75. touisteur ◴[] No.41868292{4}[source]
SHAVE from MOVIDIUS was fun, before Intel bought them out.
replies(1): >>41870749 #
76. mapt ◴[] No.41868502{3}[source]
For PC CPUs, there are already so many watts per square millimeter that many of the top tiers of the recent generations are running thermally throttled 24/7; More cooling improves performance rather than temperatures because it allows more of the cores to run at 'full' speed or at 'boost' speed. This kills their profitable market segmentation.

In this environment it makes some sense to use more efficient RISC cores, and to spread out cores a bit with dedicated bits that either aren't going to get used all the time, or that are going to be used at lower power draws, and combining cores with better on-die memory availability (extreme L2/L3 caches) and other features. Apple even has some silicon in the power section left as empty space for thermal reasons.

Emily (formerly Anthony) on LTT had a piece on the Apple CPUs that pointed out some of the inherent advantages of the big-chip ARM SOC versus the x86 motherboard-daughterboard arrangement as we start to hit Moore's Wall. https://www.youtube.com/watch?v=LFQ3LkVF5sM

77. david-gpu ◴[] No.41868786{3}[source]
Most of the computations are done inside NVidia proprietary libraries, not open-source CUDA. And if you saw what goes inside those libraries, I think you would agree that it is a substantial moat.
replies(1): >>41870764 #
78. Spooky23 ◴[] No.41869224{6}[source]
The product isn’t released, so I don’t think we know what is or isn’t good.

People are clearly finding LLM tech useful, and we’re barely scratching the surface.

79. fennecfoxy ◴[] No.41869485[source]
I think it's definitely possible now (or very soon) for an LLM to write native GPU/NPU code to get itself running on different hardware.
80. thesuitonym ◴[] No.41869537{6}[source]
I don't understand the hype around Apple trackpads. 15 years ago, sure, there was a huge gulf of difference, but today? The only difference that I can see or feel, at least between Lenovo or Dell and Apple, is that the Mac trackpad is physically larger.
81. thesuitonym ◴[] No.41869612{8}[source]
They don't think they're gods, they just think you're an idiot. This is not to say that you are, or even that they believe YOU individually are an idiot, it's just that users are idiots.

There are also insurance, compliance, and other constraints that IT folks have that make them unwilling to turn off scanning for you.

replies(2): >>41872079 #>>41881549 #
82. shermantanktop ◴[] No.41870116[source]
That’s how we got an explosion of interesting hardware in the early 80s - hardware companies attempting to entice consumers by claiming “blazing 16 bit speeds” or other nonsense. It was a marketing circus but it drove real investments and innovation over time. I’d hope the same could happen here.
83. HelloNurse ◴[] No.41870131{6}[source]
I expect this sort of thing to go out of fashion and/or be regulated after "AI" causes some large loss of life, e.g. starting a war or designing a building that collapses.
84. hulitu ◴[] No.41870575[source]
> The whole point of the NPU is rather to focus on low power consumption

You know which chip has the lowest power consumption ? The one which is turned off. /s

85. blharr ◴[] No.41870624{3}[source]
Sure, you can probably translate rough code and get something that "works", but all the thousands of small optimizations that are baked in are not trivial to just translate.
replies(1): >>41876758 #
86. hedgehog ◴[] No.41870749{5}[source]
Did they become un-fun? There are a bunch on the new Intel CPUs.
replies(1): >>41872974 #
87. theGnuMe ◴[] No.41870764{4}[source]
There are clean room approaches like AMDs and Scale.
replies(2): >>41871154 #>>41873069 #
88. caeril ◴[] No.41871154{5}[source]
Geohot has multiple (and ongoing) rants about the sheer instability of AMD RDNA3 drivers. Lisa Su engaged directly with him on this, and she didn't seem to give a shit about their problems.

AMD is not taking ML applications seriously, outside of their marketing hype.

replies(1): >>41874128 #
89. sounds ◴[] No.41871310{3}[source]
Apple Silicon is surprisingly a good approach here -

   * On CPU: SIMD NEON
   * On CPU: custom matrix multiply accelerator, separate from SIMD unit
   * On CPU package: NPU
   * GPU
Then they go and hide it all in proprietary undocumented features and force you to use their framework to access it :c
90. xxs ◴[] No.41872079{9}[source]
they are allowed to do that for the folks that produce the goods of course, it just makes it a lot harder to retain said idiots.
91. touisteur ◴[] No.41872974{6}[source]
Most of the toolchain got hidden behind OpenVINO and there was no hardware released for years. Keembay was 'next year' for years. I have some code for the DSP using it that I can't use anymore. Has Intel actually released new SHAVE cores, with an actual dev environment? I'm curious.
replies(1): >>41873250 #
92. david-gpu ◴[] No.41873069{5}[source]
Are you suggesting that Scale can take cuDNN kernels and run them at anything resembling peak performance on AMD GPUs?

Because functional compatibility is hardly useful if the performance is not up to par, and cuDNN will run specific kernels that are particularly tuned to not only a specific model of GPU, but also to the specific inputs that the user is submitting. NVidia is doing a ton of work behind the scenes to both develop high-performance kernels for their exact architecture, but also to know which ones are best for a particular application.

This is probably the main reason why I was hesitant to join AMD a few years ago and to this day it seems like it was a good decision.

93. hedgehog ◴[] No.41873250{7}[source]
The politics behind the software issues are complex. At least from the public presentation the new SHAVE cores are not much changed besides bigger vector units. I don't know what it would take to make a lower level SDK available again but it sure seems like it would be useful.
94. johnny22 ◴[] No.41873886{5}[source]
isn't that the case? Which then became a dialect of php with a custom interpreter (and then compiler) as they scaled.
replies(1): >>41876745 #
95. fvv ◴[] No.41874128{6}[source]
Rdna3 is not cdna
96. ddingus ◴[] No.41874616{6}[source]
I got it for a song. Literally a coupla hundred bucks a few months after release.

So yeah, great deal. And I really wanted to run the new CPU.

Frankly, I can do more and generally faster than I would expect running on those limited resources. It has been a quite nice surprise.

For a lot of what I do, the RAM and storage are enough.

97. ddingus ◴[] No.41874656{7}[source]
Interesting. You know I bought one of those USB 3 port expanders from TEMU and it is excellent! (I know, TEMU right? But it was so cheap!)

I could 3D print a couple of brackets and probably lodge a bigger SSD, or the smaller form-factor eMMC I think, and pack it all into a little package one just plugs in. The port extender is currently shaped such that it fits right under the Air, tilting it nicely for general use.

The Air only has external USB... still, I don't need to boot from it. The internal one can continue to do that. Storage is storage for most tasks.

98. wkat4242 ◴[] No.41875496{3}[source]
In our company we see the opposite. 5 years ago all the devs wanted Mac instead of Linux. Now they want to go back.

I think part of the reason is that we manage Mac pretty strictly now but we're getting there with Linux too.

We also tried to get them to use WSL 1 and 2 but they just laugh at it :) And point at its terrible disk performance and other dealbreakers. Can't blame them.

99. wkat4242 ◴[] No.41875505{5}[source]
> while Office 365 AI is like $400+ a year per user

And I'm pretty sure this is only introductory pricing. As people get used to it and use it more, it won't cover the cost. I think they rely on the gym-membership model currently: many people not using the AI features much. But eventually that will change. Also, many companies figured that out and pull the Copilot license from users who don't use it enough.

100. noduerme ◴[] No.41876745{6}[source]
Yes, that was the case. I was being sarcastic. Zuck wrote facebook in PHP and spent millions of dollars then writing a custom interpreter to let his janky code run slightly faster than normal.
101. noduerme ◴[] No.41876758{4}[source]
I like the take that small optimizations, taken together, amount to a moat. I feel like this could be a profoundly understated paradigm.
102. cj ◴[] No.41881549{9}[source]
> they just think you're an idiot.

To be fair, the average employee doesn’t have much more than idiot-level knowledge when it comes to security.

The majority of employees would rather turn off automatic OS updates simply because it's a hassle to restart your computer, because god forbid you lose those 250 Chrome tabs you'll never get around to revisiting!

103. xp84 ◴[] No.41885273{5}[source]
Apple hasn’t shipped any ai features besides betas. I trust the people responsible for the useless abomination that is Siri to deliver a useful ai tool as much as I would trust Joe Biden to win a breakdancing competition.
replies(1): >>41895945 #
104. brookst ◴[] No.41895945{6}[source]
Well, fortunately for all of us, the people delivering client-side ML today are totally different from the people who implemented a server-side rule-based assistant 10 years ago.
105. vrighter ◴[] No.41901108{5}[source]
I have never seen, much less interacted with, a Chromebook. I don't think they're as popular as you think in a lot of places outside the USA.