
292 points | kaboro | 1 comment
parsimo2010 No.25059497
I accept that the performance of Apple's chips has increased rapidly in the last few years, but the benchmarks they are using to compare against various x86 CPUs make me suspicious that they are cherry-picking benchmarks and not telling the whole story (either in the Stratechery article or the AnandTech article they got the figures from).

Why am I suspicious? THERE IS ABSOLUTELY NO WAY THAT A 5W PART LIKE THE A14 IS FASTER THAN A 100W PART LIKE THE i9-10900K! I understand they are comparing single-threaded speed. I'll accept that the A14 is more power efficient. I'll acknowledge that Intel has been struggling lately. But to imply that a low-power mobile chip is straight up faster than a high-power chip in any category makes me extremely suspicious that the benchmark isn't actually measuring speed (maybe it's normalizing by power draw), or that the ARM and x86 versions of the benchmark have different reference values (i.e. a score of 1000 on ARM doesn't represent the same speed of calculation as a score of 1000 on x86). It just can't be true that a tablet with a total price of $1k can hang with a $500 CPU that has practically unlimited size, weight, and power by comparison, especially when the parts needed to make the desktop comparable in features (motherboard, power supply, monitor, etc.) push its total price above the tablet's.
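To make the normalization worry concrete, here's a toy Python sketch with made-up numbers (none of these come from any actual benchmark): the same two chips rank differently depending on whether you plot raw score or score-per-watt.

    # Toy numbers, NOT real benchmark scores -- only to show how a per-watt
    # metric can flip a ranking even when the raw scores do not.
    chips = {
        "hypothetical 5 W mobile chip":    {"score": 1300, "watts": 5},
        "hypothetical 100 W desktop chip": {"score": 1500, "watts": 100},
    }

    for name, c in chips.items():
        print(f"{name}: raw score {c['score']}, score/watt {c['score'] / c['watts']:.0f}")

    # Raw score favors the desktop part; score/watt favors the mobile part.
    # If a chart labelled "performance" is actually plotting the second
    # column, the low-power chip will always look like the winner.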

Regardless of whether it's an intentional trick or an oversight, I don't believe the benchmark showing the mobile chip beating a desktop chip in RAW PERFORMANCE. And that means a lot of the conclusions they draw from the benchmark aren't true either. There is no way that the A14 (or the M1) is going to be faster in any raw performance category than a latest-generation, top-spec desktop system.

replies(11): >>25059551 #>>25059579 #>>25059583 #>>25059690 #>>25059897 #>>25059901 #>>25060075 #>>25060410 #>>25060485 #>>25063022 #>>25064162 #
anomaloustho No.25059901
Maybe I just need more education in this realm but I’m not sure why a difference in electrical wattage makes it physically impossible for one processor to produce a better result than another processor.

In the presentation, Johny Srouji seemed to place a bigger emphasis on the reduced power consumption than on the speed, saying things like, “this is a big deal” and “this is unheard of”.

In my mind, the wattage argument seems analogous to saying, “There is no way a low-wattage LED bulb will ever outshine a high-wattage filament bulb.” I have assumed that we’ve been able to make leaps and bounds in CPU technology since the dawn of computers while also reducing power consumption.

But maybe there is some critical information I am missing here. I’m certainly no expert and would love to hear more about why the wattage aspect holds weight.

replies(2): >>25060209 #>>25060529 #
johncolanduoni No.25060209
So there is technically such a known limit, due to a link between information-theoretic entropy and thermodynamic entropy: Landauer's principle, which via the second law of thermodynamics puts a lower bound of kT·ln 2 of heat generated per bit erased by a digital circuit. In simpler terms, there is an unavoidable generation of heat when you "throw bits away", which AND and OR gates do (two bits in, one bit out). However, we are several orders of magnitude away from that efficiency bound in today's chips, so your analogy to LED bulbs is more apt than you may realize: LED bulbs are also far from their theoretical maximum efficiency, yet they're still a massive improvement over incandescent bulbs.
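If you want a sense of how big that gap is, here's a back-of-the-envelope calculation. The Landauer figure is straightforward physics; the "modern chip" energy is an assumed order of magnitude for illustration, not a measured value for any particular CPU.

    import math

    # Back-of-the-envelope Landauer-limit numbers, just to show the headroom.
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300              # room temperature, K

    e_landauer = k_B * T * math.log(2)   # minimum heat to erase one bit, ~2.9e-21 J

    # Assumed order of magnitude for a modern CMOS logic switch: roughly a
    # femtojoule per bit operation. This is an illustrative assumption only.
    e_modern = 1e-15

    print(f"Landauer limit per erased bit: {e_landauer:.2e} J")
    print(f"Assumed energy per modern bit-op: {e_modern:.0e} J")
    print(f"Headroom: roughly {e_modern / e_landauer:.0e}x above the bound")

With those assumptions you get something like five to six orders of magnitude of headroom before the thermodynamic floor matters at all.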

If you want to know more about this limitation, I suggest looking at a way of organizing computation that avoids this issue called "reversible computing"[1]. As I said, it won't be of practical significance for classical computing for a long while, but it's actually pretty fleshed out theoretically.

[1]: https://en.wikipedia.org/wiki/Reversible_computing