
1080 points antipaul | 15 comments
1. nichch ◴[] No.25065659[source]
How am I supposed to interpret this? A MacBook Air surpasses my i7-8700k in single-core and (almost) multi-core performance?
replies(1): >>25065673 #
2. minxomat ◴[] No.25065673[source]
Yes, in fact, the A14 (iPhone 12) already surpassed most Intel chips: https://images.anandtech.com/doci/16226/perf-trajectory_575p...

Intel is now #3

replies(1): >>25065717 #
3. FartyMcFarter ◴[] No.25065717[source]
A modern mobile CPU with a TDP of 6 watts is beating a modern desktop CPU with a TDP of 125 watts? Is it just me, or does this seem too good to be true?
replies(4): >>25065751 #>>25065766 #>>25065784 #>>25065798 #
4. minxomat ◴[] No.25065751{3}[source]
It's a mobile CPU with many silicon advantages (the widest decoder in the industry, memory located closer to the cores, the deepest re-order buffer of any CPU, and much more), plus a sane ISA and an optimized OS. So yeah, you're seeing the benefit of Apple's integration. That's why even the AnandTech page calls that graph "absurd": it seems unreal, but it's real.
replies(1): >>25065796 #
5. kristofferR ◴[] No.25065766{3}[source]
5nm vs 14nm is the most easily explainable reason.
replies(1): >>25065847 #
6. wmf ◴[] No.25065784{3}[source]
It is true. Note that a single core can only use ~20W, so high TDPs only matter for multi-core workloads.
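A quick sanity check of that claim in Python (a sketch; the ~20W and 125W figures are the ones quoted in this thread, not measurements):

    # Rough check using the figures quoted in this thread.
    single_core_w = 20   # approximate maximum draw of a single core
    tdp_w = 125          # desktop chip TDP
    print(f"TDP only becomes the limit past ~{tdp_w / single_core_w:.0f} active cores")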
7. refulgentis ◴[] No.25065796{4}[source]
Gotta be frank because it's not getting through: you're jumping way ahead here. Every time one of these threads comes up, there's an ever-increasing number of people who vaguely remember reading a story about Apple Geekbench numbers and therefore find this one credible too - I used to be one of those people. This has been going on regularly for 3-4 years now, and your interlocutor, as well as other comments on this article, is correct: comparing x86 versus ARM on Geekbench is nonsensical due to the way Geekbench eliminates thermal concerns and the impact of sustained load. Your iPhone can't magically edit video or compile code faster than an i5.
replies(2): >>25065827 #>>25066030 #
8. kllrnohj ◴[] No.25065798{3}[source]
TDP lost its meaning years and years ago, and power usage isn't linear. The extra 100-300MHz at the top end has a huge impact on power.

Check out for example the per core power charts that Anandtech does: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

Compare, for example, the single-core power numbers between the chips. The 5600X draws 11W at 4.6GHz on one core, whereas the other two chips boost higher, hitting 4.8-4.9GHz single-core turbos, but it costs them 17-18W to do it. That's a huge increase in power for the last 1-2% of performance. So you really can't, or shouldn't, compare more power-conscious configurations with the top-end desktop parts, where power is effectively unlimited and well worth spending for even single-digit percentage gains.

And then of course you should also note that the single-core power draw in all of those cases is vastly lower than the TDP numbers (65W for the 5600X, and 125W for the 5800X/5900X).
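To put numbers on that nonlinearity, a small sketch using the figures quoted above (the clock/wattage pairs are as stated in this comment; which sibling chip hits which turbo is approximate):

    # Marginal cost of the last few hundred MHz, per the quoted figures.
    base_ghz, base_w = 4.6, 11.0   # 5600X single-core boost
    high_ghz, high_w = 4.9, 18.0   # higher-boosting sibling, single-core
    print(f"{(high_ghz / base_ghz - 1) * 100:.0f}% more clock costs "
          f"{(high_w / base_w - 1) * 100:.0f}% more power")

That works out to roughly 7% more clock for about 64% more power.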

replies(1): >>25069775 #
9. minxomat ◴[] No.25065827{5}[source]
My comment, and this specific thread, isn't even about Geekbench. The graph I linked uses SPEC instead of GB5. The gigantic architectural deep dive over on AnandTech even includes a discussion of the strengths and limits of both benchmarks, and of how they make sense in light of further micro-architecture testing.

The reason that graph doesn't include the A14 Firestorm -> M1 jump was simply timing. We know the thermal envelopes of the M1 and the cooling designs, and we now have clock info thanks to GB5, so yes, the data is pretty solid. No one is saying that the iPhone beats the Mac (or a PC) on performance when you consider the whole system, just that the CPU architecture can and will deliver higher performance given the M1's clocks, thermals and cooling. Remember that the A14/M1 CPUs are faster at lower clock speeds.
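To illustrate that last point, a toy model (the IPC values below are hypothetical stand-ins; only the idea that a wider core can win at a lower clock comes from this thread):

    # Toy model: performance = IPC x clock. The IPC values are
    # hypothetical placeholders for a wide core vs a narrow one.
    def perf(ipc, ghz):
        return ipc * ghz

    wide_slow = perf(8, 3.2)      # M1-style: very wide core, modest clock
    narrow_fast = perf(4, 5.0)    # desktop-style: narrower core, high clock
    print(wide_slow, narrow_fast)  # 25.6 vs 20.0: the wide core wins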

10. kllrnohj ◴[] No.25065847{4}[source]
It's also the most wrong explanation. The actual performance/efficiency gap between those processes isn't that drastic. The M1's power efficiency comes from better IPC, such that it simply doesn't have to clock as high to be competitive.

That's why the A14 only runs at a 1.8GHz base and 3GHz boost - that's how it keeps power consumption low. And similarly, Intel pushing 5GHz is why its power consumption is high.

TSMC's 5nm does have a raw transistor performance-per-watt advantage, but it's not huge.
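The mechanism behind this is the usual dynamic-power relation, P = C·V²·f, where voltage also has to rise as you chase clock. A minimal sketch with illustrative (not measured) voltages:

    # Dynamic power: P = C * V^2 * f. Voltage must rise with clock near
    # the top of the range, so power grows much faster than performance.
    def power(f_ghz, volts, c=1.0):
        return c * volts ** 2 * f_ghz

    low = power(3.0, 0.85)    # A14-like boost clock at a low voltage
    high = power(5.0, 1.35)   # a 5GHz desktop clock at a high voltage
    print(f"{5.0 / 3.0:.2f}x the clock costs {high / low:.1f}x the power")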

11. raydev ◴[] No.25066030{5}[source]
Well, we have this evidence so far, on a phone that has no active cooling: https://twitter.com/tldtoday/status/1326610187529023488
replies(1): >>25066100 #
12. refulgentis ◴[] No.25066100{6}[source]
That's comparing a hardware encoder to a software one, unfortunately, as the replies note.

It's unfortunately drowned out on Google by the CPU throttling scandal, but it's well known in AR dev (and if you get to talk to an Apple engineer away from the stage lights at WWDC) that you have to proactively tune for performance, or you'll get killed after a minute or two by thermal throttling.

replies(1): >>25066921 #
13. chrismorgan ◴[] No.25066921{7}[source]
This raises the question of just why the Mac is doing software encoding. I think the hardware it's running on should have two compatible hardware encoders: one on the CPU and one on the GPU. Is the software being used incapable of hardware encoding? Does it default to software encoding because of its higher quality per bit? Was it configured to use software encoding (whether ignorantly or deliberately)?
replies(1): >>25069970 #
14. imtringued ◴[] No.25069775{4}[source]
>https://images.anandtech.com/doci/16214/PerCore-1-5950X.png

Yeah, comparing TDP is meaningless even within the same processor. The 4-core workload in this chart uses 94W and the 16-core workload uses 98W. There is also an anomaly at 5 cores, where the CPU uses less power than with only 4 cores active.

If you tried to derive conclusions about the CPU's power efficiency from TDP, you would end up making statements like "this CPU is 3-4 times more power efficient than itself".
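Spelling out that arithmetic with the package-power figures quoted above:

    # Per-core power from the quoted package figures.
    w_4, w_16 = 94, 98         # watts at 4 and 16 active cores
    per_core_4 = w_4 / 4       # 23.5 W per core
    per_core_16 = w_16 / 16    # ~6.1 W per core
    print(f"{per_core_4 / per_core_16:.1f}x 'more efficient than itself'")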

15. imtringued ◴[] No.25069970{8}[source]
Video encoding is generally done on CPUs because they can run more complicated video encoding algorithms with multiple passes, which generally results in smaller video files at the same quality. As you increase the compute intensity of the video encoder you get diminishing returns: a 30% lower bitrate might need 10x as much CPU time. That tweet says more about the type of encoder and the chosen encoder settings than about the hardware.

Imagine going on a hike and climbing an exponential slope shaped like 2^x. You go up to x=4 and then come down again, and you repeat this three times, so you have hiked 12km (4·3) in total. Then there is an athlete who goes straight up to x=8. He says he has hiked 8km, and you laugh at him because of how sweaty he is despite having walked a shorter distance than you. But in terms of climbing, 3·2^4 (48) is nowhere near 2^8 (256). The athlete put in far more effort than you.
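A sketch of the diminishing returns described above (the numbers echo this comment's rough 30%-lower-bitrate-for-10x-CPU figure and are illustrative only):

    # Illustrative presets: (name, relative CPU time, relative bitrate).
    presets = [
        ("hardware encoder",   1, 1.00),
        ("software, fast",    10, 0.85),
        ("software, slow",   100, 0.70),  # ~10x more CPU for ~30% less bitrate
    ]
    for name, cpu, bitrate in presets:
        print(f"{name:16s} cpu={cpu:4d}x  bitrate={bitrate:.0%}")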