As another data point, Ian Cutress (of AnandTech) estimated that the M1 would need to be clocked at 3.25 GHz to match Zen 3, and these systems are showing a 3.2 GHz clock: https://twitter.com/IanCutress/status/1326516048309460992
You can check the clock speeds: https://browser.geekbench.com/v5/cpu/4620493.gb5
Up to 5050 MHz is stock behavior for the 5950X, and it's using standard DDR4-3200 memory.
The simpler ARM ISA has advantages in very small / energy-efficient CPUs, since the instruction-decode logic can be smaller, but that advantage becomes increasingly irrelevant as you scale up to bigger, faster cores.
IMHO, these days the ISA's implications for performance and efficiency are overstated.
Noooo. Beyond simply copying instructions 1-to-1, the process is way too involved: it imposes 40-year-old assumptions about the memory model and many other things, which greatly limits the ways you can interact with the CPU, adds to the transistor count, and makes writing efficient compilers really hard.
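To make the memory-model point concrete, here's a minimal C11 sketch (the `publish`/`consume` names are just illustrative): the same release/acquire pair lowers to an ordinary `mov` on x86-64, whose baked-in TSO ordering is already strong enough, but needs `stlr` (or an explicit barrier) on AArch64's weaker model, which is one place those old assumptions show through.

```c
/* Sketch: identical C11 atomics, very different lowering per ISA memory model.
 * x86-64 (TSO): the release store is a plain `mov`.
 * AArch64 (weaker model): compilers emit `stlr`, or a barrier plus `str`. */
#include <stdatomic.h>

atomic_int ready = 0;
int payload = 0;

void publish(int value) {
    payload = value;
    /* release store: publishes `payload` before flipping the flag */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consume(void) {
    /* acquire load pairs with the release store above */
    if (atomic_load_explicit(&ready, memory_order_acquire))
        return payload;
    return -1;
}
```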
I'm probably not the first or last to suggest this, but... it seems awfully tempting to ask: why can't we throw away the concept of maintaining binary compatibility and target some level of "internal" ISA directly (if Intel/AMD could provide such an interface in parallel to the high-level ISA), with the accepted cost of knowing that this ISA will change in not-necessarily-forward-compatible ways between CPU revisions?
From the user's perspective, we'd either end up with more complex binary distribution or need to compile for your own CPU, FOSS-style, when you want to escape the performance limitations of x86.
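For a rough idea of what "compile for your own CPU" already looks like today, here's a sketch (the `vecsum` function is illustrative; `-march=native` is a standard GCC/Clang flag): the compiler may use whatever ISA extensions the build machine exposes, with the accepted cost that the binary isn't guaranteed to run on older CPUs.

```c
/* Build tuned to the local machine, e.g.:
 *   gcc -O3 -march=native vecsum.c -c
 * -march=native lets GCC/Clang assume the host CPU's extensions
 * (e.g. AVX2 where available), so the loop below may be auto-vectorized
 * at the host's SIMD width -- at the cost of portability to older CPUs. */
#include <stddef.h>

long vecsum(const int *x, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}
```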