As another data point, Ian (of AnandTech) estimated that the M1 would need to be clocked at 3.25 GHz to match Zen 3, and these systems are showing a 3.2 GHz clock: https://twitter.com/IanCutress/status/1326516048309460992
You can check the clock speeds: https://browser.geekbench.com/v5/cpu/4620493.gb5
Up to 5050 MHz is stock behavior for the 5950X, and it's using standard DDR4-3200 memory.
The simpler ARM ISA has advantages in very small, energy-efficient CPUs, since the instruction-decode logic in silicon can be smaller, but this advantage becomes increasingly irrelevant as you scale to bigger, faster cores.
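To make the decode point concrete, here's a minimal sketch in C++ (the encodings and function names are made up for illustration, not any real ISA's rules): with a fixed 4-byte encoding every instruction boundary is known up front, so a wide decoder can work on many instructions in parallel, whereas with a variable-length encoding each boundary depends on length-decoding the previous instruction.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Fixed 4-byte encoding (ARM64-style): boundaries are simply 0, 4, 8, ...
    // A wide decoder can start on N instructions at once without inspecting bytes.
    std::vector<size_t> boundaries_fixed(const std::vector<uint8_t>& code) {
        std::vector<size_t> out;
        for (size_t off = 0; off + 4 <= code.size(); off += 4)
            out.push_back(off);
        return out;
    }

    // Variable-length encoding (x86-style, 1-15 bytes): the start of instruction
    // i+1 is only known after instruction i has been length-decoded, which is why
    // wide x86 front ends spend extra transistors on predecode/length marking.
    std::vector<size_t> boundaries_variable(const std::vector<uint8_t>& code) {
        std::vector<size_t> out;
        size_t off = 0;
        while (off < code.size()) {
            out.push_back(off);
            size_t len = 1 + (code[off] & 0x7);  // made-up length rule for illustration
            off += len;
        }
        return out;
    }

That extra serial dependency is the kind of overhead that matters a lot in a tiny core and proportionally less in a big out-of-order one.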
IMHO, the performance and efficiency implications of the ISA are overstated these days.
Noooo. Beyond simply copying instructions 1-to-1, the process is way too involved: it imposes 40-year-old assumptions about the memory model and many other things, which greatly limits the ways you can interact with the CPU, adds to the transistor count, and makes writing efficient compilers really hard.
Rather, I suspect that the main benefit the M1 has in many real-world benchmarks is its on-package memory. Cache-miss latency is a huge cost in the real world (which is why games have drifted towards data-oriented design internals), so largely sidestepping that issue by putting the memory right next to the die gives it a great boost.
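As a rough sketch of the data-oriented point (nothing M1-specific, just illustrating why cache behaviour dominates): iterating over tightly packed arrays of exactly the fields you need touches far fewer cache lines than dragging whole "fat" objects through the cache.

    #include <cstddef>
    #include <vector>

    // Array-of-structs: updating positions pulls the whole ~88-byte struct
    // through the cache even though only 24 bytes of it are touched.
    struct EntityAoS {
        float pos[3];
        float vel[3];
        char  name[64];  // cold data that still occupies cache lines
    };

    void update_aos(std::vector<EntityAoS>& es, float dt) {
        for (auto& e : es)
            for (int i = 0; i < 3; ++i)
                e.pos[i] += e.vel[i] * dt;
    }

    // Struct-of-arrays (the data-oriented layout): positions and velocities are
    // contiguous, so nearly every byte fetched from memory is actually used.
    struct EntitiesSoA {
        std::vector<float> px, py, pz;
        std::vector<float> vx, vy, vz;
    };

    void update_soa(EntitiesSoA& es, float dt) {
        for (size_t i = 0; i < es.px.size(); ++i) {
            es.px[i] += es.vx[i] * dt;
            es.py[i] += es.vy[i] * dt;
            es.pz[i] += es.vz[i] * dt;
        }
    }

Faster, closer memory attacks the same problem from the hardware side instead of the data-layout side.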
I'm betting that once they've reverse-engineered the M1's performance, we'll see multi-GB caches on AMD/Intel chips within 4 years.
This cannot be implemented in AMD's current 7 nm process due to die-size restrictions.
The SoC side of the story also runs contrary to the core design of a general-purpose CPU. RAM, GPUs, and expansion cards for specialised tasks are already covered by third-party products on the PCIe and USB4 buses, and AMD has no interest in cannibalising its GPU and console business...
With their upcoming discrete GPUs and accelerator cards, Intel might be in the same boat w.r.t. SoC design.
I'm probably not the first or last to suggest this, but... it seems awfully tempting to ask: why can't we throw away the concept of maintaining binary compatibility and target some level of "internal" ISA directly (if Intel/AMD could provide such an interface in parallel to the high-level ISA)... with the accepted cost of knowing that ISA will change in not-necessarily-forward-compatible ways between CPU revisions?
From the user's perspective, we'd either end up with more complex binary distribution, or with compiling for your own CPU, FOSS-style, whenever you want to escape the performance limitations of x86.
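We already live a mild version of that trade-off today when building from source: targeting the exact CPU you're on instead of a generic baseline. A small sketch (the function is a made-up example; -march=native is a standard GCC/Clang flag):

    // Hypothetical example: with -march=native the compiler may auto-vectorize
    // this with whatever SIMD width the local CPU supports (SSE2/AVX2/AVX-512),
    // and the resulting binary may no longer run on older CPUs.
    //
    //   g++ -O2 -c sum.cpp                 # portable x86-64 baseline
    //   g++ -O2 -march=native -c sum.cpp   # tuned to, and only for, this machine
    float sum(const float* xs, int n) {
        float acc = 0.0f;
        for (int i = 0; i < n; ++i)
            acc += xs[i];
        return acc;
    }

Exposing the actual internal ISA would just push that same forward-compatibility trade-off one level deeper.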
Back then, Intel was still betting on Itanium; it was a time when AMD was ahead of Intel. Wintel lasted longer, and it's only since the smartphone revolution that it has been caught up with. In hindsight, even a Windows computer on Intel gave a user more freedom than the locked-down stuff on, say, iOS. OTOH, sometimes user freedom is a bad thing, arguably when the user isn't technically inclined, or when you can sell a locked-down platform like a PlayStation or Xbox for relatively cheap (kind of like the printer business).
I'm sure other people can add to this as well. :-)