The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that's why AMD has been more successful now than it was in the Opteron era.
Maybe recent EPYC generations have caught up? I haven't been following too closely since it hasn't mattered to me. But both companies were signaling that AMD would pass by.
Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.
No, it's not even close. AMD is miles ahead.
This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...
You can similarly search for Phoronix reviews of the Genoa, Bergamo, and Milan parts (the previous two generations; Genoa and Bergamo are the same generation).
AMD is still going to win a lot of the time, but Intel is better than it seems.
On the other hand, AMD has been very conservative with its EPYC sales forecasts.
The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.
Their acceleration primitives work with many TLS implementations, with nginx, and with SSH, amongst many others.
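For illustration, a minimal sketch of routing OpenSSL's crypto through QAT via the ENGINE API (OpenSSL 1.1.x). This assumes Intel's QAT_Engine plugin is installed and registered (e.g. via openssl.cnf) under the engine id "qatengine"; in real deployments nginx does the equivalent through its ssl_engine setting:

  /* Sketch: make QAT the default crypto provider in this process.
     Assumes the QAT_Engine plugin is registered as "qatengine". */
  #include <stdio.h>
  #include <openssl/engine.h>

  int main(void) {
      ENGINE_load_builtin_engines();
      ENGINE *e = ENGINE_by_id("qatengine");
      if (e == NULL || !ENGINE_init(e)) {
          fprintf(stderr, "QAT engine unavailable, staying on software crypto\n");
          return 1;
      }
      /* Offload everything the engine supports (RSA, EC, ...). */
      ENGINE_set_default(e, ENGINE_METHOD_ALL);
      printf("crypto offloaded to: %s\n", ENGINE_get_name(e));
      /* set_default keeps its own references, so we can drop ours. */
      ENGINE_finish(e);
      ENGINE_free(e);
      return 0;
  }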
Possibly AMD is doing something similar, but I'm not aware of it.
this is one of those things where there's a lot of money on the line, and people are willing to do the math.
the fact that it took this long should tell you everything you need to know about the reality of the situation
AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.
The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.
> idk go look at the xeon versus amd equivalent benchmarks.
They all show AMD with a strong lead in power efficiency for the past 5 years.
In the AI world they have OpenVINO, Intel Neural Compressor, and a slew of other implementations that typically offer dramatic performance improvements.
As we see with AMD trying to compete with Nvidia, software matters - a lot.
Intel has just been removed from the Dow index. They are underperforming on multiple levels:
https://apnews.com/article/dow-intel-nvidia-sherwinwilliams-...
And things like the MI300A mean that isn't really a requirement now either.
For most users it is as if the accelerators do not exist, even though they increase the die area and the cost of every Intel Xeon CPU.
This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.
All users hate market segmentation, and it is an important reason for preferring AMD CPUs, which are differentiated only by quantitative features such as core count, clock frequency, and cache size. Intel CPUs are also differentiated by qualitative features, so you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features the program requires.
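In practice "different program variants" usually means runtime dispatch inside the application. A minimal sketch using GCC/Clang builtins (the kernel names are illustrative placeholders, standing in for real intrinsics-based code paths):

  /* Sketch: one binary, two code paths, chosen by what the SKU provides. */
  #include <stdio.h>

  /* Placeholder for a kernel built with AVX-512 intrinsics. */
  static void kernel_avx512(void)   { puts("AVX-512 path"); }
  /* Placeholder for a baseline x86-64 fallback kernel. */
  static void kernel_baseline(void) { puts("baseline x86-64 path"); }

  int main(void) {
      __builtin_cpu_init();
      if (__builtin_cpu_supports("avx512f"))
          kernel_avx512();
      else
          kernel_baseline();
      return 0;
  }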
Intel's marketing has always hoped that by reserving nice features for the expensive SKUs it would entice customers into paying more for the top models. However, any wise customer has preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but overpriced ones.
QAT is an integrated offering from Intel, but there are competing products delivered as add-in cards for most of what it does, and they have more market presence than QAT. As such, QAT provides much less advantage to Intel than Intel's marketing makes it seem. Yes, a Xeon (including QAT) beats a bare Epyc, but an Epyc plus a third-party accelerator beats it handily, especially on cost; the appearance of QAT seems to have spooked the accelerator vendors, and their prices came down a lot.
Market segmentation sucks, but people buying 10,000+ servers do not choose based on which vendor gives them better vibes. They seem to generally buy a mix of vendors based on what each is good at.
Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.
So often we just build for 1-2 of the most common, baseline versions of an ISA.
Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they require. So it's not easy at program-load time to notice that you're going to have a problem with unsupported instructions.
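Compilers do offer a partial workaround: function multi-versioning, where one binary carries several clones of a hot function and an ifunc resolver picks the best one at load time. A sketch for GCC/Clang on x86-64 (the target list here is just an example):

  /* Sketch: the compiler emits default/AVX2/AVX-512 clones of dot() plus
     a resolver that selects among them at load time (STT_GNU_IFUNC). */
  #include <stdio.h>

  __attribute__((target_clones("default", "avx2", "avx512f")))
  double dot(const double *a, const double *b, int n) {
      double s = 0.0;
      for (int i = 0; i < n; i++)
          s += a[i] * b[i];
      return s;
  }

  int main(void) {
      double a[] = {1, 2, 3, 4}, b[] = {4, 3, 2, 1};
      printf("%f\n", dot(a, b, 4));
      return 0;
  }

This only covers the functions you annotate, though; the rest of the binary still has to target a baseline ISA.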
(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
On the other hand, for small businesses and individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, up through the Broadwell Xeons, the TCO of an Intel server CPU could be very good, even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the prices of the non-crippled SKUs increased so much that they were no longer a good choice, except for the very big customers who pay far below the official prices.
The fact that Intel must discount its server CPUs so heavily for the big customers likely explains a good part of its huge financial losses over the last few quarters.