
499 points baal80spam | 4 comments
bloody-crow ◴[] No.42055016[source]
Surprising it took so long given how dominant the EPYC CPUs were for years.
replies(8): >>42055051 #>>42055064 #>>42055100 #>>42055513 #>>42055586 #>>42055837 #>>42055949 #>>42055960 #
parl_match ◴[] No.42055100[source]
Complicated. Performance per watt was better for Intel, which matters far more when you're running a large fleet. It doesn't matter so much for workstations or gamers, where all that counts is raw performance. Also, the certification and enterprise-management story, etc., wasn't there.

Maybe recent EPYC has caught up? I haven't been following too closely, since it hasn't mattered to me. But both companies' roadmaps were suggesting an AMD pass-by.

Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.

replies(4): >>42055249 #>>42055396 #>>42055438 #>>42056199 #
dhruvdh ◴[] No.42055249[source]
> Performance per watt was better for Intel

No, it's not even close. AMD is miles ahead.

This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...

You can similarly search for Phoronix reviews of the Genoa, Bergamo, and Milan generations (the previous two generations).

replies(1): >>42055467 #
pclmulqdq ◴[] No.42055467[source]
You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.

AMD is still going to win a lot of the time, but Intel is better than it seems.

replies(3): >>42055522 #>>42056435 #>>42058139 #
1. adrian_b ◴[] No.42058139[source]
That is true, but the accelerators are disabled in all cheap SKUs and they are enabled only in very expensive Xeons.

For most users, the accelerators might as well not exist, even though they increase the die area and the cost of all Intel Xeon CPUs.

This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.

All users hate market segmentation, and it is an important reason for preferring AMD CPUs, which are differentiated only by quantitative features (number of cores, clock frequency, cache size), not by qualitative features as Intel's are. With Intel, you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required to run the program.

Intel's marketing has always hoped that by showing off nice features available only in expensive SKUs, it would tempt customers into spending more for the top models. However, any wise customer has preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but overpriced SKUs.

replies(2): >>42061418 #>>42061447 #
2. pclmulqdq ◴[] No.42061418[source]
Wise customers buy the thing that runs their workload with the lowest TCO, and for big customers on some specific workloads, Intel has the best TCO.

Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People seem to generally be buying a mix of vendors based on what they are good at.

replies(1): >>42066461 #
3. CoastalCoder ◴[] No.42061447[source]
I think Intel made a strategic mistake in recent years by segmenting its ISA variants. E.g., the many flavors of AVX-512.

Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.

So often we just build for 1-2 of the most common, baseline versions of an ISA.

Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-load time to notice whether you're going to have a problem with unsupported instructions.

(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)

4. adrian_b ◴[] No.42066461[source]
Intel can offer a low TCO only for the big customers you mention, who buy 10,000+ servers and have the leverage to negotiate big discounts from Intel, buying the CPUs at prices several times lower than their list prices.

On the other hand, for small businesses or individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, up to the Broadwell Xeons, the TCO of an Intel server CPU could be very good even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the prices of the non-crippled Xeon SKUs increased so much that they were no longer a good choice, except for the very big customers who buy them far below the official prices.

The fact that Intel must discount its server CPUs so heavily for big customers likely explains a good part of its huge financial losses over the last few quarters.