499 points baal80spam | 25 comments
bloody-crow ◴[] No.42055016[source]
Surprising it took so long given how dominant the EPYC CPUs were for years.
replies(8): >>42055051 #>>42055064 #>>42055100 #>>42055513 #>>42055586 #>>42055837 #>>42055949 #>>42055960 #
1. parl_match ◴[] No.42055100[source]
Complicated. Performance per watt was better for Intel, which matters way more when you're running a large fleet. It doesn't matter so much for workstations or gamers, where all that matters is raw performance. Also, the certification and enterprise management story, etc., wasn't there.

Maybe recent EPYC had caught up? I haven't been following too closely since it hasn't mattered to me. But both companies' roadmaps were suggesting AMD would pass Intel by.

Not surprising at all though; anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.

replies(4): >>42055249 #>>42055396 #>>42055438 #>>42056199 #
2. dhruvdh ◴[] No.42055249[source]
> Performance per watt was better for Intel

No, it's not even close. AMD is miles ahead.

This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...

You can similarly search for Phoronix reviews of the previous generations: Genoa, Bergamo, and Milan.

replies(1): >>42055467 #
3. aryonoco ◴[] No.42055396[source]
Intel lost the performance-per-watt crown with the introduction of the original Epyc in 2017. AMD overtook it in outright performance with Zen 2 in 2019 and hasn't looked back.
4. Hikikomori ◴[] No.42055438[source]
Care to post any proof?
replies(1): >>42055852 #
5. pclmulqdq ◴[] No.42055467[source]
You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.

AMD is still going to win a lot of the time, but Intel is better than it seems.

replies(3): >>42055522 #>>42056435 #>>42058139 #
6. andyferris ◴[] No.42055522{3}[source]
Are generic web server workloads going to use these features? I would assume the bulk of e.g. EC2 spends its time doing boring non-accelerated "stuff".
replies(1): >>42055724 #
7. everfrustrated ◴[] No.42055724{4}[source]
Intel does a lot of work developing SDKs to take advantage of its extra CPU features, and works with the open source community to integrate them so they actually get used.

Their acceleration primitives work with many TLS implementations, nginx, and SSH, among many others.
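
To make that concrete, here's roughly what the TLS integration looks like at the OpenSSL level. A minimal sketch, not a definitive implementation: it assumes QAT_Engine is installed and registers itself under the engine id "qatengine" (the id and availability depend on your install), and it uses the classic ENGINE API, which OpenSSL 3.0 deprecates in favor of providers:

    /* Sketch: route OpenSSL crypto through Intel's QAT_Engine.
       Assumes an installed engine with id "qatengine".
       Build: cc qat_demo.c -lcrypto */
    #include <stdio.h>
    #include <openssl/engine.h>

    int main(void) {
        ENGINE_load_builtin_engines();           /* register available engines */
        ENGINE *e = ENGINE_by_id("qatengine");   /* id is install-specific */
        if (e == NULL) {
            fprintf(stderr, "qatengine not available\n");
            return 1;
        }
        if (!ENGINE_init(e)) {                   /* get a functional reference */
            ENGINE_free(e);
            return 1;
        }
        /* Prefer the engine for everything it implements (RSA, etc.) */
        ENGINE_set_default(e, ENGINE_METHOD_ALL);
        printf("using engine: %s\n", ENGINE_get_name(e));
        ENGINE_finish(e);
        ENGINE_free(e);
        return 0;
    }

If I remember right, nginx exposes the same thing as a one-liner in the main config (ssl_engine qatengine;), plus Intel's async patches if you want the full benefit.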

Possibly AMD is doing something similar, but I'm not aware of it.

replies(2): >>42056013 #>>42056180 #
8. parl_match ◴[] No.42055852[source]
idk, go look at the Xeon versus equivalent AMD benchmarks. They've been converging, although AMD's datacenter offerings were always a little behind their consumer parts.

this is one of those things where there's a lot of money on the line, and people are willing to do the math.

the fact that it took this long should tell you everything you need to know about the reality of the situation

replies(3): >>42056015 #>>42056024 #>>42056114 #
9. pclmulqdq ◴[] No.42056013{5}[source]
AMD is not doing similar stuff yet.
10. Tuna-Fish ◴[] No.42056015{3}[source]
Sorry, but everything about this is wrong.

AMD has had the power efficiency crown in the data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.

The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.

> idk go look at the xeon versus amd equivalent benchmarks.

They all show AMD with a strong lead in power efficiency for the past 5 years.

11. p1necone ◴[] No.42056024{3}[source]
Are you looking at UserBenchmark? They are not even slightly reliable.
replies(2): >>42056646 #>>42057191 #
12. Hikikomori ◴[] No.42056114{3}[source]
I know what the benchmarks are like; I wish you would go and update your knowledge. If we take cloud pricing as a comparison, it's cheaper to use AMD. Think they're doing some math?
13. kkielhofner ◴[] No.42056180{5}[source]
ICC, IPP, QAT, etc. are definitely an edge.

In the AI world they have OpenVINO, Intel Neural Compressor, and a slew of other implementations that typically offer dramatic performance improvements.

As we see with AMD trying to compete with Nvidia, software matters. A lot.

14. xcv123 ◴[] No.42056199[source]
Outdated info. AMD/TSMC has beaten Intel at efficiency for years. Intel has fallen behind. We need them to catch up and provide strong competition.

Intel has just been removed from the Dow index. They are underperforming on multiple levels.

https://apnews.com/article/dow-intel-nvidia-sherwinwilliams-...

15. kimixa ◴[] No.42056435{3}[source]
But those accelerators are also available for AMD platforms - even if how they're provided is a bit different (often on add-in cards instead of a CPU "tile").

And things like the MI300A mean that isn't really a requirement now either.

replies(1): >>42057867 #
16. wongogue ◴[] No.42056646{4}[source]
He is so biased against AMD that PC-building forums and even Intel forums have banned that site.
17. ChoGGi ◴[] No.42057191{4}[source]
Oh thanks for the reminder! I gotta go read their 9800x3d review, I'm always up for a good laugh.

Edit: awww no trash talking it yet, unlike the 7800x3d :)

18. pclmulqdq ◴[] No.42057867{4}[source]
They are not, at the moment. Google "QAT" for one example - I'm not talking about GPUs or other add-in cards at all.
replies(1): >>42060419 #
19. adrian_b ◴[] No.42058139{3}[source]
That is true, but the accelerators are disabled in all cheap SKUs and they are enabled only in very expensive Xeons.

For most users it is as if the accelerators do not exist, even though they increase the die area and the cost of all Intel Xeon CPUs.

This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.

All users hate market segmentation, and it is an important reason for preferring AMD CPUs. AMD differentiates its SKUs only on quantitative features, like number of cores, clock frequency, or cache size. Intel differentiates on qualitative features too, so you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required for running the program.

Intel's marketing has always hoped that showcasing nice features available only in expensive SKUs will tempt customers into spending more on the top models. However, many wise customers have preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but overpriced ones.

replies(2): >>42061418 #>>42061447 #
20. Tuna-Fish ◴[] No.42060419{5}[source]
You might not be, but the parent poster is.

QAT is an integrated offering from Intel, but there are competing products delivered as add-in cards for most of the things it does, and they have more market presence than QAT. As such, QAT provides much less of an advantage than Intel marketing makes it seem. Yes, Xeon (including QAT) is better than bare Epyc, but Epyc plus a third-party accelerator beats it handily, especially on cost; the appearance of QAT seems to have spooked the vendors, and prices came down a lot.

replies(2): >>42060894 #>>42060988 #
21. ◴[] No.42060894{6}[source]
22. tecleandor ◴[] No.42060988{6}[source]
I've only used a couple of QAT accelerators and I don't know that field well... What relatively easy-to-use and not-super-expensive accelerators are out there?
23. pclmulqdq ◴[] No.42061418{4}[source]
Wise customers buy the thing that runs their workload with the lowest TCO, and for big customers on some specific workloads, Intel has the best TCO.

Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People generally seem to buy a mix of vendors based on what each is good at.

replies(1): >>42066461 #
24. CoastalCoder ◴[] No.42061447{4}[source]
I think Intel made a strategic mistake in recent years by segmenting its ISA variants. E.g., the many flavors of AVX-512.

Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.

So often we just build for 1-2 of the most common, baseline versions of an ISA.

Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-loading time to notice if you're going to have a problem with unsupported instructions.

(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
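
For what it's worth, the usual workaround is runtime dispatch inside the program. A minimal sketch using GCC/Clang's CPU-detection builtins; the kernel_* variants are hypothetical stand-ins for whatever code you'd actually specialize:

    /* Sketch: pick an ISA-specific code path at runtime.
       kernel_avx512 / kernel_avx2 / kernel_baseline are placeholder
       variants; the target attribute compiles each for its ISA. */
    #include <stdio.h>

    __attribute__((target("avx512f")))
    static void kernel_avx512(void) { puts("AVX-512 path"); }

    __attribute__((target("avx2")))
    static void kernel_avx2(void) { puts("AVX2 path"); }

    static void kernel_baseline(void) { puts("baseline x86-64 path"); }

    int main(void) {
        __builtin_cpu_init();  /* populate the CPU feature table */
        if (__builtin_cpu_supports("avx512f"))
            kernel_avx512();
        else if (__builtin_cpu_supports("avx2"))
            kernel_avx2();
        else
            kernel_baseline();
        return 0;
    }

GCC's target_clones attribute can generate this dispatch automatically, and glibc's hwcaps directories do something similar for shared libraries, but as far as I know nothing stops a plain binary from dying with SIGILL if it assumed the wrong baseline.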

25. adrian_b ◴[] No.42066461{5}[source]
Intel can offer a low TCO only to the big customers you mention, who buy 10,000+ servers and have the leverage to negotiate big discounts from Intel, buying the CPUs at prices several times lower than their list prices.

On the other hand, for small businesses or individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, up to the Broadwell Xeons, the TCO of Intel server CPUs could be very good, even when bought at retail for a single server. However, starting with the Skylake Server Xeons, the price of the non-crippled Xeon SKUs increased so much that they are no longer a good choice, except for the very big customers who buy them far below the official prices.

The fact that Intel must discount its server CPUs so heavily for the big customers likely explains a good part of its huge financial losses in recent quarters.