
283 points walterbell | 5 comments
stevefan1999 No.45768818
Legendary Chip Architect, Jim Keller, Says AMD ‘Stupidly Cancelled’ K12 ARM CPU Project After He Left The Company: https://wccftech.com/legendary-chip-architect-jim-keller-say...

There could be a revival, but for different purposes.

replies(7): >>45769959 #>>45770585 #>>45771421 #>>45772011 #>>45772565 #>>45772778 #>>45773850 #
high_na_euv No.45769959
Funny how some of his projects got cancelled, like K12 at AMD or Royal Core at INTC, and people always act like that was a terrible decision. Yet AMD is up something like 100x on the stock market, and INTC... time will tell.
replies(4): >>45770095 #>>45770287 #>>45771119 #>>45771786 #
StopDisinfo910 No.45771119
That seems completely uncorrelated with what is being discussed, especially considering Intel didn't enter the ARM market either.

It would make much more sense to compare with Qualcomm's trajectory here, as they dominate the high-end ARM SoC market.

Basically, AMD missed the opportunity to be a first mover in a market that is now huge, with a project whose premise Apple proved viable three years after the planned AMD release. Any way you look at it, it seems like a major miss.

The fact that other good decisions in other segments were made at the same time doesn’t change that.

replies(9): >>45771185 #>>45771221 #>>45771239 #>>45771351 #>>45771611 #>>45771841 #>>45772208 #>>45772417 #>>45774680 #
high_na_euv No.45771185
Apple has way stronger leverage than AMD when it comes to forcing "new standards", let's say.

AMD cannot go and tell its customers "hey, we are changing the ISA, go adjust." Their customers would run to Intel.

Apple could do that, and it forced its laptops onto the new ISA. Developers couldn't afford to lose those users, so they adjusted.

replies(3): >>45771244 #>>45772721 #>>45773278 #
pier25 No.45772721
> Their customers would run to Intel.

Data centers and hosting companies are probably the biggest customers buying AMD CPUs, no?

If those companies could lower their energy and cooling costs, that could be a strong incentive to offer ARM servers.

replies(1): >>45773622 #
high_na_euv No.45773622
What kind of difference are we talking about?

1%? 3%? 6%? 10%? 30%?

replies(1): >>45773947 #
pier25 No.45773947
No idea, but it should be significant. AFAIK cooling and energy are the biggest data center costs.
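
To put rough numbers on "significant", here's a back-of-envelope sketch of the yearly power bill per server. Every input (average draw, PUE, electricity price, the size of the hypothetical saving) is an illustrative assumption, not a sourced figure:

    # Back-of-envelope: yearly electricity cost per server, cooling included.
    # All inputs are illustrative assumptions, not measured figures.

    HOURS_PER_YEAR = 8760

    def annual_energy_cost(server_watts, pue=1.4, usd_per_kwh=0.10):
        # PUE (power usage effectiveness) folds cooling and distribution
        # overhead into the IT load; 1.4 is a plausible mid-range value.
        kwh = server_watts / 1000 * pue * HOURS_PER_YEAR
        return kwh * usd_per_kwh

    baseline = annual_energy_cost(500)        # assume 500 W average draw
    improved = annual_energy_cost(500 * 0.8)  # hypothetical 20% saving

    print(f"baseline: ${baseline:,.0f}/server/year")   # ~$613
    print(f"improved: ${improved:,.0f}/server/year")   # ~$491
    print(f"100k-server fleet saves ${(baseline - improved) * 100_000:,.0f}/year")

Under those assumptions, a fleet of 100k servers saves roughly $12M a year from a 20% power cut, so "significant" is plausible. Whether switching ISA can deliver anything like 20% is exactly what the rest of the thread disputes.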
replies(1): >>45776713 #
AnthonyMouse No.45776713
AMD servers are already below 3 watts per core. ARM doesn't actually confer any power advantage. Most ARM processors use less power because they're slower. Apple has a slight advantage because they use TSMC's latest process nodes, but it isn't very large and it isn't because of the ISA.
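
The arithmetic behind that point: watts per core alone tells you nothing until you divide by the work done. A minimal sketch, where all four inputs are made-up placeholders chosen only to show the shape of the comparison, not benchmarks of any real chip:

    # A slower, lower-wattage core can spend the same energy per unit of
    # work as a faster, hotter one. Placeholder numbers throughout.

    def joules_per_task(watts_per_core, tasks_per_sec_per_core):
        # Energy one core burns to complete one task.
        return watts_per_core / tasks_per_sec_per_core

    x86ish = joules_per_task(watts_per_core=3.1, tasks_per_sec_per_core=100)
    armish = joules_per_task(watts_per_core=1.9, tasks_per_sec_per_core=60)

    print(f"x86-ish core: {x86ish:.3f} J/task")  # 0.031
    print(f"ARM-ish core: {armish:.3f} J/task")  # 0.032 -- roughly a wash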
replies(1): >>45786947 #
pier25 No.45786947
I was looking at EPYC chips from 2-3 years ago, and those do consume more like 8-10 W per core, but you're right: the latest EPYC 9005 parts are actually quite efficient.
replies(1): >>45787282 #
AnthonyMouse No.45787282
EPYC 7702(P) is only slightly more than 3 W/core (200 W TDP across 64 cores is 3.125 W), and that's from 2019.

But the newer ones use even less and they're faster.
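
Checking the arithmetic in this subthread (core counts and TDPs are quoted from memory, so verify against AMD's spec pages before relying on them):

    # Watts per core = TDP / core count. TDP is a coarse proxy for real
    # draw, but it's the number these comparisons usually use.

    parts = {
        # name: (cores, TDP in watts) -- from memory, verify before citing
        "EPYC 7702 (2019, Zen 2)":  (64, 200),   # the 3.125 W figure above
        "EPYC 9654 (2022, Zen 4)":  (96, 360),
        "EPYC 9754 (2023, Zen 4c)": (128, 360),
        "EPYC 9965 (2024, Zen 5c)": (192, 500),
    }

    for name, (cores, tdp) in parts.items():
        print(f"{name}: {tdp / cores:.3f} W/core")

On those (assumed) figures, it's the dense-core Zen 4c/5c parts that push W/core down (about 2.8 and 2.6 respectively), while the standard parts spend a bit more per core in exchange for higher per-core performance.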