172 points yatrios | 11 comments
hinkley ◴[] No.42184326[source]
> Optimization strategies have shifted from simple power, performance, and area (PPA) metrics to system-level metrics, such as performance per watt. “If you go back into the 1990s, 2000s, the road map was very clear,”

Tell me you work for Intel without telling me you work for Intel.

> says Chris Auth, director of advanced technology programs at Intel Foundry.

Yeah, that’s what I thought. The breathlessness of Intel figuring out things that everyone else figured out twenty years ago doesn’t bode well for their future recovery. They will continue to be the laughingstock of the industry if they can’t find more self-reflection than this.

Whether this is their public-facing or internal philosophy hardly matters. Over this sort of time frame most companies come to believe their own PR.

replies(1): >>42184775 #
talldayo ◴[] No.42184775[source]
Intel has had a few bad years, but frankly I feel like they could fall a lot lower. They aren't as down bad as AMD was during the Bulldozer years, or Apple during the PowerPC years, or even Samsung's early Exynos chipsets. The absolute worst thing they've done in the past 5 years was fab on TSMC silicon, which half the industry is guilty of at this point.

You can absolutely shoot your feet off trying to modernize too quickly. Intel will be the laughingstock if 18A never makes it to market and their CPU designs start losing in earnest to their competitors. But right now, in a relative sense, Intel isn't even down for the count.

replies(1): >>42185024 #
buildbot ◴[] No.42185024[source]
Intel has failed pretty badly IMO. Fabbing at TSMC might actually have been a good idea, except that every other component of Arrow Lake is problematic: huge tile-to-tile latencies, special chiplets that aren’t reusable in any other design, removal of hyperthreading, etc. Intel’s last-gen CPU is in general faster than the new gen due to all the various issues.

And that’s just the current product! The last two gens are unreliable, quickly killing themselves with too high voltage and causing endless BSODs.

The culture and methods of ex-Intel people at the management level are telling as well, from my experiences at my last job at least.

(My opinions are my own, not my current employer’s, & a lot of ex-Intel people are awesome!)

replies(2): >>42185161 #>>42185632 #
1. talldayo ◴[] No.42185161[source]
We'll see. I mostly object to the "vultures circling" narrative that HN seems to be attached to. Intel's current position is not unprecedented, and people have been speculating that Intel would have a rough catch-up since the "14nm+++" memes were in vogue. But they still have their fabs (and wisely spun them out into their own business), and their chip designs, while pretty faulty, successfully brought x86 to the big.LITTLE core arrangement. They've beaten AMD to the punch on a number of key technologies, and while I still think AMD has the better mobile hardware, the desktop stuff still feels like a toss-up. Server stuff... glances at Gaudi and Xeon, then at Nvidia ...let's not talk about server stuff.

A lot of hopes and promises are riding on 18A being the savior for both Intel Foundry Services and the Intel chips wholesale. If we get official confirmation that it's been cancelled so Intel can focus on something else then it will signal the end of Intel as we know it.

replies(3): >>42185623 #>>42185945 #>>42186096 #
2. Tostino ◴[] No.42185623[source]
I mean, they were on 14 nm until just about 2022; those memes didn't come from nowhere. And that's not even that long ago.
replies(1): >>42194280 #
3. phkahler ◴[] No.42185945[source]
>> successfully brought x86 to the big.LITTLE core arrangement.

Really? I thought they said using e-cores would be better than hyperthreading. AMD has doubled down on hyperthreading, putting a second decoder in each core that doesn't directly benefit single-thread perf. So Intel's 24 cores are now competitive with (actually losing to) 16 Zen 5 cores. And that's without using AVX-512, which Arrow Lake doesn't even support.

I was never a fan of big.LITTLE for desktop or even laptops.

replies(2): >>42187836 #>>42188036 #
4. selimthegrim ◴[] No.42186096[source]
What else would they focus on?
5. astrange ◴[] No.42187836[source]
It's working well for Mac laptops, although I'd rather people call it "asymmetric multiprocessing" than "big.LITTLE". Why is it written like that anyway?

(Wikipedia seems to want me to call it "heterogeneous computing", but that doesn't make sense - surely that term should mean running on CPU+GPU at the same time, or multiple different ISAs.)

Of course, it might've worked fine if they used symmetric CPU cores as well. Hard to tell.

replies(2): >>42188046 #>>42195831 #
6. hinkley ◴[] No.42188036[source]
In nearly every generation of Intel chip where I needed to care about whether hyperthreading was a net positive, it either proved to be a net reduction in throughput or a single-digit improvement with greatly increased jitter. Even if you manage to get more instructions per cycle with it on, the variability causes grief for systems you have or want telemetry on. I kind of wonder why they keep trying.

I don’t know AMD well enough to say whether it works better for them.
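
For anyone who wants to test this themselves on Linux: SMT can be inspected and toggled at runtime through sysfs. A minimal Python sketch, assuming a kernel that exposes /sys/devices/system/cpu/smt (4.19 or newer):

    # Report SMT status and hyperthread siblings on Linux.
    from pathlib import Path

    control = Path("/sys/devices/system/cpu/smt/control")
    print("SMT:", control.read_text().strip())  # "on", "off", "forceoff", or "notsupported"

    # Logical CPUs sharing a physical core show up as sibling pairs.
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                      key=lambda p: int(p.name[3:])):
        siblings = cpu / "topology" / "thread_siblings_list"
        if siblings.exists():
            print(cpu.name, "->", siblings.read_text().strip())

    # To benchmark with SMT off (needs root):
    #   echo off > /sys/devices/system/cpu/smt/control

Toggling the control between runs lets you compare throughput and jitter without a reboot or BIOS changes.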

replies(1): >>42194163 #
7. hinkley ◴[] No.42188046{3}[source]
I thought big.LITTLE was an Arm thing. The little Rockchip chips I have running Armbian have it.
8. adrian_b ◴[] No.42194163{3}[source]
Whether SMT a.k.a. hyperthreading is useful or not depends greatly on the application.

There is one important application for which SMT is almost always beneficial: the compilation of a big software project, where hundreds or thousands of files are compiled concurrently. Depending on the CPU, the project build time without SMT is usually at least 20% greater than with SMT, and for some CPUs up to 30% greater.

For applications that spend much of their time executing carefully optimized loops, SMT is usually detrimental. For instance, on a Zen 3 CPU, running multithreaded Geekbench 6 with SMT disabled improves the benchmark results by a few percent.
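
A rough way to reproduce this kind of comparison is to time the same clean build with SMT toggled between runs. A sketch (Linux, needs root; "make" here is a stand-in for whatever your project's build command is):

    # A/B-time a parallel build with SMT on vs. off (Linux, run as root).
    import os, subprocess, time

    SMT = "/sys/devices/system/cpu/smt/control"

    def timed_build(state):
        with open(SMT, "w") as f:            # toggle SMT at runtime
            f.write(state)                   # "on" or "off"
        subprocess.run(["make", "clean"], check=True)
        jobs = len(os.sched_getaffinity(0))  # logical CPUs currently usable
        start = time.monotonic()
        subprocess.run(["make", f"-j{jobs}"], check=True)
        return time.monotonic() - start

    for state in ("on", "off"):
        print(f"SMT {state}: {timed_build(state):.1f} s")

Note that the job count is recomputed per run, since turning SMT off removes the sibling CPUs from the affinity mask.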

9. adrian_b ◴[] No.42194280[source]
Their last 14 nm launch was Rocket Lake, the desktop CPU of the year 2020/2021.

The next three years, 2021/2022, 2022/2023 and 2023/2024, were dominated by "10 nm" products rebranded as "Intel 7", which used the Golden Cove/Raptor Cove and Gracemont CPU core microarchitectures.

Now their recent products are split between those made internally with the Intel 4/Intel 3 CMOS processes and those made at TSMC with up-to-date CPU cores. The former use rather obsolete CPU cores, very similar to those made with Intel 7, except that the newer manufacturing process provides more cores per package and lower power consumption per core. The latter include all the higher-end laptop and desktop models (the cheapest models for the year 2024/2025 remain some rebranded older CPU models based on refreshes of Meteor Lake, Raptor Lake and Alder Lake N).

10. jlokier ◴[] No.42195831{3}[source]
> Why is it written like that anyway?

Because it's an ARM trademark, and that's how they want it written:

https://www.arm.com/company/policies/trademarks/arm-trademar...

> (Wikipedia seems to want me to call it "heterogeneous computing", but that doesn't make sense - surely that term should mean running on CPU+GPU at the same time, or multiple different ISAs.)

According to Wikipedia, it means running with different architectures, which doesn't necessarily mean instruction set architectures.

They do actually have different ISAs though. On both Apple Silicon and x86, some vector instructions are only available on the performance cores, so some tasks can only run on the performance cores. The issue is alluded to on Wikipedia:

> In practice, a big.LITTLE system can be surprisingly inflexible. [...] Another is that the CPUs no longer have equivalent abilities, and matching the right software task to the right CPU becomes more difficult. Most of these problems are being solved by making the electronics and software more flexible.
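
On Linux/x86 you can check whether the kernel actually reports different feature flags per logical CPU by grouping the "flags" lines in /proc/cpuinfo; a small sketch:

    # Group logical CPUs by their reported ISA feature flags (/proc/cpuinfo, Linux/x86).
    from collections import defaultdict

    groups = defaultdict(list)
    cpu = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                cpu = int(value)
            elif key == "flags":
                groups[frozenset(value.split())].append(cpu)

    # One group means every logical CPU advertises the same extensions;
    # more than one means software has to match tasks to capable cores.
    for i, (flags, cpus) in enumerate(groups.items()):
        print(f"group {i}: cpus {cpus} ({len(flags)} flags)")

(Intel sidestepped the more-than-one case on Alder Lake by disabling AVX-512 entirely, since the E-cores never supported it.)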

replies(1): >>42196950 #
11. astrange ◴[] No.42196950{4}[source]
> They do actually have different ISAs though. On both Apple Silicon and x86, some vector instructions are only available on the performance cores, so some tasks can only run on the performance cores.

No, there's no difference on Apple Silicon. You'll never need to know which kind of core you're running on. (Except of course that some of them are slower.)