128 points | ksec | 1 comment

dragontamer ◴[] No.42751521[source]
The triple decoder is one unique feature. The fact that Intel managed to get them lined up on small loops for an effective 9-wide instruction issue is basically miraculous IMO. Very well done.
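
A minimal sketch of where that pays off (the loop and the instruction counts are illustrative, not anything from Intel's docs): the clustered front end uses taken branches, like a loop back-edge, as split points, so on a short hot loop each 3-wide decode cluster can be working on a different iteration at the same time.

    /* Illustrative only: a short, hot loop whose compiled body is a
       handful of instructions. Taken branches (here, the loop
       back-edge) give the clustered front end natural split points,
       so three 3-wide decoders can each chew on a different
       iteration, approaching 9 decoded instructions per cycle. */
    #include <stddef.h>

    long sum(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)   /* back-edge = split point */
            s += a[i];
        return s;
    }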

Another unique feature is the L2 shared between 4 cores. This means that thread communication across those 4 cores has much lower latency.
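
If you want to see that latency effect directly, here's a rough ping-pong sketch (Linux, C11 atomics plus pthreads). The CPU numbers are assumptions: it pretends CPUs 0 and 1 sit in the same 4-core cluster sharing an L2 and CPU 4 sits in a different one, so check your actual topology before reading anything into the numbers.

    /* Two threads bounce a flag through one cache line; when both
       threads live in the same L2-sharing cluster, the round trip
       should stay in that L2 instead of going out to L3/the fabric.
       CPU IDs below are assumed, not detected. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define ROUNDS 1000000

    static _Atomic int flag;                /* the bounced cache line */

    static void pin(int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    }

    static void *pong(void *arg) {
        pin(*(int *)arg);
        for (int i = 0; i < ROUNDS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
                ;                           /* wait for ping */
            atomic_store_explicit(&flag, 0, memory_order_release);
        }
        return NULL;
    }

    int main(void) {
        int pairs[2][2] = { {0, 1}, {0, 4} };  /* same cluster, then cross-cluster (assumed) */
        for (int p = 0; p < 2; p++) {
            pthread_t t;
            atomic_store(&flag, 0);
            pthread_create(&t, NULL, pong, &pairs[p][1]);
            pin(pairs[p][0]);
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < ROUNDS; i++) {
                atomic_store_explicit(&flag, 1, memory_order_release);
                while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
                    ;                       /* wait for pong */
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            pthread_join(t, NULL);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("cpu%d <-> cpu%d: %.0f ns per round trip\n",
                   pairs[p][0], pairs[p][1], ns / ROUNDS);
        }
        return 0;
    }

Build with something like cc -O2 -pthread pingpong.c. The absolute numbers depend on the part, but intra-cluster round trips should come out noticeably shorter than cross-cluster ones.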

I've had lots of debates with people online about this design vs Hyperthreading. It seems like the overall discovery from Intel is that highly threaded tasks use fewer resources (cache, ROPs, etc.).

Big cores (P cores or AMD Zen5) can obviously split into 2 hyperthreads, but what if that division is still too big? E cores give you 4 threads of support in roughly the same space as 1 P core.

This is because the L2 cache is shared/consolidated, and other resources (ROP buffers, register files, etc.) are all just so much smaller on the E core.

It's an interesting design. I'd still think that growing the cores to 4-way SMT (like Xeon Phi) or 8-way SMT (POWER10) would be a more conventional way to split up resources, though. But obviously I don't work at Intel and can't make these kinds of decisions.
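
For what it's worth, on Linux you can see the difference between the two layouts directly in sysfs: SMT siblings show up in thread_siblings_list, while a 4-core module that shares an L2 shows up (on newer kernels) in cluster_cpus_list. A small sketch, assuming those files exist on your kernel; their absence just means no info, not an error:

    /* Print each CPU's SMT siblings and, where the kernel exposes it,
       the cluster of cores it shares an L2 with. Missing files are
       skipped rather than treated as errors. */
    #include <stdio.h>

    static void show(int cpu, const char *label, const char *file) {
        char path[128], buf[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, file);
        FILE *f = fopen(path, "r");
        if (!f)
            return;                        /* CPU or file not present */
        if (fgets(buf, sizeof buf, f))
            printf("cpu%-3d %-9s %s", cpu, label, buf);
        fclose(f);
    }

    int main(void) {
        for (int cpu = 0; cpu < 64; cpu++) {   /* arbitrary upper bound */
            show(cpu, "smt:", "thread_siblings_list");
            show(cpu, "cluster:", "cluster_cpus_list");
        }
        return 0;
    }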

replies(8): >>42751667 #>>42751930 #>>42752001 #>>42752140 #>>42752196 #>>42752200 #>>42753025 #>>42753142 #
Dwedit ◴[] No.42751930[source]
I see "ROP" and immediately think of Return Oriented Programming and exploits...
replies(1): >>42751949 #
dragontamer ◴[] No.42751949[source]
Lulz, I got some wires crossed. The CPU resource I meant to say was ROB: Reorder Buffer.

I don't know why I wrote ROP. You're right, ROP means return oriented programming. A completely different thing.