
154 points by rbanffy | 1 comment
1024bees No.45075917
It's nice to see a microarchitecture take a risk; it would be interesting to get some perspective on how this design does in terms of performance, power, and area.

It seems very unlikely to me that this design would have comparable "raw" performance to a design that implements something closer to Tomasulo's algorithm. The assumption that a load's latency will be an L1 hit is a load-bearing abstraction; I can imagine scenarios where this acts as a "double jeopardy", causing scheduling to lock up because the latency was mispredicted. But one could also speculate that this doesn't matter, because the workload is already memory bound.
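
To make the "load-bearing abstraction" concrete, here's a toy model of schedule-ahead wakeup (plain C, purely illustrative; the 4-cycle hit, 40-cycle miss, and 2-cycle replay numbers are my own assumptions, not anything from the article). The scheduler picks the issue slot for a load's dependent as if the load will hit in L1, and has to replay when it doesn't:

    /* Toy model: wakeup scheduled assuming every load is an L1 hit.
       All latencies are made-up numbers for illustration only. */
    #include <stdio.h>

    enum { ASSUMED_LOAD_LATENCY = 4, L1_HIT = 4, L1_MISS = 40, REPLAY_PENALTY = 2 };

    /* Cycle at which the load's dependent instruction finally issues. */
    static int dependent_issue_cycle(int actual_latency)
    {
        int planned_wakeup = ASSUMED_LOAD_LATENCY;   /* scheduler plans for a hit   */
        int data_ready     = actual_latency;         /* what the memory system does */
        if (data_ready <= planned_wakeup)
            return planned_wakeup;                   /* issues on schedule          */
        /* Latency mispredicted: the dependent claimed an issue slot it couldn't
           use and must re-arbitrate -- the "double jeopardy" case. */
        return data_ready + REPLAY_PENALTY;
    }

    int main(void)
    {
        printf("hit : dependent issues at cycle %d\n", dependent_issue_cycle(L1_HIT));
        printf("miss: dependent issues at cycle %d\n", dependent_issue_cycle(L1_MISS));
        return 0;
    }

On a hit the dependent issues exactly when planned; on a miss it eats the full miss latency plus the re-arbitration cost, which is the scenario where a statically planned schedule can back up.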

There's an intuition in computer architecture that designs which lean on "static" instruction-scheduling mechanisms are less performant than more dynamic mechanisms for general-purpose compute, but we've had decades of compiler development since Itanium "proved" this. Efficient Computer (or whatever their name is) is doing something cool too; it's exciting to see where this will go.
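
For anyone who hasn't looked at what a compiler for a statically scheduled core has to do: a big part of it is transformations like software pipelining, where one iteration's loads are overlapped with the previous iteration's arithmetic so the (assumed) load latency is hidden in the schedule the compiler emits. A hand-written sketch in C, just to show the shape of the transformation (function names and structure are mine; an out-of-order core discovers the same overlap in hardware at run time):

    #include <stddef.h>

    /* Straightforward loop: each multiply waits on the loads just issued. */
    double dot_naive(const double *a, const double *b, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* "Software pipelined" by hand: iteration i+1's loads are issued while
       iteration i's multiply is still in flight, hiding the load latency. */
    double dot_pipelined(const double *a, const double *b, size_t n)
    {
        if (n == 0)
            return 0.0;
        double sum = 0.0;
        double x = a[0], y = b[0];          /* prologue: first loads issued early */
        for (size_t i = 1; i < n; i++) {
            double nx = a[i], ny = b[i];    /* next iteration's loads overlap ... */
            sum += x * y;                   /* ... with this iteration's multiply */
            x = nx;
            y = ny;
        }
        sum += x * y;                       /* epilogue: last pending iteration */
        return sum;
    }

Getting this kind of thing right everywhere, automatically, is roughly the bar the "decades of compiler development" point is gesturing at.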

replies(4): >>45076340 >>45078044 >>45079822 >>45081255
1. acdha No.45076340
> we've had decades of compiler development since Itanium "proved" this.

I think an equally large change is the enormous rise of open source and the focus on the supply chain. When Itanium came out, businesses ran tons of code that had been compiled years earlier, there was lots of internal reimplementation of what would now be library code, and places commonly didn't upgrade for years because an upgrade was often also a licensing purchase. Between open source and security, it's a lot more reasonable now to expect people to be running freshly optimized binaries from day one. In many cases the common need to support both x86 and ARM will also have flushed out a lot of compatibility warts, along with encouraging the use of libraries rather than everyone writing as many things on their own.