
154 points by rbanffy | 1 comment
1024bees ◴[] No.45075917[source]
It's nice to see a microarchitecture take a risk, and it would be interesting to get some perspective on how this design performs with respect to performance, power, and area.

It seems very unlikely to me that this design would have comparable "raw" performance to a design that implements something closer to Tomasulo's algorithm. The assumption that the latency of a load will be an L1 hit is a load-bearing abstraction; I can imagine scenarios where this acts as a "double jeopardy," with scheduling locking up because the latency was mispredicted, but one could also speculate that this doesn't matter because the workload is already memory bound.
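
To make that "load-bearing abstraction" concrete, here's a minimal C sketch (my own illustration, nothing from the article): a pointer chase, the classic pattern where a fixed assumed load latency can't be honored because the loads themselves mostly miss.

    #include <stddef.h>
    #include <stdio.h>

    /* Pointer chase: each load's address comes out of the previous load,
     * so there is no independent work for a static schedule to slot in
     * between. A scheduler that assumes an L1-hit latency for p->next
     * has to stall or replay the dependent add on every miss; an
     * out-of-order core has the same dependence chain, but it can keep
     * other, independent instructions in flight around it. */
    struct node {
        struct node *next;
        long payload;
    };

    static long chase(const struct node *p) {
        long sum = 0;
        while (p) {
            sum += p->payload;  /* depends on the load of p */
            p = p->next;        /* the next load's address is itself loaded */
        }
        return sum;
    }

    int main(void) {
        struct node c = { NULL, 3 }, b = { &c, 2 }, a = { &b, 1 };
        printf("%ld\n", chase(&a)); /* prints 6 */
        return 0;
    }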

There's an intuition in computer architecture that designs leaning on "static" instruction scheduling mechanisms are less performant than more dynamic mechanisms for general-purpose compute, but we've had decades of compiler development since Itanium "proved" this. Efficient Computer (or whatever their name is) is doing something cool too; it's exciting to see where this will go.

replies(4): >>45076340 #>>45078044 #>>45079822 #>>45081255 #
1. bri3d ◴[] No.45078044[source]
> we've had decades of compiler development since Itanium "proved" this

Sure, but until someone drops "the assumption that the latency of a load will be an L1 hit," they're in trouble for most of what we think of as "general purpose" computing.

I think you get it, but there's this overall trope that the issue with Itanium was purely compiler-related: that we didn't have the algorithms or the compute resources to parallelize enough of a single program's control flow to correctly fill the dispatch slots in a bundle. I really disagree with this notion: it might have been _a_ problem, but it wasn't _the_ problem.

Even an amazing compiler that successfully resolves all data dependencies inside a single program and produces a binary with ideal instruction bundling has no idea what's in dcache after an interrupt/context switch. Every load, and everything that depends on it, therefore risks a stall (or, in this case, a replay) on a statically scheduled architecture, while a modern out-of-order architecture can happily keep going, speculating deep past unresolved branches.
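
Here's a hand-rolled sketch of what that "ideal" static schedule amounts to (my own C illustration, not compiler or Itanium output): software-pipeline the loop so the loads for iteration i+1 are issued while the arithmetic for iteration i is still in flight, sized for a small assumed L1 latency.

    #include <stddef.h>
    #include <stdio.h>

    /* The load/use distance below is frozen at compile time. If a
     * context switch has pushed a[] and b[] out of the cache, the real
     * latency can be two orders of magnitude larger than the schedule
     * assumed, and nothing here can stretch to cover it; an out-of-order
     * window keeps issuing whatever doesn't depend on the missing data. */
    static long dot_pipelined(const long *a, const long *b, size_t n) {
        if (n == 0) return 0;
        long acc = 0;
        long xa = a[0], xb = b[0];        /* prologue: first loads issued early */
        for (size_t i = 1; i < n; i++) {
            long na = a[i], nb = b[i];    /* next iteration's loads, hoisted */
            acc += xa * xb;               /* use of the loads from last iteration */
            xa = na; xb = nb;
        }
        return acc + xa * xb;             /* epilogue */
    }

    int main(void) {
        long a[] = { 1, 2, 3, 4 }, b[] = { 5, 6, 7, 8 };
        printf("%ld\n", dot_pipelined(a, b, 4)); /* prints 70 */
        return 0;
    }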

The modern approach to optimizing datacenter computing is to aggressively pack in context switches, with many execution contexts (processes, user groups/containers, whatever) per guest domain and many guest domains per hypervisor, so a cold cache is the common case rather than a corner case.

Basically: I have yet to see someone successfully use the die area they got back by not doing out-of-order to effectively fill in for memory latency in a "general purpose" datacenter computing scenario. Most designers just add more cores, which only makes the problem worse (even adding more cache would be better than more cores!).
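
As a back-of-the-envelope (every number below is an assumption picked for illustration, not a measurement), you can see why more cores stop paying off once a memory-bound workload saturates shared DRAM bandwidth, while a bigger cache would cut the miss traffic for every core at once:

    #include <stdio.h>

    int main(void) {
        /* Illustrative assumptions only. */
        const double dram_bw_gbps   = 200.0;  /* shared DRAM bandwidth per socket */
        const double bytes_per_miss = 64.0;   /* one cache line fetched per miss */
        const double misses_per_sec = 60e6;   /* per-core LLC miss rate, memory-bound code */

        for (int cores = 16; cores <= 128; cores *= 2) {
            double demand_gbps = cores * misses_per_sec * bytes_per_miss / 1e9;
            double oversub     = demand_gbps / dram_bw_gbps;
            double speed_pct   = 100.0 / (oversub < 1.0 ? 1.0 : oversub);
            printf("%3d cores: %6.0f GB/s demanded -> each core at ~%.0f%% of standalone speed\n",
                   cores, demand_gbps, speed_pct);
        }
        return 0;
    }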

VLIW and this kind of design have a place: I could see a design like this being useful in place of Cortex-A or even Cortex-X in a lot of edge compute use cases, and of course GPUs and DSPs already rely almost exclusively on some variety of "static" scheduling. But as a stated competitor to something like Neoverse/Graviton/Veyron in the datacenter space, the "load-bearing load" (I like your description!) seems like it's going to be a huge problem.