Most active commenters
  • IshKebab(6)
  • bee_rider(4)
  • daneel_w(4)
  • bri3d(3)

135 points rbanffy | 42 comments
1. bee_rider ◴[] No.45075394[source]
The static schedule part seems really interesting. They note that it only works for some instructions, but I wonder if it would be possible to have a compiler report “this section of the code can be statically scheduled.” In that case, could this have a benefit for real-time operation? Or maybe some specialized partially-real-time application: mark a segment of the program as desiring static scheduling, and don’t allow memory loads, etc., inside it.
replies(3): >>45075641 #>>45075901 #>>45078349 #
2. daneel_w ◴[] No.45075405[source]
"L2$, L3$, I$, D$". Well, OK.
replies(3): >>45075423 #>>45075590 #>>45075855 #
3. bee_rider ◴[] No.45075423[source]
Hmm?
replies(1): >>45075965 #
4. bri3d ◴[] No.45075462[source]
Interesting idea. It's like putting a VLIW compilation pass into the scheduler, but without an intermediate microcode cache like NV Denver did. Without handling memory dependencies / cache hazards, I'm not so sure how well it will do in general-purpose use cases. They don't have the same code locality / second-layer icache problem that Denver had, but data loads are still going to be a mess.

I guess the notion is that data cache misses will basically lead to what could be called "instruction amplification," where an instruction will miss its scheduled time slot and have to be replayed, possibly repeatedly, until its dependencies are available. The article asserts that this is the rough equivalent of leaving execution ports unoccupied in a "traditional" OoO architecture, but I'm not so sure. I'm curious about how well this works in practice; I would worry that cache misses would rapidly multiply into a cascading failure case where the entire pipeline basically stalls and the architecture reverts to in-order level performance - just like most general-purpose VLIW architectures.

5. dlcarrier ◴[] No.45075524[source]
It's interesting that a high-performance computing core has added instructions for bit manipulation. They're typical of low-power embedded cores, where bit-manipulating inputs and outputs is routine, but they can save a lot of instructions anywhere. For example, clearing a bit in a variable without a dedicated instruction requires raising two to the power of the bit index, inverting the result, ANDing that with the variable, then writing the result back to the variable. Depending on the language, it looks something like this:

    Variable &=~(2^Bit)
The series of bitwise operators looks more grawlix (https://en.wikipedia.org/wiki/Grawlix) than instructions, as though yelling pejoratives at the bit is what clears it.
replies(4): >>45075580 #>>45075627 #>>45075809 #>>45075920 #
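The shift/invert/AND dance described above looks like this as a plain-C sketch (my illustration, not from the article); on a core with the Zbs extension the whole expression can compile to a single `bclr` instruction:

```c
#include <stdint.h>

/* Clear bit `bit` of `value` without a dedicated bit-clear instruction:
 * compute 2^bit via a shift, invert it, AND it in. */
static uint32_t clear_bit(uint32_t value, unsigned bit) {
    return value & ~(UINT32_C(1) << bit);
}
```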
6. bri3d ◴[] No.45075580[source]
The bit manipulation instructions are a required part of the RVA23 baseline standard, so we're likely to see them in almost all general purpose RISC-V cores in the future.
replies(1): >>45075929 #
7. 0x000xca0xfe ◴[] No.45075590[source]
It's just shorthand for "level 2 cache", "level 3 cache", "instruction cache" and "data cache".
replies(1): >>45075847 #
8. 0x000xca0xfe ◴[] No.45075627[source]
Bit manipulation instructions are great for high-performance code, too, because they allow conditional computing without branching.

Some real-world examples in simdjson: https://arxiv.org/pdf/1902.08318
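A sketch of the branchless pattern (my illustration, not simdjson's actual code): given a 64-bit mask whose set bits mark matching byte positions, count-trailing-zeros walks the matches with no per-byte conditional. `__builtin_ctzll` is a GCC/Clang builtin that maps to a count-trailing-zeros instruction.

```c
#include <stdint.h>

/* Extract the positions of all set bits in `mask` into `out`,
 * returning how many there were. The loop condition is the only
 * branch; there is no per-byte compare-and-jump. */
static int match_positions(uint64_t mask, int out[64]) {
    int n = 0;
    while (mask) {
        out[n++] = __builtin_ctzll(mask); /* index of lowest set bit */
        mask &= mask - 1;                 /* clear lowest set bit */
    }
    return n;
}
```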

9. usrusr ◴[] No.45075641[source]
What would the CPU do with the parts not marked as "can be statically scheduled"? I read it as: they try it anyway and may get some stalling ("replay") if the schedule was overoptimistic. I'm not sure how a compiler marking sections would help.
replies(1): >>45075909 #
10. Findecanor ◴[] No.45075809[source]
M68K also had single-bit instructions. Way back when I wrote M68K assembly, I used them a lot.

I'd think there are quite a few data structures and algorithms that could benefit from using powers of two, or from counting bits in a word.

RISC-V without the B (bitmanip) extension is otherwise quite spartan. B also contains many instructions that other ISAs have in their base set, such as shift-and-add address calculation, and/or/xor with a complemented operand, rol/ror, and even some zero/sign-extension ops.
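For a sense of what those ops cost without the extension, here are two of them written against the base ISA (my sketch); with Zbb each body can compile down to one instruction (`andn` and `rol` respectively):

```c
#include <stdint.h>

/* AND with complemented operand: one `andn` on Zbb, two ops otherwise. */
static uint32_t andn32(uint32_t a, uint32_t b) {
    return a & ~b;
}

/* Rotate left: one `rol` on Zbb; the base ISA needs shifts and an OR. */
static uint32_t rol32(uint32_t x, unsigned r) {
    r &= 31;                                /* keep shift counts in range */
    return (x << r) | (x >> ((32 - r) & 31));
}
```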

11. gchadwick ◴[] No.45075819[source]
Andes (Condor is owned by Andes) seems to get relatively little press vs. other RISC-V outfits. My sense is they've been quietly building a very solid RISC-V CPU business with a great IP portfolio.

This latest core looks very interesting, can't wait to see it hit silicon and see what it can really do!

replies(1): >>45077712 #
12. daneel_w ◴[] No.45075847{3}[source]
Yes, obviously. It's just the first time I've seen a CPU designer/manufacturer use such relaxed "informality" in a spec sheet.
replies(3): >>45076361 #>>45076706 #>>45079925 #
13. bitwize ◴[] No.45075855[source]
Ooh, I wonder what strings were put in those BASIC variables...
replies(1): >>45075873 #
14. daneel_w ◴[] No.45075873{3}[source]
LET L2$="256 KiB"; LET L3$="8 MiB"
15. IshKebab ◴[] No.45075901[source]
I don't think that would help: the set of instructions with dynamic latencies is basically fixed. Anything memory-related (loads, stores, cache management, fences, etc.) and complex maths (division, sqrt, transcendental functions, etc.).

So you know what code can be statically scheduled just from the instructions already.

replies(1): >>45076005 #
16. IshKebab ◴[] No.45075909{3}[source]
Stalling and replay are not the same btw. Stalling is when you wait a bit before continuing, replay is when you try an operation multiple times.
replies(1): >>45077825 #
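The distinction can be put in a toy cycle-count model (my sketch, not the actual Condor pipeline): a load is predicted to complete in `predicted` cycles but actually takes `actual`. A stalling pipeline waits exactly until the data arrives; a replaying one issues the dependent op at its scheduled slot and re-issues every `gap` cycles until it sticks, so it can overshoot the ideal wake-up point.

```c
/* Cycle at which the dependent op completes if the pipeline stalls:
 * wait out the load, then one cycle to execute. */
static int dep_done_stall(int actual) {
    return actual + 1;
}

/* Same, but with replay: first issue at the predicted slot, then
 * re-issue every `gap` cycles until the data is actually ready. */
static int dep_done_replay(int predicted, int actual, int gap) {
    int t = predicted;
    while (t < actual)
        t += gap;        /* missed the data: replay later */
    return t + 1;
}
```

When the prediction is right, both finish at the same cycle; on a miss, replay can finish a cycle or two after an oracle stall would have, which is the cost the article's "instruction amplification" discussion is about.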
17. 1024bees ◴[] No.45075917[source]
It's nice to see a microarchitecture take a risk; getting perspective on how this design performs with respect to performance, power and area would be interesting.

It seems very unlikely to me that this design would have comparable "raw" performance to one that implements something closer to Tomasulo's algorithm. The assumption that a load's latency will be an L1 hit is a load-bearing abstraction; I can imagine scenarios where this acts as a "double jeopardy," causing scheduling to lock up because the latency was mispredicted, but one could also speculate that that doesn't matter because the workload is already memory-bound.

There's an intuition in computer architecture that designs leaning on "static" instruction-scheduling mechanisms are less performant than more dynamic mechanisms for general-purpose compute, but we've had decades of compiler development since Itanium "proved" this. Efficient Computer (or whatever their name is) is doing something cool too; it's exciting to see where this will go.

replies(3): >>45076340 #>>45078044 #>>45079822 #
18. ◴[] No.45075920[source]
19. Arnavion ◴[] No.45075929{3}[source]
Since RVA22 actually. B = Zba + Zbb + Zbs and RVA22 requires those individually.
20. daneel_w ◴[] No.45075965{3}[source]
Cache is pronounced like cash, which the $ symbol is supposed to allude to.
replies(3): >>45076054 #>>45077433 #>>45078249 #
21. bee_rider ◴[] No.45076005{3}[source]
I’m just spitballing. But, what if we had some system that went:

1) load some model and set the system into “ready” mode

2) wait for an event from a sensor

3) when the event occurs, trigger some response

4) do other stuff: bookkeeping, update the model, etc.

5) reset the system to “ready” mode and goto 2

Is it possible we might want some hard time bounds on steps 2 and 3, but be fine with 1, 4, and 5 taking however long? (Assuming the device can be inactive while it is getting ready). Then, we could make sure steps 2 and 3 don’t include any non-static instructions.

replies(1): >>45076810 #
22. bee_rider ◴[] No.45076054{4}[source]
Yes, they are obviously caches. I just didn’t understand your comment.
23. acdha ◴[] No.45076340[source]
> we've had decades of compiler development since itanium "proved" this.

I think an equally large change is the enormous rise of open source and supply-chain focus. When Itanium came out, there was tons of code businesses ran which had been compiled years ago, lots of internal reimplementation of what would now be library code, and places commonly didn't upgrade for years because an upgrade was often also a licensing purchase. Between open source and security, it's a lot more reasonable now to expect people to be running freshly optimized binaries from day one, and in many cases the need to support both x86 and ARM will have flushed out a lot of compatibility warts while encouraging use of libraries rather than hand-rolled code.

24. DiabloD3 ◴[] No.45076361{4}[source]
I've been seeing it more and more, especially with vendors that don't speak a western language on their spec sheets.

Everyone can tell what "L1$" means, but what would "L1 缓存" (缓存 being Chinese for "cache") mean to them?

25. Findecanor ◴[] No.45076706{4}[source]
I follow RISC-V and see it all the time.

CPU manufacturers also often avoid Unicode, writing the letter u instead of µ (micro) and the letter A instead of Å (the unit ångström).

26. IshKebab ◴[] No.45076741[source]
Very interesting design. I guess replaying loads is the really awkward bit. Also how do variable-latency arithmetic instructions work?
27. IshKebab ◴[] No.45076810{4}[source]
Not sure what you're getting at tbh... Do you know about interrupts?
28. xxpor ◴[] No.45077433{4}[source]
Wow, how have I never put 2 and 2 together on that.
replies(1): >>45078132 #
29. pclmulqdq ◴[] No.45077712[source]
Andes has won a lot of sockets already with their lower-power cores. They have almost become the #1 choice for RISC-V cores.
30. usrusr ◴[] No.45077825{4}[source]
So the difference is: block everything until the dependency is available and then continue immediately, vs. give up the time slots already reserved for downstream dependents, continue with the parts of the current schedule that aren't blocked, and re-queue the blocked parts at the end? Sounds like a trade-off that could go either way. But yeah, I was using the term "stalling" in a browser sense, as the superset of both. No idea how incorrect that is.
replies(1): >>45078218 #
31. bri3d ◴[] No.45078044[source]
> we've had decades of compiler development since itanium "proved" this

Sure, but until someone drops "the assumption that the latency of a load will be an L1 hit," they're in trouble for most of what we think of as "general purpose" computing.

I think you get it, but there's this overall trope that the issue with Itanium was purely compiler-related: that we didn't have the algorithms or compute resource to parallelize enough of a single program's control flow to correctly fill the dispatch slots in a bundle. I really disagree with this notion: this might have been _a_ problem, but it wasn't _the_ problem.

Even an amazing compiler which can successfully resolve all data dependencies inside a single program and produce a binary with ideal instruction bundling has no idea what's in dcache after an interrupt/context switch, and therefore every load and all of its dependencies risk a stall (or in this case, a replay) on a statically scheduled architecture, while a modern out-of-order architecture can happily keep going, even speculatively taking both sides of branches.

The modern approach to optimize datacenter computing is to aggressively pack in context switches, with many execution contexts (processes, user groups/containers, whatever) per guest domain and many guest domains per hypervisor.

Basically: I have yet to see someone successfully use the floor plan they took back from not doing out-of-order to effectively fill in for memory latency in a "general purpose" datacenter computing scenario. Most designers just add more cores, which only makes the problem worse (even adding more cache would be better than more cores!).

VLIW and this kind of design have a place: I could see a design like this being useful in place of Cortex-A or even Cortex-X in a lot of edge compute use cases, and of course GPUs and DSPs already rely almost exclusively on some variety of "static" scheduling already. But as a stated competitor to something like Neoverse/Graviton/Veyron in the datacenter space, the "load-bearing load" (I like your description!) seems like it's going to be a huge problem.

32. robinsonb5 ◴[] No.45078132{5}[source]
You're not alone - it took me way longer than it should have done to figure that one out!
33. IshKebab ◴[] No.45078218{5}[source]
Yeah, I think even traditional OoO designs use replay for loads that miss, rather than stalling. The performance would be too bad if they actually stalled on every load.

I think stalling is used for rarer more awkward things like changing privilege modes or writing certain CSRs (e.g. satp) where you don't want to have to maintain speculative state.

replies(1): >>45079141 #
34. starkruzr ◴[] No.45078249{4}[source]
leading to the unfortunate abbreviation sometimes drawn on blackboards, "$hit"
replies(1): >>45079661 #
35. clamchowder ◴[] No.45078349[source]
(author here) they try for all instructions, just that it's a prediction w/replay because inevitably some instructions like memory loads are variable latency. It's not like Nvidia where fixed latency instructions are statically scheduled, then memory loads/other variable latency stuff is handled dynamically via scoreboarding.
36. monocasa ◴[] No.45079141{6}[source]
Traditional OoO designs don't stall for loads per se, but will stall for a full ROB that has a chain of dependencies waiting on the results of the load.
replies(1): >>45080863 #
37. grg0 ◴[] No.45079661{5}[source]
It is an apt abbreviation if you visualize shit tightly packed in a container. And when you thrash the cache, shit hit the fan (and spills to VRAM.)
38. jasonwatkinspdx ◴[] No.45079822[source]
This is still using a Tomasulo-like algorithm; it's just been shifted from the backend to the front end. And instructions don't lock up on an L1 miss. Instead, the results of that instruction are marked as poisoned, and the front end replays its micro-ops forward in the execution stream once the L1 miss is resolved. As the article points out, this replay is likely to fill otherwise-unused execution slots on general-purpose code, since OoO CPUs rarely sustain their full execution width.

It's a smart idea, and has some parallels to the Mill CPU design. The backend is conceptually similar to a statically scheduled VLIW core, and the front end races ahead using its matrix scoreboard, trying to queue up as much as it can despite the presence of unpredictable latencies.

replies(1): >>45080091 #
39. jasonwatkinspdx ◴[] No.45079925{4}[source]
The slides are for Hot Chips, which is a very engineering focused venue. It's not your normal marketing stuff.
40. quantummagic ◴[] No.45080091{3}[source]
> Mill CPU design

There were some fascinating concepts being explored in that project. It's a shame nothing came of it.

41. IshKebab ◴[] No.45080863{7}[source]
Good point, but I guess that's the sort of delay that you can't avoid. If there's literally no work to do until a load is available you have to wait. This design can't avoid that either.