366 points pabs3 | 14 comments
Manfred ◴[] No.41365540[source]
> At least in the context of x86 emulation, among all 3 architectures we support, RISC-V is the least expressive one.

RISC was explained to me as a reduced instruction set computer in computer science history classes, but I see a lot of articles and proposed new RISC-V profiles about "we just need a few more instructions to get feature parity".

I understand that RISC-V is just a convenient alternative to other platforms for most people, but does this also mean the RISC dream is dead?

replies(7): >>41365583 #>>41365644 #>>41365687 #>>41365974 #>>41366364 #>>41370373 #>>41370588 #
gary_0 ◴[] No.41365687[source]
As I've heard it explained, RISC in practice is less about "an absolutely minimalist instruction set" and more about "don't add any assembly-programmer conveniences or other such cleverness; rely on compilers instead of frontend silicon when possible".

Although as I recall from reading the RISC-V spec, RISC-V was rather particular about not adding "combo" instructions when common instruction sequences can be fused by the frontend.
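For illustration, here's a toy sketch of what frontend fusion might look like: the decoder scans adjacent instructions for known idioms and merges each match into one internal op. The pair patterns below (e.g. lui+addi for loading a 32-bit constant) are commonly cited fusion candidates, but the set of pairs, and the omission of operand-register checks that a real decoder would need, are simplifications for this example.

```python
# Toy model of macro-op fusion in a decoder frontend.
# Instructions are tuples whose first element is the mnemonic.
# A real implementation would also verify that the register
# operands of the pair actually line up; this sketch only
# matches on mnemonics.
FUSABLE = {("lui", "addi"), ("auipc", "ld"), ("slli", "srli")}

def fuse(instrs):
    out, i = [], 0
    while i < len(instrs):
        pair = (instrs[i][0], instrs[i + 1][0]) if i + 1 < len(instrs) else None
        if pair in FUSABLE:
            # Emit one fused internal op in place of the two instructions.
            out.append(("fused." + pair[0] + "+" + pair[1],))
            i += 2
        else:
            out.append(instrs[i])
            i += 1
    return out
```

So a sequence like lui a0, hi; addi a0, a0, lo; add a1, a1, a0 would come out of the frontend as two internal ops instead of three.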

My (far from expert) impression of RISC-V's shortcomings versus x86/ARM is more that the specs were written starting with the very basic embedded-chip stuff, and then over time more application-cpu extensions were added. (The base RV32I spec doesn't even include integer multiplication.) Unfortunately they took a long time to get around to finishing the bikeshedding on bit-twiddling and simd/vector extensions, which resulted in the current functionality gaps we're talking about.
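To make the RV32I point concrete: without the M extension, a compiler lowers integer multiply to a library support routine (libgcc calls its 32-bit version __mulsi3). Here's a rough Python model of the classic shift-and-add loop such a routine uses, with a mask standing in for 32-bit register wraparound; it's a sketch of the technique, not the actual library code.

```python
def mulsi3(a: int, b: int) -> int:
    """Shift-and-add multiply, the kind of software routine a
    compiler falls back to on RV32I, which has no hardware
    multiply until the M extension is added."""
    MASK = 0xFFFFFFFF          # model 32-bit register wraparound
    a &= MASK
    b &= MASK
    result = 0
    while b:
        if b & 1:              # low bit of multiplier set:
            result = (result + a) & MASK   # add shifted multiplicand
        a = (a << 1) & MASK    # shift multiplicand left one bit
        b >>= 1                # consume one bit of the multiplier
    return result
```

One multiply instruction in hardware becomes a loop of adds and shifts in software, which is exactly the trade being discussed.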

So I don't think those gaps are due to RISC fundamentalism; there's no such thing.

replies(2): >>41365919 #>>41369318 #
1. Suppafly ◴[] No.41369318[source]
>and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible"

What are the advantages of that?

replies(3): >>41369498 #>>41369725 #>>41370346 #
2. Retr0id ◴[] No.41369498[source]
It shifts implementation complexity from hardware onto software. It's not an inherent advantage, but an extra compiler pass is generally cheaper than increased silicon die area, for example.
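As a small example of that trade, strength reduction is a standard compiler transformation: a multiply by a constant becomes shifts and adds at compile time, so the hardware never needs the cleverness. The function below is an illustrative sketch (the name and the string-based output are made up for this example, not any compiler's API).

```python
def strength_reduce_mul(var: str, constant: int) -> str:
    """Rewrite `var * constant` as a sum of left shifts, one per
    set bit of the constant -- work done once at compile time
    instead of in frontend silicon at runtime."""
    if constant == 0:
        return "0"
    terms = [f"({var} << {bit})" if bit else var
             for bit in range(constant.bit_length())
             if constant & (1 << bit)]
    return " + ".join(terms)
```

So x * 10 becomes (x << 1) + (x << 3): the compiler pass ran once, and every execution afterwards pays nothing for it.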

On a slight tangent, from a security perspective, if your silicon is "too clever" in a way that introduces security bugs, you're screwed. On the other hand, software can be patched.

replies(1): >>41370589 #
3. adgjlsfhk1 ◴[] No.41369725[source]
complexity that the compiler removes doesn't have to be handled by the CPU at runtime
replies(1): >>41371035 #
4. Closi ◴[] No.41370346[source]
Instructions can be completed in one clock cycle, which removes a lot of complexity compared to instructions that require multiple clock cycles.

Removed complexity means you can fit more stuff into the same amount of silicon, and have it be quicker with less power.

replies(1): >>41370620 #
5. flyingpenguin ◴[] No.41370589[source]
I honestly find the lack of compiler/interpreter complexity disheartening.

It often feels like as a community we don't have an interest in making better tools than those we started with.

Communicating with the compiler, and generating code with code, and getting information back from the compiler should all be standard things. In general they shouldn't be used, but if we also had better general access to profiling across our services, we could then have specialists within our teams break out the special tools and improve critical sections.

I understand that many of us work on projects with already absurd build times, but I feel that is a side effect of refusal to improve ci/cd/build tools in a similar way.

If you have ever worked on a modern TypeScript framework app, you'll understand what I mean. You can create decorators and macros talking to the TypeScript compiler and asking it to generate some extra JS or modify what it generates. And the whole framework sits there running partial re-builds and refreshing your browser for you.

It makes things like golang feel like they were made in the 80s.

Freaking golang... I get it, macros and decorators and generics are over-used. But I am making a library to standardize something across all 2,100 developers within my company... I need some meta-programming tools please.

replies(1): >>41376517 #
6. gary_0 ◴[] No.41370620[source]
That's not exactly it; quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div and floating-point math, and branch instructions can often take more than one clock cycle as well. Once you throw in pipelining, caches, MMUs, atomics... "one clock cycle" doesn't really mean a lot, especially since more advanced CPUs will ideally retire multiple instructions per clock.

Sure, addition and moving bits between registers takes one clock cycle, but those kinds of instructions take one clock cycle on CISC as well. And very tiny RISC microcontrollers can take more than one cycle for adds and shifts if you're really stingy with the silicon.

(Memory operations will of course take multiple cycles too, but that's not the CPU's fault.)

replies(2): >>41371005 #>>41377218 #
7. Suppafly ◴[] No.41371005{3}[source]
>quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div, floating point math

Which seems like stuff you want support for, but this is seemingly arguing against?

replies(1): >>41371452 #
8. Suppafly ◴[] No.41371035[source]
Sure but that's not necessarily at odds with "programmer conveniences or other such cleverness" is it?
replies(1): >>41375212 #
9. enragedcacti ◴[] No.41371452{4}[source]
It seems contradictory because the "one clock per instruction" is mostly a misconception, at least with respect to anything even remotely modern.

https://retrocomputing.stackexchange.com/a/14509

10. adgjlsfhk1 ◴[] No.41375212{3}[source]
It is in the sense that those are conveniences only for assembly programmers, and RISC-V's view is that, to the extent possible, the assembly-programmer interface should be handled by pseudo-instructions that disappear when you go to machine code, rather than making the chip deal with them.
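For anyone unfamiliar with how that works, here are a few real RISC-V pseudo-instructions and the single base instruction each assembles to. The lookup-table function is just an illustrative way to present the mappings; the mappings themselves are from the RISC-V assembly conventions.

```python
def expand(pseudo: str) -> str:
    """Map a RISC-V pseudo-instruction to the real machine
    instruction the assembler emits for it. The assembler does
    this rewrite, so the silicon never sees the pseudo form."""
    table = {
        "nop":        "addi x0, x0, 0",   # no-op via add-to-zero-register
        "mv rd, rs":  "addi rd, rs, 0",   # register move via add-zero
        "not rd, rs": "xori rd, rs, -1",  # bitwise not via xor with -1
        "neg rd, rs": "sub rd, x0, rs",   # negate via subtract from zero
        "j offset":   "jal x0, offset",   # jump, discarding the link
        "ret":        "jalr x0, x1, 0",   # return via jump to ra (x1)
    }
    return table[pseudo]
```

The programmer gets the convenience, but no opcode space or decoder logic is spent on it.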
11. pjmlp ◴[] No.41376517{3}[source]
I usually talk a lot about Oberon, or Limbo, however their designs were constrained by hardware costs of the 1990's, and how much more the alternatives asked for in resources.

We are three decades away from those days, with more than enough hardware to run those systems, hardware that back then was only available in universities or companies with very deep pockets.

Yet the Go culture hasn't updated itself, or only very reluctantly, with the usual mistakes already apparent when 1.0 came out.

And since they hit gold with CNCF projects, it's pretty much unavoidable for some work.

12. Closi ◴[] No.41377218{3}[source]
Got it, so it's more about removing microcode.
replies(1): >>41378274 #
13. Symmetry ◴[] No.41378274{4}[source]
The biggest divide is that no more than a single exception can occur in a RISC instruction, but you can have an indefinite number of page faults in something like an x86 rep movs.
replies(1): >>41385881 #
14. FullyFunctional ◴[] No.41385881{5}[source]
That's not even true, as there are lots of possible exceptions for the same instruction. For example, a load can raise all of these and more (but only one will be reported at a time): instruction fetch page fault, load address misaligned, and load page fault.

More characteristic are assumptions about side effects (none for integer, and cumulative flags for FP) and number of register file ports needed.