
366 points pabs3 | 2 comments
Manfred ◴[] No.41365540[source]
> At least in the context of x86 emulation, among all 3 architectures we support, RISC-V is the least expressive one.

RISC was explained to me as a reduced instruction set computer in computer science history classes, but I see a lot of articles and proposed new RISC-V profiles about "we just need a few more instructions to get feature parity".

I understand that RISC-V is just a convenient alternative to other platforms for most people, but does this also mean the RISC dream is dead?

replies(7): >>41365583 #>>41365644 #>>41365687 #>>41365974 #>>41366364 #>>41370373 #>>41370588 #
gary_0 ◴[] No.41365687[source]
As I've heard it explained, RISC in practice is less about "an absolutely minimalist instruction set" and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible".

Although as I recall from reading the RISC-V spec, RISC-V was rather particular about not adding "combo" instructions when common instruction sequences can be fused by the frontend.
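To make that fusion idea concrete, here's a toy sketch of what a frontend pass conceptually does: scan the decoded stream and merge an adjacent shift+add pair (a common scaled-indexing pattern) into one internal micro-op. The instruction mnemonics follow RISC-V, but the fusion rule, tuple encoding, and the `fused.shadd` name are all invented for illustration; real fusion happens in decode hardware, not software.

```python
# Toy model of macro-op fusion in a CPU frontend (illustrative only).
# Instructions are tuples: ("slli", rd, rs1, imm), ("add", rd, rs1, rs2), etc.

def fuse(instrs):
    fused = []
    i = 0
    while i < len(instrs):
        cur = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else None
        # Pattern: "slli rd, rs1, imm" immediately followed by
        # "add rd, rd, rs2" -> one internal shift-and-add micro-op.
        if (nxt is not None
                and cur[0] == "slli" and nxt[0] == "add"
                and nxt[1] == cur[1] == nxt[2]):
            # Fused op carries (rd, rs1, rs2, shift amount).
            fused.append(("fused.shadd", nxt[1], cur[2], nxt[3], cur[3]))
            i += 2
        else:
            fused.append(cur)  # anything else passes through unchanged
            i += 1
    return fused

prog = [
    ("slli", "t0", "a0", 3),     # t0 = a0 << 3
    ("add",  "t0", "t0", "a1"),  # t0 = t0 + a1  -> fuses with the shift
    ("lw",   "a2", "t0", 0),     # unrelated, passes through
]
```

The point of the ISA design choice is that the two-instruction sequence stays simple to encode and decode, while a beefier core can still execute it as one operation internally.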

My (far from expert) impression of RISC-V's shortcomings versus x86/ARM is more that the specs were written starting with the very basic embedded-chip stuff, and then over time more application-class extensions were added. (The base RV32I spec doesn't even include integer multiplication.) Unfortunately they took a long time to get around to finishing the bikeshedding on bit-twiddling and SIMD/vector extensions, which resulted in the current functionality gaps we're talking about.
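For a sense of what "no integer multiply" costs in practice: on an RV32I core without the M extension, the compiler has to call a software helper routine (libgcc's `__mulsi3` is the usual one). A rough Python sketch of the classic shift-and-add approach such routines use, with 32-bit wraparound modeled by a mask; this mirrors the idea, not any particular implementation:

```python
MASK32 = 0xFFFFFFFF  # model 32-bit register wraparound

def mul32(a, b):
    """Shift-and-add multiply, the way a software routine would do it
    on a core with no hardware multiplier: one conditional add plus a
    shift per multiplier bit, up to ~32 loop iterations instead of a
    single mul instruction."""
    a &= MASK32
    b &= MASK32
    result = 0
    while b:
        if b & 1:                      # low multiplier bit set:
            result = (result + a) & MASK32  # add the shifted multiplicand
        a = (a << 1) & MASK32          # shift multiplicand left
        b >>= 1                        # consume one multiplier bit
    return result
```

That loop is why the M extension (and the cost of leaving it out) matters so much for application-class workloads.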

So I don't think those gaps are due to RISC fundamentalism; there's no such thing.

replies(2): >>41365919 #>>41369318 #
Suppafly ◴[] No.41369318[source]
>and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible"

What are the advantages of that?

replies(3): >>41369498 #>>41369725 #>>41370346 #
Closi ◴[] No.41370346{3}[source]
Instructions can be completed in one clock cycle, which removes a lot of complexity compared to instructions that require multiple clock cycles.

Removed complexity means you can fit more stuff into the same amount of silicon, and have it be quicker with less power.

replies(1): >>41370620 #
gary_0 ◴[] No.41370620{4}[source]
That's not exactly it; quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div and floating point math, and branch instructions can often take more than one clock cycle as well. And once you throw in pipelining, caches, MMUs, atomics... "one clock cycle" doesn't really mean a lot, especially since more advanced CPUs will ideally retire multiple instructions per clock.

Sure, addition and moving bits between registers take one clock cycle, but those kinds of instructions take one clock cycle on CISC as well. And very tiny RISC microcontrollers can take more than one cycle for adds and shifts if you're really stingy with the silicon.

(Memory operations will of course take multiple cycles too, but that's not the CPU's fault.)

replies(2): >>41371005 #>>41377218 #
Suppafly ◴[] No.41371005{5}[source]
>quite a few RISC-style instructions require multiple (sometimes many) clock cycles to complete, such as mul/div, floating point math

Which seems like stuff you want support for, but this is seemingly arguing against?

replies(1): >>41371452 #
enragedcacti ◴[] No.41371452[source]
It seems contradictory because the "one clock per instruction" is mostly a misconception, at least with respect to anything even remotely modern.

https://retrocomputing.stackexchange.com/a/14509