
366 points | pabs3 | 1 comment
Manfred ◴[] No.41365540[source]
> At least in the context of x86 emulation, among all 3 architectures we support, RISC-V is the least expressive one.

RISC was explained to me in computer science history classes as "reduced instruction set computer", but I see a lot of articles and proposed new RISC-V profiles saying "we just need a few more instructions to get feature parity".

I understand that RISC-V is just a convenient alternative to other platforms for most people, but does this also mean the RISC dream is dead?

replies(7): >>41365583 #>>41365644 #>>41365687 #>>41365974 #>>41366364 #>>41370373 #>>41370588 #
gary_0 ◴[] No.41365687[source]
As I've heard it explained, RISC in practice is less about "an absolutely minimalist instruction set" and more about "don't add assembly-programmer conveniences or other such cleverness; rely on compilers instead of frontend silicon when possible".

Although, as I recall from reading the spec, RISC-V was rather particular about not adding "combo" instructions when common instruction sequences can instead be fused by the frontend.
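As a toy illustration of what fusion means (a sketch in TypeScript with an invented decoded-instruction shape; real frontends do this in silicon on bit-level encodings), here is the canonical lui+addi pair being combined into one internal macro-op:

    // Toy model of macro-op fusion. The Insn shape is made up for
    // this sketch, not taken from any real decoder.
    type Insn = { op: string; rd: number; rs1?: number; imm?: number };

    // Fuse the classic RV32I pair `lui rd, hi` + `addi rd, rd, lo`
    // (which materializes a 32-bit constant) into one macro-op.
    function fusePairs(insns: Insn[]): Insn[] {
      const out: Insn[] = [];
      for (let i = 0; i < insns.length; i++) {
        const a = insns[i];
        const b = insns[i + 1];
        if (a.op === "lui" && b !== undefined &&
            b.op === "addi" && b.rd === a.rd && b.rs1 === a.rd) {
          // One fused op: rd = (hi << 12) + lo
          out.push({ op: "li32", rd: a.rd, imm: ((a.imm! << 12) + b.imm!) | 0 });
          i++; // skip the addi we just consumed
        } else {
          out.push(a);
        }
      }
      return out;
    }

    // Two architectural instructions, one micro-op inside the core:
    console.log(fusePairs([
      { op: "lui",  rd: 5, imm: 0x12345 },
      { op: "addi", rd: 5, rs1: 5, imm: 0x678 },
    ])); // [ { op: "li32", rd: 5, imm: 0x12345678 } ]

The ISA stays small; the cleverness lives in the microarchitecture, where each vendor can choose how much of it to pay for.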

My (far from expert) impression of RISC-V's shortcomings versus x86/ARM is more that the specs were written starting with the very basic embedded-chip stuff, with more application-CPU extensions added over time. (The base RV32I spec doesn't even include integer multiplication.) Unfortunately, it took a long time to finish the bikeshedding on the bit-twiddling and SIMD/vector extensions, which resulted in the functionality gaps we're talking about.
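To make the RV32I point concrete: without the M extension there is no mul instruction, so the compiler (or a runtime helper such as libgcc's __mulsi3) has to expand multiplication into shifts and adds. A rough TypeScript sketch of that expansion, not any particular compiler's output:

    // Shift-and-add 32-bit multiply, the kind of sequence a compiler
    // must emit on base RV32I. Wraps modulo 2^32 like a hardware mul.
    function mul32(a: number, b: number): number {
      let acc = 0;
      let x = a | 0;    // multiplicand, doubled each step
      let y = b >>> 0;  // multiplier, treated as raw unsigned bits
      while (y !== 0) {
        if (y & 1) acc = (acc + x) | 0; // add when the low bit is set
        x = x << 1;
        y = y >>> 1;
      }
      return acc;
    }

    console.log(mul32(1234, 5678)); // 7006652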

So I don't think those gaps are due to RISC fundamentalism; there's no such thing.

replies(2): >>41365919 #>>41369318 #
Suppafly ◴[] No.41369318[source]
>and more about "don't add any assembly programmer conveniences or other such cleverness, rely on compilers instead of frontend silicon when possible"

What are the advantages of that?

replies(3): >>41369498 #>>41369725 #>>41370346 #
Retr0id ◴[] No.41369498[source]
It shifts implementation complexity from hardware onto software. It's not an inherent advantage, but an extra compiler pass is generally cheaper than increased silicon die area, for example.

On a slight tangent, from a security perspective, if your silicon is "too clever" in a way that introduces security bugs, you're screwed. On the other hand, software can be patched.

replies(1): >>41370589 #
flyingpenguin ◴[] No.41370589[source]
I honestly find the lack of compiler/interpreter complexity disheartening.

It often feels like, as a community, we have no interest in making better tools than the ones we started with.

Communicating with the compiler, generating code with code, and getting information back from the compiler should all be standard capabilities. In general they shouldn't be used, but if we also had better general access to profiling across our services, specialists within our teams could break out the special tools and improve critical sections.

I understand that many of us work on projects with already absurd build times, but I feel that is a side effect of the same refusal to improve CI/CD and build tools.

If you have ever worked on a modern TypeScript framework app, you'll understand what I mean: you can create decorators and macros that talk to the TypeScript compiler, asking it to generate extra JS or modify what it emits. And the whole framework sits there running partial rebuilds and refreshing your browser for you.
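For anyone who hasn't seen it, a minimal sketch of that style of metaprogramming, using standard TypeScript 5 method decorators (the names here are invented, not taken from any particular framework):

    // A method decorator that wraps the target and logs each call.
    function logged<This, Args extends unknown[], Ret>(
      target: (this: This, ...args: Args) => Ret,
      context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => Ret>
    ) {
      return function (this: This, ...args: Args): Ret {
        console.log(`calling ${String(context.name)} with`, args);
        return target.call(this, ...args);
      };
    }

    class Greeter {
      @logged
      greet(name: string): string {
        return `hello, ${name}`;
      }
    }

    new Greeter().greet("world"); // logs the call, returns "hello, world"

Framework decorators go further and ask the compiler to emit extra JS at build time, but the mechanism is the same.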

It makes things like golang feel like they were made in the 80s.

Freaking golang... I get it: macros, decorators, and generics get over-used. But I am making a library to standardize something across all 2,100 developers in my company... I need some metaprogramming tools, please.

replies(1): >>41376517 #
pjmlp ◴[] No.41376517[source]
I usually talk a lot about Oberon or Limbo; however, their designs were constrained by the hardware costs of the 1990s, and by how much more the alternatives asked for in resources.

We are three decades past those days, and hardware that was then only available in universities or companies with very deep pockets is now more than enough to run those systems.

Yet the Go culture hasn't updated itself, or has done so only very reluctantly, repeating the usual mistakes that were already plain to see when 1.0 came out.

And since they hit gold with the CNCF projects, Go is pretty much unavoidable for some kinds of work.