
1401 points by alankay | 1 comment

This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
sebastianconcpt ◴[] No.11940121[source]
Hi Alan,

1. What do you think about the hardware we are using as the foundation of computing today? I remember you mentioning how cool the architecture of the Burroughs B5000 [1] was, being designed to run higher-level programming languages on the metal. What should hardware vendors do to make hardware that is friendlier to higher-level programming? Would that help us depend less on VMs while still enjoying silicon-level performance?

2. What software technologies do you feel we're missing?

[1] https://en.wikipedia.org/wiki/Burroughs_large_systems

replies(1): >>11940224 #
alankay1 ◴[] No.11940224[source]
If you start with "desirable process" you can eventually work your way back to the power plug in the wall. If you start with something already plugged in, you might miss a lot of truly desirable processes.

Part of working your way back to reality can often require new hardware to be made or -- in the case of the days of microcode -- to shape the hardware.

There are lots of things vendors could do. For example: Intel could make its first level caches large enough to make real HLL emulators (and they could look at what else would help). Right now a plug-in or available FPGA could be of great use in many areas. From another direction, one could think of much better ways to organize memory architectures, especially for multi-core chips where they are quite starved.

And so on. We've gone very far down the road of "not very good" matchups, and of vendors getting programmers to make their CPUs useful rather than the exact opposite approach. This is too large a subject for today's AMA.

replies(3): >>11940320 #>>11944027 #>>11945386 #
elcritch ◴[] No.11944027[source]
Have you looked into the various Haskell/OCaml to hardware translators people have been coming up with the past few years?

It seems like it's been growing, and several FPGAs are near that PnP status. In particular, the notion of developing a compile-time-proven RTS using continuation passing would be sweet.

Even with newer hardware, it seems we're still stuck with either dynamic mutable languages or static functional ones. Any thoughts on how we could design systems incorporating the best of both using modern hardware capacities? Like, say, a reconfigurable hierarchical element system where each node is an object/actor? Going out on a bit of a limb with that last one!

replies(1): >>11945254 #
alankay1 ◴[] No.11945254{3}[source]
Without commenting on Haskell, et al., I think it's important to start with "good models of processes" and let these interact with the best we can do with regard to languages and hardware in the light of these good models.

I don't think the "stuckness" in languages is other than like other kinds of human "stuckness" that come from being so close that it's hard to think of any other kinds of things.

replies(1): >>11948555 #
elcritch ◴[] No.11948555{4}[source]
Thanks! That helps reaffirm my thinking that "good models of processes" are important, even though implementations will always have limitations. Good to know I'm not completely off base...

A good example for me has been the virtual memory pattern, where from a process's point of view you model memory as an ideal, unlimited virtual space. Then you let the kernel implementation (and hardware) deal with the practical (and difficult) details. Microsoft's Orleans implementation of the actor model has a similar approach, which they call "virtual actors," that is interesting as well.
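The "virtual actor" idea can be sketched in a few lines: actors are addressed by identity and activated lazily on first use, much as virtual memory maps a page on first touch. This is only an illustrative sketch (the `Counter` actor and `VirtualActorRuntime` names are made up here, not Orleans' actual API):

```python
class Counter:
    """A toy actor identified by a string id."""
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

class VirtualActorRuntime:
    """Activates an actor the first time its id is referenced, so
    callers never manage actor lifetimes explicitly."""
    def __init__(self, actor_cls):
        self.actor_cls = actor_cls
        self._active = {}  # actor id -> activated instance

    def get(self, actor_id):
        if actor_id not in self._active:
            # Lazy activation: the actor "always exists" logically,
            # but is only instantiated on first touch.
            self._active[actor_id] = self.actor_cls(actor_id)
        return self._active[actor_id]

runtime = VirtualActorRuntime(Counter)
runtime.get("sensor-1").increment()
runtime.get("sensor-1").increment()
print(runtime.get("sensor-1").value)  # 2 -- same logical actor both times
```

The caller just names the actor; whether it was already activated is the runtime's problem, mirroring how a process names a virtual address without caring whether the page is resident.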

My own stuckness has been an idea of implementing processes using hierarchical state machines, especially for programming systems of IoT-type devices. But I haven't been able to figure out how to incorporate type-checking theorems into it.