
1401 points alankay | 1 comment | source

This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
erring ◴[] No.11944170[source]
In a recent talk, Ivan Sutherland spoke along the lines of, “Imagine that the hardware we use today had time as a first-class concept. What would computing be like?” [1]

To expand on Sutherland's point: Today's hardware does not concern itself with reflecting the realities of programming. The Commodore Amiga, whose blitter chip enabled high-speed bitmap operations with straightforward software control, opened up a whole new level of game programming. Lisp machines, running Lisp in silicon, famously enabled an incredibly powerful development environment. Evidence is mounting that the fundamental concepts we need for a new kind of computing have to be ingrained in silicon, and that programmers, saved from the useless toil of reimplementing the essentials, should be comfortable working at the (much “higher” and simpler) hardware level. Today, instead of striving for better infrastructure of this sort, we are toiling away at building bits of the perpetually rotting superstructure in slightly better ways.
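To make the blitter point concrete: the core operation that chip performed in hardware is a rectangular copy between bitmaps. Here is a minimal software sketch of that operation in C. The function name, signature, and the one-byte-per-pixel layout are assumptions for illustration only; the real Amiga hardware worked on planar bitplanes and supported logic minterms, which this sketch omits.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of a "blit": copy a width x height rectangle of
 * pixels from one bitmap to another. Each bitmap is a byte array whose
 * rows are `stride` bytes apart. This is what the Amiga's blitter did
 * in dedicated hardware, freeing the CPU for game logic. */
void blit(uint8_t *dst, int dst_stride,
          const uint8_t *src, int src_stride,
          int width, int height)
{
    for (int row = 0; row < height; row++) {
        /* Copy one scanline of the rectangle per iteration. */
        memcpy(dst + (size_t)row * dst_stride,
               src + (size_t)row * src_stride,
               (size_t)width);
    }
}
```

The point of putting this in silicon is that the loop above runs via DMA, concurrently with the CPU, so "draw a sprite" becomes a few register writes rather than a per-pixel software loop.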

The more radical voices in computer architecture and language design keep asserting in their various ways that a paradigm shift in how we do infrastructure will have to involve starting over with computing as we know it. Do you agree? Is it impossible to have time as a first-class concept in computing with anything short of a whole new system of computing, complete with a fundamentally new hardware design, programming environment and supporting pedagogy? Or can we get there by piling up better abstractions on top of the von Neumann baggage?

[1] This is from memory. Apologies for a possible misquotation, and corrections most welcome.

replies(2): >>11945608 #>>11972902 #
1. alankay1 ◴[] No.11945608[source]
Sure!