
1401 points alankay | 9 comments

This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
1. nostrademons ◴[] No.11939944[source]
What turning points in the history of computing (products that won in the marketplace, inventions that were ignored, technical decisions where the individual/company/committee could've explored a different alternative, etc.) do you wish had gone another way?
replies(1): >>11945435 #
2. alankay ◴[] No.11945435[source]
Just to pick three (and maybe not even at the top of my list if I were to write it and sort it):

(a) Intel and Motorola, etc. getting really interested in the Parc HW architectures that allowed Very High Level Languages to be efficiently implemented. Not having this in the 80s brought "not very good ideas from the 50s and 60s" back into programming, and was one of the big factors in:

(b) the huge propensity of "we know how to program" etc., that was the other big factor preventing the best software practices from the 70s from being the start of much better programming, operating systems, etc. in the 1980s, rather the reversion to weak methods (from which we really haven't recovered).

(c) The use of "best ideas about destiny of computing" e.g. in the ARPA community, rather than weak gestures e.g. the really poorly conceived WWW vs the really important and needed ideas of Engelbart.

replies(1): >>11957106 #
3. jonathanlocke ◴[] No.11957106[source]
I get (a) and (b) completely. On (c), I felt this way about NCSA Mosaic in 1993 when I first saw it and I'm relieved to hear you say this because although I definitely misunderstood a major technology shift for a few years, maybe I wasn't wrong in my initial reaction that it was stupid.
replies(1): >>11957620 #
4. mmiller ◴[] No.11957620{3}[source]
I didn't begin to get it until the industry started trying to use browsers for applications in the late '90s/early 2000s. I took one look at the "stateful" architecture they were trying to use, and I said to myself, "This is a hack." I learned shortly thereafter about criticism of it saying the same thing: "This is an attempt to impose statefulness on an inherently stateless architecture."

I kept wondering why the industry wasn't using X11, which already had the ability to carry out full GUI interactions remotely. Why reject a real-time interactive architecture designed for network use in favor of one that insisted on page refreshes to update the display? The whole thing felt like a step backward.

The point where it clobbered me over the head was when I tried to use a web application framework to make a complex web form application work. I got it to work, and the customer was very pleased, but I was ashamed of the code I wrote, because I felt like I had to write it like I was a contortionist. I was fortunate in that I'd had prior experience with other platforms where the architecture was more sane, so I didn't think this was a "good design." After that experience, I left the industry. I've been trying to segue into a different, more sane way of working with computers since.

I don't think any of my past experience really qualifies, with the exception of some small aspects and experiences. The key is not to get discouraged once you've witnessed works that put your own to shame, but to realize that the difference in quality matters, that it was done by people rather like yourself who had the opportunity to put focus and attention on it, and that one should aspire to meet or exceed it, because anything else is a waste of time.
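The "statefulness imposed on a stateless architecture" complaint can be sketched in a few lines. This is a hypothetical illustration (the names `handle_request` and `SESSIONS` are invented, and Python stands in for whatever framework was actually used): each HTTP request arrives with no memory of the previous one, so the server fakes a multi-step form by keying a server-side store on a cookie and rebuilding the page on every round trip.

```python
import uuid

# Server-side session store: session id -> per-user state, reconstructed
# from scratch on every request, because HTTP itself remembers nothing.
SESSIONS = {}

def handle_request(cookie, form_data):
    """Simulate one stateless request carrying continuity via a cookie."""
    if cookie not in SESSIONS:
        # First contact: mint a token the client must echo back forever.
        cookie = uuid.uuid4().hex
        SESSIONS[cookie] = {"step": 0, "fields": {}}
    state = SESSIONS[cookie]
    state["fields"].update(form_data)
    state["step"] += 1
    # Every response re-sends a whole page plus the cookie; there is no
    # live connection to update in place, only the next full refresh.
    return cookie, "page for step %d" % state["step"]

# Two "clicks" of a multi-step form, stitched together by the cookie.
c, page1 = handle_request(None, {"name": "Ada"})
c, page2 = handle_request(c, {"email": "ada@example.org"})
```

Contrast this with a connected GUI protocol, where the form would simply be a live object holding its own state between interactions.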
replies(2): >>11959400 #>>12010226 #
5. ontouchstart ◴[] No.11959400{4}[source]
How can we bring back X11 and good old interactive architecture to the generation of programmers growing up with AngularJS and ReactJS?

Or shall we reboot good ideas with IoT?

replies(2): >>11964693 #>>11965446 #
6. mmiller ◴[] No.11964693{5}[source]
My reference to X11 was mostly rhetorical, to tell the story. I learned at some point that the reason X11 wasn't adopted, at least in the realm of business apps I was in, was that it was considered a security risk. Customers had the impression that HTTP was "safe." That has since been proven false, as there have been many exploits of web servers, but I think by the time those vulnerabilities came to light, X11 was already considered passe. It's like how stand-alone PCs were put on the internet, and then people discovered they could be cracked so easily.

I think a perceived weakness was that X11 didn't have a "request-respond" protocol that worked cleanly over a network for starting a session. One could have easily been devised, but as I recall, that never happened. In order to start a remote session of some tool I wanted to use, I always had to log in to a server, using rlogin or telnet, type out the name of the executable, and tell it to "display" to my terminal address. It was possible to do this even without logging in. I'd seen students demonstrate that when I was in school: while they were logged in, they could start up an executable somewhere and tell it to "display" to someone else's terminal. The thing was, it could do this without the "receiver's" permission. It was pretty open that way. (That would have been another thing to implement in a protocol: don't "display" without permission, or at least without a request from the same address.) HTTP didn't have this problem, since I don't think it's possible to direct a browser to go somewhere without a corresponding, prior request from that browser.
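The missing handshake described in that parenthetical can be sketched roughly as follows. This is a hypothetical illustration in Python (real X11 is C speaking a wire protocol, and the function names and host names here are invented): it contrasts the open classic behavior, where anyone who can name your display gets drawn on it, with a consent-checking variant that only honors sessions the viewer previously requested.

```python
def x11_style_accept(request, _pending_requests):
    """Classic open behavior: render for any client that names the display."""
    return True

def request_respond_accept(request, pending_requests):
    """Hypothetical fix: only honor displays the viewer actually asked for."""
    return request["from"] in pending_requests

# The viewer asked this host for a session, and nothing else.
pending = {"appserver.example.edu"}

prank = {"from": "student-workstation-12"}   # uninvited "display" attempt
legit = {"from": "appserver.example.edu"}    # the session the viewer requested
```

Under the first policy the prank window appears on your terminal; under the second it is simply refused, which is essentially the property HTTP got for free by being request-driven.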

X11 was not the best designed GUI framework, from what I understand. I'd heard some complaints about it over the years, but at least it was designed to work over a network, which no other GUI framework of the time I knew about could. It could have been improved upon to create a safer network standard, if some effort had been put into it.

As Alan Kay said elsewhere on this thread, it's difficult to predict what will become popular next, even if something is improved to a point where it could reasonably be used as a substitute for something of lower quality. So, I don't know how to "bring X11 back." As he also said, the better ideas which ultimately became popularly adopted were ones that didn't have competitors already in the marketplace. So, in essence, the concept seemed new and interesting enough to enough people that the only way to get access to it was to adopt the better idea. In the case of X11, by the time the internet was privatized, and had become popular, there were already other competing GUIs, and web browsers became the de facto way people experienced the internet in a way that they felt was simple enough for them to use. I remember one technologist describing the browser as being like a consumer "radio" for the internet. That's a pretty good analogy.

Leaving that aside, it's been interesting to me to see that thick clients have actually made a comeback, taking a huge chunk out of the web. What was done with them is what I just suggested should've been done with X11: the protocol was (partly) improved. In typical fashion, the industry didn't quite get what should happen. They deliberately broke aspects of the OS that once allowed more user control, and they made using software a curated service, to make existing thick client technology safer to use.

The thinking was, not without some rationale, that allowing user control led to lots and lots of customer support calls, because people are curious, and usually don't know what they're doing. The thing was, the industry didn't try to help people understand what was possible.

Back when X11 was an interesting and productive way you could use Unix, the industry hadn't figured out how to make computers appealing to most consumers, and so in order to attract any buyers, they were forced into providing some help in understanding what people could do with the operating system, and/or the programming language that came with it. The learning curve was a bit steeper, but that also had the effect of limiting the size of the market. As the market has discovered, the path of least resistance is to make the interface simple, low-hassle, and utterly powerless from a computational standpoint, essentially turning a computer into a device, like a Swiss Army knife.

I think a better answer than IoT is education, helping people to understand that there is something to be had with this new idea. It doesn't just involve learning to use the technology. As Alan Kay has said, in a phrase that I think deserves to be explored deeply, "The music is not in the piano."

It's not an easy thing to do, but it's worth doing, and even educators like Alan continue to explore how to do this.

This is just my opinion, as it comes out of my own personal experience, but I think it's borne out in the experience of many of the people who have participated in this AMA: I think an important place to start in all of this is helping people to even hear that "music," and an important thing to realize is you don't even need a computer to teach people how to hear it. It's just that the computer is the best thing that's been invented so far for expressing it.

replies(1): >>11964831 #
7. ontouchstart ◴[] No.11964831{6}[source]
I had a similar experience to yours, and was comfortable coding web pages via cgi-bin with vi. :-)

That is why now I am very interested in containers and microservices in both local and network senses.

As a "consumer," I am also very comfortable communicating with people via messaging apps like WeChat and passing Wikipedia and GitHub links around. Some of them are JavaScript "web apps" written and published on GitHub by typing on my iPhone. Here is an example:

http://bigdata-mindstorms.github.io/d3-playground/ontouchsta...

Hope I can help more people to "hear the music" and _make_ and _share_ their own.

8. mmiller ◴[] No.11965446{5}[source]
This is not "bringing X11 back," but it's an improvement on JS.

https://news.ycombinator.com/item?id=11965253

9. jonathanlocke ◴[] No.12010226{4}[source]
I don't think networked X11 is quite the web we'd want (it's really outdated), but it does seem better than browsers, which as you point out are so bad you want to stab your eyes out. Unfortunately, now that the web has scaled up to this enormous size, people can't un-see it and it does seem like it's seriously polluted our thinking about how the Internet should interact with end users.

Maybe the trick is something close to this: we need an Internet where it's very easy to do not only WYSIWYG document composition and publishing (which is what the web originally was, minus the WYSIWYG), but to really deliver any kind of user experience we want (like VR, for example). It should be based on a network OS (an abstract, extensible microkernel on steroids) where user experiences of the network are actually programs with their own microkernel systems (sort of like an updated take on PostScript). The network OS can security-check the interpreters, enforce quotas, and deal out resources, and the microkernels that deliver user experiences like documents can be updated as what we want to do changes over time. I think we'd have something more in this direction (although I'm sure I missed any number of obvious problems) if we were to actually pass Alan Kay's OS-101 class as an industry.
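A toy sketch of that idea, with everything hypothetical (Python standing in for whatever VM a real network OS would use, and the opcode set invented for the example): a "document" is not markup but a small program, and the trusted side runs it under an interpreter whose execution is metered against a quota dealt out by the kernel.

```python
class QuotaExceeded(Exception):
    """Raised when an experience uses more than its dealt-out share."""

def run_experience(program, step_quota):
    """Interpret a tiny stack-machine 'user experience' under a step quota."""
    stack, steps = [], 0
    for op, arg in program:
        steps += 1
        if steps > step_quota:
            # The network OS, not the untrusted program, decides when to stop.
            raise QuotaExceeded("experience exceeded its resource quota")
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "emit":
            return stack.pop()

# A "document" delivered over the network as a program, not as markup.
doc = [("push", "Hello, "), ("push", "network OS"), ("add", None), ("emit", None)]
```

The point of the sketch is only the shape: content arrives as code, the interpreter for it can evolve, and the host stays safe because resources are metered from outside the program.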

We actually sort of very briefly started heading in this direction with Marimba's "Castanet" back at the beginning of Java, and I was WILDLY excited to see us trying something less dumb than the browser. Unfortunately, it would seem that economic pressures pushed Marimba into becoming a software deployment provider, which is really not what I think they were originally trying to do. Castanet should have become the OS of the web. I think Java still has the potential to create something much better than the web, because a ubiquitous and very mature virtual machine is a very powerful thing, but I don't see anyone trying to go there. There's this mentality of "nobody would install something better." And yet we installed Netscape and even IE...

BTW, I do think the security problems of running untrusted code are potentially solvable (at least so much as any network security problems are) using a proper messaging microkernel architecture with the trusted resource-accessing code running in one process and the untrusted code running in another. The problem with the Java sandbox (so far as I understand all that) is that it's in-process. The scary code runs with the trusted code. In theory, Java is controlled enough to protect us from the scary code, but in practice, people are really smart and one tiny screw-up in the JVM or the JDK and bad code gets permissions it shouldn't have. A lot of these errors could be controlled or eliminated by separating the trusted code from the untrusted code as in Windows NT (even if only by making the protocol for resource permissions really clear).
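The split suggested here can be illustrated with a minimal sketch. All names are hypothetical, the two "processes" are collapsed into plain functions for brevity, and Python stands in for the JVM case: the untrusted side can only construct messages, never resource handles, and a separate trusted broker checks every request against an explicit permission list before anything is touched.

```python
# Permissions the trusted side grants, as (operation, resource) pairs.
ALLOWED = {("read", "/tmp/sandbox/data.txt")}

def trusted_broker(msg):
    """Runs in the privileged process; enforces the policy on every message."""
    if (msg["op"], msg["path"]) not in ALLOWED:
        return {"ok": False, "err": "permission denied"}
    # Only here would real resource-accessing code run.
    return {"ok": True}

def untrusted_code(send):
    """Runs in the sandboxed process; has no API except message-passing."""
    r1 = send({"op": "read", "path": "/tmp/sandbox/data.txt"})  # permitted
    r2 = send({"op": "read", "path": "/etc/passwd"})            # refused
    return r1, r2

r1, r2 = untrusted_code(trusted_broker)
```

With a genuine process boundary between the two, a bug in the untrusted interpreter can at worst produce rejected messages, instead of inheriting the trusted code's permissions the way an in-process sandbox escape does.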