
55 points by todsacerdoti | 6 comments
gelstudios No.26594310
These branches of computing history are really interesting.

> Domain/OS uses a single-level storage mechanism, whereby a program gains access to an object by mapping object pages directly into the process's address space.

It sounds similar in that respect to IBM i, and seems like an evolutionary branch that died off. What ever happened to this paradigm?
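
For anyone who hasn't run into the idea, the closest everyday analogue is probably just mmap on a Unix system: the "object" (here an ordinary file, the name and size are made up for illustration) shows up as pages in your address space and you use it through plain pointers instead of read()/write() calls. This is only a sketch of the flavor of the idea; on Domain/OS and IBM i it's pervasive and transparent rather than an explicit call you make:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* "object.dat" is a made-up name; on a single-level-store system
           there is no separate open/map step, the object's pages simply
           appear in your address space. This only shows the flavor of
           using memory as the sole interface to persistent data. */
        int fd = open("object.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        char *obj = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (obj == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(obj, "hello, single-level store");   /* a plain store persists it */
        msync(obj, 4096, MS_SYNC);                  /* flush to the backing object */
        printf("%s\n", obj);

        munmap(obj, 4096);
        close(fd);
        return 0;
    }

The difference on those systems, as I understand it, is that this isn't an opt-in API: more or less everything is accessed that way.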

replies(4): >>26595328 #>>26596640 #>>26596892 #>>26598572 #
skissane No.26596640
> It sounds similar in that respect to IBM i, and seems like an evolutionary branch that died off

Even on IBM i it is in decline. Originally everything ran in the single-level store address space, but then they introduced additional non-single-level address spaces (teraspaces). And one of the major things teraspaces are used for is to run PASE, which is IBM i's AIX binary compatibility subsystem. And IBM appears to have a preference to ship new stuff in PASE. The single-level store environment is still used by "classic" apps (such as RPG and COBOL), but newer stuff – especially anything written in newer languages such as Java, Python, etc. – runs outside of the single-level store in a PASE teraspace.

replies(1): >>26596895 #
anyfoo No.26596895
But is that just to accommodate the newer, more mainstream stuff, or because it's actually technically better?
replies(1): >>26597074 #
skissane No.26597074
I think it is mainly about making it easier to port code from more mainstream platforms (AIX/Unix/Linux), which reduces engineering costs. Porting open source code is a low cost way to get new functionality and features, and makes the environment seem more familiar and modern to newcomers who are familiar with Linux – and commercial Unixes such as AIX are pretty close to Linux. When detractors call it a "legacy" platform, their sales team can now respond "it's not legacy, it runs node.js!"

But one thing I think it demonstrates is a problem with non-mainstream operating system architectures. Even if a non-mainstream operating system architecture is technically superior, sooner or later you want to port software to it from a mainstream operating system, which means you need a compatibility layer implementing a more mainstream operating system architecture. And before you know it most of the code is running in the compatibility layer, because that's where all the new applications are coming from and there is no way you can keep up with that pace yourself. And then you have to ask what is the point of the innovative non-mainstream architecture if so much of the software you run doesn't actually use it. So eventually it leads you to moving off the non-mainstream architecture and on to a more mainstream one.

Is IBM i technically superior? It is a weird mixture of (a) advanced concepts like single-level store, an object-oriented operating system and bytecode virtual machine (b) legacy crud like EBCDIC, RPG, block mode terminals, 10 character limit on object names and a single-level filesystem (c) a severe lack of extensibility and openness in which a lot of OS concepts (e.g object types) are closed up so only IBM engineering can extend them (or possibly ISVs who pay big $$$$ for NDA manuals) (d) the completely different worlds of POSIX/AIX/Java grafted on the side, and increasingly taking over the rest. I grant that (a) could be said to be technically superior, but (b) and (c) clearly are not.

replies(1): >>26597189 #
anyfoo No.26597189
But that's entirely my point, yeah. I don't know if a single-level store address space is better, but if the reason for its decline on IBM i is merely that mainstream software doesn't mesh well with it, I feel like it doesn't tell me much about the paradigm itself.

By the way, I'd argue about whether all of b) is technically inferior or not. Object name limits certainly are, but I got to really know data entry with block mode terminals long, long after its heyday (I'd certainly come across it back then, but I was rarely a user). I feel that it can be enormously efficient for data entry and maintenance tasks. Many a person who had to move from intensive use of a block mode data entry terminal to performing the same tasks with a web app got quite annoyed at the clumsiness of it all.

The web was not created for "business apps" but for hypertext document retrieval; the other uses got bolted on, and it still very much shows. It's sad, because proper terminal emulation used to be a ubiquitous feature of the Internet, before browsers took over almost entirely.

replies(1): >>26597322 #
skissane No.26597322
> but I got to really know data entry with block mode terminals long, long after its heyday (I'd certainly come across it back then, but I was rarely a user)

I don't think block mode terminals are necessarily inferior. I see some big problems with 5250 though. The biggest is EBCDIC.

Another big problem is that character-at-a-time interfaces let you build things like text editors (vim and emacs), spreadsheets (like Lotus 1-2-3), etc. Sure, you can build a text editor for a block mode terminal (SEU on IBM i, XEDIT on z/VM, ISPF EDIT on z/OS), but there are certain features and interaction styles that vim and emacs support which block mode terminals can't do as nicely (example: interactive search). Lotus 1-2-3 was actually ported to 3270 (to run under MVS and VM/CMS); I've never used it (I would love it if someone could find a copy so I could!), but from what I've heard it was pretty clunky compared to the MS-DOS / PC version.
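
To make the contrast concrete, here's a minimal sketch (plain POSIX termios, nothing to do with any IBM API, and the prompt is just illustrative) of the keystroke-at-a-time input loop that makes something like incremental search possible. A block mode terminal buffers the screen locally and only sends the host the modified fields when an attention key is pressed, so there's simply no equivalent of this loop there:

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios saved, raw;
        tcgetattr(STDIN_FILENO, &saved);
        raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);          /* no line buffering, no echo */
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        printf("search: ");
        fflush(stdout);
        char c;
        while (read(STDIN_FILENO, &c, 1) == 1 && c != '\r' && c != '\n') {
            /* a real editor would re-run the search and repaint on every key */
            putchar(c);
            fflush(stdout);
        }
        putchar('\n');

        tcsetattr(STDIN_FILENO, TCSANOW, &saved); /* restore the terminal */
        return 0;
    }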

Sometimes I think that block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client. Actually real 3270s and 5250s generally had some kind of CPU in them (like an 8080) so I can't see why they couldn't have done that. And of course terminal emulators could do that. Then you could have these more flexible interaction styles that character mode terminals support even in a block mode terminal.
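
What I'm imagining is really just a tiny interpreter loop running in the terminal's own CPU. Purely as a sketch (the opcodes here are invented for illustration; nothing like this actually shipped in 3270/5250 hardware as far as I know):

    #include <stdio.h>

    /* hypothetical opcodes, invented for this sketch */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break; /* e.g. repaint a field */
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* the host downloads a little program along with the form; the
           terminal runs it locally, no round trip to the application server */
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }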

replies(1): >>26597487 #
anyfoo No.26597487
> I don't think block mode terminals are necessarily inferior. I see some big problems with 5250 though. The biggest is EBCDIC.

Oh yeah I agree, the actual implementation details in this case are icky.

> Another big problem is that character-at-a-time interfaces let you build things like text editors (vim and emacs), spreadsheets (like Lotus 1-2-3)

That's true, but at the same time block mode allows for highly standardized and always latency-free data entry and manipulation. I wonder if this is just a case of different technologies for different use cases.

> Sometimes I think that block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client.

Hmm, it helps to preserve the zero latency aspect (if done correctly), but at the same time opens up the door for shoddy implementation and non-standard UX.

And then I'm sure people would come up with all sorts of "UI libraries" for terminals that they think are very clever, but that just make everything fragmented and clumsy again. It's just like the web: I often wish that a web site was just a plain old HTML page with maybe a standard web form, instead of whatever crazy js-backed UI the web framework du jour came up with...

replies(2): >>26597668 #>>26604024 #
skissane No.26597668
> That's true, but at the same time block mode allows for highly standardized and always latency-free data entry and manipulation. I wonder if this is just a case of different technologies for different use cases.

Other vendors – such as DEC and HP – had dual-mode ASCII terminals that normally operated in character-at-a-time mode, but had an escape sequence you could use to switch them into block mode. Maybe that's the best of both worlds. However, in practice, few apps used the block mode; even "data entry" style apps which could have used it often didn't. Part of that was that using block mode basically tied you to a single brand of terminal, whereas manually generating forms using character mode was more portable. A lot of clone terminals and emulators imitate DEC VT terminals, but few of them included the block mode functions.

replies(1): >>26597801 #
anyfoo No.26597801
Ah, I can totally imagine that being the case, yeah. Sigh, looks like there's no way out, we'll keep inventing ourselves into half-baked solutions on top of existing things.
kragen No.26604024
> at the same time block mode allows for highly standardized and always latency free data entry and manipulation. ... block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client. ... it helps to preserve the zero latency aspect (if done correctly), but at the same time opens up the door for shoddy implementation and non-standard UX.

I've often had the same thought: wasn't it a terrible waste of an Intel 8080 to build a stupid VT-100 around it? An 8080 could run CP/M and Turbo Pascal and SuperCalc! Wouldn't it have been great if some computer company had had the foresight to take their terminals in that direction instead?

And it turns out that actually happened. Sort of.

My closest brush with this direction of evolution was the HP 2640 and 2645 terminals normally used on HP 3000s; although they supported a block mode, they were commonly used in a CLI sort of way, but with scrollback and local editing. So you could, as I understand it, tell the line-mode editor to spit out, say, ten lines, which it did with line numbers attached; then you could use the terminal's cursor keys to go up and edit those lines, and hitting RETURN would send the modified line to the editor, complete with the line number, and then the editor would replace the line with the edited version. And of course this also gave you the equivalent of less(1) (with a limited buffer) and the ability to edit and resend previous commands (but without tab-completion). To achieve these feats, the 2640, introduced in 01974, used the Intel 8008, a slower one-chip clone of the Datapoint 2200 terminal's CPU board, and the 2645 used its successor, the same 8080 the VT-100 would use.
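
To see how little the host side of that interaction needs, here's a toy reconstruction (the ten canned lines and the buffer sizes are invented, and a real editor would of course also parse commands): the editor lists numbered lines, and any line that comes back starting with a number is treated as a replacement for that line, which is essentially what the 2640's local editing plus RETURN gave you:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NLINES  10
    #define LINELEN 128

    int main(void) {
        /* ten canned lines stand in for the file being edited */
        char buf[NLINES][LINELEN];
        for (int i = 0; i < NLINES; i++)
            snprintf(buf[i], LINELEN, "line %d text", i + 1);

        /* "spit out, say, ten lines ... with line numbers attached" */
        for (int i = 0; i < NLINES; i++)
            printf("%4d  %s\n", i + 1, buf[i]);

        /* every RETURN from the terminal delivers "<number> <edited text>";
           the editor replaces that line and echoes the result */
        char input[LINELEN + 16];
        while (fgets(input, sizeof input, stdin)) {
            char *rest;
            long n = strtol(input, &rest, 10);
            if (n < 1 || n > NLINES) continue;     /* not an edited line */
            while (*rest == ' ') rest++;
            rest[strcspn(rest, "\n")] = '\0';
            snprintf(buf[n - 1], LINELEN, "%s", rest);
            printf("%4ld  %s\n", n, buf[n - 1]);
        }
        return 0;
    }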

The Datapoint corporation (originally CTC) had been selling "programmable terminals" the whole time, starting in 01971, and unlike the 2640 or the IBM 5250, it was user-programmable, with either assembly https://history-computer.com/Library/2200_Programmers_Man_Au... or PL/B (see below). (It also had tape drives, a source-code editor, an assembler, and a primitive OS.) In 01981 they sold US$450 million of terminals, which I guess must have been about 100,000 terminals, making it a Fortune 500 company.

Datapoint's terminals had a bytecode interpreter for PL/B, which was what passed for a high-level programming language at the time. https://en.wikipedia.org/wiki/Programming_Language_for_Busin...

But what about spreadsheets? Because obviously you could do useful calculations with an 8080 or even an 8008. Also, graphics! Well, so in 01978 HP shipped the 2647A terminal, which had a BASIC interpreter, so you could do calculations, and it had graphics so you could plot functions. But spreadsheets as we know them hadn't been invented yet.

In the HP 3000 division that made the 2640, there was an HP employee who'd previously built the Breakout machine for Atari when his day job was designing HP scientific calculators. When the calculator division moved to Oregon, he switched over to the HP 3000 division, but at home, he'd already built a cheaper video terminal. Then he'd added a 6502 microprocessor to it, and wrote a BASIC for it based on the HP Basic manual he read at work. His name was Steve Wozniak, and that was the Apple I. He was selling it for about 20% of the price of a 2647A, in 01976, two years before the 2647A. http://www.foundersatwork.com/steve-wozniak.html. And the Apple had graphics, too! In fact, even before the 2647A shipped, Wozniak had started selling the "Apple ][" with the Atari employee who'd stolen his Breakout bonus, a Transcendental Meditation instructor and scam artist named Steve Jobs.

Apple's BASIC, although it was basically a command-line system, had the same screen-editing feature as the HP 2640 terminal: you could move the cursor up to a line of BASIC and edit it with the arrow keys, and on hitting RETURN it would change the program in memory. (I know AppleSoft BASIC did this; I think Wozniak's Integer BASIC did too, but I never used it, so I'm not sure. Microsoft later cloned the feature in their BASICs for the IBM PC.)

So, getting back to spreadsheets, what we know today as the spreadsheet was invented by Bricklin and Frankston as VisiCalc, shipped on the Apple ][ in 01979. As Wozniak said in the article I linked above:

> In the Homebrew Computer Club, we felt it was going to affect every home in the country. But we felt it for the wrong reasons. We felt that everybody was technical enough to really use it and write their own programs and solve their problems that way. Even when we started Apple, we had very mistaken ideas about where the market was going to be to be that big. We didn't foresee the VisiCalc spreadsheet.

Frankston and Bricklin originally thought about implementing it on the DEC Programmable Data Terminal, which embedded a PDP-11 (LSI-11) into an Intel-8080-driven VT-100 terminal. The PDT was introduced in 01978, and in 01981 the PDT had shipped over 2600 units, with a base price of US$4800: http://www.bitsavers.org/pdf/datapro/programmable_terminals/... Fortunately, they ended up on the Apple. Frankston credits the highly usable user interface they ended up with to the rapid feedback loop of experimenting with prototypes in Wozniak's Integer BASIC on the Apple ][: https://rmf.vc/implementingvisicalc

Datapoint, as I said, was selling tens or hundreds of thousands of terminals a year by 01980—but then, for reasons I don't understand, it collapsed by about 01984. I suspect the high prices (https://oldcomputers.net/datapoint-2200.html gives the 01972 price as US$7800, three times the price of a 2640 and about US$50k today, and I imagine this continued to affect their sales channels until their death) allowed them to be eclipsed by Apple (1 million units sold in 01983, 6 million total of the Apple II series) and Commodore (about 15 million 64s sold, 2 million per year around 01983). But maybe having to program them in PL/B or 8008 assembly was a big disadvantage compared to BASIC, 6502 assembly, Z80 assembly, or especially 8086 assembly. I've never seen a Datapoint terminal in real life.

In the 01980s it became commonplace to replace both block-mode terminals like the 5250 and character-mode terminals like the VT-100 with IBM-compatible PCs, which were inspired by the 01970s personal computer hobbyists like Wozniak. Typically the PCs were running entire database applications talking to a fileserver, instead of sending blocks or forms to an application server.

So I think that's the way it shook out: there was a slippery slope from "running some interactivity in the client" to "running the whole application in the client, where it could be fully interactive, relegating the server to file storage". Full of, yes, shoddy implementation and non-standard UX. I think this slippery slope is because it's kind of a pain to split an interactive application into two parts running on different computers, requiring careful attention to protocol design and the Fallacies of Distributed Computing. So terminals grew up into PCs. It's kind of a Planet of the Apes ending.

But then the internet started to take off...