
358 points by ofalkaed | 3 comments

Just curious, and who knows, maybe someone will adopt it or develop something new based on its ideas.
Animats No.45556053
- Photon, the graphical interface for QNX. Oriented more towards real time (widgets included gauges) but good enough to support two different web browsers. No delays. This was a real time operating system.

- MacOS 8. Not the Linux thing, but Copland. This was a modernized version of the original MacOS, continuing the tradition of no command line. Not having a command line forces everyone to get their act together about how to install and configure things. Probably would have eased the transition to mobile. A version was actually shipped to developers, but it had to be killed off to justify the bailout of NeXT by Apple to get Steve Jobs.

- Transaction processing operating systems. The first one was IBM's Customer Information Control System (CICS). A transaction processor is a kind of OS where everything is like a CGI program - load program, do something, exit program. Unix and Linux are, underneath, terminal-oriented time-sharing systems.

- IBM MicroChannel. Early minicomputer and microcomputer designers thought "bus", where peripherals can talk to memory and peripherals look like memory to the CPU. Mainframes, though, had "channels", simple processors which connected peripherals to the CPU. Channels could run simple channel programs, and managed device access to memory. IBM tried to introduce that with the PS/2, but they made it proprietary and that failed in the marketplace. Today, everything has something like channels, but they're not a unified interface concept that simplifies the OS.

- CPUs that really hypervise properly. That is, virtual execution environments look just like real ones. IBM did that in VM, and it worked well because channels are a good abstraction for both a real machine and a VM. Storing into device registers to make things happen is not. x86 has added several layers below the "real machine" layer, and they're all hacks.

- The Motorola 680x0 series. Should have been the foundation of the microcomputer era, but it took way too long to get the MMU out the door. The original 68000 came out in 1979, but then Motorola fell behind.

- Modula. Modula-2 and Modula-3 were reasonably good languages. Oberon was a flop. DEC was into Modula, but Modula went down with DEC.

- XHTML. Have you ever read the parsing rules for HTML 5, where the semantics for bad HTML were formalized? (A sketch of that recovery machinery follows this list.) Browsers should just punt at the first error, display an error message, and render the rest of the page in Times Roman. Would it kill people to have to close their tags properly?

- Word Lens. Look at the world through your phone, and text is translated, standalone, on the device. No Internet connection required. Killed by Google in favor of hosted Google Translate.
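
To make the HTML 5 point concrete: the spec defines exactly what a parser must build from misnested markup, so every browser recovers the same DOM. A small sketch; the fragment below is illustrative, not taken from the spec:

    <!-- misnested input: the <i> is still open when </b> arrives -->
    <p><b>bold <i>both</b> italic</i></p>

    <!-- the DOM every HTML5 parser recovers, via the "adoption agency" algorithm -->
    <p><b>bold <i>both</i></b><i> italic</i></p>

Under XHTML rules the first fragment is simply a well-formedness error, which is exactly the strictness being argued for here.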

eterm No.45556229
> Would it kill people to have to close their tags properly

It would kill the approachability of the language.

One of the joys of learning HTML when it tended to be hand-written was that if you made a mistake, you'd still see something, just with distorted output.

That was a lot more approachable for people who were put off "real" programming languages because they were overwhelmed by terrible error messages any time they missed a bracket or misspelled something.

If you've learned to program in the last decade or two, you might not even realise just how bad compiler errors tended to be in most languages.

The kind of thing where you could miss a bracket on line 47 but end up with a compiler error complaining about something 20 lines away.

Rust (in particular) got everyone to raise their game with respect to meaningful compiler errors.

But in the days of XHTML? Error messages were arcane; you had to dive in to see what the problem actually was.

bazoom42 No.45556403
If you forget a closing quote on an attribute in html, all content until the next quote is ignored and not rendered - even if it is the rest of the page. I don't think this is more helpful than an error message. It was just simpler to implement.
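
To make that concrete, here is a made-up fragment with the closing quote on href missing:

    <p>Read the <a href="guide.html>full guide</a> before you start.</p>
    <p>The next paragraph, with a "quoted" word in it.</p>

Roughly speaking, everything from guide.html> up to the next " character is read as the href value, so the link text, the rest of the first paragraph and the opening of the second never render, and no error is reported anywhere.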
eterm No.45556930
Let's say you forget to close a <b></b> element.

What happens?

Even today, after years of better error messages, the strict validator at https://validator.w3.org/check says:

    Error Line 22, Column 4: end tag for "b" omitted, but OMITTAG NO was specified 
What is line 22?

    </p>

It's up to you to go hunting back through the document to find the unclosed 'b' tag.

Back in the day, the error messages were even more misleading than this, often talking about "Extra content at end of document" or similar.

Compare that to the very visual feedback of putting this exact document into a browser.

You get more bold text than you were expecting; the bold just runs into the next text.

That's a world of difference, especially for people who prefer visual feedback to reading and understanding errors in text form.

Try it for yourself: save this document to a .html file and put it through the XHTML validator.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <?xml-stylesheet href="http://www.w3.org/StyleSheets/TR/W3C-WD.css" type="text/css"?>
    <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

    <head>
      <title>test XHTML 1.0 Strict document</title>
      <link rev="made" href="mailto:gerald@w3.org" />
    </head>

    <body>

    <p>
    This is a test XHTML 1.0 Strict document.
    </p>

    <p>
    See: <a href="./">W3C Markup Validation Service: Tests</a>
    <b>huh
    Well, isn't that good

    </p>

    <hr />

    <address>
      <a href="https://validator.w3.org/check?uri=referer">valid HTML</a><br />
      <a href="../../feedback.html">Gerald Oskoboiny</a>
    </address>

    </body>

    </html>
1. chrismorgan No.45557204
For reference, observe what happens if you open this malformed document in a browser as actual XHTML: save it with a .xhtml extension, or serve it with the MIME type application/xhtml+xml.

Firefox displays naught but the error:

  XML Parsing Error: mismatched tag. Expected: </b>.
  Location: file:///tmp/x.xhtml
  Line Number 22, Column 3:
  </p>
  --^
Chromium displays this banner on top of the document up to the error:

  This page contains the following errors:
  error on line 22 at column 5: Opening and ending tag mismatch: b line 19 and p
  Below is a rendering of the page up to the first error.
2. eterm No.45557527
Thanks for showing these. We can see that Firefox gives the same style of accurate but unhelpful error message.

Chromium is much more helpful in the error message, directing the user to both lines 19 and 22. It also made the user-friendly choice to render up to the error.

In the context of XHTML, we should also keep in mind that Chrome post-dates XHTML by almost a decade.

3. chrismorgan No.45557583
If, on the other hand, you have certain sorts of XSLT errors, Firefox gives you a reasonably helpful error message in the dev tools, whereas Chromium gives you a blank document and nothing else… unless you launched it from a terminal. I'm still a little surprised that I managed to discover that it was emitting XSLT errors to stdout or stderr (don't remember which).
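
If anyone wants to reproduce that, a minimal setup (filenames below are made up) is an XML file whose xml-stylesheet PI points at a stylesheet containing a deliberate mistake, such as an invalid XPath expression, with both files served from the same directory over HTTP:

    <!-- doc.xml (hypothetical) -->
    <?xml-stylesheet type="text/xsl" href="broken.xsl"?>
    <root><item>hello</item></root>

    <!-- broken.xsl (hypothetical): the select expression is not valid XPath -->
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <html xmlns="http://www.w3.org/1999/xhtml"><body>
          <p><xsl:value-of select="//item["/></p>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>

Opening doc.xml should then trigger the kind of XSLT error described above, though where each browser chooses to surface it is another matter.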

Really, neither has particularly great handling of errors in anything XML. None of it is better than minimally maintained; a lot of it has simply been unmaintained for a decade or more.