
116 points by vgr-land | 9 comments

A few weeks ago a friend sent me grug-brain XSLT (1), which inspired me to redo my personal blog in XSLT.

Rather than just build my own blog on it, I wrote it up for others to use and published it on GitHub (2).

Since others have XSLT on the mind, now seems as good a time as any to share it with the world. Evidlo did a fine job explaining how XSLT works (3).

The short version on how to publish using this framework is:

1. Create a new post in HTML wrapped in the XML headers and footers the framework expects.

2. Tag the post so that it's unique and the framework can find it at build time.

3. Add the post to the posts.xml file.

And that's it. No build system to update menus, no separate RSS file to maintain (posts.xml is the RSS feed). As a reusable framework it likely has bugs lurking in the CSS, but otherwise I'm finding it perfectly usable for my needs.
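To make the steps concrete, here's a rough sketch of what a post file and its posts.xml entry could look like. The element names, paths, and stylesheet name below are illustrative guesses rather than the framework's actual schema; check the repo for the real envelope.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="../site.xsl"?>
    <!-- posts/2024-06-01-hello.xml (hypothetical): the post body is
         plain HTML riding inside the framework's XML envelope -->
    <post id="hello-world">
      <title>Hello, world</title>
      <body>
        <p>My first post, written as HTML inside an XML wrapper.</p>
      </body>
    </post>

And because posts.xml doubles as the RSS feed, registering the post is just appending an RSS item:

    <!-- new entry in posts.xml -->
    <item>
      <title>Hello, world</title>
      <link>https://example.com/posts/2024-06-01-hello.xml</link>
      <guid isPermaLink="false">hello-world</guid>
      <pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
    </item>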

Finally, it'd be a shame if XSLT were removed from the HTML spec (4); I've found it quite elegant in its simplicity.

(1) https://news.ycombinator.com/item?id=44393817

(2) https://github.com/vgr-land/vgr-xslt-blog-framework

(3) https://news.ycombinator.com/item?id=44988271

(4) https://news.ycombinator.com/item?id=44952185

(Aside: first-time caller, long-time listener to HN. Thanks!)

1. b_e_n_t_o_n No.45009540
I guess I just don't get the point. In order for the page to load, it had to make four sequential round trips to the server, which meant it loaded slower than my bloated JavaScript SPA framework blog on a throttled connection. I don't really see how this is preferable to plain HTML, especially when there's a wealth of tools for building static blogs. Is it the no-build aspect of it?
replies(2): >>45009634 #>>45011470 #
2. riehwvfbk No.45009634
It did make all those requests, but only because the author set up caching incorrectly. With corrected cache headers, site.xsl, pages.xml, and posts.xml would each be downloaded only once.
replies(1): >>45009969 #
3. b_e_n_t_o_n No.45009969
The cache headers are correct: you can't cache those files indefinitely, because they might change. Maybe you could get away with a short cache time, but you can't cache them forever the way you can a content-hashed JavaScript bundle.

Not to mention that on a more involved site, each page will probably pull in a variety of components. You could end up with nesting deeper than four levels, and each page could reveal unique components, further increasing load times.

I don't see much future in an architecture that inherently waterfalls in the worst way.

replies(1): >>45021207 #
4. Mikhail_Edoshin No.45011470
The appeal of XML is semantic. I think about things in a certain way. I write the text the way I think, inventing XML elements and structure as I go. Then I transform it into whatever output I need. The transformation obscures the semantics, but it is transient, merely a way to present the content to the user.

To do this dynamically, I serve the content as I wrote it, with a single processing instruction that refers to a stylesheet. This is elegant, isn't it? It is less efficient than a static site, but not that different from a typical HTML page: HTML, CSS, JS. It is also trivial to change it to build statically (or to embed all the resources and XSLT into individual XML files, although that would be strange).
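For anyone who hasn't seen the mechanism, a minimal sketch (the article/para vocabulary here is invented on the spot, which is the point; only the xml-stylesheet processing instruction is standardized):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <article>
      <para>The browser fetches site.xsl and runs the transform
      client-side before rendering anything.</para>
    </article>

with site.xsl mapping the invented elements onto HTML:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- wrap the whole document in an HTML shell -->
      <xsl:template match="/article">
        <html><body><xsl:apply-templates/></body></html>
      </xsl:template>
      <!-- give each invented element an HTML rendering -->
      <xsl:template match="para">
        <p><xsl:apply-templates/></p>
      </xsl:template>
    </xsl:stylesheet>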

And if browsers supported alternative stylesheets, it would be trivial to provide alternative renderings at the cost of one processing instruction per rendering. Why don't they? Isn't this puzzling? I think it is even in the specification.
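It is indeed in the specification: the xml-stylesheet processing instruction takes title and alternate pseudo-attributes, mirroring HTML's alternate-stylesheet links, so something like the following is legal today. As far as I can tell, though, no browser offers a way to switch between them for XSLT:

    <?xml-stylesheet type="text/xsl" href="site.xsl" title="Default"?>
    <?xml-stylesheet type="text/xsl" href="plain.xsl"
                     title="Plain" alternate="yes"?>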

replies(1): >>45011555 #
5. b_e_n_t_o_n No.45011555
I get it, but if we're building things for others to use, the elegance of our solutions doesn't matter. What matters is efficiency and the experience of using the thing, not of writing it. I think browsers should serve the end user, not the developer, so if we sacrifice some elegance of the abstraction for security, that seems like a win for the user. That's just not what it's about.

Of course everyone is free to create things they want with their own abstractions, but let's not pretend it's an optimal solution. Elegance and optimality are often at odds.

6. riehwvfbk No.45021207
There are cache times other than zero and infinity. Ideally the XSLT would change rarely, as would things like nav menus, so "relatively short" could mean anywhere from several minutes to an hour. And with ETags, an expired resource can be revalidated and never re-downloaded unless it actually changed.
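As a sketch of what that could look like, here's a hypothetical IIS-style web.config (picked only to keep the example in XML; any server can send the same Cache-Control header, and IIS attaches ETags to static files on its own):

    <!-- hypothetical web.config: cache static content,
         including the .xsl and .xml files, for one hour -->
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge"
                       cacheControlMaxAge="01:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>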
replies(1): >>45022850 #
7. b_e_n_t_o_n No.45022850
ETag revalidation still requires a round trip. You could cache for longer, but then you're dealing with the usual complexities and struggles of cache invalidation.
replies(1): >>45025398 #
8. riehwvfbk No.45025398
With HTTP/2 multiplexing, all of those requests can be made in a single batch without sequential round trips. And the complexity of caching? An ETag done right is content-based; there's no invalidation logic to worry about.

It's really unfortunate that this style of architecture lost the battle. It's elegant: data cleanly separated from presentation, small digestible entities, and it all kind of makes sense. What killed it was the verbosity of XML, plus an extreme pedantry that costs robustness: a single well-formedness error kills the entire transform. Transformation-based systems also notoriously lacked proper debugging tools early on. Lastly, HTTP/1.1 pipelining implementations were typically so buggy that you really did have to make those round trips one at a time. But conceptually, we had all the pieces to make this work well back in the early 2000s.

replies(1): >>45031845 #
9. b_e_n_t_o_n No.45031845
Hm, how would multiplexing help here? Does the browser read ahead and process cached assets to find their dependencies before firing off ETag revalidation requests in a batch? I'd be surprised if that were the case.